976 results for ANPSP control model
Abstract:
The objective of this research was to analyze the institutional design of external control over management contracts within the Court of Accounts of the state of Pernambuco (TCE-PE), regarding both its adherence to the state law governing Social Organizations and its observance by the actors involved: the public administration, the court's technical staff, and the members of its judging body. The following premises were assumed: that the new arrangements for delivering public services through partnerships with Social Organizations demand specific oversight institutional designs from the Courts of Accounts which, although variable, must prioritize their capacity to reveal information; that the process of shaping these institutional designs must be dynamic, allowing the contingencies experienced during their implementation to contribute to their improvement; and that these institutional designs affect the behavior of the actors involved. The study was carried out through documentary research, and the qualitative methodology of content analysis was chosen for analyzing the data. The results led to the conclusion that the institutional design for controlling management contracts within TCE-PE is characterized by its weakness as an information-revealing mechanism and, consequently, does not contribute to reducing the information asymmetry that arises with the implementation of management contracts. It also compromises and limits the Court of Accounts' performance in overseeing these agreements. In addition, despite its weakness, the identified institutional design was found to be poorly observed by the actors involved in the control of management contracts, implying its low institutionalization.
The results should prompt a renewed discussion of TCE-PE's mechanisms for controlling management contracts, which may result in a new institutional design aimed at bringing greater transparency to partnerships with Social Organizations.
Abstract:
In industrial informatics, several attempts have been made to develop notations and semantics for classifying and describing different kinds of system behavior, particularly in the modeling phase. Such attempts provide the infrastructure for solving real engineering problems and for building practical systems that aim mainly to increase the productivity, quality, and safety of the process. Despite the many studies that have attempted to develop friendly methods for industrial controller programming, controllers are still programmed by conventional trial-and-error methods and, in practice, there is little written documentation on these systems. The ideal solution would be a computational environment that allows industrial engineers to implement the system using a high-level language that follows international standards. Accordingly, this work proposes a methodology for plant and control modelling of discrete event systems that include sequential, parallel, and timed operations, using a formalism based on Statecharts, called Basic Statechart (BSC). The methodology also permits automatic procedures to validate and implement these systems. To validate the methodology, we present two case studies with typical examples from the manufacturing sector. The first example shows sequential control of a tagged machine, used to illustrate dependences between the devices of the plant. In the second example, we discuss more than one strategy for controlling a manufacturing cell. The model with no control has 72 states (distinct configurations); the model with sequential control generated 20 different states but acts in only 8 distinct configurations. The model with parallel control generated 210 different states but acts in only 26 distinct configurations, a control strategy less restrictive than the previous one.
Lastly, we present an example highlighting the modular character of our methodology, which is very important for the maintenance of applications. In this example, the sensors that identify pieces in the plant were removed, so changes to the control model were needed to transmit the information from the input buffer sensor to the other positions of the cell.
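The counting argument above (an uncontrolled plant with many configurations, restricted to fewer by a controller) can be illustrated with a toy sketch. This is a hypothetical two-device plant, not the paper's BSC models: composing device state machines yields the uncontrolled configuration set, and a controller forbids some combinations.

```python
from itertools import product

# Hypothetical sketch (not the authors' BSC tool): two devices, each a
# small state machine; the uncontrolled plant is their synchronous product.
machine_states = ["idle", "working", "done"]

# Unconstrained plant: every combination of device states is possible.
plant = list(product(machine_states, repeat=2))

# An (invented) sequential controller forbids both devices working at
# once, shrinking the set of reachable configurations.
sequential = [(a, b) for a, b in plant
              if not (a == "working" and b == "working")]

print(len(plant))       # 9 configurations with no control
print(len(sequential))  # 8 under the sequential restriction
```

A less restrictive (more parallel) controller would forbid fewer combinations, which is the trade-off the two strategies in the abstract illustrate.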
Abstract:
In this work, we propose methodologies and computer tools for inserting robots into cultural environments. The basic idea is to have a robot in a real context (a cultural space) that can represent a user connected to the system through the Internet (a visitor avatar in the real space), while the robot also has its own representation in a Mixed Reality space (a robot avatar in the virtual space). In this way, robot and avatar are not simply real and virtual objects: they play a more important role in the scene, interfering in the process and taking decisions. In order to have this service running, we developed a module composed of a robot, communication tools, and means of integrating these with the virtual environment. We also implemented a set of behaviors for controlling the robot in the real space. We studied the available software and hardware tools for the robotics platform used in the experiments and developed test routines to determine their capabilities. Finally, we studied the behavior-based control model, and planned and implemented all the behaviors necessary to integrate the robot into the real and virtual cultural spaces. Several experiments were conducted in order to validate the developed methodologies and tools.
Abstract:
Common mental disorders (CMD) are highly prevalent in general and working populations, with significant individual and social consequences. This cross-sectional, descriptive study explores the relationship between psychological demands, degree of control, and social support at work, on the one hand, and the prevalence of CMD among primary healthcare workers in Botucatu (SP), Brazil, on the other. Data were collected through an anonymous self-administered questionnaire, with emphasis on items concerning demand-control-support and the presence of CMD (Self Reporting Questionnaire, SRQ-20). The information was entered into a database built with Excel/Office XP 2003, and statistical analysis was performed with SAS. CMD were found in 42.6% of the workers. The observed association of high CMD prevalence with high strain (Karasek's classification) and of low CMD prevalence with low strain indicates that, in the municipality studied, working conditions in primary care are a non-negligible contributing factor to workers' illness. The findings reveal the need for interventions aimed at caring for these workers, improving working conditions, and increasing social support at work.
Abstract:
Graduate Program in Electrical Engineering - FEIS
Abstract:
The spread of infectious disease among and between wild and domesticated animals has become a major problem worldwide. Upon analyzing the dynamics of wildlife growth and infection when the diseased animals cannot be identified separately from healthy wildlife prior to the kill, we find that harvest-based strategies alone have no impact on disease transmission. Other controls that directly influence disease transmission and/or mortality are required. Next, we analyze the socially optimal management of infectious wildlife. The model is applied to the problem of bovine tuberculosis among Michigan white-tailed deer, with non-selective harvests and supplemental feeding being the control variables. Using a two-state linear control model, we find a two-dimensional singular path is optimal (as opposed to a more conventional bang-bang solution) as part of a cycle that results in the disease remaining endemic in the wildlife. This result follows from non-selective harvesting and intermittent wildlife productivity gains from supplemental feeding.
Abstract:
Today, health problems are likely to have a complex and multifactorial etiology, whereby psychosocial factors interact with behaviour and bodily responses. Women generally report more health problems than men. The present thesis concerns the development of women's health from a subjective and an objective perspective, as related to psychosocial living conditions and physiological stress responses. Both cross-sectional and longitudinal studies were carried out on a representative sample of women. Data analysis was based on a holistic person-oriented approach as well as a variable approach. In Study I, the women's self-reported symptoms and diseases as well as self-rated general health status were compared to physician-rated health problems and physicians' ratings of the women's general health, based on medical examinations. The findings showed that physicians rated twice as many women as having poor health compared to the ratings of the women themselves. Moreover, the symptom “a sense of powerlessness” had the highest predictive power for self-rated general health. Study II investigated individual and structural stability in symptom profiles between adolescence and middle age as related to pubertal timing. There was individual stability in symptom reporting for nearly thirty years, although the effect of pubertal timing on symptom reporting did not extend into middle age. Study III explored the longitudinal and current influence of socioeconomic and psychosocial factors on women's self-reported health. Contemporary factors such as job strain, low income, financial worries, and double exposure in terms of high job strain and heavy domestic responsibilities increased the risk of poor self-reported health in middle-aged women. In Study IV, the association between self-reported symptoms and physiological stress responses was investigated. Results revealed that higher levels of medically unexplained symptoms were related to higher levels of cortisol, cholesterol, and heart rate.
The empirical findings are discussed in relation to existing models of stress and health, such as the demand-control model, the allostatic load model, the biopsychosocial model, and the multiple role hypothesis. It was concluded that women’s health problems could be reduced if their overall life circumstances were improved. The practical implications of this might include a redesign of the labour market giving women more influence and control over their lives, both at and away from work.
Abstract:
Good work quality is crucial for employee well-being and health. Indicators of work quality include, among others, aspects of one's work organization and learning opportunities. Based on the Job Demand-Control model, we investigate whether a) young employees are confronted with different combinations of job characteristics, b) cluster membership can be predicted by socio-demographic and educational factors as well as by positive self-evaluations and health, and c) cluster membership leads to different associations with job-related and general well-being. Based on TREE (Transition from Education to Employment) data, we found three clusters of job characteristics: high resources – low demands, medium resources – medium demands, and low resources – high demands. The likelihood of being in a more favourable group was higher for females and for young employees who reported more positive self-evaluations and greater learning efforts after compulsory school. Young employees in more favourable groups also reported higher levels of job-related and general well-being.
Abstract:
Concurrency in Logic Programming has received much attention in the past. One problem with many proposals, when applied to Prolog, is that they involve large modifications to the standard implementations, and/or the communication and synchronization facilities provided do not fit as naturally within the language model as we feel is possible. In this paper we propose a new mechanism for implementing synchronization and communication for concurrency, based on atomic accesses to designated facts in the (shared) database. We argue that this model is comparatively easy to implement and harmonizes better than previous proposals within the Prolog control model and standard set of built-ins. We show how in the proposed model it is easy to express classical concurrency algorithms and to subsume other mechanisms such as Linda, variable-based communication, or classical parallelism-oriented primitives. We also report on an implementation of the model and provide performance and resource consumption data.
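As a loose analogy (in Python, not Prolog; the class and all names are invented for illustration), the core idea of synchronizing through atomic accesses to designated shared facts can be sketched as a blackboard of facts guarded by a condition variable, giving Linda-like in/out primitives:

```python
import threading

# Illustrative analogy only: a shared "database" of facts with atomic
# assert/retract, used as a blackboard for communication between threads.
class FactStore:
    def __init__(self):
        self._facts = set()
        self._cond = threading.Condition()

    def assertz(self, fact):
        # Atomically add a fact and wake any thread waiting on it.
        with self._cond:
            self._facts.add(fact)
            self._cond.notify_all()

    def retract_wait(self, fact):
        # Block until the fact exists, then atomically consume it.
        with self._cond:
            while fact not in self._facts:
                self._cond.wait()
            self._facts.remove(fact)

store = FactStore()

def producer():
    store.assertz(("token", 1))   # like asserting token(1).

t = threading.Thread(target=producer)
t.start()
store.retract_wait(("token", 1))  # synchronizes with the producer
t.join()
print("synchronized")
```

The blocking retract doubles as both communication (the fact carries data) and synchronization (the wait), which is the flavor of fit with the host language that the paper argues for.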
Abstract:
This thesis presents a methodological contribution to the problem of how to operate a hydropower reservoir during floods in order to achieve optimal management under a multiobjective and stochastic approach. A methodology is proposed for assessing flood control strategies in a multiobjective and probabilistic framework. Additionally, a dynamic flood control environment was developed for real-time operation, including forecasts. This dynamic platform combines simulation and optimization models. These tools may assist dam managers in the decision-making process regarding the most appropriate reservoir operation to implement. After a detailed review of the literature, it was observed that most existing studies on flood control reservoir operation consider a reduced number of hydrographs to characterize the reservoir inflows. Consequently, the adequate functioning of a given strategy may be limited to similar hydrologic scenarios. On the other hand, most works in this context tackle the problem of multipurpose flood control operation over the entire flood season, lasting several months. These considerations differ from the real needs of the Spanish context. The implementation of real-time reservoir operation is gaining popularity due to computational advances and improvements in real-time data management. The methodology proposed in this thesis for assessing strategies is based on determining their behavior over a wide range of floods representative of the hydrological forcing of the dam. An evaluation algorithm is combined with a stochastic flood generation system to obtain an implicitly stochastic analysis framework. The evaluation system consists of three stages, characterizing, synthesizing, and comparing, in order to handle the complex structure of results and, finally, conduct the evaluation process. In the first stage, characterization variables are defined.
These variables should be related to the different aspects to be evaluated (such as dam safety, flood protection, hydropower, etc.). Each variable characterizes the behavior of a given operating strategy for a given aspect and event. In the second stage, this information is synthesized into a reduced group of indicators or objective functions. Finally, the indicators are compared by means of an aggregation approach or a dominance-criterion approach. In the first case, a single optimum solution may be obtained; in the second, a set of good solutions results. This methodology was applied to calibrate the parameters of a flood control model and to compare it with another operating policy, using an aggregation method. The methodology was then extended to assess and compare existing hydropower reservoir flood control operating rules, using the Pareto approach. The versatility of the method allows many other applications, such as determining safety levels or choosing spillway dimensions among several alternatives. The dynamic framework for flood control combines optimization and simulation models, exploiting the advantages of both techniques and facilitating the interaction between dam operators and the model. This system improves on a reactive operating policy even when the forecasts deviate significantly from the observed hydrograph. The approach contributes to reducing the oft-mentioned gap between theoretical development in the field of reservoir management and its practical application.
Abstract:
Networks are the substance of human communities and societies; they constitute the structural framework in which we relate to each other, and they determine how we do so, how information is disseminated, or even how things get done. But the prominence of networks goes beyond the importance they acquire in social networks. Networks are found within numerous familiar structures, from protein interactions inside a cell to router connections on the internet. Social networks have been present on the internet since its beginnings, in email for example: inside every email client there are contact lists that, added together, constitute a social network. However, it was with the emergence of social network sites (SNS) that these kinds of web applications reached general awareness. SNS are now among the most popular and highest-traffic sites on the web. Sites such as Facebook and Twitter hold astonishing figures for active users, traffic, and time spent on the site. Nevertheless, SNS functionalities are not restricted to contact-oriented social networks, those focused on building your own list of contacts and interacting with them.
There are other examples of sites that leverage social networking to foster user activity and engagement around other types of content. Examples range from early SNS such as Flickr, the photography-oriented networking site, to Github, the most popular social code repository site nowadays. It is no accident that the popularity of these websites comes hand in hand with their social network capabilities. The scenario is even richer, since SNS interact with each other, sharing and exporting contact lists and authentication services, as well as providing a valuable channel for publicizing user activity on other sites. These interactions are very recent, and they are still finding their way to the point where SNS overcome their condition of data silos and reach a stage of full interoperability between sites, in the same way email and instant messaging networks work today. This work introduces a technology for rapidly building any kind of distributed social network website. It first introduces a new technique for creating middleware that can provide any kind of content management feature to a popular model-view-controller (MVC) web development framework, Ruby on Rails. It provides developers with tools that allow them to abstract away the complexities of content management and focus on the development of specific content. The same technique is also used to provide the framework with social network features. Additionally, a new metric of code reuse is described to assess the validity of the kind of middleware that is emerging in MVC frameworks. Secondly, the characteristics of the most popular SNS are analysed in order to find the common patterns they exhibit. This analysis is the ground for defining the requirements of a framework for building social network websites. Next, a reference architecture supporting the features found in the analysis is proposed.
This architecture has been implemented in a software component called Social Stream and tested in several social networks, both contact- and content-oriented, in local neighbourhood associations and EU-funded research projects. It has also been the basis for several Master's theses. It has been released as free and open source software, has attracted a growing community, and is now being used beyond the scope of this work. The social architecture has enabled the definition of a new social-based access control model that overcomes some of the limitations currently present in access control models for social networks. Furthermore, paradigms and case studies in distributed SNS have been analysed, gathering a set of features for distributed social networking. Finally, the architecture of the framework has been extended to support distributed SNS capabilities. Its implementation has also been validated in EU-funded research projects.
Abstract:
The broad host range plasmid RK2 replicates and regulates its copy number in a wide range of Gram-negative bacteria. The plasmid-encoded trans-acting replication protein TrfA and the origin of replication oriV are sufficient for controlled replication of the plasmid in all Gram-negative bacteria tested. The TrfA protein binds specifically to direct repeat sequences (iterons) at the origin of replication. A replication control model, designated handcuffing or coupling, has been proposed whereby the formation of coupled TrfA-oriV complexes between plasmid molecules results in hindrance of origin activity and, consequently, a shut-down of plasmid replication under conditions of higher than normal copy number. Therefore, according to this model, the coupling activity of an initiation protein is essential for copy number control, and a copy-up initiation protein mutant should have reduced ability to form coupled complexes. To test this model for plasmid RK2, two previously characterized copy-up TrfA mutations, trfA-254D and trfA-267L, were combined and the resulting copy-up double-mutant TrfA protein TrfA-254D/267L was characterized. Despite initiating runaway (uncontrolled) replication in vivo, the copy-up double-mutant TrfA protein exhibited replication kinetics similar to the wild-type protein in vitro. Purified TrfA-254D, TrfA-267L, and TrfA-254D/267L proteins were then examined for binding to the iterons and for coupling activity using an in vitro ligase-catalyzed multimerization assay. It was found that both single and double TrfA mutant proteins exhibited substantially reduced (single mutants) or barely detectable (double mutant) levels of coupling activity while not being diminished in their capacity to bind to the origin of replication. These observations provide direct evidence in support of the coupling model of replication control.
Abstract:
In experiments reported elsewhere at this conference, we have revealed two striking results concerning binocular interactions in a masking paradigm. First, at low mask contrasts, a dichoptic masking grating produces a small facilitatory effect on the detection of a similar test grating. Second, the psychometric slope for dichoptic masking starts high (Weibull β ≈ 4) at detection threshold, becomes low (β ≈ 1.2) in the facilitatory region, and then unusually steep at high mask contrasts (β ≈ 5.5). Neither of these results is consistent with Legge's (1984 Vision Research 24 385 - 394) model of binocular summation, but they are predicted by a two-stage gain control model in which interocular suppression precedes binocular summation. Here, we pose a further challenge for this model by using a 'twin-mask' paradigm (cf Foley, 1994 Journal of the Optical Society of America A 11 1710 - 1719). In 2AFC experiments, observers detected a patch of grating (1 cycle deg-1, 200 ms) presented to one eye in the presence of a pedestal in the same eye and a spatially identical mask in the other eye. The pedestal and mask contrasts varied independently, producing a two-dimensional masking space in which the orthogonal axes (10 × 10 contrasts) represent conventional dichoptic and monocular masking. The resulting surface (100 thresholds) confirmed and extended the observations above, and fixed the six parameters in the model, which fitted the data well. With no adjustment of parameters, the model described performance in a further experiment where mask and test were presented to both eyes. Moreover, in both model and data, binocular summation was greater than a factor of √2 at detection threshold. We conclude that this two-stage nonlinear model, with interocular suppression, gives a good account of early binocular processes in the perception of contrast. [Supported by EPSRC Grant Reference: GR/S74515/01]
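A generic sketch of such a two-stage gain control model (conventional symbols, not necessarily the exact parameterization used in the abstract): stage 1 applies interocular suppression to each eye's contrast, and stage 2 sums the monocular signals binocularly before a second nonlinearity.

```latex
\text{Stage 1 (interocular suppression):}\quad
S_L = \frac{C_L^{\,m}}{s + C_L + C_R}, \qquad
S_R = \frac{C_R^{\,m}}{s + C_R + C_L}
```

```latex
\text{Stage 2 (binocular summation):}\quad
R = \frac{(S_L + S_R)^{\,p}}{z + (S_L + S_R)^{\,q}}
```

With detection limited by fixed additive noise of level $\sigma$, the free quantities $m, s, p, q, z, \sigma$ give six parameters, consistent with the count mentioned in the abstract, though the correspondence is an assumption here.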
Abstract:
The ability to distinguish one visual stimulus from another slightly different one depends on the variability of their internal representations. In a recent paper on human visual-contrast discrimination, Kontsevich et al. (2002 Vision Research 42 1771 - 1784) reconsidered the long-standing question of whether the internal noise that limits discrimination is fixed (contrast-invariant) or variable (contrast-dependent). They tested discrimination performance for 3 cycles deg-1 gratings over a wide range of incremental contrast levels at three masking contrasts, and showed that a simple model with an expansive response function and response-dependent noise could fit the data very well. Their conclusion - that noise in visual-discrimination tasks increases markedly with contrast - has profound implications for our understanding and modelling of vision. Here, however, we re-analyse their data, and report that a standard gain-control model with a compressive response function and fixed additive noise can also fit the data remarkably well. Thus these experimental data do not allow us to decide between the two models. The question remains open. [Supported by EPSRC grant GR/S74515/01]
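The "standard gain-control model with a compressive response function and fixed additive noise" can be written generically as follows (conventional symbols; this is the textbook form of such models, not necessarily the exact equations of the re-analysis). The response to contrast $C$ is

```latex
r(C) = \frac{C^{\,p}}{z + C^{\,q}},
```

and with fixed additive noise of standard deviation $\sigma$, the increment threshold $\Delta C$ on a pedestal of contrast $C$ satisfies a criterion signal-to-noise rule:

```latex
r(C + \Delta C) - r(C) = k\,\sigma .
```

Because $\sigma$ is constant, all contrast dependence of thresholds comes from the shape of $r(C)$, which is exactly what makes this account hard to distinguish empirically from one with an expansive $r(C)$ and contrast-dependent noise.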
Abstract:
It is very well known that contrast detection thresholds improve with the size of a grating-type stimulus, but it is thought that the benefit of size is abolished for contrast discriminations well above threshold (e.g., Legge, G. E., & Foley, J. M., 1980). Here we challenge the generality of this view. We performed contrast detection and contrast discrimination for circular patches of sine wave grating as a function of stimulus size. We confirm that sensitivity improves with approximately the fourth root of stimulus area at detection threshold (a log-log slope of -0.25) but find individual differences (IDs) for the suprathreshold discrimination task. For several observers, performance was largely unaffected by area, but for others performance first improved (by as much as a log-log slope of -0.5) and then reached a plateau. We replicated these different results several times on the same observers. All of these results were described in the context of a recent gain control model of area summation [Meese, T. S. (2004)], extended to accommodate the multiple stimulus sizes used here. In this model, (i) excitation increased with the fourth root of stimulus area for all observers, and (ii) IDs in the discrimination data were described by IDs in the relation between suppression and area. This means that empirical summation in the contrast discrimination task can be attributed to growth in suppression with stimulus size that does not keep pace with the growth in excitation. © 2005 ARVO.