977 results for Naka-Rushton equation
Abstract:
Tone mapping is the problem of compressing the range of a High-Dynamic-Range image so that it can be displayed on a Low-Dynamic-Range screen without losing details or introducing spurious ones: the final image should produce in the observer a sensation as close as possible to the perception produced by the real-world scene. We propose a tone mapping operator with two stages. The first stage is a global method that implements visual adaptation, based on experiments on human perception; in particular, we point out the importance of cone saturation. The second stage performs local contrast enhancement, based on a variational model inspired by color vision phenomenology. We evaluate this method with a metric validated by psychophysical experiments and, in terms of this metric, our method compares very well with the state of the art.
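For readers unfamiliar with the kind of adaptation stage referred to here, the sketch below shows a Naka-Rushton-style global compression in Python. It assumes a luminance-only pipeline; the function name, the exponent n, and the use of the geometric mean as adaptation level are illustrative assumptions, not the operator proposed in the abstract.

    import numpy as np

    def naka_rushton_compress(luminance, n=0.9, sigma=None):
        # R(L) = L^n / (L^n + sigma^n) maps [0, inf) into [0, 1).
        L = np.asarray(luminance, dtype=float)
        if sigma is None:
            # Geometric mean as a simple stand-in for the adaptation level.
            sigma = np.exp(np.mean(np.log(L + 1e-6)))
        Ln = np.power(L, n)
        return Ln / (Ln + sigma ** n)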
Abstract:
The transducer function μ for contrast perception describes the nonlinear mapping of stimulus contrast onto an internal response. Under a signal detection theory approach, the transducer model of contrast perception states that the internal response elicited by a stimulus of contrast c is a random variable with mean μ(c). Using this approach, we derive the formal relations between the transducer function, the threshold-versus-contrast (TvC) function, and the psychometric functions for contrast detection and discrimination in 2AFC tasks. We show that the mathematical form of the TvC function is determined only by μ, and that the psychometric functions for detection and discrimination have a common mathematical form with common parameters emanating from, and only from, the transducer function μ and the form of the distribution of the internal responses. We discuss the theoretical and practical implications of these relations, which have bearings on the tenability of certain mathematical forms for the psychometric function and on the suitability of empirical approaches to model validation. We also present the results of a comprehensive test of these relations using two alternative forms of the transducer model: a three-parameter version that renders logistic psychometric functions and a five-parameter version using Foley's variant of the Naka-Rushton equation as transducer function. Our results support the validity of the formal relations implied by the general transducer model, and the two versions that were contrasted account for our data equally well.
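As a concrete illustration of how a transducer of this family ties to the TvC function, the following Python sketch uses a Foley-style variant of the Naka-Rushton equation, μ(c) = c^p / (c^q + z), and finds the contrast increment that raises the mean internal response by one unit (i.e., d' = 1 with unit-variance internal noise). The parameter values and this particular functional form are placeholders for illustration, not the five-parameter version fitted in the paper.

    import numpy as np
    from scipy.optimize import brentq

    def transducer(c, p=2.4, q=2.0, z=0.01):
        # Foley-style Naka-Rushton transducer (illustrative parameters).
        return c ** p / (c ** q + z)

    def tvc_threshold(pedestal, delta_mu=1.0):
        # Increment dc at which mu(pedestal + dc) - mu(pedestal) = delta_mu.
        f = lambda dc: transducer(pedestal + dc) - transducer(pedestal) - delta_mu
        return brentq(f, 1e-9, 10.0)

    print([round(tvc_threshold(c), 3) for c in (0.0, 0.05, 0.2)])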
Abstract:
Purpose: Contact lens electrodes (CLEs) are frequently used to record electroretinograms (ERGs) in small animals such as mice or rats. CLEs are expensive to buy and difficult to produce individually. In addition, CLEs have been noted to yield inconsistent results, and they carry the potential to injure the cornea. Therefore, a new electrode holder was constructed based on the clinically used DTL electrode and compared to CLEs. Material and methods: ERGs were recorded with both electrode types in nine healthy Brown-Norway rats under scotopic conditions. For low-intensity responses, a Naka-Rushton function was fitted and the parameters Vmax, k, and n were analyzed. The a-wave, b-wave, and oscillatory potentials were analyzed for brighter flash intensities (1-60 scot cd s/m²). Repeatability was assessed for both electrode types in consecutive measurements. Results: The new electrode holder was faster to set up than the CLE and showed lower standard deviations. No corneal alterations were observed. Slightly higher amplitudes were recorded in most of the measurements with the new electrode holder (except amplitudes induced by 60 cd s/m²). A Bland-Altman test showed good agreement between the DTL holder and the CLE (mean difference 35.2 μV (Holder-CLE)). Pearson's correlation coefficient for test-retest reliability was r = 0.783. Conclusions: The DTL holder was superior in handling, caused far fewer corneal problems than the CLE, and produced comparable or better electrophysiological results. The minimal production costs and the possibility of adapting the DTL holder to bigger eyes, such as those of dogs or rabbits, offer broader application prospects.
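The Naka-Rushton fit used for low-intensity ERG responses is typically of the form V(I) = Vmax * I^n / (I^n + k^n). A minimal curve-fitting sketch in Python follows; the intensity-amplitude values are synthetic illustration data, not measurements from this study.

    import numpy as np
    from scipy.optimize import curve_fit

    def naka_rushton(I, v_max, k, n):
        # b-wave amplitude versus flash intensity.
        return v_max * I ** n / (I ** n + k ** n)

    intensity = np.array([1e-4, 1e-3, 1e-2, 1e-1, 1.0])      # scot cd s/m^2
    amplitude = np.array([15.0, 70.0, 210.0, 360.0, 410.0])  # microvolts (synthetic)
    (v_max, k, n), _ = curve_fit(naka_rushton, intensity, amplitude,
                                 p0=[400.0, 1e-2, 1.0])
    print(f"Vmax = {v_max:.0f} uV, k = {k:.1e}, n = {n:.2f}")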
Abstract:
The measurement of the strength of sensations has a long tradition in psychology, reaching back to the time when psychology emerged as an independent science towards the end of the 19th century. Gustav Theodor Fechner combined Weber's observation that the ratio of the just noticeable difference to the reference intensity is constant (the so-called "Weber fraction") with the assumption of a sensory threshold, and from this derived, for the first time, a scale for the strength of sensations. The Fechner scale uses the number of successive threshold steps as its natural, psychological unit. The strength of a sensation for a given stimulus intensity is expressed as the number of threshold steps that must be taken to get from no sensation to the sensation in question. The function describing the relation between stimulus intensity and the number of required threshold steps is always logarithmic and can be determined via successive threshold measurements for stimuli from a wide range of sensory modalities. Scalings obtained in this way are called "indirect", because the stimulus intensity in question is not itself rated by the observer. The observer only compares intensities with other intensities in terms of "stronger" or "weaker", that is, ordinally. Indirect scaling methods are particularly suitable when the stimulus impression is fleeting and its absolute strength is hard for the observer to quantify. A typical example is the conspicuity (salience) of visual objects that are embedded in randomly changing backgrounds and presented to the viewer only as a brief spatiotemporal flash. The magnitude of the difference in features such as brightness, color, orientation, shading, shape, curvature, or motion determines the degree of an object's salience. Although a wealth of studies exists on the question of which features, and which combinations of features, automatically produce strong salience ("pop-out") without knowledge of their presentation location, there have so far been no systematic attempts to measure the salience of features over a wide range of feature differences and to make it comparable across features. Indirect scalings are available for the features contrast (Legge and Foley, 1980) and orientation (Motoyoshi and Nishida, 2001). A comparison of salience across several features, and a demonstration that salience is a sensory quality in its own right, independent of the feature dimension, is however still lacking. The present work shows that the difference between objects and their embedding surround with respect to visual features gives rise to salience, and that this salience can be scaled in strength independently of the feature that produces it. It is further shown that the units of the indirect scaling functions obtained for two features are equal in an absolute sense, provided that (i) no alternative cues exist and only the pure feature difference between object and surround is judged, and (ii) the sensory noise in the activated feature channels is the same for both features. To demonstrate this, the features orientation and spatial frequency were chosen as examples, and the salience of their feature contrasts was scaled indirectly via Naka-Rushton functions obtained from the underlying salience increment-threshold measurements.
For the feature spatial frequency, this provides an indirect scaling for the first time. For this purpose, a special measurement technique had to be developed that ensures that pure spatial-frequency differences are judged, free from confounding absolute spatial-frequency values. The method is described in Chapter 7. Experiments demonstrating the confounding effect of absolute feature values on salience measurement are presented in Chapter 6. Chapter 8 contains an empirical comparison of the results of increment- and decrement-threshold measurements, a technique required to measure difference thresholds in the extreme range of orientation differences of 90°. Chapter 9 contains the empirical demonstration of the transitivity of the equality relation for salience measurements of orientation and spatial frequency, by matching against a third feature, and thereby provides evidence that conspicuity is measured independently of the feature by the indirect scaling methodology. It also shows how the baseline salience of patterns, given by external noise in the features (so-called "feature jitter"), shifts the zero point of the scaling function. In the final experiment (Chapter 10), the scalings of orientation and spatial frequency are compared at equal baseline salience of the patterns, and it is shown that the two scales have equal units in an absolute sense (that is, equal scale values indicate equal sensory conspicuity even though they stem from different features), once the effect of sensory noise, which for the orientation feature is not constant across the different threshold steps, is compensated for. The inconstancy of the effect of sensory noise in the orientation feature becomes apparent in the changing slope of the psychometric preference function for comparison judgments of orientation salience against a fixed spatial-frequency salience, and the effect of this slope change exactly compensates the nonlinearity in the salience matching function obtained for the two features. The final chapter gives an outlook on a possible modeling of the salience functions using classical multichannel feedforward models. The first five chapters provide an introduction to the fields of indirect scaling, feature salience, and texture segregation in the human visual system.
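A compact way to picture the indirect (Fechner-style) scaling described above: accumulate successive just-noticeable steps given an increment-threshold function, and use the step count as the scale value. The Python sketch below is schematic and uses a simple Weber-law assumption; it is not the increment-threshold data or the Naka-Rushton fits of this dissertation.

    def indirect_scale(increment_threshold, start, stop, max_steps=10000):
        # Count just-noticeable steps needed to go from `start` to `stop`.
        level, steps = start, 0
        while level < stop and steps < max_steps:
            level += increment_threshold(level)
            steps += 1
        return steps

    # Weber-like thresholds (10% of the current level) give a roughly
    # logarithmic scale: about the same number of steps per doubling.
    weber = lambda x: 0.1 * x
    print([indirect_scale(weber, 1.0, s) for s in (2.0, 4.0, 8.0)])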
Abstract:
Accelerated stability tests are indicated to assess, within a short time, the degree of chemical degradation that may affect an active substance, either alone or in a formula, under normal storage conditions. The method is based on increased stress conditions that accelerate the rate of chemical degradation. Based on the equation of the straight line obtained as a function of the reaction order (at 50 and 70 °C) and using the Arrhenius equation, the reaction rate was calculated for a temperature of 20 °C (normal storage conditions). This model of accelerated stability test makes it possible to predict the chemical stability of any active substance at any given moment, as long as a method to quantify the chemical substance is available. As an example of the applicability of the Arrhenius equation in accelerated stability tests, a 2.5% sodium hypochlorite solution was analyzed because of its chemical instability. Iodometric titration was used to quantify free residual chlorine in the solutions. Based on data obtained by keeping this solution at 50 and 70 °C, using the Arrhenius equation and considering 2.0% free residual chlorine as the minimum acceptable threshold, the shelf life was 166 days at 20 °C. The model, however, makes it possible to calculate the shelf life at any other temperature.
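As a schematic of the calculation described (rate constants at two elevated temperatures, Arrhenius extrapolation to 20 °C, shelf life down to the 2.0% limit), here is a short Python sketch. The rate constants and the assumption of first-order decay are hypothetical placeholders, not the values determined in the study.

    import numpy as np

    def rate_at(T_target, k1, T1, k2, T2):
        # Fit ln k = ln A - Ea/(R*T) through two (k, T) points, evaluate at T_target.
        x1, x2 = 1.0 / T1, 1.0 / T2
        slope = (np.log(k2) - np.log(k1)) / (x2 - x1)   # equals -Ea / R
        return np.exp(np.log(k1) + slope * (1.0 / T_target - x1))

    # Hypothetical first-order rate constants (1/day) at 50 and 70 degrees C.
    k20 = rate_at(293.15, k1=0.030, T1=323.15, k2=0.200, T2=343.15)
    shelf_life = np.log(2.5 / 2.0) / k20   # days for 2.5% to fall to 2.0%
    print(f"k(20 C) = {k20:.2e} per day, shelf life ~ {shelf_life:.0f} days")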
Abstract:
In this paper we study the existence and regularity of mild solutions for a class of abstract partial neutral integro-differential equations with unbounded delay.
Abstract:
Using the solutions of the gap equations of the magnetic-color-flavor-locked (MCFL) phase of paired quark matter in a magnetic field, and taking into consideration the separation between the longitudinal and transverse pressures due to the field-induced breaking of the spatial rotational symmetry, the equation of state of the MCFL phase is self-consistently determined. This result is then used to investigate the possibility of absolute stability, which turns out to require a field-dependent "bag constant" to hold. That is, only if the bag constant varies with the magnetic field does a window exist in the magnetic field versus bag constant plane for absolute stability of strange matter. Implications for stellar models of magnetized (self-bound) strange stars and hybrid (MCFL core) stars are calculated and discussed.
Abstract:
We analyze the irreversibility and the entropy production in nonequilibrium interacting particle systems described by a Fokker-Planck equation through the use of a suitable master equation representation. The irreversible character is provided either by nonconservative forces or by contact with heat baths at distinct temperatures. The expression for the entropy production is deduced from a general definition, related to the probability of a trajectory in phase space and its time reversal, that makes no a priori reference to the dissipated power. Our formalism is applied to calculate the heat conductance in a simple system consisting of two Brownian particles, each one in contact with a heat reservoir. We also show the connection between the definition of the entropy production rate and the Jarzynski equality.
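For a master-equation representation, the entropy production rate obtained from trajectory probabilities and their time reversal is commonly written in the Schnakenberg form; a small Python sketch of that formula follows. The rate matrix below is arbitrary and illustrative, not the Fokker-Planck discretization used in the paper.

    import numpy as np

    def entropy_production_rate(W, p):
        # Pi = 1/2 * sum_{i,j} (W_ij p_j - W_ji p_i) * ln[(W_ij p_j)/(W_ji p_i)],
        # where W[i, j] is the rate of the transition j -> i and p[j] a probability.
        flux = W * p[np.newaxis, :]           # forward fluxes W_ij p_j
        rev = flux.T                          # reversed fluxes W_ji p_i
        mask = (flux > 0) & (rev > 0)
        return 0.5 * np.sum((flux[mask] - rev[mask]) * np.log(flux[mask] / rev[mask]))

    W = np.array([[0.0, 2.0, 1.0],
                  [1.0, 0.0, 3.0],
                  [2.0, 1.0, 0.0]])           # off-diagonal rates only
    p = np.array([0.5, 0.3, 0.2])
    print(entropy_production_rate(W, p))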
Abstract:
We present a derivation of the Redfield formalism for treating the dissipative dynamics of a time-dependent quantum system coupled to a classical environment. We compare this formalism with the master equation approach, where the environments are treated quantum mechanically. Focusing on a time-dependent spin-1/2 system, we demonstrate the equivalence between both approaches by showing that they lead to the same Bloch equations and, as a consequence, to the same characteristic times T1 and T2 (associated with the longitudinal and transverse relaxations, respectively). These characteristic times are shown to be related to the operator-sum representation and the equivalent phenomenological-operator approach. Finally, we present a protocol to circumvent the decoherence processes due to the loss of energy (and thus associated with T1). To this end, we simply associate the time dependence of the quantum system with an easily achieved modulated frequency. A possible implementation of the protocol is also proposed in the context of nuclear magnetic resonance.
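For orientation, the Bloch equations that both approaches are shown to reproduce can be integrated directly; a minimal Python sketch is given below, with transverse decay governed by T2 and longitudinal recovery by T1. The frequency offset, relaxation times, and initial state are arbitrary illustration values, not the spin-1/2 model treated in the paper.

    import numpy as np
    from scipy.integrate import solve_ivp

    def bloch(t, M, d_omega, T1, T2, M_eq=1.0):
        # Free precession: transverse components decay with T2,
        # the longitudinal component relaxes towards M_eq with T1.
        Mx, My, Mz = M
        return [d_omega * My - Mx / T2,
                -d_omega * Mx - My / T2,
                (M_eq - Mz) / T1]

    # Evolution after a 90-degree pulse (arbitrary units).
    sol = solve_ivp(bloch, (0.0, 5.0), y0=[1.0, 0.0, 0.0],
                    args=(2.0 * np.pi, 2.0, 0.5))
    print(sol.y[:, -1])   # magnetization at t = 5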
Abstract:
In this work, a new boundary element formulation for the analysis of plate-beam interaction is presented. The formulation uses three-nodal-value boundary elements, and each beam element is replaced by its actions on the plate, i.e., a distributed load and end-of-element forces. From the solution of the differential equation of a beam with a linearly distributed load, the plate-beam interaction tractions can be written as a function of the nodal values of the beam. With this transformation, a final system of equations in the nodal displacements of the plate boundary and beam nodes is obtained, and from it all unknowns of the plate-beam system are determined. Many examples are analyzed, and the results show excellent agreement with those from the analytical solution and other numerical methods.
Abstract:
This note addresses the relation between the differential equation of motion and Darcy's law. It is shown that, under different flow conditions, three versions of Darcy's law can be rigorously derived from the equation of motion.
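For reference, the form of Darcy's law most often quoted for slow single-phase flow (a general reminder only, not necessarily one of the three versions derived in the note) reads, in LaTeX notation:

    \mathbf{q} = -\frac{k}{\mu}\left(\nabla p - \rho\,\mathbf{g}\right)

which follows from the momentum balance when inertial terms are negligible and the viscous drag exerted by the porous matrix dominates.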
Abstract:
It is well known that structures subjected to dynamic loads do not follow the usual similarity laws when the material is strain-rate sensitive. As a consequence, it is not possible to use a scaled model to predict the prototype behaviour. In the present study, this problem is overcome by changing the impact velocity so that the model behaves exactly as the prototype. This exact solution is generated thanks to the use of an exponential constitutive law to infer the dynamic flow stress. Furthermore, it is shown that the adopted procedure does not rely on any previous knowledge of the structure response. Three analytical models are used to analyze the performance of the technique. It is shown that perfect similarity is achieved, regardless of the magnitude of the scaling factor. For the class of material used, the solution outlined has long been sought, inasmuch as it allows perfect similarity for strain-rate-sensitive structures subjected to impact loads.
Abstract:
In this work, a study on the role of the long-range term of excess Gibbs energy models in the modeling of aqueous systems containing polymers and salts is presented. Four different approaches to accounting for the presence of the polymer in the long-range term were considered, and simulations were conducted for aqueous solutions of three different salts. The analysis of water activity curves showed that, in all cases, a liquid-phase separation may be introduced by the sole presence of the polymer in the long-range term, regardless of how it is taken into account. The results lead to the conclusion that there is no single exact solution for this problem, and that any kind of approach may introduce inconsistencies.
Abstract:
Pitzer's equation for the excess Gibbs energy of aqueous solutions of low-molecular-weight electrolytes is extended to aqueous solutions of polyelectrolytes. The model retains the original form of Pitzer's model (combining a long-range term, based on the Debye-Hückel equation, with a short-range term similar to the virial equation, where the second osmotic virial coefficient depends on the ionic strength). The extension consists of two parts: first, it is assumed that a constant fraction of the monomer units of the polyelectrolyte is dissociated, i.e., that fraction does not depend on the concentration of the polyelectrolyte; second, a modified expression for the ionic strength (wherein each charged monomer group is taken into account individually) is introduced. This modification accounts for the presence of charged polyelectrolyte chains, which cannot be regarded as point charges. The resulting equation was used to correlate osmotic coefficient data of aqueous solutions of a single polyelectrolyte as well as of binary mixtures of a single polyelectrolyte and a low-molecular-weight salt. It was additionally applied to correlate liquid-liquid equilibrium data of some aqueous two-phase systems that may form when a polyelectrolyte and another hydrophilic but neutral polymer are simultaneously dissolved in water. Good agreement between the experimental data and the correlation results is observed for all investigated systems.
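A minimal reading of the modified ionic strength described above (each charged monomer group counted individually, rather than the chain as a single multivalent charge) could be sketched as follows in Python. The molality, the number of charged groups per chain, and the inclusion of counterions are hypothetical illustration choices, not the parameterization of the paper.

    def ionic_strength(molalities, charges):
        # I = 1/2 * sum_i m_i * z_i**2 over all charged species.
        return 0.5 * sum(m * z ** 2 for m, z in zip(molalities, charges))

    m_chain = 0.001        # mol of polyelectrolyte chains per kg water (hypothetical)
    n_charged = 30         # dissociated monomer groups per chain (hypothetical)

    # Each charged monomer group enters as a separate singly charged species,
    # together with its counterion, instead of one charge of valence -n_charged.
    I = ionic_strength([m_chain * n_charged, m_chain * n_charged], [-1, +1])
    print(I)   # 0.03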
Abstract:
A method based on a specific power-law relationship between the hydraulic head and the Boltzmann variable was recently presented. We generalized this relationship to a range of powers and extended the solution to include the saturated zone. As a result, the new solution satisfies the Bruce and Klute equation exactly.