921 results for weak signals


Relevance: 60.00%

Abstract:

The aim of this master's thesis was to examine how an organization can exploit the analysis of weak signals, and foresight, as part of strategic decision-making. The study also sought to build an understanding of the diverse field of definitions surrounding its key concepts, and to connect the analysis of weak signals with foresight and strategic decision-making. The research strategy was a case study with qualitative data and analysis. The qualitative data were collected in the target organization through semi-structured thematic interviews and analyzed by thematic coding. The results show that the analysis of weak signals is one method of foresight. Analyzing weak signals and practising foresight can support strategic decision-making and even improve the quality of decisions by taking possible future changes in the operating environment into account. The results also indicate that weak-signal analysis and foresight should be carried out systematically and deliberately in the organization.

Relevance: 60.00%

Abstract:

The objective of this study was to increase understanding of the link between the identification of required HR competences and the alignment of competence management with business strategy in a global Finnish company employing over 8,000 people and about 100 HR professionals. The question was approached by analyzing data collected in focus group interviews using a grounded theory method, while in parallel reviewing the literature on strategic human resource management, competence-based strategic management, strategy and foresight. The literature on competence management in different contexts lacks in-depth discussion of the foresight process, and individuals are often forgotten in strategic frameworks. Yet corporate foresight helps in detecting emerging opportunities for innovation and in implementing strategy. The empirical findings indicate a lack of strategic leadership and of alignment between HR and business. Accordingly, the most important HR competence areas identified were increasing business understanding and enabling change. As a result, the study provides a holistic model for competence foresight, which positions HR professionals as strategic change agents, organizational futurists at the heart of the company, who facilitate competence foresight and competence development at both the individual and organizational levels. The outcome is an agile organization with increased business understanding, sensitive sensors and adaptive actions to enable change.

Relevance: 60.00%

Abstract:

This bachelor's thesis examines the identification of opportunities and trends and the utilisation of those factors as part of a company's strategic planning.

Relevance: 60.00%

Abstract:

Most of Finland's rental housing companies were founded in the 1960s and 1970s. At that time there was a clear picture of the need for housing and of the required locations, sizes and levels of amenities. The forms of financing directed at housing production were seen as successful and workable. Changes in the operating environment are considerably faster today than when the company studied here was founded, more than forty years ago. A rental housing company must be able to prepare for unexpected changes and strive to identify the elements that are changing. This master's thesis seeks to establish what alternative images of the future exist. The work is grounded in the literature of futures research and its basic premises. The empirical part is based on documents recording the company's history, its present, and the decisions made for its future. The environment of a rental housing company is full of identifiable features, networks and structural holes. One must be able to pick out from the environment the factors on which the various scenario and foresight methods are built. Foresight is not prediction. Applying scenario and foresight methods is an open process of research questions and starting points that are refined as the process advances. A rental housing company must know how to exploit these methods as part of its strategy planning in order to cope, in the future too, with its changing operating environment.

Relevance: 60.00%

Abstract:

This study explores the decadal potential predictability of the Atlantic Meridional Overturning Circulation (AMOC) as represented in the IPSL-CM5A-LR model, along with the predictability of associated oceanic and atmospheric fields. Using a 1000-year control run, we analyze the prognostic potential predictability (PPP) of the AMOC through ensembles of simulations with perturbed initial conditions. Based on a measure of the ensemble spread, the modelled AMOC has an average predictive skill of 8 years, with some degree of dependence on the AMOC initial state. Diagnostic potential predictability of surface temperature and precipitation is also identified in the control run and compared to the PPP. Both approaches clearly bring out the same regions exhibiting the highest predictive skill. Generally, surface temperature has the highest skill, up to two decades, in the far North Atlantic Ocean. There are also weak signals over a few oceanic areas in the tropics and subtropics. Predictability over land is restricted to the coastal areas bordering oceanic predictable regions. Potential predictability at interannual and longer timescales is largely absent for precipitation, apart from weak signals identified mainly in the Nordic Seas. Regions of weak signals show some dependence on the AMOC initial state. All the identified regions are closely linked to decadal AMOC fluctuations, suggesting that the potential predictability of climate arises from the mechanisms controlling these fluctuations. The evident dependence on the AMOC initial state also suggests that studying skill from case studies may prove more useful for understanding predictability mechanisms than computing average skill from numerous start dates.
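A common way to quantify this kind of prognostic potential predictability is to compare the spread of an initialized ensemble with the climatological variance of the control run, declaring predictability lost once the ratio approaches one. The sketch below is a minimal illustration of that diagnostic, not the IPSL-CM5A-LR analysis itself; the array shapes, the toy spread-growth timescale and the 0.9 saturation threshold are assumptions for illustration.

```python
import numpy as np

def potential_predictability_horizon(ensemble, control, threshold=0.9):
    """Estimate a predictability horizon from ensemble spread.

    ensemble: array (n_members, n_years) of an AMOC index from
              perturbed-initial-condition runs.
    control:  array (n_years_control,) of the same index from a
              long control run (defines climatological variance).
    Returns the first lead year at which the ensemble variance
    exceeds `threshold` times the control variance, i.e. the year
    the ensemble becomes indistinguishable from climatology.
    """
    spread = ensemble.var(axis=0, ddof=1)   # variance across members per lead year
    clim_var = control.var(ddof=1)          # climatological variance
    ratio = spread / clim_var               # ~0: predictable, ~1: saturated
    saturated = np.nonzero(ratio > threshold)[0]
    return saturated[0] if saturated.size else ensemble.shape[1]

# Toy example: a 10-member ensemble whose spread grows toward climatology.
rng = np.random.default_rng(0)
control = rng.normal(0.0, 1.0, 1000)
years = np.arange(30)
growth = 1.0 - np.exp(-years / 8.0)         # spread saturates on a ~8-year scale
ensemble = growth * rng.normal(0.0, 1.0, (10, 30))
print(potential_predictability_horizon(ensemble, control))  # lead year where spread saturates
```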

Relevance: 60.00%

Abstract:

Conselho Nacional de Desenvolvimento Científico e Tecnológico (CNPq)

Relevance: 60.00%

Abstract:

Platelets are known to contain platelet factor 4 and beta-thromboglobulin, alpha-chemokines containing the CXC motif, but recent studies extended the range to the beta-family characterized by the CC motif, including RANTES and Gro-alpha. There is also evidence for expression of chemokine receptors CCR4 and CXCR4 in platelets. This study shows that platelets have functional CCR1, CCR3, CCR4, and CXCR4 chemokine receptors. Polymerase chain reaction detected chemokine receptor messenger RNA in platelet RNA. CCR1, CCR3, and especially CCR4 gave strong signals; CXCR1 and CXCR4 were weakly positive. Flow cytometry with specific antibodies showed the presence of a clear signal for CXCR4 and weak signals for CCR1 and CCR3, whereas CXCR1, CXCR2, CXCR3, and CCR5 were all negative. Immunoprecipitation and Western blotting with polyclonal antibodies to cytoplasmic peptides clearly showed the presence of CCR1 and CCR4 in platelets in amounts comparable to monocytes and CCR4 transfected cells, respectively. Chemokines specific for these receptors, including monocyte chemotactic protein 1, macrophage inflammatory peptide 1alpha, eotaxin, RANTES, TARC, macrophage-derived chemokine, and stromal cell-derived factor 1, activate platelets to give Ca(++) signals, aggregation, and release of granule contents. Platelet aggregation was dependent on release of adenosine diphosphate (ADP) and its interaction with platelet ADP receptors. Part, but not all, of the Ca(++) signal was due to ADP release feeding back to its receptors. Platelet activation also involved heparan or chondroitin sulfate associated with the platelet surface and was inhibited by cleavage of these glycosaminoglycans or by heparin or low molecular weight heparin. These platelet receptors may be involved in inflammatory or allergic responses or in platelet activation in human immunodeficiency virus infection.

Relevance: 60.00%

Abstract:

This paper investigates a novel method which allows clutter elimination in deep optoacoustic imaging. Clutter significantly limits imaging depth in clinical optoacoustic imaging, when irradiation optics and ultrasound detector are integrated in a handheld probe for flexible imaging of the human body. Strong optoacoustic transients generated at the irradiation site obscure weak signals from deep inside the tissue, either directly by propagating towards the probe, or via acoustic scattering. In this study we demonstrate that signals of interest can be distinguished from clutter by tagging them at the place of origin with localised tissue vibration induced by the acoustic radiation force in a focused ultrasonic beam. We show phantom results where this technique allowed almost full clutter elimination and thus strongly improved contrast for deep imaging. Localised vibration tagging by means of acoustic radiation force is especially promising for integration into ultrasound systems that already have implemented radiation force elastography.
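The tagging idea can be reduced to a differential measurement: echoes whose source was displaced by the radiation force change between two acquisitions, while clutter generated elsewhere does not, so a frame difference retains only the tagged region. The sketch below illustrates that principle on a synthetic 1-D RF line; it is a schematic under assumed pulse shapes and amplitudes, not the authors' reconstruction pipeline.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 2048                                  # samples per RF line
t = np.arange(n)

def pulse(center, amp, width=20):
    """Modulated Gaussian echo centred at `center` samples."""
    return amp * np.exp(-0.5 * ((t - center) / width) ** 2) * np.cos(0.3 * (t - center))

# Frame 1 (push off): strong clutter near the probe + weak deep signal.
clutter = pulse(120, 5.0)
deep_signal = pulse(1500, 0.05)
frame_off = clutter + deep_signal + 0.005 * rng.standard_normal(n)

# Frame 2 (push on): radiation force displaces only the focal region,
# shifting the deep echo by a few samples; the clutter is unchanged.
frame_on = clutter + pulse(1500 + 3, 0.05) + 0.005 * rng.standard_normal(n)

# Differential line: clutter cancels, the tagged signal survives.
diff = frame_on - frame_off
print("clutter/signal before:", np.abs(frame_off[:400]).max() / np.abs(frame_off[1400:1600]).max())
print("clutter/signal after: ", np.abs(diff[:400]).max() / np.abs(diff[1400:1600]).max())
```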

Relevance: 60.00%

Abstract:

This thesis addresses the problem of optimizing the surface of large reflector antennas. Large reflectors, which are made of a panelled surface, suffer deformations due to the impact of wind, temperature gradients and the gravity loads arising from the great weight of the structure. These effects distort the ideal reflector shape, in most cases a paraboloid, reducing the aperture efficiency and limiting the maximum usable frequency. Techniques are therefore needed to measure the state of a large reflector's surface and to derive the adjustments to apply to the screws that connect the surface panels to the reflector back-up structure. In this way the reflector recovers its optimum shape, and both the aperture efficiency and the frequency range increase. An increase in a radiotelescope's aperture efficiency implies a reduction in the integration time needed to detect the extremely weak signals coming from natural radio sources, saving valuable observing time, while an extended frequency range allows the detection of new molecular lines or species in those sources.

After an introductory first chapter, the second chapter presents the geometry of large reflectors and the influence of the factors that affect their surface quality, such as gravity, wind and temperature, with particular attention to the 40-metre radiotelescope of the Centro Astronómico de Yebes. The third chapter reviews the metrology techniques currently used to determine the panel adjustments, with the advantages and drawbacks of each. The most accurate and fastest technique for characterizing the surface of a large reflector is currently microwave radio-holography, presented in chapter four. From measurements made with this technique with the help of a transmitter, and by means of field transformations, the errors of the reflector surface with respect to the ideal paraboloid are computed and the necessary panel adjustments derived. Chapters five and six present the results of applying the technique to two first-class radiotelescopes: the IRAM 30-metre on Pico de Veleta (Granada) and the 12-metre prototype antennas of the ALMA project.

Chapter seven contains the core of this thesis: the development of microwave radio-holography to optimize the surface of the Yebes 40-metre radiotelescope. This required designing, building and installing a dual-channel Ku-band receiver at the prime focus, together with the instrumentation needed to measure the amplitude and phase of the radiation pattern, and developing the software to carry out the field transformations and derive the panel adjustments. The initial holographic measurements yielded a surface error of 485 μm WRMS in the normal direction with respect to the best-fit paraboloid. After several iterations of the measurement and adjustment cycle, this error was reduced to 194 μm WRMS. This remarkable improvement in surface quality raised the aperture efficiency at 86 GHz from 2.6% to 38.2%, for a receiver at that frequency placed at the prime focus and producing the same illumination as the holography receiver.
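As a rough cross-check (an assumption-laden back-of-the-envelope, not a calculation from the thesis), the quoted efficiencies are consistent with the Ruze relation between RMS surface error ε and aperture efficiency, taking λ ≈ 3488 μm at 86 GHz:

```latex
\eta(\epsilon) = \eta_0 \, \exp\!\left[-\left(\frac{4\pi\epsilon}{\lambda}\right)^{2}\right]
\quad\Longrightarrow\quad
\frac{\eta(194\,\mu\mathrm{m})}{\eta(485\,\mu\mathrm{m})}
= \exp\!\left[\left(\frac{4\pi}{\lambda}\right)^{2}\left(485^{2}-194^{2}\right)\mu\mathrm{m}^{2}\right]
\approx e^{2.56} \approx 13
```

This is of the same order as the measured gain of 38.2/2.6 ≈ 14.7; the residual difference can plausibly be absorbed by η₀ and the illumination details.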

Relevance: 60.00%

Abstract:

To perform quantum key distribution, the extremely weak signals carried by the quantum channel must be mastered. Transporting these signals without disturbance is customarily done by isolating the quantum channel from any noise source on a dedicated physical channel. However, to really profit from this technology, full integration with conventional network technologies would be highly desirable. Using single-photon signals alongside others that carry an average power many orders of magnitude larger, while sharing as much infrastructure as possible with a conventional network, brings obvious problems. The purpose of the present paper is to report our efforts in researching the limits of the integration of QKD in modern optical network scenarios. We have built a full metropolitan area network testbed comprising a backbone and an access network. The emphasis is on using, as far as possible, the same industrial-grade technology found in already installed networks, in order to understand the throughput, limits and cost of deploying QKD in a real network.
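The scale of the coexistence problem can be seen with simple arithmetic: a classical channel launched at 0 dBm carries roughly eight orders of magnitude more power than a typical quantum channel. The snippet below works out that disparity; the pulse rate, launch power and mean photon number are illustrative assumptions, not parameters from the testbed.

```python
import math

h = 6.626e-34            # Planck constant, J*s
c = 3.0e8                # speed of light, m/s
wavelength = 1550e-9     # telecom wavelength, m

mu = 0.1                 # mean photons per pulse (typical weak-pulse value, assumed)
pulse_rate = 1e9         # pulses per second (assumed)

photon_energy = h * c / wavelength                 # ~1.28e-19 J per photon
quantum_power = mu * pulse_rate * photon_energy    # average optical power, W
quantum_dbm = 10 * math.log10(quantum_power / 1e-3)

classical_dbm = 0.0      # typical classical launch power (assumed)
print(f"quantum channel: {quantum_dbm:.1f} dBm")               # about -79 dBm
print(f"power disparity: {classical_dbm - quantum_dbm:.0f} dB")
```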

Relevance: 60.00%

Abstract:

Of the rules used by the splicing machinery to precisely determine intron–exon boundaries, only a fraction is known. Recent evidence suggests that specific short sequences within exons help define these boundaries. Such sequences are known as exonic splicing enhancers (ESE). A possible bioinformatic approach to studying ESE sequences is to compare genes that harbor introns with genes that do not. For this purpose two non-redundant samples of 719 intron-containing and 63 intron-lacking human genes were created. We performed a statistical analysis on these datasets of intron-containing and intron-lacking human coding sequences and found a statistically significant difference (P = 0.01) between the samples in terms of 5–6mer oligonucleotide distributions. The difference is not created by a few strong signals present in the majority of exons, but rather by the accumulation of multiple weak signals through small variations in codon frequencies, codon biases and context-dependent codon biases between the samples. A list of putative novel human splicing regulatory sequences was derived from this analysis.
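The comparison described here boils down to counting k-mer frequencies in the two gene sets and testing whether the distributions differ. A minimal sketch of that idea follows; the toy sequences, k = 6 and the chi-square-style distance are illustrative assumptions rather than the authors' exact procedure.

```python
from collections import Counter

def kmer_freqs(seqs, k=6):
    """Normalized k-mer frequencies pooled over a set of coding sequences."""
    counts = Counter()
    for s in seqs:
        for i in range(len(s) - k + 1):
            counts[s[i:i + k]] += 1
    total = sum(counts.values())
    return {kmer: n / total for kmer, n in counts.items()}

def chi_square_distance(f1, f2):
    """Symmetric chi-square distance between two k-mer frequency tables."""
    keys = set(f1) | set(f2)
    return sum((f1.get(k, 0.0) - f2.get(k, 0.0)) ** 2 /
               (f1.get(k, 0.0) + f2.get(k, 0.0) + 1e-12) for k in keys)

# Toy stand-ins for the intron-containing and intron-lacking samples.
intron_containing = ["ATGGAAGAAGCTGCAGAAGATT", "ATGGCTGAAGAAGAGGATGCTT"]
intron_lacking = ["ATGCCGCGGCCCGGGCCGCGTT", "ATGCGGCCGCCCGCGGGCCGAA"]

d = chi_square_distance(kmer_freqs(intron_containing), kmer_freqs(intron_lacking))
print(f"chi-square distance between samples: {d:.3f}")
# In the study the significance of such a difference would be assessed
# against a null distribution, e.g. by permuting gene labels.
```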

Relevance: 60.00%

Abstract:

Anyone who looks at the title of this special issue will agree that the intent behind the preparation of this volume was ambitious: to predict and discuss "The Future of Manufacturing". Will manufacturing be important in the future? Even though some sceptics might say not, and put some old familiar arguments on the table, we would strongly disagree. To bring evidence to the argument we issued the call for papers for this special issue of Journal of Manufacturing Technology Management, fully aware of the size of the challenge in our hands, but strongly believing that the enterprise would be worthwhile.

The point of departure is the ongoing debate concerning the meaning and content of manufacturing. The easily visualised internal activity of using tangible resources to make physical products in factories is no longer a viable way to characterise manufacturing. It is now a more loosely defined concept concerning the organisation and management of open, interdependent systems for delivering goods and services, tangible and intangible, to diverse types of markets. Interestingly, Wickham Skinner is the most cited author in this special issue of JMTM. He provides the point of departure for several articles because his vision and insights have guided and inspired researchers in production and operations management from the late 1960s until today. However, the picture that we draw after looking at the contributions in this special issue is intrinsically distinct, much more dynamic, and complex. Seven articles address the following research themes:

1. new patterns of organisation, where the boundaries of firms become blurred and the role of the firm in the production system, as well as that of manufacturing within the firm, becomes contingent;
2. new approaches to strategic decision-making in markets characterised by turbulence and weak signals at the customer interface;
3. new challenges in strategic and operational decisions due to changes in the profile of the workforce;
4. new global players, especially China, modifying the manufacturing landscape; and
5. new techniques, methods and tools that are being made feasible through progress in new technological domains.

Of course, many other important dimensions could be studied, but these themes are representative of current changes and future challenges.

Three articles look at the first theme: the organisational evolution of production and operations in firms and networks. Karlsson and Skold's article represents one further step in their efforts to characterise "the extraprise". They advance the construction of a new framework, based on "the network perspective", by defining the formal elements which compose it and exploring the meaning of different types of relationships. The way in which "actors, resources and activities" are conceptualised extends the existing boundaries of analytical thinking in operations management and opens new avenues for research, teaching and practice. The higher level of abstraction, an intrinsic feature of the framework, is associated with the increasing degree of complexity that characterises decisions related to strategy and implementation in the manufacturing and operations area, a feature that is expected to become more and more pervasive as time proceeds. Riis, Johansen, Englyst and Sorensen have also based their article on their previous work, in this case on "the interactive firm". They advance new propositions on the strategic roles of manufacturing and discuss why the configuration of strategic manufacturing roles, at the level of the network, will become a key issue and how the indirect strategic roles of manufacturing will become increasingly important. Additionally, by considering that value chains will become value webs, they predict that shifts in strategic manufacturing roles will look like a sequence of moves similar to a game of chess. Finally under the first theme, Fleury and Fleury develop a conceptual framework for the study of production systems in general, derived from field research in the telecommunications industry, here considered a prototype of the coming information society and knowledge economy. They propose a new typology of firms which, on certain dimensions, complements the propositions found in the other two articles. Their telecoms-based framework (TbF) comprises six types of companies characterised by distinct profiles of organisational competences, which interact according to specific patterns of relationships, thus creating distinct configurations of production networks.

The second theme is addressed by Kyläheiko and Sandström in their article "Strategic options based framework for management of dynamic capabilities in manufacturing firms". They propose a new approach to strategic decision-making in markets characterised by turbulence and weak signals at the customer interface. Their framework for a manufacturing firm in the digital age leads to active asset selection (strategic investments in both tangible and intangible assets) and efficient orchestration of the global value net in "thin" intangible asset markets. The framework consists of five steps based on Porter's five-forces model and the resource-based view, complemented by the concepts of strategic options and related flexibility issues.

Thun, Grössler and Miczka's contribution to the third theme brings the human dimension into the debate on the future of manufacturing. Their article focuses on the challenges that the ageing of the workforce in Germany brings to management, but the arguments raised make visible and relevant the future challenges associated with workers and work organisation in every production system. An interesting point in the authors' approach is that not only the factual problems and solutions are taken into account; the perception of the managers is also brought into the picture.

China cannot be absent from a discussion of the future of manufacturing. Within the fourth theme, Vaidya, Bennett and Liu provide evidence of the gradual improvement of Chinese companies in the medium- and high-tech sectors, using revealed comparative advantage (RCA) analysis. The Chinese evolution is shown to be based on capabilities developed by combining international technology transfer with indigenous learning. The main implication for Western companies is the need to take account of the accelerated rhythm of capability development in China; for other developing countries, China's case provides lessons of great importance.

Finally, under the fifth theme, Kuehnle's article "Post mass production paradigm (PMPP) trajectories" provides a futuristic scenario of what is already around us and might become prevalent in the future. It takes an intensive look at a whole set of dimensions that are affecting manufacturing now and will influence it in the future, ranging from the application of ICT to the need for social transparency.

In summary, this special issue of JMTM presents a brief but indisputable demonstration of the possible richness of manufacturing in the future. Indeed, we could even say that manufacturing has no future if we stick only to past perspectives. Embracing the new is not easy. The new configurations of production systems, the distributed and complementary roles to be performed by distinct types of companies in diversified networked structures, leveraged by newly emerging technologies and associated with new challenges for managing people, are all themes that are carriers of the future. The Guest Editors of this special issue on the future of manufacturing are strongly convinced that their undertaking has been worthwhile.

Relevance: 60.00%

Abstract:

Visual detection performance (d') is usually an accelerating function of stimulus contrast, which could imply a smooth, threshold-like nonlinearity in the sensory response. Alternatively, Pelli (1985, Journal of the Optical Society of America A, 2, 1508-1532) developed the 'uncertainty model', in which responses were linear with contrast but the observer was uncertain about which of many noisy channels contained the signal. Such internal uncertainty effectively adds noise to weak signals and predicts the nonlinear psychometric function. We re-examined these ideas by plotting psychometric functions (as z-scores) for two observers (SAW, PRM) with high precision. The task was to detect a single, vertical, blurred line at the fixation point, or to identify its polarity (light vs dark). Detection of a known polarity was nearly linear for SAW but very nonlinear for PRM. Randomly interleaving light and dark trials reduced performance and rendered it nonlinear for SAW, but had little effect for PRM. This occurred for both single-interval and 2AFC procedures. The whole pattern of results was well predicted by our Monte Carlo simulation of Pelli's model, with only two free parameters. SAW (highly practised) had very low uncertainty. PRM (with little prior practice) had much greater uncertainty, resulting in lower contrast sensitivity, nonlinear performance, and no effect of external (polarity) uncertainty. For SAW, identification was about √2 better than detection, implying statistically independent channels for stimuli of opposite polarity rather than an opponent (light vs dark) channel. These findings strongly suggest that noise and uncertainty, rather than sensory nonlinearity, limit visual detection.
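Pelli's uncertainty model lends itself to a compact Monte Carlo: the observer monitors M noisy channels, only one of which carries the linear contrast response, and answers from the maximum. The sketch below reproduces the qualitative prediction that large M makes the psychometric function accelerate at low contrast; the values of M and the contrast range are arbitrary illustration values, not parameters fitted in the study.

```python
import numpy as np

rng = np.random.default_rng(2)

def percent_correct_2afc(contrast, m_channels, n_trials=20000):
    """2AFC Monte Carlo for the uncertainty model.

    Each interval is summarized by the max over m_channels unit-variance
    Gaussian channels; in the signal interval one channel's mean is
    raised linearly by the contrast. Correct = signal interval wins.
    """
    signal = rng.standard_normal((n_trials, m_channels))
    signal[:, 0] += contrast                  # linear transduction, one channel
    blank = rng.standard_normal((n_trials, m_channels))
    return np.mean(signal.max(axis=1) > blank.max(axis=1))

for m in (1, 100):                            # low vs high uncertainty
    pcs = [percent_correct_2afc(c, m) for c in (0.5, 1.0, 2.0)]
    print(f"M={m:4d}:", [f"{pc:.2f}" for pc in pcs])
# With M=1 performance grows roughly linearly with contrast (in d');
# with M=100 weak signals are swamped by the max over irrelevant
# channels, producing the accelerating psychometric function.
```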

Relevance: 60.00%

Abstract:

The transmission of weak signals through the visual system is limited by internal noise. Its level can be estimated by adding external noise, which increases the variance within the detecting mechanism, causing masking. But experiments with white noise fail to meet three predictions: (a) noise has too small an influence on the slope of the psychometric function, (b) masking occurs even when the noise sample is identical in each two-alternative forced-choice (2AFC) interval, and (c) double-pass consistency is too low. We show that much of the energy of 2D white noise masks extends well beyond the pass-band of plausible detecting mechanisms and that this suppresses signal activity. These problems are avoided by restricting the external noise energy to the target mechanisms by introducing a pedestal with a mean contrast of 0% and independent contrast jitter in each 2AFC interval (termed zero-dimensional [0D] noise). We compared the jitter condition to masking from 2D white noise in double-pass masking and (novel) contrast matching experiments. Zero-dimensional noise produced the strongest masking, greatest double-pass consistency, and no suppression of perceived contrast, consistent with a noisy ideal observer. Deviations from this behavior for 2D white noise were explained by cross-channel suppression with no need to appeal to induced internal noise or uncertainty. We conclude that (a) results from previous experiments using white pixel noise should be re-evaluated and (b) 0D noise provides a cleaner method for investigating internal variability than pixel noise. Ironically then, the best external noise stimulus does not look noisy.
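The 0D-noise manipulation is easy to state concretely: instead of a pixel-noise field, each 2AFC interval gets a pedestal whose contrast is drawn afresh from a zero-mean distribution, so all of the external variance lands inside the detecting mechanism. A minimal sketch of such a trial generator follows; the jitter standard deviation and target contrast are illustrative values, not those used in the experiments.

```python
import numpy as np

rng = np.random.default_rng(3)

def zero_d_noise_trial(target_contrast, jitter_sd, internal_sd=1.0):
    """One 2AFC trial with zero-dimensional (0D) contrast-jitter noise.

    Both intervals contain a pedestal whose contrast is drawn
    independently from N(0, jitter_sd); the target interval adds
    target_contrast on top. The observer picks the interval with
    the larger noisy mechanism response.
    """
    pedestal_a = rng.normal(0.0, jitter_sd)   # target-interval pedestal
    pedestal_b = rng.normal(0.0, jitter_sd)   # non-target pedestal
    resp_a = target_contrast + pedestal_a + rng.normal(0.0, internal_sd)
    resp_b = pedestal_b + rng.normal(0.0, internal_sd)
    return resp_a > resp_b

n = 20000
for jitter in (0.0, 2.0):
    pc = np.mean([zero_d_noise_trial(1.5, jitter) for _ in range(n)])
    print(f"jitter sd {jitter}: proportion correct = {pc:.2f}")
# Raising the jitter masks the target exactly as internal noise would,
# without injecting energy outside the target mechanism's pass-band.
```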

Relevance: 60.00%

Abstract:

We have investigated information transmission in an array of threshold units that have signal-dependent noise and a common input signal. We demonstrate a phenomenon similar to stochastic resonance and suprathreshold stochastic resonance with additive noise, and show that information transmission can be enhanced by a nonzero level of noise. By comparing system performance with that of an additive-noise system, we also demonstrate that the information transmission of weak signals is significantly better with signal-dependent noise. Indeed, information rates are not compromised even for arbitrarily small input signals. Furthermore, by an appropriate selection of parameters, we observe that the information can be made (almost) independent of the level of the noise, thus providing a robust method of transmitting information in the presence of noise. These results could imply that the ability of hair cells to code and transmit sensory information in biological sensory systems is not limited by the level of signal-dependent noise. © 2007 The American Physical Society.
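The suprathreshold stochastic resonance setup referred to here can be simulated directly: N identical threshold units receive the same signal plus independent noise, the output is the count of units that fire, and the mutual information between input and count is estimated from histograms. The sketch below does this for additive noise as a baseline (making the noise standard deviation scale with |signal| would give the signal-dependent variant); all parameters are illustrative.

```python
import numpy as np

rng = np.random.default_rng(4)

def mutual_information_bits(x, y, bins=32):
    """Histogram estimate of I(X;Y) in bits."""
    joint, _, _ = np.histogram2d(x, y, bins=bins)
    pxy = joint / joint.sum()
    px = pxy.sum(axis=1, keepdims=True)
    py = pxy.sum(axis=0, keepdims=True)
    nz = pxy > 0
    return float(np.sum(pxy[nz] * np.log2(pxy[nz] / (px @ py)[nz])))

def ssr_output(signal, noise_sd, n_units=63, threshold=0.0):
    """Count of threshold units firing; each unit sees signal + its own noise."""
    noise = rng.normal(0.0, noise_sd, (signal.size, n_units))
    return np.sum(signal[:, None] + noise > threshold, axis=1)

signal = rng.normal(0.0, 1.0, 50000)          # common Gaussian input
for sd in (0.01, 0.5, 3.0):
    y = ssr_output(signal, sd)
    print(f"noise sd {sd}: I ~ {mutual_information_bits(signal, y):.2f} bits")
# Information peaks at an intermediate noise level: with almost no noise
# all units fire identically (at most 1 bit); with too much noise the
# signal is swamped. That non-monotonicity is the SSR signature.
```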