107 results for Retinal adaptation


Relevance:

20.00%

Publisher:

Abstract:

We have investigated whether inkjet printing technology can be extended to print cells of the adult rat central nervous system (CNS), namely retinal ganglion cells (RGC) and glia, and how printing affects the survival and growth of these cells in culture; this is an important step in the development of tissue grafts for regenerative medicine and may aid in the treatment of blindness. We observed that RGC and glia can be successfully printed using a piezoelectric printer. Whilst inkjet printing reduced the cell population due to sedimentation within the printing system, imaging of the printhead nozzle, the area where the cells experience the greatest shear stress and shear rate, confirmed that there was no evidence of destruction or even significant distortion of the cells during jet ejection and drop formation. Importantly, the viability of the cells was not affected by the printing process. When we cultured the same number of printed and non-printed RGC/glial cells, there was no significant difference in cell survival or RGC neurite outgrowth. In addition, use of a glial substrate significantly increased RGC neurite outgrowth, and this effect was retained when the cells had been printed. In conclusion, printing of RGC and glia using a piezoelectric printhead does not adversely affect the viability, survival, or growth of the cells in culture. Importantly, printed glial cells retain their growth-promoting properties when used as a substrate, opening new avenues for printed CNS grafts in regenerative medicine.

Relevance:

20.00%

Publisher:

Abstract:

Adaptation to speaker and environment changes is an essential part of current automatic speech recognition (ASR) systems. In recent years the use of multi-layer perceptrons (MLPs) has become increasingly common in ASR systems. A standard approach to handling speaker differences when using MLPs is to apply a global speaker-specific constrained MLLR (CMLLR) transform to the features prior to training or using the MLP. This paper considers the situation where there are both speaker and channel (communication-link) differences in the data. A more powerful transform, front-end CMLLR (FE-CMLLR), is applied to the inputs of the MLP to represent the channel differences. Though global, these FE-CMLLR transforms vary from time instance to time instance. Experiments on a channel-distorted dialectal Arabic conversational speech recognition task indicate the usefulness of adapting MLP features using both CMLLR and FE-CMLLR transforms. © 2013 IEEE.
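
As an illustration of the standard adaptation step described in this abstract, the sketch below applies a global speaker-specific CMLLR-style affine transform (y = Ax + b) to feature frames before they would be passed to an MLP front end. This is a minimal sketch only; the function name, dimensions, and stand-in data are assumptions for illustration and are not taken from the paper.

```python
# Minimal sketch (assumed names and shapes): a global speaker-specific
# CMLLR-style affine feature transform applied prior to an MLP front end.
import numpy as np

def apply_cmllr(features: np.ndarray, A: np.ndarray, b: np.ndarray) -> np.ndarray:
    """Apply a constrained affine feature transform y = A x + b.

    features: (T, D) array of feature frames for one speaker
    A:        (D, D) speaker-specific transform matrix
    b:        (D,)   speaker-specific bias vector
    """
    return features @ A.T + b

# Usage with random stand-in data (dimensions are illustrative only).
T, D = 300, 40                              # frames, feature dimension
feats = np.random.randn(T, D)               # e.g. filterbank or PLP features
A = np.eye(D) + 0.01 * np.random.randn(D, D)
b = 0.1 * np.random.randn(D)

adapted = apply_cmllr(feats, A, b)          # fed to the MLP instead of raw features
```

In the FE-CMLLR case discussed in the paper, the transform applied at each frame varies from time instance to time instance rather than being a single fixed (A, b) per speaker, so the same affine operation would be applied with frame-dependent parameters.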