Abstract:
The tumor suppressor PTEN antagonizes phosphatidylinositol 3-kinase (PI3K), which contributes to tumorigenesis in many cancer types. While PTEN mutations occur in some melanomas, their precise mechanistic consequences have yet to be elucidated. We sought to identify novel downstream effectors of PI3K using a combination of genomic and functional approaches. Microarray analysis of 53 melanoma cell lines identified 610 genes differentially expressed (P<0.05) between PTEN wild-type lines and those with PTEN aberrations. Many of these genes are known to be involved in the PI3K pathway and in other signaling pathways influenced by PTEN. Differential gene expression was validated by qRT-PCR in the original 53 cell lines and in an independent set of 18 melanoma lines with known PTEN status. Osteopontin (OPN), a secreted glycophosphoprotein that contributes to tumor progression, was more abundant at both the mRNA and protein levels in PTEN mutants. The inverse correlation between OPN and PTEN expression was validated (P<0.02) by immunohistochemistry using melanoma tissue microarrays. Finally, treatment of cell lines with the PI3K inhibitor LY294002 reduced OPN expression. These data indicate that OPN acts downstream of PI3K in melanoma and provide insight into how PTEN loss contributes to melanoma development.
Abstract:
This paper investigates the use of temporal lip information, in conjunction with speech information, for robust, text-dependent speaker identification. We propose that significant speaker-dependent information can be obtained from moving lips, enabling speaker recognition systems to be highly robust in the presence of noise. The fusion structure for the audio and visual information is based on multi-stream hidden Markov models (MSHMM), with audio and visual features forming two independent data streams. Multi-modal MSHMMs have recently been applied successfully to speech recognition. Temporal lip information has been used for speaker identification before (T.J. Wark et al., 1998); however, that work was restricted to output fusion via single-stream HMMs. We present an extension to this previous work and show that the MSHMM is a valid structure for multi-modal speaker identification.
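As a rough illustration of the fusion structure described in the abstract, the sketch below combines per-stream emission log-probabilities with stream weights inside a shared state sequence and scores a test utterance with a log-space forward pass, picking the best-matching enrolled speaker. The function names, the stream weights `gamma`, and the speaker-model layout are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def logsumexp(x, axis=None):
    """Numerically stable log-sum-exp."""
    m = np.max(x, axis=axis, keepdims=True)
    out = m + np.log(np.sum(np.exp(x - m), axis=axis, keepdims=True))
    return out.squeeze(axis) if axis is not None else out.item()

def mshmm_log_emissions(logb_audio, logb_visual, gamma=(0.7, 0.3)):
    """Multi-stream HMM emission: each shared state emits the two streams
    independently, so log b_j(o_t) = g_a*log b_j(a_t) + g_v*log b_j(v_t).
    Inputs are (T, N) arrays of per-stream, per-state emission log-probs;
    the weights gamma are placeholders, not values from the paper."""
    g_audio, g_visual = gamma
    return g_audio * logb_audio + g_visual * logb_visual

def forward_loglik(log_pi, log_A, log_b):
    """Log-space forward algorithm over the shared state sequence."""
    T, _ = log_b.shape
    alpha = log_pi + log_b[0]
    for t in range(1, T):
        alpha = log_b[t] + logsumexp(alpha[:, None] + log_A, axis=0)
    return logsumexp(alpha)

def identify(speaker_models, gamma=(0.7, 0.3)):
    """Text-dependent identification: score the test utterance against each
    enrolled speaker's MSHMM and return the best-scoring speaker ID.
    Hypothetical model layout: (log_pi, log_A, logb_audio, logb_visual),
    where the emission log-probs are evaluated on the test frames."""
    scores = {
        spk: forward_loglik(log_pi, log_A,
                            mshmm_log_emissions(logb_a, logb_v, gamma))
        for spk, (log_pi, log_A, logb_a, logb_v) in speaker_models.items()
    }
    return max(scores, key=scores.get)
```

The key design point is that the weighted combination happens per state and per frame, so the two modalities share one state sequence; this differs from the earlier output-fusion approach, where separate single-stream HMMs are scored independently and only their final scores are merged.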