980 results for Sturm, Julia
Abstract:
In contrast to extensive studies on familial breast cancer, it is currently unclear whether defects in DNA double strand break (DSB) repair genes play a role in sporadic breast cancer development and progression. We performed immunohistochemical analysis in an independent cohort of 235 sporadic breast tumours. This analysis suggested that RAD51 expression is increased during breast cancer progression and metastasis, and that deregulated RAD51 plays an oncogenic role. Subsequent knockdown of RAD51 repressed cancer cell migration in vitro and reduced primary tumour growth in a syngeneic mouse model in vivo. Loss of RAD51 also inhibited associated metastasis, not only in syngeneic mice but also in human xenografts, and changed the metastatic gene expression profile of cancer cells, consistent with inhibition of distant metastasis. This demonstrates for the first time a new function of RAD51 that may underlie the proclivity of patients with RAD51 overexpression to develop distant metastasis. RAD51 is a potential biomarker and attractive drug target for metastatic triple negative breast cancer, with the potential to extend the survival of these patients, which is currently less than 6 months.
Abstract:
While genomics provides important information about somatic genetic changes, and RNA transcript profiling can reveal important expression changes that correlate with outcome and response to therapy, it is the proteins that do the work in the cell. At a functional level, derangements within the proteome, driven by post-translational and epigenetic modifications such as phosphorylation, are the cause of the vast majority of human diseases. Cancer, for instance, is a manifestation of deranged cellular protein molecular networks and cell signaling pathways that are based on genetic changes at the DNA level. Importantly, the protein pathways contain the drug targets in signaling networks that govern overall cellular survival, proliferation, invasion and cell death. Consequently, the promise of proteomics resides in the ability to extend analysis beyond correlation to causality. A critical gap in the knowledge base of molecular profiling is an understanding of the ongoing activity of protein signaling in human tissue: what is activated and “in use” within the human body at any given point in time. To address this gap, we have invented a new technology, called reverse phase protein microarrays, that can generate a functional read-out of cell signaling networks or pathways for an individual patient directly from a biopsy specimen. This “wiring diagram” can serve as the basis for both selection of a therapy and patient stratification.
Abstract:
Cancer can be defined as a deregulation or hyperactivity in the ongoing network of intracellular and extracellular signaling events. Reverse phase protein microarray technology may offer a new opportunity to measure and profile these signaling pathways, providing data on post-translational phosphorylation events not obtainable by gene microarray analysis. Treatment of ovarian epithelial carcinoma almost always takes place in a metastatic setting since, unfortunately, the disease is often not detected until later stages. Thus, in addition to elucidation of the molecular network within a tumor specimen, critical questions are to what extent signaling changes occur upon metastasis and whether common pathway elements arise in the metastatic microenvironment. For individualized combinatorial therapy, ideal therapeutic selection based on proteomic mapping of phosphorylation end points may require evaluation of the patient's metastatic tissue. Extending these findings to the bedside will require the development of optimized protocols and reference standards. We have developed a reference standard based on a mixture of phosphorylated peptides to begin to address this challenge.
Abstract:
The ongoing challenge for ED leaders is to remain abreast of system-wide changes that impact on the day-to-day management of their departments. Changes to the funding model create another layer of complexity, and this introductory paper serves as the beginning of a discussion about the way in which EDs are funded and how this can and will impact on business decisions, models of care and resource allocation within Australian EDs. Furthermore, it is evident that any funding model today will mature and change with time, and moves are afoot to refine and contextualise ED funding over the medium term. This perspective seeks to provide a basis of understanding for our current and future funding arrangements in Australian EDs.
Abstract:
Brain decoding of functional Magnetic Resonance Imaging data is a pattern analysis task that links brain activity patterns to the experimental conditions. Classifiers predict the neural states from the spatial and temporal pattern of brain activity extracted from multiple voxels in the functional images in a certain period of time. The prediction results offer insight into the nature of neural representations and cognitive mechanisms, and the classification accuracy determines our confidence in understanding the relationship between brain activity and stimuli. In this paper, we compared the efficacy of three machine learning algorithms: neural networks, support vector machines, and conditional random fields, to decode the visual stimuli or neural cognitive states from functional Magnetic Resonance Imaging data. Leave-one-out cross validation was performed to quantify the generalization accuracy of each algorithm on unseen data. The results indicated that support vector machines and conditional random fields have comparable performance, and that the potential of the latter is worthy of further investigation.
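The evaluation scheme described in this abstract can be sketched as follows. This is a minimal illustration of leave-one-out cross-validation with a linear support vector machine on synthetic data, not the authors' actual pipeline; the trial counts, voxel dimensions, and injected signal are invented for the example.

```python
# Leave-one-out cross-validation of a linear SVM on synthetic "voxel" data.
# Each fold trains on all trials but one and tests on the held-out trial.
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import LeaveOneOut, cross_val_score

rng = np.random.default_rng(0)
n_trials, n_voxels = 40, 50                # hypothetical dimensions
X = rng.normal(size=(n_trials, n_voxels))  # one feature vector per trial
y = np.repeat([0, 1], n_trials // 2)       # two stimulus conditions
X[y == 1] += 0.5                           # inject a separable signal

scores = cross_val_score(SVC(kernel="linear"), X, y, cv=LeaveOneOut())
print(f"LOO accuracy: {scores.mean():.2f}")
```

With n trials, leave-one-out produces n folds, so the reported accuracy is the proportion of held-out trials classified correctly.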
Abstract:
It is well established that the time to name target objects can be influenced by the presence of categorically related versus unrelated distractor items. A variety of paradigms have been developed to determine the level at which this semantic interference effect occurs in the speech production system. In this study, we investigated one of these tasks, the postcue naming paradigm, for the first time with fMRI. Previous behavioural studies using this paradigm have produced conflicting interpretations of the processing level at which the semantic interference effect takes place, ranging from pre- to post-lexical. Here we used fMRI with a sparse, event-related design to adjudicate between these competing explanations. We replicated the behavioural postcue naming effect for categorically related target/distractor pairs, and observed a corresponding increase in neuronal activation in the right lingual and fusiform gyri, regions previously associated with visual object processing and colour-form integration. We interpret these findings as being consistent with an account that places the semantic interference effect in the postcue paradigm at a processing level involving integration of object attributes in short-term memory.
Abstract:
In this paper we provide normative data along multiple cognitive and affective variable dimensions for a set of 110 sounds, including living and manmade stimuli. Environmental sounds are being increasingly utilized as stimuli in the cognitive, neuropsychological and neuroimaging fields, yet there is no comprehensive set of normative information for these types of stimuli available for use across these experimental domains. Experiment 1 collected data from 162 participants in an on-line questionnaire, which included measures of identification and categorization as well as cognitive and affective variables. A subsequent experiment collected response times to these sounds. Sounds were normalized to the same length (1 second) in order to maximize usage across multiple paradigms and experimental fields. These sounds can be freely downloaded for use, and all response data have also been made available so that researchers can choose one or many of the cognitive and affective dimensions along which they would like to control their stimuli. Our hope is that the availability of such information will assist researchers in the fields of cognitive and clinical psychology and the neuroimaging community in choosing well-controlled environmental sound stimuli, and allow comparison across multiple studies.
Abstract:
Previous behavioral studies reported a robust effect of increased naming latencies when objects to be named were blocked within a semantic category, compared to items blocked between categories. This semantic context effect has been attributed to various mechanisms, including inhibition or excitation of lexico-semantic representations and incremental learning of associations between semantic features and names, and is hypothesized to increase demands on verbal self-monitoring during speech production. Objects within categories also share many visual structural features, introducing a potential confound when interpreting the level at which the context effect might occur. Consistent with previous findings, we report a significant increase in response latencies when naming categorically related objects within blocks, an effect associated with increased perfusion fMRI signal bilaterally in the hippocampus and in the left middle to posterior superior temporal cortex. No perfusion changes were observed in the middle section of the left middle temporal cortex, a region associated with retrieval of lexical-semantic information in previous object naming studies. Although a manipulation of visual feature similarity did not influence naming latencies, we observed perfusion increases in the perirhinal cortex for naming objects with similar visual features that interacted with the semantic context in which objects were named. These results provide support for the view that the semantic context effect in object naming occurs due to an incremental learning mechanism, and involves increased demands on verbal self-monitoring.
Abstract:
Semantic knowledge is supported by a widely distributed neuronal network, with differential patterns of activation depending upon experimental stimulus or task demands. Despite a wide body of knowledge on semantic object processing from the visual modality, the response of this semantic network to environmental sounds remains relatively unknown. Here, we used fMRI to investigate how access to different conceptual attributes from environmental sound input modulates this semantic network. Using a range of living and manmade sounds, we scanned participants whilst they carried out an object attribute verification task. Specifically, we tested visual perceptual, encyclopedic, and categorical attributes about living and manmade objects relative to a high-level auditory perceptual baseline to investigate the differential patterns of response to these contrasting types of object-related attributes, whilst keeping stimulus input constant across conditions. Within the bilateral distributed network engaged for processing environmental sounds across all conditions, we report here a highly significant dissociation within the left hemisphere between the processing of visual perceptual and encyclopedic attributes of objects.
Abstract:
This paper investigates how neuronal activation for naming photographs of objects is influenced by the addition of appropriate colour or sound. Behaviourally, both colour and sound are known to facilitate object recognition from visual form. However, previous functional imaging studies have shown inconsistent effects. For example, the addition of appropriate colour has been shown to reduce antero-medial temporal activation whereas the addition of sound has been shown to increase posterior superior temporal activation. Here we compared the effect of adding colour or sound cues in the same experiment. We found that the addition of either the appropriate colour or sound increased activation for naming photographs of objects in bilateral occipital regions and the right anterior fusiform. Moreover, the addition of colour reduced left antero-medial temporal activation but this effect was not observed for the addition of object sound. We propose that activation in bilateral occipital and right fusiform areas precedes the integration of visual form with either its colour or associated sound. In contrast, left antero-medial temporal activation is reduced because object recognition is facilitated after colour and form have been integrated.
Abstract:
In this study we investigate previous claims that a region in the left posterior superior temporal sulcus (pSTS) is more activated by audiovisual than unimodal processing. First, we compare audiovisual to visual-visual and auditory-auditory conceptual matching using auditory or visual object names that are paired with pictures of objects or their environmental sounds. Second, we compare congruent and incongruent audiovisual trials when presentation is simultaneous or sequential. Third, we compare audiovisual stimuli that are either verbal (auditory and visual words) or nonverbal (pictures of objects and their associated sounds). The results demonstrate that, when task, attention, and stimuli are controlled, pSTS activation for audiovisual conceptual matching is 1) identical to that observed for intramodal conceptual matching, 2) greater for incongruent than congruent trials when auditory and visual stimuli are simultaneously presented, and 3) identical for verbal and nonverbal stimuli. These results are not consistent with previous claims that pSTS activation reflects the active formation of an integrated audiovisual representation. After a discussion of the stimulus and task factors that modulate activation, we conclude that, when stimulus input, task, and attention are controlled, pSTS is part of a distributed set of regions involved in conceptual matching, irrespective of whether the stimuli are audiovisual, auditory-auditory or visual-visual.
Abstract:
This fMRI study investigates how audiovisual integration differs for verbal stimuli that can be matched at a phonological level and nonverbal stimuli that can be matched at a semantic level. Subjects were presented simultaneously with one visual and one auditory stimulus and were instructed to decide whether these stimuli referred to the same object or not. Verbal stimuli were simultaneously presented spoken and written object names, and nonverbal stimuli were photographs of objects simultaneously presented with naturally occurring object sounds. Stimulus differences were controlled by including two further conditions that paired photographs of objects with spoken words and object sounds with written words. Verbal matching, relative to all other conditions, increased activation in a region of the left superior temporal sulcus that has previously been associated with phonological processing. Nonverbal matching, relative to all other conditions, increased activation in a right fusiform region that has previously been associated with structural and conceptual object processing. Thus, we demonstrate how brain activation for audiovisual integration depends on the verbal content of the stimuli, even when stimulus and task processing differences are controlled.
Abstract:
Neuropsychological tests requiring patients to find a path through a maze can be used to assess visuospatial memory performance in temporal lobe pathology, particularly in the hippocampus. Alternatively, such tests have been used as tasks sensitive to executive function in patients with frontal lobe damage. We measured performance on the Austin Maze in patients with unilateral left and right temporal lobe epilepsy (TLE), with and without hippocampal sclerosis, compared to healthy controls. Performance was correlated with a number of other neuropsychological tests to identify the cognitive components that may be associated with poor Austin Maze performance. Patients with right TLE were significantly impaired on the Austin Maze task relative to patients with left TLE and controls, and error scores correlated with their performance on the Block Design task. The performance of patients with left TLE was also impaired relative to controls; however, errors correlated with performance on tests of executive function and delayed recall. The presence of hippocampal sclerosis did not have an impact on maze performance. A discriminant function analysis indicated that the Austin Maze alone correctly classified 73.5% of patients as having right TLE. In summary, impaired performance on the Austin Maze task is more suggestive of right than left TLE; however, impaired performance on this visuospatial task does not necessarily involve the hippocampus. The relationship of the Austin Maze task with other neuropsychological tests suggests that differential cognitive components may underlie performance decrements in right versus left TLE.
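A discriminant function analysis of the kind reported in this abstract can be sketched as follows. This is a minimal illustration using linear discriminant analysis on synthetic scores; the group sizes, score distributions, and the resulting accuracy are invented for the example and do not reproduce the study's 73.5% figure.

```python
# Linear discriminant analysis classifying patients into left vs. right TLE
# from a single (hypothetical) Austin Maze error score.
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

rng = np.random.default_rng(1)
# Hypothetical error scores: right-TLE patients make more errors on average.
errors_left = rng.normal(loc=20, scale=5, size=30)
errors_right = rng.normal(loc=35, scale=5, size=30)
X = np.concatenate([errors_left, errors_right]).reshape(-1, 1)
y = np.array([0] * 30 + [1] * 30)  # 0 = left TLE, 1 = right TLE

lda = LinearDiscriminantAnalysis().fit(X, y)
accuracy = lda.score(X, y)  # proportion of patients correctly classified
print(f"Classification accuracy: {accuracy:.1%}")
```

The reported classification rate in such an analysis is simply the proportion of patients assigned to the correct group by the fitted discriminant function.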
Abstract:
To identify and categorize complex stimuli such as familiar objects or speech, the human brain integrates information that is abstracted at multiple levels from its sensory inputs. Using cross-modal priming for spoken words and sounds, this functional magnetic resonance imaging study identified 3 distinct classes of visuoauditory incongruency effects: visuoauditory incongruency effects were selective for 1) spoken words in the left superior temporal sulcus (STS), 2) environmental sounds in the left angular gyrus (AG), and 3) both words and sounds in the lateral and medial prefrontal cortices (IFS/mPFC). From a cognitive perspective, these incongruency effects suggest that prior visual information influences the neural processes underlying speech and sound recognition at multiple levels, with the STS being involved in phonological, AG in semantic, and mPFC/IFS in higher conceptual processing. In terms of neural mechanisms, effective connectivity analyses (dynamic causal modeling) suggest that these incongruency effects may emerge via greater bottom-up effects from early auditory regions to intermediate multisensory integration areas (i.e., STS and AG). This is consistent with a predictive coding perspective on hierarchical Bayesian inference in the cortex where the domain of the prediction error (phonological vs. semantic) determines its regional expression (middle temporal gyrus/STS vs. AG/intraparietal sulcus).
Abstract:
Previous studies have found that the lateral posterior fusiform gyri respond more robustly to pictures of animals than pictures of manmade objects and suggested that these regions encode the visual properties characteristic of animals. We suggest that such effects actually reflect processing demands arising when items with similar representations must be finely discriminated. In a positron emission tomography (PET) study of category verification with colored photographs of animals and vehicles, there was robust animal-specific activation in the lateral posterior fusiform gyri when stimuli were categorized at an intermediate level of specificity (e.g., dog or car). However, when the same photographs were categorized at a more specific level (e.g., Labrador or BMW), these regions responded equally strongly to animals and vehicles. We conclude that the lateral posterior fusiform does not encode domain-specific representations of animals or visual properties characteristic of animals. Instead, these regions are strongly activated whenever an item must be discriminated from many close visual or semantic competitors. Apparent category effects arise because, at an intermediate level of specificity, animals have more visual and semantic competitors than do artifacts.