323 results for Specificity
Abstract:
PURPOSE: The prevalence of anaplastic lymphoma kinase (ALK) gene fusion (ALK positivity) in early-stage non-small-cell lung cancer (NSCLC) varies by population examined and detection method used. The Lungscape ALK project was designed to address the prevalence and prognostic impact of ALK positivity in resected lung adenocarcinoma in a primarily European population. METHODS: Analysis of ALK status was performed by immunohistochemistry (IHC) and fluorescent in situ hybridization (FISH) in tissue sections of 1,281 patients with adenocarcinoma in the European Thoracic Oncology Platform Lungscape iBiobank. Positive patients were matched with negative patients in a 1:2 ratio, both for IHC and for FISH testing. Testing was performed in 16 participating centers, using the same protocol after passing external quality assessment. RESULTS: Positive ALK IHC staining was present in 80 patients (prevalence of 6.2%; 95% CI, 4.9% to 7.6%). Of these, 28 patients were ALK FISH positive, corresponding to a lower bound for the prevalence of FISH positivity of 2.2%. FISH specificity was 100%, and FISH sensitivity was 35.0% (95% CI, 24.7% to 46.5%), with a sensitivity value of 81.3% (95% CI, 63.6% to 92.8%) for IHC 2+/3+ patients. The hazard of death for FISH-positive patients was lower than for IHC-negative patients (P = .022). Multivariable models, adjusted for patient, tumor, and treatment characteristics, and matched cohort analysis confirmed that ALK FISH positivity is a predictor for better overall survival (OS). CONCLUSION: In this large cohort of surgically resected lung adenocarcinomas, the prevalence of ALK positivity was 6.2% using IHC and at least 2.2% using FISH. A screening strategy based on IHC or H-score could be envisaged. ALK positivity (by either IHC or FISH) was related to better OS.
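The headline proportions above can be reproduced directly from the reported counts. A minimal sketch, assuming a Wald (normal-approximation) 95% interval; the paper's exact CI method is not stated in the abstract:

```python
# Reproduce the headline proportions from the reported counts.
# Assumes a Wald (normal-approximation) 95% CI; the paper's exact
# CI method is an assumption here.
from math import sqrt

def wald_ci(k, n, z=1.96):
    """Approximate 95% confidence interval for a binomial proportion k/n."""
    p = k / n
    half = z * sqrt(p * (1 - p) / n)
    return p - half, p + half

prevalence = 80 / 1281        # IHC-positive patients -> ~6.2%
fish_sensitivity = 28 / 80    # FISH-positive among IHC-positive -> 35.0%
lo, hi = wald_ci(80, 1281)
print(f"IHC prevalence = {prevalence:.1%}, 95% CI ({lo:.1%}, {hi:.1%})")
print(f"FISH sensitivity vs IHC = {fish_sensitivity:.1%}")
```

Run as written, this recovers the abstract's 6.2% prevalence with a CI of roughly 4.9% to 7.6%.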
Abstract:
Biomolecules are chemical compounds found in living organisms; they are the building blocks of life and perform important functions. Deviation from the normal concentrations of these biomolecules in living systems leads to several disorders, so their exact determination in human fluids is clinically essential. High-performance liquid chromatography, flow injection analysis, capillary electrophoresis, fluorimetry, spectrophotometry, electrochemical and chemiluminescence techniques have commonly been used for the determination of biologically important molecules. Among these techniques, electrochemical determination of biomolecules has several advantages over the other methods, viz. simplicity, selectivity and sensitivity. In the past two decades, electrodes modified with polymer films, self-assembled monolayers containing different functional groups, and carbon paste have been used as electrochemical sensors. In recent years, however, nanomaterial-based electrochemical sensors have played an important role in improving public health because of their rapid detection, high sensitivity and specificity in clinical diagnostics. To date, gold nanoparticles (AuNPs) have received considerable attention, mainly due to their fascinating electronic and optical properties arising from their reduced dimensions. These unique properties make AuNPs ideal candidates for the immobilization of enzymes for biosensing. Further, the electrochemical properties of AuNPs show that they enhance electrode conductivity, facilitate electron transfer and improve the detection limit of biomolecules. In this chapter, we summarize the different strategies used for the attachment of AuNPs to electrode surfaces and highlight the electrochemical determination of glucose, ascorbic acid (AA), uric acid (UA) and dopamine derivatives using AuNP-modified electrodes.
Abstract:
Wi-Fi is a commonly available source of localization information in urban environments but is challenging to integrate into conventional mapping architectures. Current state-of-the-art probabilistic Wi-Fi SLAM algorithms are limited by spatial resolution and by an inability to remove accumulated rotational error, both inherent limitations of the Wi-Fi architecture. In this paper we leverage the low-quality sensory requirements and coarse metric properties of RatSLAM to localize using Wi-Fi fingerprints. To further improve performance, we present a novel sensor fusion technique that integrates camera and Wi-Fi data to improve localization specificity, and use compass sensor data to remove orientation drift. We evaluate the algorithms in diverse real-world indoor and outdoor environments, including an office floor, a university campus and a visually aliased circular building loop. The algorithms produce topologically correct maps that are superior to those produced using only a single sensor modality.
Abstract:
Background: Pediatric nutrition risk screening tools are not routinely implemented throughout many hospitals, despite prevalence studies demonstrating that malnutrition is common in hospitalized children. Existing tools lack the simplicity of those used to assess nutrition risk in the adult population. This study reports the accuracy of a new, quick, and simple pediatric nutrition screening tool (PNST) designed for pediatric inpatients. Materials and Methods: The pediatric Subjective Global Nutrition Assessment (SGNA) and anthropometric measures were used to develop and assess the validity of 4 simple nutrition screening questions comprising the PNST. Participants were pediatric inpatients in 2 tertiary pediatric hospitals and 1 regional hospital. Results: Two affirmative answers to the PNST questions were found to maximize sensitivity and specificity relative to the pediatric SGNA and body mass index (BMI) z scores for malnutrition in 295 patients. The PNST identified 37.6% of patients as being at nutrition risk, whereas the pediatric SGNA identified 34.2%. The sensitivity and specificity of the PNST compared with the pediatric SGNA were 77.8% and 82.1%, respectively. The sensitivity of the PNST at detecting patients with a BMI z score of less than -2 was 89.3%, and the specificity was 66.2%. Both the PNST and pediatric SGNA were relatively poor at detecting patients who were stunted or overweight, with sensitivity and specificity less than 69%. Conclusion: The PNST provides a sensitive, valid, and simpler alternative to existing pediatric nutrition screening tools such as the Screening Tool for the Assessment of Malnutrition in Pediatrics (STAMP), Screening Tool Risk on Nutritional status and Growth (STRONGkids), and Paediatric Yorkhill Malnutrition Score (PYMS) to ensure the early detection of hospitalized children at nutrition risk.
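The accuracy figures above follow the standard screening-test definitions. A minimal sketch with an illustrative 2x2 table; the counts below are hypothetical placeholders, not the study's actual data:

```python
# Standard sensitivity/specificity of a screening tool against a
# reference assessment (here: PNST vs pediatric SGNA).
# The 2x2 counts are illustrative placeholders, not the study's data.

def sensitivity(tp, fn):
    """Proportion of reference-positive cases the screen detects."""
    return tp / (tp + fn)

def specificity(tn, fp):
    """Proportion of reference-negative cases the screen clears."""
    return tn / (tn + fp)

# Hypothetical confusion matrix: screen vs reference (illustration only)
tp, fp, fn, tn = 78, 33, 22, 162
print(f"sensitivity = {sensitivity(tp, fn):.1%}")
print(f"specificity = {specificity(tn, fp):.1%}")
```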
Abstract:
An early molecular response to DNA double-strand breaks (DSBs) is phosphorylation of the Ser-139 residue within the terminal SQEY motif of the histone H2AX [1,2]. This phosphorylation of H2AX is mediated by the phosphatidylinositol 3-kinase (PI3K) family of proteins: ataxia telangiectasia mutated (ATM), the DNA-protein kinase catalytic subunit, and ATM and RAD3-related (ATR) [3]. The phosphorylated form of H2AX, referred to as γH2AX, spreads from the site of the DSB to adjacent regions of chromatin, forming discrete foci that are easily visualized by immunofluorescence microscopy [3]. Analysis and quantitation of γH2AX foci have been widely used to evaluate DSB formation and repair, particularly in response to ionizing radiation and for evaluating the efficacy of various radiation-modifying and cytotoxic compounds. Given the exquisite specificity and sensitivity of this de novo marker of DSBs, it has provided new insights into the processes of DNA damage and repair in the context of chromatin. For example, in radiation biology the central paradigm is that nuclear DNA is the critical target with respect to radiation sensitivity. Indeed, the general consensus in the field has largely been to view chromatin as a homogeneous template for DNA damage and repair. However, with the use of γH2AX as a molecular marker of DSBs, a disparity in γ-irradiation-induced γH2AX foci formation between euchromatin and heterochromatin has been observed [5-7]. Recently, we used a panel of antibodies against histone H3 mono-, di- or tri-methylated at lysine 9 (H3K9me1, H3K9me2, H3K9me3), which are epigenetic imprints of constitutive heterochromatin and transcriptional silencing, and at lysine 4 (H3K4me1, H3K4me2, H3K4me3), which are tightly correlated with actively transcribing euchromatic regions, to investigate the spatial distribution of γH2AX following ionizing radiation [8].
In accordance with the prevailing ideas regarding chromatin biology, our findings indicated a close correlation between γH2AX formation and active transcription [9]. Here we demonstrate our immunofluorescence method for detection and quantitation of γH2AX foci in non-adherent cells, with a particular focus on co-localization with other epigenetic markers, image analysis and 3D modeling.
Abstract:
DNA double-strand breaks (DSBs) are particularly lethal and genotoxic lesions that can arise either through endogenous (physiological or pathological) processes or through exogenous factors, particularly ionizing radiation and radiomimetic compounds. Phosphorylation of the H2A histone variant H2AX at the serine-139 residue, in the highly conserved C-terminal SQEY motif, forming γH2AX, is an early response to DNA double-strand breaks [1]. This phosphorylation event is mediated by the phosphatidylinositol 3-kinase (PI3K) family of proteins: ataxia telangiectasia mutated (ATM), the DNA-protein kinase catalytic subunit, and ATM and RAD3-related (ATR) [2]. Overall, DSB induction results in the formation of discrete nuclear γH2AX foci which can be easily detected and quantitated by immunofluorescence microscopy [2]. Given the unique specificity and sensitivity of this marker, analysis of γH2AX foci has led to a wide range of applications in biomedical research, particularly in radiation biology and nuclear medicine. The quantitation of γH2AX foci has been most widely investigated in cell culture systems in the context of ionizing radiation-induced DSBs. Apart from cellular radiosensitivity, immunofluorescence-based assays have also been used to evaluate the efficacy of radiation-modifying compounds. In addition, γH2AX has been used as a molecular marker to examine the efficacy of various DSB-inducing compounds and has recently been heralded as an important marker of ageing and disease, particularly cancer [3]. Further, immunofluorescence-based methods have been adapted for detection and quantitation of γH2AX foci ex vivo and in vivo [4,5]. Here, we demonstrate a typical immunofluorescence method for detection and quantitation of γH2AX foci in mouse tissues.
Abstract:
BACKGROUND: Postural instability is one of the major complications found in stroke survivors. Parameterising the functional reach test (FRT) could be useful in clinical practice and basic research. OBJECTIVES: To analyse the reliability, sensitivity, and specificity of FRT parameterisation using inertial sensors to record kinematic variables in patients who have suffered a stroke. DESIGN: Cross-sectional study. While performing the FRT, two inertial sensors were placed on the patient's back (lumbar and trunk). PARTICIPANTS: Five subjects over 65 who had suffered a stroke. MEASUREMENTS: FRT distance, lumbosacral/thoracic maximum angular displacement, time to maximum lumbosacral/thoracic angular displacement, time to return to the initial position, and total time. Speed and acceleration of the movements were calculated indirectly. RESULTS: The FRT measure was 12.75±2.06 cm. Intrasubject reliability values ranged from 0.829 (time to return to the initial position, lumbar sensor) to 0.891 (lumbosacral maximum angular displacement). Intersubject reliability values ranged from 0.821 (time to return to the initial position, lumbar sensor) to 0.883 (lumbosacral maximum angular displacement). The FRT's reliability was 0.987 (0.983-0.992) intersubject and 0.983 (0.979-0.989) intrasubject. CONCLUSION: The main conclusion is that inertial sensors are a tool with excellent reliability and validity for parameterisation of the FRT in people who have had a stroke.
Abstract:
AIM This paper presents a discussion on the application of a capability framework for advanced practice nursing standards/competencies. BACKGROUND There is acceptance that competencies are useful and necessary for the definition and education of practice-based professions. Competencies have been described as appropriate for practice in stable environments with familiar problems. Increasingly, competencies are being designed for use in the health sector for advanced practice, such as the nurse practitioner role. Nurse practitioners work in environments and roles that are dynamic and unpredictable, necessitating attributes and skills to practice at advanced and extended levels in both familiar and unfamiliar clinical situations. Capability has been described as the combination of skills, knowledge, values and self-esteem that enables individuals to manage change, be flexible and move beyond competency. DESIGN A discussion paper exploring 'capability' as a framework for advanced nursing practice standards. DATA SOURCES Data were sourced from electronic databases as described in the background section. IMPLICATIONS FOR NURSING As advanced practice nursing becomes more established and formalized, novel ways of teaching and assessing the practice of experienced clinicians beyond competency are imperative for the changing context of health services. CONCLUSION Leading researchers on capability in health care state that traditional education and training in health disciplines concentrates mainly on developing competence. To ensure that healthcare delivery keeps pace with increasing demand and a continuously changing context, there is a need to embrace capability as a framework for advanced practice and education.
Abstract:
Objective To examine the clinical utility of the Cornell Scale for Depression in Dementia (CSDD) in nursing homes. Setting 14 nursing homes in Sydney and Brisbane, Australia. Participants 92 residents with a mean age of 85 years. Measurements Consenting residents were assessed by care staff for depression using the CSDD as part of their routine assessment. Specialist clinicians conducted assessment of depression using the Semi-structured Clinical Diagnostic Interview for DSM-IV-TR Axis I Disorders for residents without dementia, or the Provisional Diagnostic Criteria for Depression in Alzheimer Disease for residents with dementia, to establish expert clinical diagnoses of depression. The diagnostic performance of the staff-completed CSDD was analyzed against expert diagnosis using receiver operating characteristic (ROC) curves. Results The CSDD showed low diagnostic accuracy, with areas under the ROC curve of 0.69, 0.68 and 0.70 for the total sample, residents with dementia, and residents without dementia, respectively. At the standard CSDD cutoff score, the sensitivity and specificity were 71% and 59% for the total sample, 69% and 57% for residents with dementia, and 75% and 61% for residents without dementia. The Youden index (for optimizing cut-points) suggested different depression cutoff scores for residents with and without dementia. Conclusion When administered by nursing home staff, the clinical utility of the CSDD in identifying depression is highly questionable. The complexity of the scale, the time required for collecting relevant information, and staff skills and knowledge of assessing depression in older people must be considered when using the CSDD in nursing homes.
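The Youden index used above to optimize cut-points is J = sensitivity + specificity - 1, maximized over candidate cutoffs. A minimal sketch; the (cutoff, sensitivity, specificity) triples below are illustrative placeholders, not the study's data:

```python
# Youden index J = sensitivity + specificity - 1, maximised over
# candidate cutoff scores. The triples below are illustrative
# placeholders, not the study's data.

def youden_optimal(points):
    """points: iterable of (cutoff, sensitivity, specificity) triples.

    Returns the triple with the largest Youden index J.
    """
    return max(points, key=lambda p: p[1] + p[2] - 1)

candidates = [(6, 0.85, 0.45), (8, 0.71, 0.59), (10, 0.60, 0.74)]
best = youden_optimal(candidates)
print(f"optimal cutoff = {best[0]} (J = {best[1] + best[2] - 1:.2f})")
```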
Abstract:
It is a big challenge to guarantee the quality of discovered relevance features in text documents for describing user preferences, because of large-scale terms and data patterns. Most existing popular text mining and classification methods have adopted term-based approaches; however, they all suffer from the problems of polysemy and synonymy. Over the years, the hypothesis has often been held that pattern-based methods should perform better than term-based ones in describing user preferences; yet how to effectively use large-scale patterns remains a hard problem in text mining. To make a breakthrough on this challenging issue, this paper presents an innovative model for relevance feature discovery. It discovers both positive and negative patterns in text documents as higher-level features and deploys them over low-level features (terms). It also classifies terms into categories and updates term weights based on their specificity and their distributions in patterns. Substantial experiments using this model on RCV1, TREC topics and Reuters-21578 show that the proposed model significantly outperforms both state-of-the-art term-based methods and pattern-based methods.
Abstract:
A hippocampal CA3 memory model was constructed with PGENESIS, a recently developed version of GENESIS that allows for distributed processing of a neural network simulation. A number of neural models of the human memory system have identified the CA3 region of the hippocampus as storing the declarative memory trace. However, computational models designed to assess the viability of the putative mechanisms of storage and retrieval have generally been too abstract to allow comparison with empirical data. Recent experimental evidence has shown that selective knock-out of NMDA receptors in the CA1 of mice leads to reduced stability of firing specificity in place cells. Here, a similar reduction in the stability of input specificity is demonstrated in a biologically plausible neural network model of the CA3 region, under conditions of Hebbian synaptic plasticity versus an absence of plasticity. The CA3 region is also commonly associated with seizure activity. Further simulations of the same model tested the response to continuously repeating versus randomized non-repeating input patterns. Each paradigm delivered input of equal intensity and duration. Non-repeating input patterns elicited a greater pyramidal cell spike count. This suggests that repeating versus non-repeating neocortical input has a quantitatively different effect on the hippocampus. This may be relevant to the production of independent epileptogenic zones and the process of encoding new memories.
Abstract:
High-throughput plasmid DNA (pDNA) manufacture is obstructed predominantly by the performance of conventional stationary phases. For this reason, the search for new materials for fast chromatographic separation of pDNA is ongoing. A poly(glycidyl methacrylate-co-ethylene glycol dimethacrylate) (GMA-EGDMA) monolithic material was synthesised via a thermal free-radical reaction and functionalised with different amino groups from urea, 2-chloro-N,N-diethylethylamine hydrochloride (DEAE-Cl) and ammonia in order to investigate their plasmid adsorption capacities. Physical characterisation of the monolithic polymer showed a macroporous polymer with a unimodal pore size distribution centred at 600 nm. Chromatographic characterisation of the functionalised polymers using pUC19 plasmid isolated from E. coli DH5α-pUC19 showed a maximum plasmid adsorption capacity of 18.73 mg pDNA/mL with a dissociation constant (KD) of 0.11 mg/mL for the GMA-EGDMA/DEAE-Cl polymer. Studies on ligand leaching and degradation demonstrated the stability of GMA-EGDMA/DEAE-Cl after the functionalised polymers were contacted with 1.0 M NaOH, a model reagent for most 'cleaning in place' (CIP) systems. However, it is the economic advantage of an adsorbent material that makes it attractive for commercial purification purposes. Economic evaluation of the performance of the functionalised polymers on the grounds of polymer cost (PC) per mg pDNA retained endorsed the suitability of the GMA-EGDMA/DEAE-Cl polymer.
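A maximum capacity paired with a dissociation constant is the parameterisation of a single-site Langmuir adsorption model. A minimal sketch using the reported values; the single-site Langmuir form is an assumption, as the abstract does not state the isotherm model explicitly:

```python
# Single-site Langmuir adsorption isotherm using the reported
# parameters for the GMA-EGDMA/DEAE-Cl polymer. The Langmuir form
# itself is an assumption (not stated explicitly in the abstract).

def langmuir(c, q_max=18.73, k_d=0.11):
    """Bound pDNA (mg per mL polymer) at free concentration c (mg/mL).

    q = q_max * c / (k_d + c); at c = k_d the adsorbent is half-saturated.
    """
    return q_max * c / (k_d + c)

for c in (0.05, 0.11, 0.5, 2.0):
    print(f"C = {c:.2f} mg/mL -> q = {langmuir(c):.2f} mg/mL")
```

At a free pDNA concentration equal to KD (0.11 mg/mL), the model predicts half the maximum capacity, about 9.37 mg/mL.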
Abstract:
Every university in Australia has a set of policies that guide the institution in its educational practices; however, these policies are often developed in isolation from each other. Now imagine a space where policies are evidence-based, refined annually, cohesively interrelated, and meet stakeholders' needs. Is this happenstance or the result of good planning? Culturally, Queensland University of Technology (QUT) is a risk-averse institution that takes pride in its financial solvency and is always keen to know "how are we going?" With a twenty-year history of annual reporting that assures the quality of course performance through multiple lines of evidence, QUT's Learning and Teaching Unit went one step further and strategically aligned a suite of policies that take into consideration the needs of their stakeholders, collaborate with other areas across the institution, and use multiple lines of evidence to inform curriculum decision-making. In QUT's experience, strategic planning can lead to policy that is designed to meet stakeholders' needs, not manage them; where decision-making is supported by evidence, not rhetoric; where all feedback is incorporated, not ignored; and where policies are cohesively interrelated, not isolated. While many may call this 'policy nirvana', QUT has positioned itself to demonstrate good educational practice through Reframe, its evaluation framework. In this case, best practice was achieved through the application of a theory of change and a design-led logic model that allows for transition to other institutions with different cultural specificity. The evaluation approach follows Seldin's (2003) notion of offering depth and breadth to the evaluation framework, along with Berk's (2005) concept of multiple lines of evidence.
In summary, this paper offers university executives, academics, planning and quality staff an opportunity to understand the critical steps that lead to strategic planning and design of evidence-based educational policy that positions a university for best practice in learning and teaching.
Abstract:
This study aimed to provide a detailed evaluation and comparison of a range of modulated beam evaluation metrics, in terms of their correlation with QA testing results and their variation between treatment sites, for a large number of treatments. Ten metrics, including the modulation index (MI), fluence map complexity (FMC), modulation complexity score (MCS), mean aperture displacement (MAD) and small aperture score (SAS), were evaluated for 546 beams from 122 IMRT and VMAT treatment plans targeting the anus, rectum, endometrium, brain, head and neck, and prostate. The calculated sets of metrics were evaluated in terms of their relationships to each other and their correlation with the results of electronic portal imaging based quality assurance (QA) evaluations of the treatment beams. Evaluation of the MI, MAD and SAS suggested that beams used in treatments of the anus, rectum, and head and neck were more complex than the prostate and brain treatment beams. Seven of the ten beam complexity metrics were found to be strongly correlated with the results from QA testing of the IMRT beams (p < 0.00008). For example, values of SAS (with MLC apertures narrower than 10 mm defined as "small") less than 0.2 also identified QA-passing IMRT beams with 100% specificity. However, few of the metrics were correlated with the results from QA testing of the VMAT beams, whether they were evaluated as whole 360° arcs or as 60° sub-arcs. Selective evaluation of beam complexity metrics (at least MI, MCS and SAS) is therefore recommended as an intermediate step in the IMRT QA chain. Such evaluation may also be useful as a means of periodically reviewing VMAT planning or optimiser performance.
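The SAS threshold rule above can be sketched as follows. This is a simplified reading of the metric (the fraction of open MLC apertures narrower than 10 mm); the published SAS may additionally weight apertures by control point or monitor units, and the aperture widths below are illustrative:

```python
# Simplified small aperture score (SAS): fraction of open MLC
# apertures narrower than a threshold. A SAS below 0.2 was reported
# to identify QA-passing IMRT beams with 100% specificity.
# Aperture widths below are illustrative, not measured data.

def small_aperture_score(aperture_widths_mm, threshold_mm=10.0):
    """Fraction of open apertures narrower than threshold_mm."""
    open_apertures = [w for w in aperture_widths_mm if w > 0]  # skip closed leaf pairs
    if not open_apertures:
        return 0.0
    return sum(w < threshold_mm for w in open_apertures) / len(open_apertures)

widths = [4.0, 12.0, 25.0, 8.0, 30.0, 0.0, 15.0]
sas = small_aperture_score(widths)
print(f"SAS = {sas:.2f} -> {'likely QA pass' if sas < 0.2 else 'flag for review'}")
```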
Abstract:
The trans-activator of transcription (TAT) peptide is regarded as the "gold standard" for cell-penetrating peptides, capable of traversing a mammalian membrane passively into the cytosolic space. This characteristic has been exploited through conjugation of TAT for applications such as drug delivery. However, the process by which TAT achieves membrane penetration remains ambiguous and unresolved. Mechanistic details of TAT peptide action are revealed herein using three complementary methods: quartz crystal microbalance with dissipation (QCM-D), scanning electrochemical microscopy (SECM) and atomic force microscopy (AFM). Combined, these three scales of measurement establish that membrane uptake of the TAT peptide occurs by trans-membrane insertion through a "worm-hole" pore that leads to ion permeability across the membrane layer. AFM data provided nanometre-scale visualisation of TAT punctuation of a mammalian-mimetic membrane bilayer. The TAT peptide does not show the same specificity towards a bacterial mimetic membrane; QCM-D and SECM showed that the TAT peptide has a disruptive action towards these membranes. This investigation supports the energy-independent uptake of the cationic TAT peptide and provides empirical data that clarify the mechanism by which the TAT peptide achieves its membrane activity. The novel use of these three biophysical techniques provides valuable insight into the mechanism of TAT peptide translocation, which is essential for improving the cellular delivery of TAT-conjugated cargoes, including therapeutic agents required to target specific intracellular locations.