995 results for non-interceptive diagnostics
Abstract:
An important field of application of lasers is biomedical optics, where they offer great utility for diagnosis, therapy and surgery. The development of novel methods of laser-based biomedical diagnostics requires careful study of light propagation in biological tissues, in order to deepen our understanding of the optical measurements undertaken and to increase both research and development capacity and the diagnostic reliability of optical technologies. Fulfilling these requirements will ultimately increase the clinical uptake of laser-based diagnostics and therapeutics. To address these challenges, informative biomarkers relevant to the biological and physiological function or disease state of the organism must be selected. Such indicators result from the analysis of tissues and cells, such as blood. For non-invasive diagnostics, peripheral blood, cells and tissue can potentially provide comprehensive information on the condition of the human organism. A detailed study of light scattering and absorption characteristics can quickly detect physiological and morphological changes in cells caused by thermal, chemical or antibiotic treatments, among others [1-5]. The choice of a laser source to study the structure of biological particles also benefits from the fact that no gross pathological changes are induced, and diagnostics make effective use of the monochromaticity, directionality and coherence of laser radiation.
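As a minimal illustration of the kind of attenuation measurement such optical diagnostics rest on, the Beer-Lambert law gives the ballistic (unscattered) transmitted fraction of light; the coefficients below are round illustrative values chosen for the sketch, not measured tissue data.

```python
import math

def transmitted_fraction(mu_a, mu_s, path_cm):
    """Ballistic fraction I/I0 after a path through a medium with
    absorption coefficient mu_a and scattering coefficient mu_s [1/cm]."""
    return math.exp(-(mu_a + mu_s) * path_cm)

# Illustrative coefficients: scattering dominates absorption, as is
# typical for soft tissue in the visible/near-infrared range.
frac = transmitted_fraction(mu_a=0.1, mu_s=10.0, path_cm=0.1)
```

In practice mu_a and mu_s are wavelength dependent, which is what makes laser monochromaticity valuable for isolating specific chromophores.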
Abstract:
Background MicroRNAs (miRNAs) are known to play an important role in cancer development by post-transcriptionally affecting the expression of critical genes. The aims of this study were two-fold: (i) to develop a robust method to isolate miRNAs from small volumes of saliva and (ii) to develop a panel of saliva-based diagnostic biomarkers for the detection of head and neck squamous cell carcinoma (HNSCC). Methods Five differentially expressed miRNAs were selected from miScript™ miRNA microarray data generated using saliva from five HNSCC patients and five healthy controls. Their differential expression was subsequently confirmed by RT-qPCR using saliva samples from healthy controls (n = 56) and HNSCC patients (n = 56). These samples were divided into two different cohorts, i.e., a first confirmatory cohort (n = 21) and a second independent validation cohort (n = 35), to narrow down the miRNA diagnostic panel to three miRNAs: miR-9, miR-134 and miR-191. This diagnostic panel was independently validated using HNSCC miRNA expression data from The Cancer Genome Atlas (TCGA), encompassing 334 tumours and 39 adjacent normal tissues. Receiver operating characteristic (ROC) curve analysis was performed to assess the diagnostic capacity of the panel. Results On average, 60 ng/μL of miRNA was isolated from 200 μL of saliva. Overall, a good correlation was observed between the microarray data and the RT-qPCR data. We found that miR-9 (P < 0.0001), miR-134 (P < 0.0001) and miR-191 (P < 0.001) were differentially expressed between saliva from HNSCC patients and healthy controls, and that these miRNAs provided a good discriminative capacity with area under the curve (AUC) values of 0.85 (P < 0.0001), 0.74 (P < 0.001) and 0.98 (P < 0.0001), respectively. In addition, we found that the salivary miRNA data showed a good correlation with the TCGA miRNA data, thereby providing an independent validation.
Conclusions We show that we have developed a reliable method to isolate miRNAs from small volumes of saliva, and that the saliva-derived miRNAs miR-9, miR-134 and miR-191 may serve as novel biomarkers to reliably detect HNSCC. © 2014 International Society for Cellular Oncology.
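The ROC/AUC analysis used above is equivalent to the Mann-Whitney U statistic: the AUC is the probability that a randomly chosen patient sample scores higher than a randomly chosen control. The sketch below shows the computation on invented relative-expression values, not the study's data.

```python
def roc_auc(scores_pos, scores_neg):
    """AUC as the probability that a randomly chosen positive sample
    scores higher than a randomly chosen negative one (ties count 1/2)."""
    wins = 0.0
    for p in scores_pos:
        for n in scores_neg:
            if p > n:
                wins += 1.0
            elif p == n:
                wins += 0.5
    return wins / (len(scores_pos) * len(scores_neg))

# Invented relative-expression values (higher = more expressed)
patients = [5.1, 4.5, 6.0, 5.5, 4.9]
controls = [3.2, 4.0, 2.9, 3.8, 4.7]
auc = roc_auc(patients, controls)   # 24 of 25 pairs rank correctly -> 0.96
```

An AUC of 1.0 means perfect separation of the two groups, while 0.5 is no better than chance.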
Abstract:
This thesis comprehensively describes the development of a non-interceptive beam-intensity measurement method for the HIRFL beam diagnostic system, based on cylindrical capacitive pick-up phase probes that measure bunch phase width and central phase, together with the hardware and software architecture of the system. The thesis reviews the current state of beam diagnostic technology at accelerators in China and abroad and at HIRFL, and explains the significance of this work. It briefly surveys several common beam-intensity measurement methods; describes in detail the principle by which the phase probe picks up the induced beam signal, together with simulations of the induced signals for several bunch shapes, and how beam intensity and other information are extracted from the induced signal; and introduces the measurement instruments, their measurement principles and their performance. It also presents the design of the system software, including the software modules of the measurement and control system and the application of virtual-instrument technology. Using the GPIB interface bus, measurement and control software was written for the Windows operating system, enabling analysis of the experimental results of several programmable measurement methods, and the author outlines ideas for further work on the HIRFL beam diagnostic system. The project achieved satisfactory results, and the system has been put into operation for HIRFL beam tuning. Compared with the existing Faraday-cup beam-intensity measurement system at HIRFL, the new system is convenient to operate, can measure and control without blocking the beam, and acquires parameters online.
Abstract:
With the continuing development of accelerator beam diagnostics, non-interceptive beam measurement methods and weak-beam diagnostic techniques have been widely adopted in the accelerator field. To keep pace with the international research frontier in accelerator technology, support the HIRFL upgrade, and meet the higher demands that the CSR mega-science project places on HIRFL beam quality and tuning efficiency, this thesis presents a study of weak-beam measurement methods and the development of a novel non-interceptive beam position and profile monitor. The thesis briefly surveys the state of beam diagnostic technology at accelerators in China and abroad and the application of non-interceptive weak-beam measurement techniques, and reviews the residual-gas-based beam diagnostic devices developed internationally, laying the theoretical foundation for this work. It then focuses on the working principle of the residual-gas beam profile monitor: a uniform electric field formed by parallel-plate electrodes collects the positive residual-gas ions produced by collisions between the beam and residual gas molecules, and a position-sensitive detector consisting of microchannel plates and a continuous resistive anode amplifies and reads out these ions, finally yielding the beam position, profile and other parameters. The design of the residual-gas beam profile measurement system is described in detail, covering the mechanical assembly and the signal acquisition system: the mechanical part comprises the vacuum chamber, parallel-plate high-voltage electrodes and the position-sensitive detector, while the acquisition system comprises a charge-sensitive preamplifier, main amplifier, summing amplifier, position-sensitive analyser, computer-based multichannel analysis system and coincidence circuit. Finally, offline and online test results are presented and analysed. Based on residual-gas ionisation theory and guided by the principle of non-interceptive beam detection, this thesis analyses the feasibility of measuring beam position and profile from residual gas, reports the first residual-gas beam profile monitor developed in the Chinese accelerator field, and applies it to heavy-ion accelerator experiments. Preliminary results show that the system design is sound and that its measurement sensitivity, position resolution and linearity essentially meet the design requirements. This work is of lasting significance both for extending non-interceptive diagnostics at HIRFL and for the development of the HIRFL-CSR beam diagnostic system.
Abstract:
Idealised convection-permitting simulations are used to quantify the impact of embedded convection on the precipitation generated by moist flow over midlatitude mountain ridges. A broad range of mountain dimensions and moist stabilities are considered to encompass a spectrum of physically plausible flows. The simulations reveal that convection only enhances orographic precipitation in cap clouds that are otherwise unable to efficiently convert cloud condensate into precipitate. For tall and wide mountains (e.g. the Washington Cascades or the southern Andes), precipitate forms efficiently through vapour deposition and collection, even in the absence of embedded convection. When embedded convection develops in such clouds, it produces competing effects (enhanced condensation in updraughts and enhanced evaporation through turbulent mixing and compensating subsidence) that cancel to yield little net change in precipitation. By contrast, convection strongly enhances precipitation over short and narrow mountains (e.g. the UK Pennines or the Oregon Coastal Range) where precipitation formation is otherwise highly inefficient. Although cancellation between increased condensation and evaporation still occurs, the enhanced precipitation formation within the convective updraughts leads to a net increase in precipitation efficiency. The simulations are physically interpreted through non-dimensional diagnostics and relevant time-scales that govern advective, microphysical, and convective processes.
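The time-scale argument above can be illustrated numerically. The sketch below compares an advective time-scale (mountain half-width divided by cross-barrier wind) with an assumed microphysical conversion time-scale; all numbers are round illustrative values, not the paper's actual non-dimensional diagnostics.

```python
def advective_timescale(half_width_m, wind_ms):
    """Time for air to cross the windward slope: half-width / cross-barrier wind [s]."""
    return half_width_m / wind_ms

tau_mp = 1000.0   # assumed condensate-to-precipitate conversion time [s]

tau_wide = advective_timescale(40e3, 10.0)    # broad ridge (Cascades-like scale)
tau_narrow = advective_timescale(5e3, 10.0)   # narrow ridge (Pennines-like scale)

# tau_adv >> tau_mp: condensate has time to precipitate without convection;
# tau_adv < tau_mp: condensate is advected past the crest before converting,
# so the faster conversion inside convective updraughts matters most.
efficient_without_convection = tau_wide > tau_mp
needs_convection = tau_narrow < tau_mp
```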
Abstract:
An experimental and numerical study of turbulent fire suppression is presented. For this work, a novel and canonical facility has been developed, featuring a buoyant, turbulent, methane or propane-fueled diffusion flame suppressed via either nitrogen dilution of the oxidizer or application of a fine water mist. Flames are stabilized on a slot burner surrounded by a co-flowing oxidizer, which allows controlled delivery of either suppressant to achieve a range of conditions from complete combustion through partial and total flame quenching. A minimal supply of pure oxygen is optionally applied along the burner to provide a strengthened flame base that resists liftoff extinction and permits the study of substantially weakened turbulent flames. The carefully designed facility features well-characterized inlet and boundary conditions that are especially amenable to numerical simulation. Non-intrusive diagnostics provide detailed measurements of suppression behavior, yielding insight into the governing suppression processes, and aiding the development and validation of advanced suppression models. Diagnostics include oxidizer composition analysis to determine suppression potential, flame imaging to quantify visible flame structure, luminous and radiative emissions measurements to assess sooting propensity and heat losses, and species-based calorimetry to evaluate global heat release and combustion efficiency. The studied flames experience notable suppression effects, including transition in color from bright yellow to dim blue, expansion in flame height and structural intermittency, and reduction in radiative heat emissions. Still, measurements indicate that the combustion efficiency remains close to unity, and only near the extinction limit do the flames experience an abrupt transition from nearly complete combustion to total extinguishment. 
Measurements are compared with large eddy simulation results obtained using the Fire Dynamics Simulator, an open-source computational fluid dynamics software package. Comparisons of experimental and simulated results are used to evaluate the performance of available models in predicting fire suppression. Simulations in the present configuration highlight the issue of spurious reignition that is permitted by the classical eddy-dissipation concept for modeling turbulent combustion. To address this issue, simple treatments to prevent spurious reignition are developed and implemented. Simulations incorporating these treatments are shown to produce excellent agreement with the experimentally measured data, including the global combustion efficiency.
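As a hedged sketch of what species-based calorimetry supports, combustion efficiency can be estimated from a carbon balance on the exhaust: the fraction of fuel carbon fully oxidised to CO2. The mole fractions below are invented examples, not measured data from this facility.

```python
def combustion_efficiency_carbon(x_co2, x_co, x_uhc):
    """Fraction of exhaust carbon fully oxidised to CO2; CO and unburned
    hydrocarbons (counted per carbon atom) represent incomplete combustion."""
    return x_co2 / (x_co2 + x_co + x_uhc)

# Invented exhaust mole fractions for a mildly suppressed flame
eta = combustion_efficiency_carbon(x_co2=0.0400, x_co=0.0005, x_uhc=0.0001)
```

A value of eta near unity, as reported above, means almost all fuel carbon ends up as CO2 even in visibly weakened flames.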
Abstract:
Diagnostics is based on the characterisation of the condition of a mechanical system and allows early detection of possible faults. Signal processing is widely used in diagnostics, since it directly characterises the state of the system. Several types of advanced signal processing techniques have been proposed in recent decades, adding to more conventional ones; seldom, however, are these techniques able to handle non-stationary operation. Diagnostics of roller bearings is no exception. In this paper, a new vibration signal processing tool, able to perform roller bearing diagnostics under any working condition and noise level, is developed on the basis of two data-adaptive techniques, Empirical Mode Decomposition (EMD) and Minimum Entropy Deconvolution (MED), coupled by means of the Hilbert transform. The effectiveness of the new signal processing tool is demonstrated on experimental data measured on a test rig employing high-power, industrial-size components.
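The Hilbert-transform step at the core of such a tool can be sketched as follows; this is a minimal illustration on a synthetic amplitude-modulated signal, with the EMD and MED stages omitted. It shows how the analytic signal recovers the envelope of a modulated carrier (a resonance excited at a bearing fault rate).

```python
import numpy as np

def analytic_signal(x):
    """Analytic signal via the frequency-domain Hilbert transform:
    zero the negative frequencies and double the positive ones."""
    N = len(x)
    X = np.fft.fft(x)
    h = np.zeros(N)
    h[0] = 1.0
    if N % 2 == 0:
        h[N // 2] = 1.0
        h[1:N // 2] = 2.0
    else:
        h[1:(N + 1) // 2] = 2.0
    return np.fft.ifft(X * h)

fs = 1000.0
t = np.arange(0.0, 1.0, 1.0 / fs)
envelope_true = 1.0 + 0.5 * np.cos(2 * np.pi * 5.0 * t)   # 5 Hz modulation (fault rate)
x = envelope_true * np.sin(2 * np.pi * 100.0 * t)          # 100 Hz carrier (resonance)
envelope = np.abs(analytic_signal(x))                       # recovers envelope_true
```

The magnitude of the analytic signal strips the carrier and leaves the modulation, which is the quantity interrogated in bearing diagnostics.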
Abstract:
The study of artificial intelligence aims to solve a class of problems that require cognitive processes which are difficult to encode in an algorithm. Visual recognition of shapes and figures, the interpretation of sounds, and games of incomplete information all rely on the human ability to interpret partial inputs as if they were complete, and to act accordingly. In the first chapter of this thesis, a simple mathematical formalism is constructed to describe the act of making choices. The "learning" process is described in terms of the maximisation of a performance function over a parameter space, for an ansatz of a function from a vector space to a finite, discrete set of choices, via a training set that provides examples of correct choices to reproduce. In the light of this formalism, some of the most widespread artificial intelligence techniques are analysed, and some problems arising from their use are highlighted. In the second chapter, the same formalism is applied to a less intuitive but more practical redefinition of the performance function which, for a linear ansatz, allows the explicit formulation of a set of equations in the components of the parameter-space vector that locates the absolute maximum of the performance function. The solution of this set of equations is treated by means of the contraction-mapping theorem. A natural polynomial generalisation is also presented. In the third chapter, some examples to which the results of the second chapter can be applied are studied in more detail, and the concept of the intrinsic degree of a problem is introduced.
Some performance optimisations are also discussed, such as the elimination of zeros, analytic precomputation, fingerprinting, and the reordering of components for the partial evaluation of high-dimensional scalar products. Finally, single-choice problems are introduced, namely the class of problems for which a training set is available for only one choice. In the fourth chapter, an application to medical imaging diagnostics is discussed in more detail, in particular the problem of computer-aided detection of microcalcifications in mammograms.
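The setting the first chapter formalises (a parametrised map from a vector space to a finite set of choices, tuned on a training set of correct choices) can be sketched as follows. The simple perceptron-style update used here is purely illustrative and is not the contraction-based fixed-point method the thesis develops.

```python
def choose(w, x):
    """Linear ansatz: return the index of the choice whose weight
    vector gives the highest score for input x."""
    scores = [sum(wi * xi for wi, xi in zip(wc, x)) for wc in w]
    return scores.index(max(scores))

def train(samples, n_choices, dim, epochs=20, lr=0.1):
    """Tune the parameters so the map reproduces the training-set choices."""
    w = [[0.0] * dim for _ in range(n_choices)]
    for _ in range(epochs):
        for x, correct in samples:
            guess = choose(w, x)
            if guess != correct:
                # Reward the correct choice, penalise the wrong one
                for i in range(dim):
                    w[correct][i] += lr * x[i]
                    w[guess][i] -= lr * x[i]
    return w

# Invented, linearly separable training set: choice 0 near (1, 0), choice 1 near (0, 1)
samples = [([1.0, 0.1], 0), ([0.9, 0.2], 0), ([0.1, 1.0], 1), ([0.2, 0.8], 1)]
w = train(samples, n_choices=2, dim=2)
```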
Abstract:
The frequency of PRRSV corresponding to live vaccines and wild-type virus was determined in 902 pigs from north-western Germany submitted for post-mortem examination. Overall, 18.5% of the samples were positive for the EU wild-type virus. The EU genotype vaccine virus was detected in 1.3% and the NA genotype vaccine virus in 8.9% of all samples. Detection of the EU vaccine virus was significantly higher in pigs vaccinated with the corresponding vaccine (OR = 9.4). Pigs vaccinated with the NA genotype vaccine had significantly higher odds of detection of the corresponding vaccine virus than non-vaccinated animals (OR = 3.34); however, the NA vaccine virus was also frequently detected in non-vaccinated pigs. In conclusion, the dynamics of the NA genotype vaccine and EU wild-type virus correspond with studies on PRRSV spread in endemically infected herds. The potential for spontaneous spread of the NA genotype vaccine should be considered in the planning of eradication programmes.
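The odds ratios quoted above come from a standard 2x2 table calculation, sketched below. The counts are invented purely to illustrate the arithmetic; they are not the study's data.

```python
def odds_ratio(a, b, c, d):
    """Odds ratio for a 2x2 table: (a/b) / (c/d), where
    a, b = virus detected / not detected in vaccinated pigs and
    c, d = virus detected / not detected in non-vaccinated pigs."""
    return (a / b) / (c / d)

# Invented counts chosen only to illustrate the calculation
or_value = odds_ratio(30, 170, 10, 190)   # = (30/170)/(10/190)
```

An odds ratio above 1 indicates higher detection odds in the vaccinated group; confidence intervals (not shown) determine significance.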
Abstract:
Cyclostationary models for the diagnostic signals measured on faulty rotating machinery have proved successful in many laboratory tests and industrial applications. The squared envelope spectrum has been identified as the most efficient indicator for the assessment of second-order cyclostationary symptoms of damage, which are typical, for instance, of rolling element bearing faults. In an attempt to foster the spread of rotating machinery diagnostics, the current trend in the field is towards higher levels of automation of condition monitoring systems. For this purpose, statistical tests for the presence of cyclostationarity have been proposed in recent years. The statistical thresholds proposed in the past for the identification of cyclostationary components were obtained under the hypothesis that the signal is white noise when the component is healthy. This assumption, at odds with the non-white nature of real signals, implies the need to pre-whiten the signal or to filter it in optimal narrow bands, increasing the complexity of the algorithm and the risk of losing diagnostic information or biasing the result. In this paper, the authors introduce an original analytical derivation of statistical tests for cyclostationarity in the squared envelope spectrum, dropping the hypothesis of white noise from the outset. The effect of first-order and second-order cyclostationary components on the distribution of the squared envelope spectrum is quantified, and the effectiveness of the newly proposed threshold is verified, providing a sound theoretical basis and a practical starting point for efficient automated diagnostics of machine components such as rolling element bearings. The analytical results are verified by means of numerical simulations and experimental vibration data from rolling element bearings.
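The indicator under discussion, the squared envelope spectrum, can be sketched on a synthetic second-order cyclostationary signal: broadband noise whose amplitude is modulated at a cyclic (fault) frequency. The fault frequency and modulation depth below are arbitrary illustrative choices, and the statistical thresholds the paper derives are not reproduced here.

```python
import numpy as np

def analytic_signal(x):
    """Analytic signal via the frequency-domain Hilbert transform."""
    N = len(x)
    X = np.fft.fft(x)
    h = np.zeros(N)
    h[0] = 1.0
    if N % 2 == 0:
        h[N // 2] = 1.0
        h[1:N // 2] = 2.0
    else:
        h[1:(N + 1) // 2] = 2.0
    return np.fft.ifft(X * h)

fs, T = 2000.0, 2.0
t = np.arange(0.0, T, 1.0 / fs)
rng = np.random.default_rng(0)
fault = 8.0   # assumed cyclic (fault) frequency [Hz]
# Second-order cyclostationary toy signal: noise amplitude-modulated at the
# fault frequency, with no first-order (deterministic periodic) component
x = (1.0 + 0.8 * np.cos(2 * np.pi * fault * t)) * rng.standard_normal(t.size)

env2 = np.abs(analytic_signal(x)) ** 2          # squared envelope
ses = np.abs(np.fft.rfft(env2 - env2.mean()))   # squared envelope spectrum (DC removed)
freqs = np.fft.rfftfreq(t.size, 1.0 / fs)
peak_hz = freqs[np.argmax(ses)]                  # dominant cyclic frequency
```

The hidden modulation, invisible in the raw spectrum of x, appears as a clear line at the fault frequency in the squared envelope spectrum.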
Abstract:
In the field of rolling element bearing diagnostics, envelope analysis has in recent years gained a leading role among digital signal processing techniques. The original constraint of constant operating speed has been relaxed by combining this technique with computed order tracking, which resamples signals at constant angular increments. In this way, the field of application of the technique has been extended to cases in which small speed fluctuations occur, while maintaining high effectiveness and efficiency. To make the algorithm suitable for all industrial applications, the constraint on speed has to be removed completely. In fact, in many applications, the coincidence of high bearing loads, and therefore high diagnostic capability, with acceleration-deceleration phases is a further incentive in this direction. This chapter presents a procedure for applying envelope analysis to speed transients. The effect of load variation on the proposed technique is also qualitatively addressed.
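The computed order tracking step mentioned above can be sketched as follows. The speed ramp and the order-3 tone are synthetic assumptions; a real implementation would derive the shaft phase from a tachometer or encoder signal rather than prescribing it analytically.

```python
import numpy as np

fs = 5000.0
t = np.arange(0.0, 2.0, 1.0 / fs)
f_shaft = 10.0 + 5.0 * t                        # run-up: shaft speed 10 -> 20 rev/s
phase = 2 * np.pi * np.cumsum(f_shaft) / fs     # integrated shaft angle [rad]
phase -= phase[0]                               # start at zero angle
x = np.sin(3.0 * phase)                          # order-3 component (3x shaft speed)

# Resample at constant angular increments: in the angle domain the order-3
# component becomes a pure sinusoid, despite the varying speed in time.
samples_per_rev = 64
revs = int(phase[-1] / (2 * np.pi))
theta = np.arange(0, revs * samples_per_rev) * (2 * np.pi / samples_per_rev)
x_angle = np.interp(theta, phase, x)             # signal on a uniform angle grid
```

After this resampling, standard envelope analysis can be applied in the angle (order) domain exactly as it would be at constant speed.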
Abstract:
The majority of non-small cell lung cancer (NSCLC) patients present with advanced disease, and with a 5-year survival rate of <15% for these patients, treatment outcomes are considered extremely disappointing. Standard chemotherapy regimens provide some improvement to ~40% of patients. However, intrinsic and acquired chemoresistance are a significant problem and hinder sustained long-term benefits of such treatments. Advances in proteomic and genomic profiling have increased our understanding of the aberrant molecular mechanisms driving an individual's tumour. The increased sensitivity of these technologies has enabled molecular profiling at the stage of initial biopsy, thus paving the way for a more personalised approach to the treatment of cancer patients. Improvements in diagnostics, together with a wave of new targeted small-molecule inhibitors and monoclonal antibodies, have revolutionised the treatment of cancer. To date, essentially three targeted agents have been approved for clinical use in NSCLC. The tyrosine kinase inhibitor (TKI) erlotinib, which targets the epidermal growth factor receptor (EGFR) TK domain, has proven to be an effective treatment strategy in patients who harbour activating mutations in the EGFR TK domain. Bevacizumab, a monoclonal antibody targeting vascular endothelial growth factor (VEGF), can improve survival, response rates, and progression-free survival when used in combination with chemotherapy. Crizotinib, a small-molecule drug, inhibits the tyrosine kinase activity of the echinoderm microtubule-associated protein-like 4-anaplastic lymphoma kinase (EML4-ALK) fusion protein, resulting in decreased tumour cell growth, migration, and invasiveness in patients with locally advanced or metastatic NSCLC. The clinical relevance of several other targeted agents is under investigation in distinct molecular subsets of patients with key "driver" mutations including: KRAS, HER2, BRAF, MET, PIK3CA, AKT1, MAP2K1, ROS1 and RET.
Often several pathways are activated simultaneously, and crosstalk between pathways allows tumour cells to escape the inhibition of a single targeted agent. This chapter will explore the clinical development of currently available targeted therapies for NSCLC as well as those in clinical trials, and will examine their synergy with cytotoxic therapies.
Abstract:
The global landscape of molecular testing is rapidly changing, with the recent publication of the International Association for the Study of Lung Cancer (IASLC)/College of American Pathologists (CAP) guidelines and the ALK Atlas. The IASLC/CAP guidelines recommend that tumors from patients with non-small cell lung cancer (NSCLC) be tested for ALK rearrangements in addition to epidermal growth factor receptor (EGFR) mutations. The spur for this recommendation is the availability of novel therapies that target these rearrangements. This article is based on coverage of a Pfizer-sponsored National Working Group Meeting on ALK Diagnostics in Lung Cancer, held around the 15th World Lung Cancer Conference, in Sydney on October 31, 2013. It is based on the presentations given by the authors at the meeting and the discussion that ensued. The content for this article was discussed and agreed on by the authors.
Abstract:
This article considers the recent international controversy over the patents held by a Melbourne firm, Genetic Technologies Limited (GTG), in respect of non-coding DNA and genomic mapping. It explores the ramifications of the GTG dispute in terms of licensing, litigation, and policy reform, and, as a result of this dispute, the perceived conflict between law and science. GTG has embarked upon an ambitious licensing program with twenty-seven commercial licensees and five research licensees. Most significantly, GTG has obtained an exclusive licence from Myriad Genetics to use and exploit its medical diagnostics in Australia, New Zealand, and the Asia-Pacific region. In the US, GTG brought a legal action for patent infringement against the Applera Corporation and its subsidiaries. In response, Applera counterclaimed that the patents of GTG were invalid because they failed to comply with the requirements of US patent law, such as novelty, inventive step, and written specifications. In New Zealand, the Auckland District Health Board brought legal action in the High Court, seeking a declaration that the patents of GTG were invalid and that, in any case, the Board had not infringed them. The New Zealand Ministry of Health and the Ministry of Economic Development have reported to Cabinet on the issues relating to the patenting of genetic material. Similarly, the Australian Law Reform Commission (ALRC) has engaged in an inquiry into gene patents and human health, and the Advisory Council on Intellectual Property (ACIP) has considered whether there should be a new defence in respect of experimental use and research.