938 results for incompleteness and inconsistency detection


Relevance: 100.00%

Publisher:

Abstract:

As the first-arriving seismic phase in deep seismic sounding (DSS), Pg is a key dataset for studying the properties of sedimentary layers and the shape of the crystalline basement, owing to its high amplitude and reliable detection. Conventionally, the sedimentary cover is treated as an isotropic medium with linearly increasing velocity when interpreting the Pg event. In reality, the sedimentary medium should be anisotropic, since preferentially oriented cracks, fractures and thin layering are common in the upper crust, so interpretation of the Pg event must account for seismic velocity anisotropy. Traveltime calculation is the basis of data processing and interpretation. Given the limited quality and coverage of DSS data, we restrict attention to elliptical anisotropy. In this thesis, we first examine the significance of elliptical anisotropy for studies of crustal structure and properties, then derive the traveltime-offset relationship of the Pg event for a linearly increasing velocity model with elliptical anisotropy, and present an inversion scheme that recovers the seismic velocity and anisotropy of the shallow crust from Pg traveltime-offset data. We verify the accuracy of the analytic formula by comparing the Pg traveltimes it predicts with those from numerical calculation. To recover the lateral variation of elliptical anisotropy along a profile, we present a tomographic inversion method based on the derived formula, in which the profile is divided into rectangular cells. Anisotropic imaging of crustal structure and properties is an efficient tool for crustal studies: the imaging results help interpret the seismic data and reveal rock properties, allowing analysis of the interaction between layers. Traveltime calculation also underlies imaging. Based on the ray-tracing equations, the thesis presents a three-dimensional implementation for layered models with arbitrary anisotropy, together with an example of Pg traveltime calculation in an arbitrarily anisotropic medium. This traveltime calculation is complex and suits only nonlinear inversion. The perturbation method of traveltime calculation in anisotropic media is a linearization approach: it establishes a direct relation between the seismic parameters and traveltime, making it suitable for inversion in anisotropic structural imaging. The thesis accordingly presents a P-wave imaging method for layered TTI media. Southeastern China is an important part of the tectonic framework of the continental margin of eastern China and is commonly assumed to comprise the Yangtze block and the Cathaysia block, the two major tectonic units in the region; it is a representative geological and geophysical zone. For this region, we first fit the Pg traveltimes with a ray-tracing numerical method, but that approach proves unsuitable here owing to its inefficiency and inherent limitations. Using the analytic method instead, we fit the Pg and Sg phases, obtain the lateral variation of elliptical anisotropy, and discuss its implications. The northeastern margin of the Qinghai-Tibetan plateau is also representative, being the junction of the Eurasian and Indian plates, where many strong earthquakes have occurred in recent years. We use Pg data there to obtain the variation of elliptical anisotropy and discuss its possible significance.
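
As a rough illustration of the kind of closed form the abstract derives, the sketch below computes surface-to-surface Pg traveltimes in a medium whose vertical velocity increases linearly with depth and whose anisotropy is elliptical with a constant ratio. It relies on the standard reduction of an elliptically anisotropic gradient medium to an equivalent isotropic one by stretching the depth axis; the function name and parameter values are hypothetical, and this is not the thesis's exact formula.

```python
import numpy as np

def pg_traveltime(x, v0, k, ratio):
    """Surface-to-surface diving-wave traveltime for vertical velocity
    v(z) = v0 + k*z and elliptical anisotropy of constant ratio = vh/vv.
    Stretching depth by `ratio` maps the medium onto an isotropic one with
    surface velocity ratio*v0 and the same gradient k, for which the
    classical closed form t(x) = (2/k)*asinh(k*x / (2*v_surface)) holds."""
    return (2.0 / k) * np.arcsinh(k * x / (2.0 * ratio * v0))

# Illustrative (hypothetical) values: 2 km/s surface velocity, gradient of
# 0.5 1/s, horizontal velocity 5% faster than vertical.
offsets = np.linspace(5.0, 120.0, 24)              # offsets in km
t_iso = pg_traveltime(offsets, 2.0, 0.5, 1.00)
t_ell = pg_traveltime(offsets, 2.0, 0.5, 1.05)
print(np.round(t_iso - t_ell, 3))                  # anisotropy-induced advance (s)
```

Fitting such a curve to picked Pg traveltimes is what makes the cell-by-cell inversion for velocity, gradient and anisotropy ratio tractable in the tomographic scheme.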

Relevance: 100.00%

Publisher:

Abstract:

Stress is the most important factor in vulnerability to depression and other behavioral disorders, but the mechanisms by which stress signals are translated into depression remain far from understood. To date, work on the pathophysiology of depression has focused on neurotransmitters, neurotrophins and signaling pathways, but many puzzles remain. Increasing evidence indicates that alterations in neuronal plasticity are the "trace" of stress-induced damage. The extracellular signal-regulated protein kinase (ERK)-cyclic-AMP-responsive element (CRE)-binding protein (CREB) signaling pathway is a powerful intracellular signal transduction pathway participating in the neuronal plasticity that underlies higher cognitive functions such as learning and memory. However, little is so far known about the role of the ERK-CREB signaling pathway in responses to stress and in emotional modulation. The aim of this study was therefore to systematically investigate the role of the ERK-CREB signaling pathway in depressive-like behaviors induced by stress, using depression animal models, antidepressant treatment and disruption of the pathway in specific brain regions. Three experiments were designed to establish whether the ERK-CREB signaling pathway is indeed one of the pathophysiological mechanisms of stress-induced depressive-like behaviors. In experiment one, two different stress animal models were applied: chronic forced swim stress and chronic empty water bottle stress. After stress, all animals were tested behaviorally using the open-field, elevated plus maze and saccharine preference tests, and brain samples were processed to determine ERK, P-ERK, CREB and P-CREB levels by western blot. The relationships between these protein levels and the behavioral variables were also analyzed. In experiment two, rats were treated with the antidepressant fluoxetine once a day for 21 consecutive days; brain levels of ERK, P-ERK, CREB and P-CREB were then determined, and depressive-like behaviors were examined. In experiment three, the mitogen-activated extracellular-signal-regulated kinase kinase (MEK) inhibitor U0126 was administered to inhibit the activation of ERK in the hippocampus and the prefrontal cortex respectively, after which behavioral measurements and protein detection were conducted. The main results were as follows: (1) Chronic forced swim stress induced depressive-like behaviors and disrupted the ERK-CREB signaling pathway in the hippocampus and prefrontal cortex; P-ERK2 and P-CREB correlated significantly with multiple depressive-like behavioral variables. (2) Chronic empty water bottle stress did not induce depressive-like behaviors; it decreased P-ERK2 levels in the hippocampus and prefrontal cortex, but increased P-CREB in the hippocampus. (3) Fluoxetine relieved depressive-like behaviors and increased the activity of the ERK-CREB signaling pathway in stressed animals. (4) Animals injected with U0126 in the hippocampus showed decreased ERK-CREB pathway activity there and exhibited depression comorbid with anxiety. (5) Animals injected with U0126 in the prefrontal cortex showed decreased ERK-CREB pathway activity there and exhibited depressive-like behaviors.
In conclusion, the ERK-CREB signaling pathway in the hippocampus and prefrontal cortex was involved in stress responses and correlated significantly with depressive-like behaviors; it participated in the mechanism by which fluoxetine reversed stress-induced behavioral disorders, and may be a target pathway of the therapeutic action of antidepressants; and its disruption in the hippocampus or prefrontal cortex led to depressive-like behaviors, suggesting that disruption of the ERK-CREB pathway in these regions is involved in the pathophysiology of depression and may be at least one of the mechanisms of stress-induced depression.

Relevance: 100.00%

Publisher:

Abstract:

Self-regulation has recently become an important topic in the cognitive and developmental domains. Previous theories and experimental studies suggest that self-regulation comprises both a personality (or social) aspect and a behavioral-cognitive aspect; it can accordingly be divided into self-regulation personality and self-regulation ability. The present study approached self-regulation from two perspectives, child development and individual differences, aiming to characterize self-regulation in terms of human cognitive development. We recruited two groups of early adolescents, one of high intelligence and the other of normal intelligence. In Study One, questionnaires were used to test whether the highly intelligent group had a better self-regulation personality than the normal group. In Study Two, experimental psychology tasks were used to test whether highly intelligent children had better self-regulation cognitive abilities than their normal peers. Finally, in Study Three, we combined the results of Studies One and Two to explore the neural mechanisms underlying the good self-regulation abilities of highly intelligent children. The main results and conclusions are as follows: (1) Questionnaire results showed that highly intelligent children had better self-regulation personalities, scoring higher on personality traits related to self-regulation such as self-reliance, stability and rule-consciousness. They also scored higher on self-consciousness, indicating that they knew themselves better than the normal children did. (2) Across the three difficulty levels of self-regulation ability, the highly intelligent children reacted faster than normal children in the primary self-regulation tasks. In the intermediate tasks, their inhibition processing and executive processing were both better than those of their normal peers. In the advanced tasks, they again reacted faster and more accurately than their peers under conflict and inconsistency conditions. Regression models showed that primary and advanced self-regulation abilities had greater predictive power than intermediate self-regulation ability. (3) Our neural experiments showed that highly intelligent children had more efficient automatic neural processing than normal children, with faster and larger neural responses to novel stimuli under pre-attentional conditions, providing a firm neural basis for self-regulation. Highly intelligent children had more mature frontal and parietal functions for inhibition and executive processing. The P3 component of the ERP was closely related to executive processing, which mainly activated parietal function. Inhibition processing unfolded in two periods: first parietal function alone, and later the coordinated function of the frontal and parietal lobes. While the conflict control task had parietal N2 and frontal-parietal P3 neural sources, highly intelligent children showed much smaller N2 amplitudes and shorter P3 latencies than normal children. Inconsistency conditions induced larger N2 than conditions without inconsistency, and conditions without inconsistency (or conflict) induced higher P3 amplitudes than conditions with inconsistency (or conflict).
In conclusion, the healthy development of self-regulation is very important for the maturation of children's personality and cognition, and self-regulation has its own specific characteristics in its forms of expression and development. A better understanding of self-regulation can further the exploration of the nature of human intelligence and consciousness.

Relevance: 100.00%

Publisher:

Abstract:

Early and intermediate vision algorithms, such as smoothing and discontinuity detection, are often implemented on general-purpose serial, and more recently, parallel computers. Special-purpose hardware implementations of low-level vision algorithms may be needed to achieve real-time processing. This memo reviews and analyzes some hardware implementations of low-level vision algorithms. Two types of hardware implementations are considered: the digital signal processing chips of Ruetz (and Broderson) and the analog VLSI circuits of Carver Mead. The advantages and disadvantages of these two approaches for producing a general, real-time vision system are considered.
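
For concreteness, the two algorithm classes the memo benchmarks in hardware can be sketched in a few lines of software; this toy version is purely illustrative of the per-pixel workload and reflects neither chip's actual design:

```python
import numpy as np
from scipy import ndimage

# A hypothetical 512x512 frame; at video rates (~30 frames/s) every extra
# per-pixel operation matters on a serial general-purpose machine.
frame = np.random.rand(512, 512)

smoothed = ndimage.gaussian_filter(frame, sigma=2.0)   # smoothing
gx = ndimage.sobel(smoothed, axis=1)                   # horizontal gradient
gy = ndimage.sobel(smoothed, axis=0)                   # vertical gradient
edges = np.hypot(gx, gy) > 0.5                         # discontinuity detection
print(int(edges.sum()), "discontinuity pixels")
```

Both operations are local and regular, which is precisely why they map well onto special-purpose digital or analog hardware.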

Relevance: 100.00%

Publisher:

Abstract:

Network traffic arises from the superposition of Origin-Destination (OD) flows. Hence, a thorough understanding of OD flows is essential for modeling network traffic and for addressing a wide variety of problems including traffic engineering, traffic matrix estimation, capacity planning, forecasting and anomaly detection. However, to date, OD flows have not been closely studied, and very little is known about their properties. We present the first analysis of complete sets of OD flow timeseries, taken from two different backbone networks (Abilene and Sprint-Europe). Using Principal Component Analysis (PCA), we find that the set of OD flows has small intrinsic dimension. In fact, even in a network with over a hundred OD flows, these flows can be accurately modeled in time using a small number (10 or fewer) of independent components or dimensions. We also show how to use PCA to systematically decompose the structure of OD flow timeseries into three main constituents: common periodic trends, short-lived bursts, and noise. We provide insight into how the various constituents contribute to the overall structure of OD flows, and explore the extent to which this decomposition varies over time.
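
A minimal sketch of the PCA decomposition described here, on synthetic data rather than the Abilene or Sprint-Europe measurements:

```python
import numpy as np

# Hypothetical OD-flow matrix: T timesteps x F flows sharing a diurnal trend.
rng = np.random.default_rng(0)
t = np.arange(1008)
diurnal = 1.0 + 0.5 * np.sin(2 * np.pi * t / 144)     # common periodic trend
X = np.outer(diurnal, rng.uniform(1, 5, size=121))    # 121 OD flows
X += 0.1 * rng.normal(size=X.shape)                   # noise
X[500:505, 7] += 8                                    # a short-lived burst

# PCA via SVD of the mean-centered timeseries matrix.
Xc = X - X.mean(axis=0)
U, s, Vt = np.linalg.svd(Xc, full_matrices=False)
var = np.cumsum(s**2) / np.sum(s**2)
k = int(np.searchsorted(var, 0.95)) + 1
print(k, "components capture 95% of the variance")    # small intrinsic dimension

# The rank-k reconstruction captures the common trends; the residual holds
# the bursts and noise constituents.
X_low = U[:, :k] @ np.diag(s[:k]) @ Vt[:k] + X.mean(axis=0)
residual = X - X_low
```

On real backbone data the paper finds k on the order of 10 even with over a hundred flows; here the synthetic construction guarantees it.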

Relevance: 100.00%

Publisher:

Abstract:

The ability to cluster unknown data in order to better understand its relationship to known data is needed throughout science. Besides yielding a better understanding of the data itself or of a new unknown object, cluster analysis can help with data processing, data standardization, and outlier detection. Most clustering algorithms are based on known features or expectations, such as the popular partition-based, hierarchical, density-based, grid-based, and model-based algorithms. The choice of algorithm depends on many factors, including the type of data and the reason for clustering, and nearly all rely on some known properties of the data being analyzed. Recently, Li et al. proposed a new universal similarity metric that needs no prior knowledge about the objects. Their similarity metric is based on the Kolmogorov complexity of objects, i.e., an object's minimal description. While the Kolmogorov complexity of an object is not computable, in "Clustering by Compression" Cilibrasi and Vitanyi use common compression algorithms to approximate the universal similarity metric and cluster objects with high success. Unfortunately, clustering using compression does not trivially extend to higher dimensions. Here we outline a method to adapt their procedure to images. We test these techniques on images of letters of the alphabet.
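
The compression-based approximation at the heart of the procedure is compact enough to state directly. Below is a minimal sketch of the normalized compression distance (NCD) in the spirit of Cilibrasi and Vitanyi, substituting an off-the-shelf compressor (zlib here) for the uncomputable Kolmogorov complexity:

```python
import zlib

def ncd(x: bytes, y: bytes) -> float:
    """Normalized Compression Distance: C stands in for Kolmogorov
    complexity, with C(s) = compressed length of s.
    NCD(x, y) = (C(xy) - min(C(x), C(y))) / max(C(x), C(y))."""
    cx, cy, cxy = (len(zlib.compress(s, 9)) for s in (x, y, x + y))
    return (cxy - min(cx, cy)) / max(cx, cy)

a = b"the quick brown fox jumps over the lazy dog " * 20
b_ = b"the quick brown fox leaps over the lazy cat " * 20
c = bytes(range(256)) * 4
print(ncd(a, b_))   # similar strings -> distance near 0
print(ncd(a, c))    # unrelated strings -> distance near 1
```

The difficulty the abstract points to is visible here: a 2D image flattened to bytes loses vertical neighborhood structure, so a stream compressor undercounts the information shared between images, which is why the adaptation to images is nontrivial.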

Relevance: 100.00%

Publisher:

Abstract:

We study the problem of preprocessing a large graph so that point-to-point shortest-path queries can be answered very fast. Computing shortest paths is a well-studied problem, but exact algorithms do not scale to the huge graphs encountered on the web, in social networks, and in other applications. In this paper we focus on approximate methods for distance estimation, in particular landmark-based distance indexing. This approach involves selecting a subset of nodes as landmarks and computing (offline) the distances from each node in the graph to those landmarks. At runtime, when the distance between a pair of nodes is needed, we can estimate it quickly by combining the precomputed distances of the two nodes to the landmarks. We prove that selecting the optimal set of landmarks is an NP-hard problem, so heuristic solutions must be employed. Given a budget of memory for the index, which translates directly into a budget of landmarks, different landmark selection strategies can yield dramatically different results in terms of accuracy. A number of simple methods that scale well to large graphs are therefore developed and experimentally compared. The simplest methods choose central nodes of the graph, while the more elaborate ones select central nodes that are also far away from one another. The efficiency of the suggested techniques is tested experimentally using five different real-world graphs with millions of edges; for a given accuracy, they require as much as 250 times less space than the current approach in the literature, which selects landmarks at random. Finally, we study applications of our method to two problems arising naturally in large-scale networks, namely social search and community detection.
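
A minimal sketch of the landmark indexing scheme on an unweighted toy graph; the degree-based landmark choice is a crude stand-in for the paper's centrality-based strategies:

```python
from collections import deque

def bfs_distances(adj, src):
    """Single-source shortest paths in an unweighted graph."""
    dist = {src: 0}
    q = deque([src])
    while q:
        u = q.popleft()
        for v in adj[u]:
            if v not in dist:
                dist[v] = dist[u] + 1
                q.append(v)
    return dist

def build_index(adj, landmarks):
    # Offline phase: one BFS per landmark.
    return [bfs_distances(adj, l) for l in landmarks]

def estimate(index, u, v):
    # Triangle inequality: d(u, v) <= d(u, l) + d(l, v) for every landmark l;
    # take the tightest upper bound.
    return min(d[u] + d[v] for d in index if u in d and v in d)

adj = {0: [1, 2], 1: [0, 3], 2: [0, 3], 3: [1, 2, 4], 4: [3, 5], 5: [4]}
landmarks = sorted(adj, key=lambda n: len(adj[n]), reverse=True)[:2]
index = build_index(adj, landmarks)
print(estimate(index, 0, 5))   # upper-bound estimate; true distance is 4
```

The estimate is exact whenever some landmark lies on a shortest u-v path, which is why selecting central, well-spread landmarks matters so much for accuracy.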

Relevance: 100.00%

Publisher:

Abstract:

Object detection and recognition are important problems in computer vision. The challenges of these problems come from the presence of noise, background clutter, large within-class variation of the object class and limited training data. In addition, the computational complexity of the recognition process is a concern in practice. In this thesis, we propose one approach to handle the problem of detecting an object class that exhibits large within-class variation, and a second approach to speed up the classification process. In the first approach, we show that foreground-background classification (detection) and within-class classification of the foreground class (pose estimation) can be jointly solved using a multiplicative form of two kernel functions. One kernel measures similarity for foreground-background classification. The other kernel accounts for latent factors that control within-class variation and implicitly enables feature sharing among foreground training samples. For applications where explicit parameterization of the within-class states is unavailable, a nonparametric form of the kernel can be constructed with a proper foreground distance/similarity measure. Detector training is accomplished via standard Support Vector Machine learning. The resulting detectors are tuned to specific variations in the foreground class. They also serve to evaluate hypotheses of the foreground state. When image masks for foreground objects are provided in training, the detectors can also produce object segmentation. Methods for generating a representative sample set of detectors are proposed that can enable efficient detection and tracking. In addition, because individual detectors verify hypotheses of the foreground state, they can be incorporated in a tracking-by-detection framework to recover the foreground state in image sequences. To run the detectors efficiently at the online stage, an input-sensitive speedup strategy is proposed to select the most relevant detectors quickly. The proposed approach is tested on data sets of human hands, vehicles and human faces. On all data sets, the proposed approach achieves improved detection accuracy over the best competing approaches. In the second part of the thesis, we formulate a filter-and-refine scheme to speed up the recognition process. The binary outputs of the weak classifiers in a boosted detector are used to identify a small number of candidate foreground-state hypotheses quickly via Hamming distance or weighted Hamming distance. The approach is evaluated in three applications: face recognition on the Face Recognition Grand Challenge version 2 data set, hand shape detection and parameter estimation on a hand data set, and vehicle detection and estimation of the view angle on a multi-pose vehicle data set. On all data sets, our approach is at least five times faster than simply evaluating all foreground-state hypotheses, with virtually no loss in classification accuracy.
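
A minimal sketch of the filter-and-refine idea from the second part, with synthetic features and decision stumps standing in for a trained boosted detector (all names and parameters here are hypothetical):

```python
import numpy as np

def weak_bits(X, thresholds, signs):
    """Binary outputs of decision-stump weak classifiers:
    bit_f = [sign_f * (x_f - threshold_f) > 0]."""
    return (signs * (X - thresholds) > 0).astype(np.uint8)

def filter_and_refine(query_feat, query_bits, db_feat, db_bits, k):
    # Filter: Hamming distance between bit vectors cheaply ranks the stored
    # foreground-state hypotheses; keep the k most promising candidates.
    ham = np.count_nonzero(db_bits != query_bits, axis=1)
    candidates = np.argsort(ham)[:k]
    # Refine: run the expensive matcher (Euclidean distance here) only on
    # the shortlist.
    dists = np.linalg.norm(db_feat[candidates] - query_feat, axis=1)
    return candidates[int(np.argmin(dists))]

rng = np.random.default_rng(1)
F, N = 64, 5000                                # weak classifiers, hypotheses
thr = rng.normal(size=F)
sgn = rng.choice([-1.0, 1.0], size=F)
db = rng.normal(size=(N, F))                   # stored hypothesis features
db_bits = weak_bits(db, thr, sgn)

query = db[1234] + 0.05 * rng.normal(size=F)   # noisy view of hypothesis 1234
best = filter_and_refine(query, weak_bits(query, thr, sgn), db, db_bits, k=20)
print(best)                                    # typically recovers 1234
```

The speedup comes from evaluating N cheap bit comparisons plus only k expensive matches, instead of N expensive ones.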

Relevance: 100.00%

Publisher:

Abstract:

In this thesis I present the work done during my PhD. The thesis is divided into two parts: in the first I present studies of mesoscopic quantum systems, whereas in the second I address the problem of defining the Markov regime for quantum system dynamics. The first work presented is a study of vortex patterns in (quasi-)two-dimensional rotating Bose-Einstein condensates (BECs). I consider the case of an anisotropic trapping potential and show that the ground state of the system hosts vortex patterns that are unstable. In a second work I designed an experimental scheme to transfer entanglement from two entangled photons to two BECs. This work proposes a feasible experimental setup for bringing entanglement from microscopic to macroscopic systems, both for the study of fundamental questions (the quantum-to-classical transition) and for technological applications. In the last work of the first part, another experimental scheme is presented for detecting coherences of a mechanical oscillator that is assumed to have been previously cooled down to the quantum regime. In this regime the system can rapidly undergo decoherence, so new techniques have to be employed to detect and manipulate its states. In the proposed scheme a micro-mechanical oscillator is coupled to a BEC, and detection is performed by monitoring the BEC with negligible back-action on the cantilever. In the second part of the thesis I give a definition of the Markov regime for open quantum dynamics. The importance of such a definition comes both from the mathematical description of the system dynamics and from understanding the role played by the environment in the evolution of an open system. In the Markov regime the mathematical description can be simplified, and the role of the environment is a passive one.

Relevance: 100.00%

Publisher:

Abstract:

Colorectal cancer is the most common cause of death due to malignancy in nonsmokers in the western world. In 1995 there were 1,757 cases of colon cancer in Ireland. Most colon cancer is sporadic; however, ten percent of cases occur where there is a family history of the disease. In an attempt to understand the tumorigenic pathway in Irish colon cancer patients, a number of genes associated with colorectal cancer development were analysed in Irish sporadic and HNPCC colon cancer patients. The hereditary forms of colon cancer include familial adenomatous polyposis coli (FAP) and hereditary non-polyposis colon cancer (HNPCC). Genetic analysis of the gene responsible for FAP (the APC gene) has previously been performed on Irish families, but genetic analysis of HNPCC families is limited. To determine the mutation spectrum in Irish HNPCC pedigrees, the hMSH2 and hMLH1 mismatch repair genes were screened in 18 Irish HNPCC families. Using SSCP analysis followed by DNA sequencing, five mutations were identified: four novel and one previously reported. In families where a mutation was detected, younger asymptomatic members were screened for the predisposing mutation (where possible). Detection of mutations is particularly important for the identification of at-risk individuals, as early diagnosis of cancer can vastly improve the prognosis. The sensitive and efficient detection of multiple different mutations and polymorphisms in DNA is of prime importance for genetic diagnosis and the identification of disease genes. A novel mutation detection technique has recently been developed in our laboratory. In order to assess the efficacy and applicability of the methodology in the analysis of cancer-associated genes, a protocol for the analysis of the K-ras gene was developed and optimised. Matched normal and tumour DNA from twenty sporadic colon cancer patients was analysed for K-ras mutations using the Glycosylase Mediated Polymorphism Detection (GMPD) technique. Five mutations of the K-ras gene were detected using this technology. Sequencing analysis verified the presence of the mutations, and SSCP analysis of the same samples did not identify any additional mutations. The GMPD technology proved to be highly sensitive, accurate and efficient in the identification of K-ras gene mutations. In order to investigate the role of the replication error phenomenon in Irish colon cancer, three polyA-tract repeat loci were analysed, including a 10 bp intragenic repeat of the TGF-β-RII gene. TGF-β-RII is involved in the TGF-β epithelial cell growth pathway, and mutation of the gene is thought to play a role in cell proliferation and tumorigenesis. Because of the repeat sequence within the gene, TGF-β-RII defects are associated with tumours that display the replication error phenomenon. Analysis of the TGF-β-RII 10 bp repeat failed to identify mutations in any colon cancer patients. Analysis of the Bat26 and Bat40 polyA repeat sequences in the sporadic and HNPCC families revealed that instability is associated with HNPCC tumours harbouring mismatch repair defects and with 20% of sporadic colon cancer tumours. No correlation between K-ras gene mutations and the RER+ phenotype was detected in sporadic colon cancer tumours.

Relevance: 100.00%

Publisher:

Abstract:

The analysis of energy detector systems is a well-studied topic in the literature: numerous models have been derived describing the behaviour of single and multiple antenna architectures operating in a variety of radio environments. However, in many cases of interest, these models are not in closed form, so their evaluation requires numerical methods. In general, these are computationally expensive, which can cause difficulties in certain scenarios, such as the optimisation of device parameters on low-cost hardware. The problem becomes acute where the signal-to-noise ratio is small and reliable detection must be ensured, or where the number of samples of the received signal is large. Furthermore, due to the analytic complexity of the models, insight into the behaviour of various system parameters of interest is not readily apparent. In this thesis, an approximation-based approach is taken to the analysis of such systems. By focusing on the situations where exact analyses become complicated, and making a small number of astute simplifications to the underlying mathematical models, it is possible to derive novel, accurate and compact descriptions of system behaviour. Approximations are derived for the analysis of energy detectors with single and multiple antennae operating on additive white Gaussian noise (AWGN) and independent and identically distributed Rayleigh, Nakagami-m and Rice channels; in the multiple antenna case, approximations are derived for systems with maximal ratio combiner (MRC), equal gain combiner (EGC) and square law combiner (SLC) diversity. In each case, error bounds are derived describing the maximum error resulting from use of the approximations. In addition, it is demonstrated that the derived approximations require fewer computations of simple functions than any of the exact models available in the literature; consequently, the regions of applicability of the approximations directly complement those of the available exact models. Further novel approximations for other system parameters of interest, such as sample complexity, minimum detectable signal-to-noise ratio and diversity gain, are also derived. In the course of the analysis, a novel theorem describing the convergence of the chi-square, noncentral chi-square and gamma distributions towards the normal distribution is derived. The theorem gives a tight upper bound on the error resulting from the application of the central limit theorem to random variables of the aforementioned distributions, and describes the resulting error much better than existing Berry-Esseen-type bounds. A second novel theorem is also derived, providing an upper bound on the maximum error resulting from use of the central limit theorem to approximate the noncentral chi-square distribution when the noncentrality parameter is a multiple of the number of degrees of freedom.
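
To make the regime concrete, the sketch below compares the exact detection probability of a textbook energy detector on AWGN with its Gaussian (central-limit) approximation; the specific statistic and parameter values are illustrative assumptions, not the thesis's own approximations or bounds:

```python
import numpy as np
from scipy import stats

def pd_exact(thresh, N, snr):
    """Exact detection probability: in one common formulation, the
    normalized energy statistic is noncentral chi-square with 2N degrees
    of freedom and noncentrality 2*N*snr."""
    return stats.ncx2.sf(thresh, df=2 * N, nc=2 * N * snr)

def pd_clt(thresh, N, snr):
    """CLT approximation: a normal law matching the first two moments of
    the noncentral chi-square statistic."""
    mean = 2 * N * (1 + snr)
    var = 4 * N * (1 + 2 * snr)
    return stats.norm.sf(thresh, loc=mean, scale=np.sqrt(var))

N, snr = 500, 0.05                        # many samples, low-SNR regime
thresh = stats.chi2.isf(0.01, df=2 * N)   # threshold for 1% false alarm
print(pd_exact(thresh, N, snr), pd_clt(thresh, N, snr))
```

Precisely in this large-sample, low-SNR regime the exact evaluation is at its most expensive while the normal approximation is at its most accurate, which is the complementarity the thesis quantifies with its error bounds.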

Relevance: 100.00%

Publisher:

Abstract:

The overall objective of this thesis is to integrate a number of micro/nanotechnologies into cartridge-type systems that implement biochemical protocols. Instrumentation and systems were developed to interface with such cartridges, (i) implementing microfluidic handling, (ii) executing thermal control during biochemical protocols and (iii) detecting biomolecules associated with inherited or infectious disease. The first system implements biochemical protocols for DNA extraction, amplification and detection. A digital microfluidic chip (ElectroWetting On Dielectric) manipulated droplets of sample and reagent to carry out sample preparation protocols. The cartridge also integrated a planar magnetic microcoil device generating local magnetic field gradients to manipulate magnetic beads. For hybridisation detection, a fluorescence microarray screening for mutations associated with the CFTR gene is printed on a waveguide surface and integrated within the cartridge. A second cartridge system was developed to perform amplification and detection, screening for DNA associated with disease-causing pathogens, e.g. Escherichia coli. This system incorporates (i) elastomeric pinch valves isolating liquids during biochemical protocols and (ii) a silver nanoparticle microarray for fluorescent signal enhancement using localised surface plasmon resonance. The microfluidic structures allowed the sample and reagent to be loaded and moved between chambers, with external heaters implementing the thermal steps for nucleic acid amplification and detection. In a technique allowing probe DNA to be immobilised within a microfluidic system using three-dimensional (3D) hydrogel structures, a prepolymer solution containing probe DNA was formulated and introduced into the microfluidic channel. Photo-polymerisation was then undertaken, forming 3D hydrogel structures attached to the microfluidic channel surface. The prepolymer material, poly-ethyleneglycol (PEG), was used to form hydrogel structures containing probe DNA. This hydrogel formation process was fast compared with conventional biomolecule immobilisation techniques and was biocompatible with the immobilised biomolecules, as verified by on-chip hybridisation assays. The process allowed control over hydrogel height at the micron scale.

Relevance: 100.00%

Publisher:

Abstract:

In the field of embedded systems design, coprocessors play an important role as components that increase performance. Many embedded systems are built around a small General Purpose Processor (GPP). If the GPP cannot meet the performance requirements for a certain operation, a coprocessor can be included in the design. The GPP can then offload the computationally intensive operation to the coprocessor, thus increasing the performance of the overall system. A common application of coprocessors is the acceleration of cryptographic algorithms. The work presented in this thesis discusses coprocessor architectures for various cryptographic algorithms found in many cryptographic protocols, and analyses their performance on a Field Programmable Gate Array (FPGA) platform. Firstly, the acceleration of Elliptic Curve Cryptography (ECC) algorithms is investigated through instruction set extension of a GPP. The performance of these algorithms in a full hardware implementation is then investigated, and an architecture for the acceleration of the ECC-based digital signature algorithm is developed. Hash functions are also an important component of a cryptographic system. The FPGA implementation of recent hash function designs from the SHA-3 competition is discussed and a fair comparison methodology for hash functions is presented. Many cryptographic protocols involve the generation of random data, for keys or nonces. This requires a True Random Number Generator (TRNG) to be present in the system. Various TRNG designs are discussed and a secure implementation, including post-processing and failure detection, is introduced. Finally, a coprocessor for the acceleration of operations at the protocol level is discussed, where a novel aspect of the design is the secure method in which private-key data is handled.
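
As a flavor of what TRNG post-processing involves, here is one classical corrector; it is purely illustrative, and the thesis's secure implementation need not use this scheme:

```python
import random

def von_neumann(bits):
    """Von Neumann corrector: consume raw bits in non-overlapping pairs,
    emit 0 for the pair (0,1) and 1 for (1,0), and drop (0,0)/(1,1).
    Removes bias from independent but biased raw bits, at the cost of
    throughput."""
    return [a for a, b in zip(bits[::2], bits[1::2]) if a != b]

raw = [1 if random.random() < 0.7 else 0 for _ in range(10000)]  # biased source
post = von_neumann(raw)
print(sum(raw) / len(raw), sum(post) / len(post))  # ~0.70 raw vs ~0.50 output
```

A sudden change in the output rate of such a corrector also gives a cheap health signal, one simple form of the failure detection the thesis mentions.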

Relevance: 100.00%

Publisher:

Abstract:

This study investigated whether rhesus monkeys show evidence of metacognition in a reduced, visual oculomotor task that is particularly suitable for use in fMRI and electrophysiology. The 2-stage task involved punctate visual stimulation and saccadic eye movement responses. In each trial, monkeys made a decision and then made a bet. To earn maximum reward, they had to monitor their decision and use that information to bet advantageously. Two monkeys learned to base their bets on their decisions within a few weeks. We implemented an operational definition of metacognitive behavior that relied on trial-by-trial analyses and signal detection theory. Both monkeys exhibited metacognition according to these quantitative criteria. Neither external visual cues nor potential reaction time cues explained the betting behavior; the animals seemed to rely exclusively on internal traces of their decisions. We documented the learning process of one monkey. During a 10-session transition phase, betting switched from random to a decision-based strategy. The results reinforce previous findings of metacognitive ability in monkeys and may facilitate the neurophysiological investigation of metacognitive functions.
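
One simple trial-by-trial index in the spirit of the paper's operational definition (the exact criteria are the authors'; the data here are simulated):

```python
import numpy as np

rng = np.random.default_rng(2)
correct = rng.integers(0, 2, 400)            # 1 = correct decision
# A metacognitive bettor bets high more often after correct decisions.
bet_high = np.where(rng.random(400) < 0.8, correct, 1 - correct)

# Phi coefficient between decision accuracy and bet: ~0 for random betting,
# approaching 1 for perfect monitoring of one's own decisions.
n11 = np.sum((correct == 1) & (bet_high == 1))
n10 = np.sum((correct == 1) & (bet_high == 0))
n01 = np.sum((correct == 0) & (bet_high == 1))
n00 = np.sum((correct == 0) & (bet_high == 0))
phi = (n11 * n00 - n10 * n01) / np.sqrt(
    float((n11 + n10) * (n01 + n00) * (n11 + n01) * (n10 + n00)))
print(round(float(phi), 3))
```

An above-chance association, with external cues and reaction-time confounds ruled out, is the kind of quantitative evidence the study relies on.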

Relevance: 100.00%

Publisher:

Abstract:

Histopathology is the clinical standard for tissue diagnosis. However, it has several limitations: it requires tissue processing, which can take 30 minutes or more, and it requires a highly trained pathologist to diagnose the tissue. Additionally, the diagnosis is qualitative, and the lack of quantitation can lead to observer-specific diagnoses. Taken together, these factors make it difficult to diagnose tissue at the point of care using histopathology.

Several clinical situations could benefit from more rapid and automated histological processing, which could reduce the time and the number of steps required between obtaining a fresh tissue specimen and rendering a diagnosis. For example, there is a need for rapid detection of residual cancer on the surface of tumor resection specimens during excisional surgeries, known as intraoperative tumor margin assessment. Additionally, rapid assessment of biopsy specimens at the point of care could enable clinicians to confirm that a suspicious lesion has been successfully sampled, thus preventing an unnecessary repeat biopsy procedure. Rapid and low-cost histological processing could also be useful in settings lacking the human resources and equipment necessary to perform standard histologic assessment. Lastly, automated interpretation of tissue samples could potentially reduce inter-observer error, particularly in the diagnosis of borderline lesions.

To address these needs, high-quality microscopic images of the tissue must be obtained rapidly for a pathologic assessment to be useful in guiding the intervention. Optical microscopy is a powerful technique for obtaining high-resolution images of tissue morphology in real time at the point of care, without the need for tissue processing. In particular, a number of groups have combined fluorescence microscopy with vital fluorescent stains to visualize micro-anatomical features of thick (i.e. unsectioned or unprocessed) tissue. However, robust methods for segmentation and quantitative analysis of heterogeneous images are essential to enable automated diagnosis. Thus, the goal of this work was to obtain high-resolution images of tissue morphology using fluorescence microscopy and vital fluorescent stains, and to develop a quantitative strategy to segment and quantify tissue features, such as nuclei and the surrounding stroma, in heterogeneous images, thereby enabling automated diagnosis of thick tissues.

To achieve these goals, three specific aims were proposed. The first aim was to develop an image processing method that can differentiate nuclei from background tissue heterogeneity and enable automated diagnosis of thick tissue at the point of care. A computational technique called sparse component analysis (SCA) was adapted to isolate features of interest, such as nuclei, from the background. SCA has been used previously in the image processing community for image compression, enhancement, and restoration, but had never been applied to separate distinct tissue types in a heterogeneous image. In combination with a high-resolution fluorescence microendoscope (HRME) and the contrast agent acriflavine, the utility of this technique was demonstrated through imaging preclinical sarcoma tumor margins. Acriflavine localizes to the nuclei of cells, where it reversibly associates with RNA and DNA; it also shows some affinity for collagen and muscle. SCA was adapted to isolate acriflavine-positive features, or APFs (which correspond to RNA and DNA), from background tissue heterogeneity. The circle transform (CT) was applied to the SCA output to quantify the size and density of overlapping APFs. The sensitivity of the SCA+CT approach to variations in APF size, density and background heterogeneity was demonstrated through simulations. Specifically, SCA+CT achieved the lowest errors for higher contrast ratios and larger APF sizes. When applied to tissue images of excised sarcoma margins, SCA+CT correctly isolated APFs and showed consistently increased density in tumor and tumor + muscle images compared with images containing muscle. Next, variables were quantified from images of resected primary sarcomas and used to optimize a multivariate model. The sensitivity and specificity for differentiating positive from negative ex vivo resected tumor margins were 82% and 75%. The utility of this approach was further tested by imaging the in vivo tumor cavities of 34 mice after resection of a sarcoma, with local recurrence as a benchmark. When applied prospectively to images from the tumor cavity, the sensitivity and specificity for differentiating local recurrence were 78% and 82%. The results indicate that SCA+CT can accurately delineate APFs in heterogeneous tissue, which is essential for automated and rapid surveillance of tissue pathology.

Two primary challenges were identified in the work in aim 1. First, while SCA can be used to isolate features, such as APFs, from heterogeneous images, its performance is limited by the contrast between APFs and the background. Second, while it is feasible to create mosaics by scanning a sarcoma tumor bed in a mouse, which is on the order of 3-7 mm in any one dimension, it is not feasible to evaluate an entire human surgical margin. Thus, improvements to the microscopic imaging system were made to (1) improve image contrast through rejecting out-of-focus background fluorescence and to (2) increase the field of view (FOV) while maintaining the sub-cellular resolution needed for delineation of nuclei. To address these challenges, a technique called structured illumination microscopy (SIM) was employed in which the entire FOV is illuminated with a defined spatial pattern rather than scanning a focal spot, such as in confocal microscopy.

Thus, the second aim was to improve image contrast and increase the FOV by employing wide-field, non-contact structured illumination microscopy, and to optimize the segmentation algorithm for the new imaging modality. Both image contrast and FOV were increased through the development of a wide-field fluorescence SIM system. Clear improvement in image contrast was seen in structured illumination images compared with uniform illumination images. Additionally, the FOV is over 13X larger than that of the fluorescence microendoscope used in aim 1. Initial segmentation results on SIM images revealed that SCA was unable to segment the large numbers of APFs in the tumor images. Because the FOV of the SIM system is over 13X larger than that of the fluorescence microendoscope, the dense collections of APFs commonly seen in tumor images could no longer be sparsely represented, and the fundamental sparsity assumption of SCA was no longer met. Thus, an algorithm called maximally stable extremal regions (MSER) was investigated as an alternative approach for APF segmentation in SIM images. MSER was able to accurately segment large numbers of APFs in SIM images of tumor tissue. In addition to optimizing MSER for SIM image segmentation, the frequency of the illumination pattern used in SIM was carefully selected, because the image signal-to-noise ratio (SNR) depends on the grid frequency. A grid frequency of 31.7 mm⁻¹ led to the highest SNR and the lowest percent error in MSER segmentation.
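
A minimal sketch of MSER-based blob segmentation using OpenCV's implementation, with a synthetic image standing in for a SIM field of stained nuclei (parameter values are illustrative, not the thesis's):

```python
import cv2
import numpy as np

# Synthetic stand-in: bright, roughly nucleus-sized blobs on a dim,
# textured background.
rng = np.random.default_rng(3)
img = (40 * rng.random((512, 512))).astype(np.uint8)
for _ in range(60):
    cx, cy = int(rng.integers(20, 492)), int(rng.integers(20, 492))
    cv2.circle(img, (cx, cy), int(rng.integers(4, 9)), 200, -1)
img = cv2.GaussianBlur(img, (5, 5), 0)

# MSER keeps regions whose area is stable across a range of intensity
# thresholds; the area bounds act as a crude prior on nuclear size.
mser = cv2.MSER_create(5, 30, 400)   # delta, min_area, max_area
regions, bboxes = mser.detectRegions(img)
print(len(regions), "stable regions detected")
```

Because MSER sweeps thresholds rather than assuming sparsity, it tolerates the dense APF collections that broke the SCA assumption at the larger FOV.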

Once MSER was optimized for SIM image segmentation and the optimal grid frequency was selected, a quantitative model was developed to diagnose mouse sarcoma tumor margins imaged ex vivo with SIM. Tumor margins were stained with acridine orange (AO) in aim 2, because AO was found to stain the sarcoma tissue more brightly than acriflavine. Both acriflavine and AO are intravital dyes, which have been shown to stain nuclei, skeletal muscle, and collagenous stroma. A tissue-type classification model was developed to differentiate localized regions (75×75 µm) of tumor from skeletal muscle and adipose tissue based on the MSER segmentation output. Specifically, a logistic regression model was used to classify each localized region, yielding an output in terms of the probability (0-100%) that tumor was located within each 75×75 µm region. Model performance was tested using receiver operating characteristic (ROC) curve analysis, which revealed 77% sensitivity and 81% specificity. For margin classification, the whole margin image was divided into localized regions and this tissue-type classification model was applied. In a subset of 6 margins (3 negative, 3 positive), it was shown that with a tumor probability threshold of 50%, 8% of all regions from negative margins exceeded this threshold, while over 17% of all regions exceeded it in the positive margins. Thus, 8% of regions in negative margins were considered false positives. These false-positive regions are likely due to the high density of APFs present in normal tissues, which clearly demonstrates a challenge in implementing this automatic algorithm based on AO staining alone.
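
A sketch of the per-region classification step with synthetic MSER-derived features (the feature choices and numbers here are hypothetical stand-ins for the thesis's variables):

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score

# Hypothetical features per 75x75 um region: APF density, mean APF size,
# and a shape statistic, with tumor regions shifted upward on average.
rng = np.random.default_rng(4)
n = 600
y = rng.integers(0, 2, n)                        # 1 = tumor region
X = np.column_stack([
    rng.normal(8 + 4 * y, 2.5),                  # density
    rng.normal(30 + 5 * y, 6.0),                 # size
    rng.normal(0.5 + 0.1 * y, 0.15),             # shape
])

model = LogisticRegression().fit(X, y)
p_tumor = model.predict_proba(X)[:, 1]           # per-region tumor probability
print("AUC:", round(roc_auc_score(y, p_tumor), 2))
# A whole margin is then tiled into regions and flagged wherever the
# regional probability crosses a chosen threshold (50% in the text).
print("regions above 50%:", int(np.sum(p_tumor > 0.5)))
```

Sweeping the threshold along the ROC curve is how operating points like the reported 77% sensitivity / 81% specificity are chosen.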

Thus, the third aim was to improve the specificity of the diagnostic model by leveraging other sources of contrast. Modifications were made to the SIM system to enable fluorescence imaging at a variety of wavelengths. Specifically, the SIM system was modified to enable imaging of red fluorescent protein (RFP)-expressing sarcomas, which were used to delineate the location of tumor cells within each image. Initial analysis of AO-stained panels confirmed that there was room for improvement in tumor detection, particularly with regard to false-positive regions that were negative for RFP. One approach to improving the specificity of the diagnostic model was to investigate a fluorophore more specific to tumor. Tetracycline was selected because it appeared to specifically stain freshly excised tumor tissue in a matter of minutes, and was non-toxic and stable in solution. Results indicated that tetracycline staining has promise for increasing the specificity of tumor detection in SIM images of a preclinical sarcoma model, and further investigation is warranted.

In conclusion, this work presents the development of a combination of tools that is capable of automated segmentation and quantification of micro-anatomical images of thick tissue. When compared to the fluorescence microendoscope, wide-field multispectral fluorescence SIM imaging provided improved image contrast, a larger FOV with comparable resolution, and the ability to image a variety of fluorophores. MSER was an appropriate and rapid approach to segment dense collections of APFs from wide-field SIM images. Variables that reflect the morphology of the tissue, such as the density, size, and shape of nuclei and nucleoli, can be used to automatically diagnose SIM images. The clinical utility of SIM imaging and MSER segmentation to detect microscopic residual disease has been demonstrated by imaging excised preclinical sarcoma margins. Ultimately, this work demonstrates that fluorescence imaging of tissue micro-anatomy combined with a specialized algorithm for delineation and quantification of features is a means for rapid, non-destructive and automated detection of microscopic disease, which could improve cancer management in a variety of clinical scenarios.