950 results for Information Discovery Paradigm
Abstract:
A nonlinear viscoelastic image registration algorithm based on the demons paradigm and incorporating an inverse consistency constraint (ICC) is implemented. An inverse consistent and symmetric cost function using mutual information (MI) as the similarity measure is employed. The cost function also includes regularization of the transformation and of the inverse consistency error (ICE). The uncertainties in balancing the various terms of the cost function are avoided by alternately minimizing the similarity measure, the regularization of the transformation, and the ICE terms. Diffeomorphic registration, which prevents folding and/or tearing of the deformation, is achieved by a composition scheme. The quality of the registration is first demonstrated by constructing a brain atlas from 20 adult brains (age range 30-60). It is shown that with this registration technique: (1) the Jacobian determinant is positive for all voxels and (2) the average ICE is around 0.004 voxels with a maximum value below 0.1 voxels. Further, deformation-based segmentation on the Internet Brain Segmentation Repository, a publicly available dataset, yielded a high Dice similarity index (DSI) of 94.7% for the cerebellum and 74.7% for the hippocampus, attesting to the quality of our registration method.
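For reference, the two evaluation criteria quoted above can be computed directly from a pair of segmentation masks and a displacement field. The following is a minimal NumPy sketch, not the implementation used in the paper; the array names, unit voxel spacing, and the (X, Y, Z, 3) displacement-field layout are assumptions.

```python
import numpy as np

def dice_similarity(mask_a, mask_b):
    """Dice similarity index between two boolean segmentation masks."""
    a, b = mask_a.astype(bool), mask_b.astype(bool)
    overlap = np.logical_and(a, b).sum()
    return 2.0 * overlap / (a.sum() + b.sum())

def jacobian_is_positive(disp):
    """Check det(J) > 0 everywhere for a displacement field of shape (X, Y, Z, 3).

    The transformation is phi(x) = x + disp(x), so J = I + grad(disp).
    """
    grads = np.stack(
        [np.stack(np.gradient(disp[..., i], axis=(0, 1, 2)), axis=-1) for i in range(3)],
        axis=-2,
    )  # shape (X, Y, Z, 3, 3); entry [i, j] holds d disp_i / d x_j
    jac = grads + np.eye(3)
    return bool(np.all(np.linalg.det(jac) > 0))
```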
Abstract:
We investigated attention, encoding and processing of social aspects of complex photographic scenes. Twenty-four high-functioning adolescents (aged 11–16) with ASD and 24 typically developing matched control participants viewed and then described a series of scenes, each containing a person. Analyses of eye movements and verbal descriptions provided converging evidence that both groups displayed general interest in the person in each scene but the salience of the person was reduced for the ASD participants. Nevertheless, the verbal descriptions revealed that participants with ASD frequently processed the observed person’s emotion or mental state without prompting. They also often mentioned eye-gaze direction, and there was evidence from eye movements and verbal descriptions that gaze was followed accurately. The combination of evidence from eye movements and verbal descriptions provides a rich insight into the way stimuli are processed overall. The merits of using these methods within the same paradigm are discussed.
Abstract:
In this paper, we describe dynamic unicast to increase communication efficiency in opportunistic information-centric networks (ICN). The approach is based on broadcast requests to quickly find content and on dynamically created unicast links to content sources, without the need for neighbor discovery. The links are kept only temporarily, as long as they deliver content, and are quickly removed otherwise. Evaluations in mobile networks show that this approach maintains ICN flexibility to support seamless mobile communication and achieves up to 56.6% shorter transmission times compared to broadcast in the case of multiple concurrent requesters. Apart from that, dynamic unicast unburdens listener nodes from processing unwanted content, resulting in lower processing overhead and power consumption at these nodes. The approach can be easily included in existing ICN architectures using only available data structures.
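The link lifecycle described above (broadcast to discover a source, unicast while the link keeps delivering, drop it otherwise) can be sketched in a few lines. This is a hypothetical illustration, not code from the paper; the face object, its send_broadcast/send_unicast methods, and the timeout value are assumptions.

```python
import time

LINK_TIMEOUT = 2.0  # seconds without delivered content before a unicast link is dropped (assumed value)

class DynamicUnicast:
    """Toy sketch: broadcast to find content, then request directly from the
    discovered source until it stops delivering."""

    def __init__(self, face):
        self.face = face    # hypothetical network face offering send_broadcast/send_unicast
        self.links = {}     # content prefix -> (source address, time of last delivery)

    def request(self, name):
        prefix = name.rsplit('/', 1)[0]
        link = self.links.get(prefix)
        if link and time.time() - link[1] < LINK_TIMEOUT:
            self.face.send_unicast(link[0], name)   # reuse the temporary unicast link
        else:
            self.links.pop(prefix, None)            # stale link: fall back to broadcast discovery
            self.face.send_broadcast(name)

    def on_content(self, name, source):
        prefix = name.rsplit('/', 1)[0]
        self.links[prefix] = (source, time.time())  # create or refresh the link to the source
```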
Abstract:
With the current growth in mobile device usage, mobile networks struggle to deliver content with an acceptable Quality of Experience. In this paper, we propose the integration of Information-Centric Networking into 3GPP Long Term Evolution mobile networks, allowing its inherent caching feature to be exploited in close proximity to the end users by deploying components inside the evolved Node B. Apart from the advantages brought by Information-Centric Networking's content-requesting paradigm, its inherent caching features enable lower latencies to access content and reduce traffic at the core network. Results show that the impact on evolved Node B performance is low and the advantages coming from Information-Centric Networking are considerable. Thus, mobile network operators reduce operational costs and users end up with higher perceived network quality even in peak utilization periods.
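The abstract does not state which replacement policy the in-eNodeB content store uses, so the sketch below only illustrates the general idea of an edge cache that serves hits locally and forwards misses towards the core; the LRU policy, class name, and method names are assumptions made for illustration.

```python
from collections import OrderedDict

class EdgeCache:
    """Minimal LRU sketch standing in for an in-eNodeB content store."""

    def __init__(self, capacity):
        self.capacity = capacity
        self.store = OrderedDict()   # content name -> content object

    def get(self, name):
        if name in self.store:
            self.store.move_to_end(name)   # hit: served at the edge, no core-network traffic
            return self.store[name]
        return None                        # miss: request must be forwarded towards the core/origin

    def put(self, name, data):
        self.store[name] = data
        self.store.move_to_end(name)
        if len(self.store) > self.capacity:
            self.store.popitem(last=False)  # evict the least recently used content
```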
Abstract:
Information-centric networking (ICN) is a new communication paradigm that aims at increasing security and efficiency of content delivery in communication networks. In recent years, many research efforts in ICN have focused on caching strategies to reduce traffic and increase overall performance by decreasing download times. Since caches need to operate at line speed, they have only a limited size and content can only be stored for a short time. However, if content needs to be available for a longer time, e.g., for delay-tolerant networking or to provide high content availability similar to content delivery networks (CDNs), persistent caching is required. We base our work on the Content-Centric Networking (CCN) architecture and investigate persistent caching by extending the current repository implementation in CCNx. We show by extensive evaluations in a YouTube and webserver traffic scenario that repositories can be efficiently used to increase content availability by significantly increasing cache hit rates.
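A persistent repository acting as a backing store behind the small line-speed cache could be wired up roughly as below. This is an illustrative sketch of the two-tier idea only, not the CCNx repository interface; the cache and repository objects and their get/put methods are assumptions.

```python
class TwoTierStore:
    """Sketch: a small, short-lived line-speed cache backed by a persistent repository."""

    def __init__(self, cache, repository):
        self.cache = cache            # small and fast; entries live only briefly
        self.repository = repository  # large, persistent storage
        self.hits = 0
        self.requests = 0

    def lookup(self, name):
        self.requests += 1
        data = self.cache.get(name)
        if data is None:
            data = self.repository.get(name)   # persistent copy keeps content available longer
            if data is not None:
                self.cache.put(name, data)     # promote content back into the fast cache
        if data is not None:
            self.hits += 1
        return data

    def hit_rate(self):
        return self.hits / self.requests if self.requests else 0.0
```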
Abstract:
The mental speed approach explains individual differences in intelligence by faster information processing in individuals with higher compared to lower intelligence, especially in elementary cognitive tasks (ECTs). One of the most examined ECTs is the Hick paradigm. The present study aimed to contrast reaction time (RT) and P3 latency in a Hick task as predictors of intelligence. Although both RT and P3 latency are commonly used as indicators of mental speed, it is also known that they measure different aspects of information processing. Participants were 113 female students. RT and P3 latency were measured while participants completed the Hick task with four levels of complexity. Intelligence was assessed with Cattell's Culture Fair Test. An RT factor and a P3 factor were extracted by applying a PCA across complexity levels. There was no significant correlation between the factors. Commonality analysis was used to determine the proportions of unique and shared variance in intelligence explained by the RT and P3 latency factors. RT and P3 latency explained 5.5% and 5% of unique variance in intelligence, respectively. However, the two speed factors did not explain a significant portion of shared variance. This result suggests that RT and P3 latency in the Hick paradigm measure different aspects of information processing that explain different parts of the variance in intelligence.
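For readers unfamiliar with commonality analysis, the unique/shared variance decomposition for two predictors reduces to differences of R² values. The sketch below shows this with ordinary least squares in NumPy; it is a generic illustration, not the authors' analysis code, and it assumes the RT-factor scores, P3-factor scores, and intelligence scores are available as one-dimensional arrays.

```python
import numpy as np

def r_squared(y, *predictors):
    """R^2 of an ordinary least-squares fit of y on the given predictors (plus intercept)."""
    y = np.asarray(y, dtype=float)
    X = np.column_stack([np.ones(len(y))] + [np.asarray(p, dtype=float) for p in predictors])
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ beta
    return 1.0 - resid.var() / y.var()

def commonality_two_predictors(intelligence, rt_factor, p3_factor):
    """Unique and shared variance in intelligence explained by the two speed factors."""
    r2_full = r_squared(intelligence, rt_factor, p3_factor)
    unique_rt = r2_full - r_squared(intelligence, p3_factor)   # variance only RT adds beyond P3
    unique_p3 = r2_full - r_squared(intelligence, rt_factor)   # variance only P3 adds beyond RT
    common = r2_full - unique_rt - unique_p3                   # variance shared by both predictors
    return unique_rt, unique_p3, common
```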
Abstract:
Information-centric networking (ICN) offers new perspectives on mobile ad-hoc communication because routing is based on names rather than on endpoint identifiers. Since every content object has a unique name and is signed, authentic content can be stored and cached by any node. If connectivity to a content source breaks, it is not necessarily required to build a new path to the same source; content can also be retrieved from a closer node that provides a copy of the same content. For example, in case of collisions, retransmissions do not need to be performed over the entire path but, thanks to caching, only over the link where the collision occurred. Furthermore, multiple requests can be aggregated to improve the scalability of wireless multi-hop communication. In this work, we base our investigations on Content-Centric Networking (CCN), which is a popular ICN architecture. While related work on wireless CCN communication is based exclusively on broadcast communication, we show that this is not needed for efficient mobile ad-hoc communication. With Dynamic Unicast, requesters can build unicast paths to content sources after they have been identified via broadcast. We have implemented Dynamic Unicast in CCNx, which provides a reference implementation of the CCN concepts, and performed extensive evaluations in diverse mobile scenarios using NS3-DCE, the direct code execution framework for the NS3 network simulator. Our evaluations show that Dynamic Unicast can result in more efficient communication than broadcast communication, while still supporting all CCN advantages such as caching, scalability and implicit content discovery.
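One of the mechanisms mentioned above, aggregation of multiple requests for the same content, behaves like a pending-interest table. The toy sketch below illustrates that behaviour only; it is not the CCNx implementation, and the forward and deliver callables are hypothetical.

```python
class PendingInterestTable:
    """Toy sketch of request aggregation: a request for a name that is already
    pending is not re-forwarded, and one returning content object satisfies
    every waiting requester."""

    def __init__(self, forward):
        self.forward = forward   # hypothetical callable that sends an Interest upstream
        self.pending = {}        # content name -> set of requesting faces

    def on_interest(self, name, face):
        if name in self.pending:
            self.pending[name].add(face)   # aggregate: the same name is already in flight
        else:
            self.pending[name] = {face}
            self.forward(name)             # forward only the first request

    def on_data(self, name, data, deliver):
        for face in self.pending.pop(name, ()):   # one copy satisfies all waiting faces
            deliver(face, data)
```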
Abstract:
This study examined the effectiveness of discovery learning and direct instruction in a diverse second-grade classroom. An assessment test and a transfer task were given to students to examine which method of instruction enabled the students to grasp the content of a science lesson to a greater extent. Students in the direct instruction group scored higher on the assessment test and completed the transfer task at a faster pace; however, these differences were not statistically significant. The results also suggest that a mixture of instructional styles would serve to effectively disseminate information as well as motivate students to learn.
Abstract:
In population studies, most current methods focus on identifying one outcome-related SNP at a time by testing for differences in genotype frequencies between disease and healthy groups or among different population groups. However, testing a great number of SNPs simultaneously raises the problem of multiple testing and will give false-positive results. Although this problem can be dealt with effectively through several approaches, such as Bonferroni correction, permutation testing and false discovery rates, patterns of the joint effects of several genes, each with a weak effect, may not be detectable. With the availability of high-throughput genotyping technology, searching for multiple scattered SNPs over the whole genome and modeling their joint effect on the target variable has become possible. Exhaustive search over all SNP subsets is computationally infeasible for the millions of SNPs in a genome-wide study. Several effective feature selection methods combined with classification functions have been proposed to search for an optimal SNP subset in large data sets where the number of feature SNPs far exceeds the number of observations.

In this study, we take two steps to achieve this goal. First, we selected 1000 SNPs through an effective filter method; then we performed feature selection wrapped around a classifier to identify an optimal SNP subset for predicting disease. We also developed a novel classification method, the sequential information bottleneck method, wrapped inside different search algorithms to identify an optimal subset of SNPs for classifying the outcome variable. This new method was compared with classical linear discriminant analysis in terms of classification performance. Finally, we performed a chi-square test to examine the relationship between each SNP and disease from another point of view.

In general, our results show that filtering features using the harmonic mean of sensitivity and specificity (HMSS) through linear discriminant analysis (LDA) is better than using LDA training accuracy or mutual information in our study. Our results also demonstrate that an exhaustive search of small subsets with one, two or three SNPs, based on the best 100 composite 2-SNP sets, can find an optimal subset, and that further inclusion of more SNPs through a heuristic algorithm does not always increase the performance of the SNP subsets. Although sequential forward floating selection can be applied to prevent the nesting effect of forward selection, it does not always outperform the latter due to overfitting from observing more complex subset states.

Our results also indicate that HMSS, as a criterion to evaluate the classification ability of a function, can be used on imbalanced data without modifying the original dataset, unlike classification accuracy. Our four studies suggest that the sequential information bottleneck (sIB), a new unsupervised technique, can be adopted to predict the outcome, and that its ability to detect the target status is superior to that of traditional LDA in this study.

From our results, the best test probability-HMSS for predicting CVD, stroke, CAD and psoriasis through sIB is 0.59406, 0.641815, 0.645315 and 0.678658, respectively. In terms of group prediction accuracy, the highest test accuracy of sIB for diagnosing a normal status among controls can reach 0.708999, 0.863216, 0.639918 and 0.850275, respectively, in the four studies if the test accuracy among cases is required to be not less than 0.4. On the other hand, the highest test accuracy of sIB for diagnosing a disease among cases can reach 0.748644, 0.789916, 0.705701 and 0.749436, respectively, in the four studies if the test accuracy among controls is required to be at least 0.4.

A further genome-wide association study through a chi-square test shows that no significant SNPs are detected at the cut-off level 9.09451E-08 in the Framingham Heart Study of CVD. The results in WTCCC detect only two significant SNPs associated with CAD. In the genome-wide study of psoriasis, most of the top 20 SNP markers with impressive classification accuracy are also significantly associated with the disease through the chi-square test at the cut-off value 1.11E-07.

Although our classification methods can achieve high accuracy in the study, complete descriptions of those classification results (95% confidence intervals or statistical tests of differences) require more cost-effective methods or an efficient computing system, neither of which is currently available in our genome-wide study. We should also note that the purpose of this study is to identify subsets of SNPs with high prediction ability, and that SNPs with good discriminant power are not necessarily causal markers for the disease.
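The HMSS criterion used throughout the study is, by its name, the harmonic mean of sensitivity and specificity; the abstract does not spell out the exact formula, so the sketch below simply follows the standard harmonic-mean definition computed from confusion-matrix counts. The example values are illustrative only.

```python
def hmss(tp, fn, tn, fp):
    """Harmonic mean of sensitivity and specificity (HMSS) from confusion-matrix counts."""
    sensitivity = tp / (tp + fn) if tp + fn else 0.0
    specificity = tn / (tn + fp) if tn + fp else 0.0
    if sensitivity + specificity == 0:
        return 0.0
    return 2 * sensitivity * specificity / (sensitivity + specificity)

# Illustrative example: 40 true positives, 60 false negatives, 85 true negatives,
# 15 false positives gives sensitivity 0.4 and specificity 0.85, so HMSS = 0.544.
print(hmss(40, 60, 85, 15))
```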
Abstract:
Chromatin, composed of repeating nucleosome units, is the genetic polymer of life. To aid in DNA compaction and organized storage, the double helix wraps around a core complex of histone proteins to form the nucleosome, and is therefore no longer freely accessible to cellular proteins for the processes of transcription, replication and DNA repair. Over the course of evolution, DNA-based applications have developed routes to access DNA bound up in chromatin, and further, have actually utilized the chromatin structure to create another level of complexity and information storage. The histone molecules that DNA surrounds have free-floating tails that extend out of the nucleosome. These tails are post-translationally modified to create docking sites for the proteins involved in transcription, replication and repair, thus providing one prominent way that specific genomic sequences are accessed and manipulated. Adding another degree of information storage, histone tail modifications paint the genome in precise manners to influence a state of transcriptional activity or repression, to generate euchromatin, containing gene-dense regions, or heterochromatin, containing repeat sequences and low-density gene regions. The work presented here is the study of histone tail modifications, how they are written and how they are read, divided into two projects. Both begin with protein microarray experiments where we discover the protein domains that can bind modified histone tails, and how multiple tail modifications can influence this binding. Project one then looks deeper into the enzymes that lay down the tail modifications. Specifically, we studied histone-tail arginine methylation by PRMT6. We found that methylation of a specific histone residue by PRMT6, arginine 2 of H3, can antagonize the binding of protein domains to the H3 tail and therefore affect transcription of genes regulated by the H3-tail binding proteins. Project two focuses on a protein we identified to bind modified histone tails, PHF20, and was an endeavor to discover the biological role of this protein. Thus, in total, we are looking at a complete process: (1) histone tail modification by an enzyme (here, PRMT6), (2) how this and other modifications are bound by conserved protein domains, and (3) by using PHF20 as an example, the functional outcome of binding through investigating the biological role of a chromatin reader.
Abstract:
Development of homology modeling methods will remain an area of active research. These methods aim to produce increasingly accurate three-dimensional structures of as-yet uncrystallized, therapeutically relevant proteins, e.g., Class A G-Protein-Coupled Receptors (GPCRs). Incorporating protein flexibility is one way to achieve this goal. Here, I discuss the enhancement and validation of ligand-steered modeling, originally developed by Dr. Claudio Cavasotto, via cross-modeling of the newly crystallized GPCR structures. This method uses known ligands and known experimental information to optimize the relevant protein binding sites by incorporating protein flexibility. The ligand-steered models were able to reasonably reproduce the binding sites and the co-crystallized native ligand poses of the β2 adrenergic and adenosine 2A receptors using a single template structure. They also performed better than the template and crude models in small-scale high-throughput docking experiments and compound selectivity studies. Next, the application of this method to develop high-quality homology models of Cannabinoid Receptor 2, an emerging non-psychotic pain management target, is discussed. These models were validated by their ability to rationalize structure-activity relationship data for two series of compounds, inverse agonists and agonists. The method was also applied to improve the virtual screening performance of the β2 adrenergic crystal structure by optimizing the binding site using β2-specific compounds. These results show the feasibility of optimizing only the pharmacologically relevant protein binding sites and the applicability of this approach to structure-based drug design projects.
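The abstract reports that the ligand-steered models reasonably reproduce the co-crystallized native ligand poses; pose reproduction is conventionally scored by RMSD over matched atoms. The following is a generic sketch of that metric, not the authors' protocol, and it assumes both coordinate arrays list the same atoms in the same order.

```python
import numpy as np

def pose_rmsd(coords_model, coords_native):
    """RMSD (in the units of the input coordinates, typically angstroms) between a
    modeled ligand pose and the co-crystallized native pose, with matched atom order."""
    diff = np.asarray(coords_model, dtype=float) - np.asarray(coords_native, dtype=float)
    return float(np.sqrt((diff ** 2).sum(axis=1).mean()))
```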