27 results for Transceiver architectures
Abstract:
A lot of research in cognition and decision making suffers from a lack of formalism. The quantum probability program could help to improve this situation, but we wonder whether it would provide even more added value if its presumed focus on outcome models were complemented by process models that are, ideally, informed by ecological analyses and integrated into cognitive architectures.
Abstract:
Diffusion magnetic resonance studies of the brain are typically performed using volume coils. Although in the human brain this leads to a near-optimal filling factor, studies of rodent brain must contend with the fact that only a fraction of the head volume can be ascribed to the brain. The use of a surface coil as a transceiver increases the signal-to-noise ratio (SNR), reduces radiofrequency power requirements, and opens the possibility of parallel transmit schemes, which are likely to allow efficient acquisition schemes of critical importance for reducing the long scan times involved in diffusion tensor imaging. This study demonstrates the implementation of a semiadiabatic echo planar imaging sequence (echo time = 40 ms, four interleaves) at 14.1 T using a quadrature surface coil as a transceiver. It resulted in artifact-free images with excellent SNR throughout the brain. Diffusion tensor-derived parameters obtained within the rat brain were in excellent agreement with reported values.
Abstract:
In The Cognitive-Emotional Brain, Pessoa (2013) suggests that cognition and emotion should not be considered separately. We agree with this and argue that cognitive architectures can provide steady ground for this kind of theory integration and for investigating interactions among underlying cognitive processes. We briefly explore how affective components can be implemented and how neuroimaging measures can help validate models and influence theory development.
Abstract:
q-Space-based techniques such as diffusion spectrum imaging, q-ball imaging, and their variations have been used extensively in research for their desired capability to delineate complex neuronal architectures, such as multiple fiber crossings within each image voxel. The purpose of this article is to provide an introduction to the q-space formalism and the principles of basic q-space techniques, together with a discussion of the advantages and challenges of translating these techniques into the clinical environment. A review of currently used q-space-based protocols in clinical research is also provided.
Abstract:
This paper presents general problems and approaches for spatial data analysis using machine learning algorithms. Machine learning is a very powerful approach to adaptive data analysis, modelling and visualisation. The key feature of machine learning algorithms is that they learn from empirical data and can be used in cases when the modelled environmental phenomena are hidden, nonlinear, noisy and highly variable in space and in time. Most machine learning algorithms are universal and adaptive modelling tools developed to solve the basic problems of learning from data: classification/pattern recognition, regression/mapping and probability density modelling. In the present report some of the widely used machine learning algorithms, namely artificial neural networks (ANN) of different architectures and Support Vector Machines (SVM), are adapted to the problems of analysing and modelling geo-spatial data. Machine learning algorithms have an important advantage over traditional models of spatial statistics when problems are considered in high-dimensional geo-feature spaces, i.e. when the dimension of the space exceeds 5. Such features are usually generated, for example, from digital elevation models, remote sensing images, etc. An important extension of the models concerns the consideration of real-space constraints such as geomorphology, networks, and other natural structures. Recent developments in semi-supervised learning can improve the modelling of environmental phenomena by taking geo-manifolds into account. An important part of the study deals with the analysis of relevant variables and model inputs. This problem is approached using different nonlinear feature selection/feature extraction tools.
To demonstrate the application of machine learning algorithms several interesting case studies are considered: digital soil mapping using SVM, automatic mapping of soil and water system pollution using ANN; natural hazards risk analysis (avalanches, landslides), assessments of renewable resources (wind fields) with SVM and ANN models, etc. The dimensionality of spaces considered varies from 2 to more than 30. Figures 1, 2, 3 demonstrate some results of the studies and their outputs. Finally, the results of environmental mapping are discussed and compared with traditional models of geostatistics.
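The supervised spatial-classification setting described above can be illustrated with a toy model. The sketch below substitutes a simple perceptron for the SVM/ANN models used in the case studies; the two "geo-features" and their labels are invented placeholders, not data from the report.

```python
def train_perceptron(samples, epochs=50, lr=0.1):
    """Fit a linear classifier to labelled points.

    samples: list of ((feature1, feature2), label) with label in {-1, +1},
    e.g. features derived from elevation or remote-sensing layers.
    """
    w = [0.0, 0.0]
    b = 0.0
    for _ in range(epochs):
        for (x, y), label in samples:
            pred = 1 if w[0] * x + w[1] * y + b > 0 else -1
            if pred != label:  # update weights only on misclassification
                w[0] += lr * label * x
                w[1] += lr * label * y
                b += lr * label
    return w, b

def classify(model, point):
    w, b = model
    return 1 if w[0] * point[0] + w[1] * point[1] + b > 0 else -1

# Hypothetical two-feature training set (purely illustrative)
data = [((1.0, 0.2), 1), ((0.9, 0.1), 1), ((0.2, 1.0), -1), ((0.1, 0.8), -1)]
model = train_perceptron(data)
```

The same fit/predict pattern carries over to the SVM and ANN models the report actually uses; only the decision function and training rule differ.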
Abstract:
Background: The 'database search problem', that is, the strengthening of a case - in terms of probative value - against an individual who is found as a result of a database search, has been approached during the last two decades with substantial mathematical analyses, accompanied by lively debate and centrally opposing conclusions. This represents a challenging obstacle in teaching but also hinders a balanced and coherent discussion of the topic within the wider scientific and legal community. This paper revisits and tracks the associated mathematical analyses in terms of Bayesian networks. Their derivation and discussion for capturing probabilistic arguments that explain the database search problem are outlined in detail. The resulting Bayesian networks offer a distinct view on the main debated issues, along with further clarity.
Methods: As a general framework for representing and analyzing formal arguments in probabilistic reasoning about uncertain target propositions (that is, whether or not a given individual is the source of a crime stain), this paper relies on graphical probability models, in particular, Bayesian networks. This graphical probability modeling approach is used to capture, within a single model, a series of key variables, such as the number of individuals in a database, the size of the population of potential crime stain sources, and the rarity of the corresponding analytical characteristics in a relevant population.
Results: This paper demonstrates the feasibility of deriving Bayesian network structures for analyzing, representing, and tracking the database search problem. The output of the proposed models can be shown to agree with existing but exclusively formulaic approaches.
Conclusions: The proposed Bayesian networks allow one to capture and analyze the currently most well-supported but reputedly counter-intuitive and difficult solution to the database search problem in a way that goes beyond the traditional, purely formulaic expressions.
The method's graphical environment, along with its computational and probabilistic architectures, represents a rich package that offers analysts and discussants additional modes of interaction, concise representation, and coherent communication.
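The formulaic result that the abstract says the Bayesian networks agree with can be reproduced by direct enumeration. The sketch below assumes the standard idealized setting (N equally likely candidate sources, a database of n profiles, random match probability gamma, no typing error); the function name and parameters are ours, not the paper's.

```python
def posterior_source(N, n, gamma):
    """Posterior probability that the unique database match is the crime-stain
    source, enumerating over the N equally likely candidate sources.

    Observed data: the suspect's profile matches the stain; the other n - 1
    database profiles do not.
    """
    # Likelihood of the data under each hypothesis about who the source is:
    like_suspect = (1 - gamma) ** (n - 1)           # suspect is the source
    like_other_in_db = 0.0                          # another DB member would also match
    like_outside = gamma * (1 - gamma) ** (n - 1)   # outsider is source; suspect matched by chance
    numerator = like_suspect
    denominator = (like_suspect
                   + (n - 1) * like_other_in_db
                   + (N - n) * like_outside)
    return numerator / denominator
```

The common factor (1 - gamma)^(n-1) cancels, so this reduces to 1 / (1 + (N - n) * gamma); notably, a larger database (holding N and gamma fixed) increases the posterior, which is the counter-intuitive point the abstract mentions.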
Abstract:
Metabolic problems lead to numerous failures during clinical trials, and much effort is now devoted to developing in silico models predicting metabolic stability and metabolites. Such models are well known for cytochromes P450 and some transferases, whereas less has been done to predict the activity of human hydrolases. The present study was undertaken to develop a computational approach able to predict the hydrolysis of novel esters by human carboxylesterase hCES2. The study first involved homology modeling of the hCES2 protein based on the model of hCES1, since the two proteins share a high degree of homology (approximately 73%). A set of 40 known substrates of hCES2 was taken from the literature; the ligands were docked in both their neutral and ionized forms using GriDock, a parallel tool based on the AutoDock4.0 engine which can perform efficient and easy virtual screening analyses of large molecular databases exploiting multi-core architectures. Useful statistical models (e.g., r² = 0.91 for substrates in their unprotonated state) were calculated by correlating experimental pKm values with the distance between the carbon atom of the substrate's ester group and the hydroxy function of Ser228. Additional parameters in the equations accounted for hydrophobic and electrostatic interactions between substrates and contributing residues. The negatively charged residues in the hCES2 cavity explained the preference of the enzyme for neutral substrates and, more generally, suggested that ligands which interact too strongly by ionic bonds (e.g., ACE inhibitors) cannot be good CES2 substrates because they are trapped in the cavity in unproductive modes and behave as inhibitors. The effects of protonation on substrate recognition and the contrasting behavior of substrates and products were finally investigated by MD simulations of some CES2 complexes.
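The correlation step can be sketched as an ordinary least-squares fit of experimental pKm values against the ester-carbon-to-Ser228 distance. The helper below is generic; the distances and pKm values in the demo are invented placeholders, not the study's measurements.

```python
def fit_line(xs, ys):
    """Ordinary least squares for y = slope * x + intercept; returns r² too."""
    n = len(xs)
    mean_x = sum(xs) / n
    mean_y = sum(ys) / n
    sxx = sum((x - mean_x) ** 2 for x in xs)
    sxy = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
    slope = sxy / sxx
    intercept = mean_y - slope * mean_x
    ss_res = sum((y - (slope * x + intercept)) ** 2 for x, y in zip(xs, ys))
    ss_tot = sum((y - mean_y) ** 2 for y in ys)
    r2 = 1 - ss_res / ss_tot  # coefficient of determination
    return slope, intercept, r2

# Hypothetical illustration: distance (Å) vs experimental pKm (not real data)
distances = [3.1, 3.5, 4.0, 4.6, 5.2]
pkm_values = [4.8, 4.5, 4.1, 3.6, 3.2]
slope, intercept, r2 = fit_line(distances, pkm_values)
```

In the study, additional hydrophobic and electrostatic terms extend this single-variable regression to a multivariate equation.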
Abstract:
Methods are presented to map complex fiber architectures in tissues by imaging the 3D spectra of tissue water diffusion with MR. First, theoretical considerations show why and under what conditions diffusion contrast is positive. Using this result, spin displacement spectra that are conventionally phase-encoded can be accurately reconstructed by a Fourier transform of the measured signal's modulus. Second, studies of in vitro and in vivo samples demonstrate correspondence between the orientational maxima of the diffusion spectrum and those of the fiber orientation density at each location. In specimens with complex muscular tissue, such as the tongue, diffusion spectrum images show characteristic local heterogeneities of fiber architectures, including angular dispersion and intersection. Cerebral diffusion spectra acquired in normal human subjects resolve known white matter tracts and tract intersections. Finally, the relation between the presented model-free imaging technique and other available diffusion MRI schemes is discussed.
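The modulus-Fourier reconstruction mentioned above can be sketched in one dimension: the displacement spectrum is obtained as the discrete Fourier transform of the modulus of the q-space signal. This is a schematic illustration of that single step, not the acquisition or imaging pipeline.

```python
import cmath

def displacement_spectrum(signal_modulus):
    """Reconstruct a 1D spin-displacement spectrum from |S(q)|.

    Because the modulus of a symmetric signal is real and even, its DFT is
    real, which is why phase information can be discarded here.
    """
    n = len(signal_modulus)
    return [
        sum(signal_modulus[q] * cmath.exp(-2j * cmath.pi * q * r / n)
            for q in range(n)).real / n
        for r in range(n)
    ]
```

A constant modulus transforms to a spectrum concentrated at zero displacement; broader q-space decay yields a broader displacement spectrum, and in diffusion spectrum imaging the orientational maxima of the 3D analogue are read off as fiber directions.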
Abstract:
It is a well-established fact that the entry of women into higher-level professional occupations has not resulted in their equal distribution within these occupations. Indeed, the emergence and persistence of horizontal and vertical gender segregation within the professions has been at the heart of the development of a range of alternative theoretical perspectives on both the "feminisation process" and the future of the "professions" more generally. Through an in-depth comparative analysis of the recent changes in the organisation and administration of the medical profession in Britain and France, this paper draws upon statistical data and biographical interviews with male and female general practitioners (GPs) in both countries in order to discuss and review a variety of approaches that have been adopted to explain and analyse the "feminisation" process of higher-level professions. Our conclusions review the theoretical debates in the light of the evidence we have presented. It is argued that, despite important elements of continuity in respect of gendered occupational structuring in both countries, national variations in both professional and domestic gendered architectures lead to different outcomes as far as the extent and patterns of internal occupational segregation are concerned. Both female and male doctors are currently seeking - with some effect - to resist the pressures of medicine on family life.
Abstract:
Interaction between CD40, a member of the tumor necrosis factor receptor (TNFR) superfamily, and its ligand CD40L, a 39-kDa glycoprotein, is essential for the development of humoral and cellular immune responses. Selective blockade or activation of this pathway provides the ground for the development of new treatments against immunologically based diseases and malignancies. Like other members of the TNF superfamily, CD40L monomers self-assemble around a threefold symmetry axis to form noncovalent homotrimers that can each bind three receptor molecules. Here, we report on the structure-based design of small synthetic molecules with C3 symmetry that can mimic CD40L homotrimers. These molecules interact with CD40, compete with the binding of CD40L to CD40, and reproduce, to a certain extent, the functional properties of the much larger homotrimeric soluble CD40L. Architectures based on rigid C3-symmetric cores may thus represent a general approach to mimicking homotrimers of the TNF superfamily.
Identification of optimal structural connectivity using functional connectivity and neural modeling.
Abstract:
The complex network dynamics that arise from the interaction of the brain's structural and functional architectures give rise to mental function. Theoretical models demonstrate that the structure-function relation is maximal when the global network dynamics operate at a critical point of state transition. In the present work, we used a dynamic mean-field neural model to fit empirical structural connectivity (SC) and functional connectivity (FC) data acquired in humans and macaques and developed a new iterative-fitting algorithm to optimize the SC matrix based on the FC matrix. A dramatic improvement of the fitting of the matrices was obtained with the addition of a small number of anatomical links, particularly cross-hemispheric connections, and reweighting of existing connections. We suggest that the notion of a critical working point, where the structure-function interplay is maximal, may provide a new way to link behavior and cognition, and a new perspective to understand recovery of function in clinical conditions.
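The iterative-fitting loop can be caricatured as follows. The real study simulates FC with a dynamic mean-field neural model; the sketch below substitutes a trivial placeholder forward model and simply reweights (and, where the update creates weight on an absent link, effectively adds) structural connections in proportion to the FC mismatch. All names and the update rule are our own simplification, not the paper's algorithm.

```python
def refine_sc(sc, fc_emp, simulate_fc, lr=0.2, iters=60):
    """Nudge an SC matrix (list of lists) so simulated FC approaches empirical FC.

    simulate_fc: forward model mapping an SC matrix to a simulated FC matrix.
    Off-diagonal weights are kept non-negative.
    """
    n = len(sc)
    for _ in range(iters):
        fc_sim = simulate_fc(sc)
        for i in range(n):
            for j in range(n):
                if i != j:
                    sc[i][j] = max(0.0, sc[i][j] + lr * (fc_emp[i][j] - fc_sim[i][j]))
    return sc

def identity_model(sc):
    """Placeholder forward model: reads FC off as proportional to SC.
    The study runs a dynamic mean-field neural simulation here instead."""
    return [row[:] for row in sc]
```

With the placeholder model the loop converges geometrically to the empirical FC; with a nonlinear neural model the same mismatch-driven update is what lets cross-hemispheric links emerge where the anatomical data underestimates them.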
Abstract:
This PhD thesis addresses the issue of alleviating the burden of developing ad hoc applications. Such applications have the particularity of running on mobile devices, communicating in a peer-to-peer manner, and implementing some proximity-based semantics. A typical example of such an application is a radar application in which users see their avatar, as well as the avatars of their friends, on a map on their mobile phone. Such applications have become increasingly popular with the advent of the latest generation of mobile smartphones, with their impressive computational power, their peer-to-peer communication capabilities and their location detection technology. Unfortunately, the existing programming support for such applications is limited, hence the need to address this issue in order to alleviate their development burden. This thesis specifically tackles this problem by providing several tools for application development support. First, it provides the location-based publish/subscribe service (LPSS), a communication abstraction which elegantly captures recurrent communication issues and thus dramatically reduces code complexity. LPSS is implemented in a modular manner in order to target two different network architectures. One pragmatic implementation is aimed at mainstream infrastructure-based mobile networks, where mobile devices can communicate through fixed antennas. The other, fully decentralized, implementation targets emerging mobile ad hoc networks (MANETs), where no fixed infrastructure is available and communication can only occur in a peer-to-peer fashion. For each of these architectures, various implementation strategies, tailored to different application scenarios, can be parametrized at deployment time. Second, this thesis provides two location-based message diffusion protocols, namely 6Shot broadcast and 6Shot multicast, specifically aimed at MANETs and fine-tuned to be used as building blocks for LPSS.
Finally, this thesis proposes Phomo, a phone motion testing tool that makes it possible to test the proximity semantics of ad hoc applications without having to move around with mobile devices. These development support tools have been packaged in a coherent middleware framework called Pervaho.
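A location-based publish/subscribe abstraction of the kind the thesis describes can be sketched as a service that delivers a publication only to subscribers whose subscription area contains the publication's position. The class and method names below are our own guess at the shape of such an API, not the thesis's actual LPSS interface.

```python
import math

class LocationPubSub:
    """Minimal location-based publish/subscribe sketch.

    Each subscriber registers a callback, a position, and a radius; a
    publication is delivered only to subscribers whose circle of interest
    contains the publication's position (the proximity-based semantics).
    """

    def __init__(self):
        self.subscribers = []  # list of (callback, position, radius)

    def subscribe(self, callback, position, radius):
        self.subscribers.append((callback, position, radius))

    def publish(self, message, position):
        for callback, sub_pos, radius in self.subscribers:
            if math.dist(position, sub_pos) <= radius:
                callback(message)

# Usage: a "radar"-style subscriber near the origin receives the message,
# a distant one does not.
received = []
bus = LocationPubSub()
bus.subscribe(received.append, (0.0, 0.0), 5.0)
bus.subscribe(lambda m: received.append("far:" + m), (100.0, 100.0), 5.0)
bus.publish("hello", (3.0, 4.0))
```

In a centralized, infrastructure-based deployment the matching loop would run on a server; in the MANET case the same filtering would be pushed into the peer-to-peer diffusion protocols.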