964 results for Conventional Medicine, Linguistic Code, Organ Transplants, Cellular Memory, Imagination.
Abstract:
Avalanche photodiodes operated in the Geiger mode present very high intrinsic gain and fast time response, which make the sensor an ideal option for applications requiring detectors with high sensitivity and speed. Moreover, they are compatible with conventional CMOS technologies, allowing sensor and front-end electronics to be integrated within the pixel cell. Despite these excellent qualities, the photodiode suffers from high intrinsic noise, which degrades the performance of the detector and increases the memory area needed to store the total amount of information generated. In this work, a new front-end circuit is presented that allows low reverse-bias overvoltage operation of the sensor to reduce the noise in Geiger-mode avalanche photodiode pixel detectors. The proposed front-end circuit also enables the sensor to operate in a gated acquisition mode to further reduce the noise. Experimental characterization of the pixel fabricated in the conventional HV-AMS 0.35 µm technology is also presented in this article.
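As a rough illustration of why gated acquisition reduces the noise that must be stored, the sketch below estimates the dark counts accumulated per frame in free-running versus gated operation; it is a back-of-the-envelope model, and all parameter values are hypothetical rather than taken from the article.

```python
# Illustrative estimate of dark-count suppression under gated acquisition.
# All parameter values are hypothetical, not measurements from the article.

def expected_dark_counts(dcr_hz: float, gate_width_s: float,
                         gates_per_frame: int) -> float:
    """Expected dark counts per frame when the SPAD is armed only
    during short gate windows instead of continuously."""
    return dcr_hz * gate_width_s * gates_per_frame

dcr = 50e3          # assumed dark count rate at low overvoltage, Hz
frame_time = 1e-3   # assumed 1 ms acquisition frame

free_running = dcr * frame_time
gated = expected_dark_counts(dcr, gate_width_s=10e-9, gates_per_frame=1000)

print(f"free-running: {free_running:.1f} dark counts/frame")  # 50.0
print(f"gated:        {gated:.2f} dark counts/frame")         # 0.50
# Arming the sensor for only 10 us of each 1 ms frame cuts the
# accumulated dark counts, and hence the memory needed, by ~100x.
```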
Abstract:
Our understanding of the pathogenesis of organ-specific autoinflammation has been restricted by limited access to the target organs. Peripheral blood, however, as the preferred transportation route for immune cells, provides a window for assessing the entire immune system throughout the body. Transcriptional profiling with RNA-stabilizing blood collection tubes reflects in vivo expression profiles at the time the blood is drawn, allowing detection of disease activity across different samples or within the same sample over time. The main objective of this Ph.D. study was to apply gene-expression microarrays to the characterization of peripheral blood transcriptional profiles in patients with autoimmune diseases. To achieve this goal, a custom cDNA microarray targeted at gene-expression profiling of the human immune system was designed and produced. Sample collection and preparation were then optimized to allow gene-expression profiling from whole-blood samples. To overcome challenges resulting from minute amounts of sample material, RNA amplification was successfully applied to study pregnancy-related immunosuppression in patients with multiple sclerosis (MS). Furthermore, similar sample preparation was applied to characterize longitudinal genome-wide expression profiles in children with type 1 diabetes (T1D) associated autoantibodies and, eventually, clinical T1D. Blood transcriptome analyses, using both the ImmunoChip cDNA microarray with targeted probe selection and the genome-wide Affymetrix U133 Plus 2.0 oligonucleotide array, enabled monitoring of autoimmune activity. Novel disease-related genes and general autoimmune signatures were identified. Notably, down-regulation of the HLA class Ib molecules in peripheral blood was associated with disease activity in both MS and T1D. Taken together, these studies demonstrate the potential of peripheral blood transcriptional profiling in biomedical research and diagnostics. Imbalances in peripheral blood transcriptional activity may reveal dynamic changes that are relevant to the disease but might be completely missed in conventional cross-sectional studies.
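As a generic illustration of the per-gene statistics underlying such profiling studies (not the pipeline used in this thesis), the sketch below runs a simple differential-expression test with false-discovery-rate correction on synthetic log-expression data.

```python
# Generic per-gene differential-expression test on synthetic data;
# a minimal sketch, not the analysis pipeline used in the thesis.
import numpy as np
from scipy.stats import ttest_ind
from statsmodels.stats.multitest import multipletests

rng = np.random.default_rng(0)
n_genes = 5000
patients = rng.normal(size=(n_genes, 12))   # hypothetical log-expression
controls = rng.normal(size=(n_genes, 12))
patients[:50] += 1.5                        # spike in 50 "disease" genes

_, p = ttest_ind(patients, controls, axis=1)   # gene-wise t-test
rejected, p_adj, _, _ = multipletests(p, alpha=0.05, method="fdr_bh")
print(f"{rejected.sum()} genes pass the 5% FDR threshold")
```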
Abstract:
Memristive computing refers to the utilization of the memristor, the fourth fundamental passive circuit element, in computational tasks. The existence of the memristor was theoretically predicted in 1971 by Leon O. Chua, but experimentally validated only in 2008 by HP Labs. A memristor is essentially a nonvolatile nanoscale programmable resistor — indeed, a memory resistor — whose resistance, or memristance to be precise, is changed by applying a voltage across, or a current through, the device. Memristive computing is a new area of research, and many of its fundamental questions remain open. For example, it is yet unclear which applications would benefit the most from the inherent nonlinear dynamics of memristors. In any case, these dynamics should be exploited to allow memristors to perform computation in a natural way, instead of attempting to emulate existing technologies such as CMOS logic. Examples of such methods of computation presented in this thesis are memristive stateful logic operations, memristive multiplication based on the translinear principle, and the exploitation of nonlinear dynamics to construct chaotic memristive circuits. This thesis considers memristive computing at various levels of abstraction. The first part of the thesis analyses the physical properties and the current-voltage behaviour of a single device. The middle part presents memristor programming methods and describes microcircuits for logic and analog operations. The final chapters discuss memristive computing in large-scale applications. In particular, cellular neural networks and associative memory architectures are proposed as applications that benefit significantly from memristive implementation. The work presents several new results on memristor modeling and programming, memristive logic, analog arithmetic operations on memristors, and applications of memristors. The main conclusion of this thesis is that memristive computing will be advantageous in large-scale, highly parallel mixed-mode processing architectures. This can be justified by the following two arguments. First, since processing can be performed directly within memristive memory architectures, the required circuitry, processing time, and possibly also power consumption can be reduced compared to a conventional CMOS implementation. Second, intrachip communication can be naturally implemented by a memristive crossbar structure.
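To make the device behaviour concrete, the following minimal simulation implements the linear ion-drift memristor model published alongside the HP Labs device (Strukov et al., 2008), one common way of modeling such devices; the parameter values are illustrative only, not those of any device studied in the thesis.

```python
# Minimal simulation of the linear ion-drift memristor model
# (Strukov et al., 2008); all device parameters are illustrative.
import numpy as np

R_ON, R_OFF = 100.0, 16e3   # limiting resistances, ohms
D = 10e-9                   # device thickness, m
MU = 1e-14                  # dopant mobility, m^2 V^-1 s^-1

def simulate(voltage, dt=1e-5, x0=0.1):
    """Integrate the state equation dx/dt = (MU * R_ON / D**2) * i(t),
    with memristance M(x) = R_ON * x + R_OFF * (1 - x)."""
    x = x0
    current = np.empty_like(voltage)
    for k, v in enumerate(voltage):
        m = R_ON * x + R_OFF * (1.0 - x)
        i = v / m
        x = min(max(x + dt * MU * R_ON / D**2 * i, 0.0), 1.0)
        current[k] = i
    return current

t = np.arange(0.0, 1.0, 1e-5)
v = np.sin(2 * np.pi * 1.0 * t)   # 1 Hz sinusoidal drive
i = simulate(v)
# Plotting i against v traces the pinched hysteresis loop that is the
# fingerprint of memristive behaviour.
```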
Abstract:
Can crowdsourcing solutions serve many masters? Can they benefit both laymen and native speakers of minority languages on the one hand, and serious linguistic research on the other? How did an infrastructure designed to support linguistics turn out to be a solution for raising awareness of native languages? Since 2012 the National Library of Finland has been developing the Digitisation Project for Kindred Languages, whose key objective is to support a culture of openness and interaction in linguistic research, but also to promote crowdsourcing as a tool for participation of the language community in research. In the course of the project, over 1,200 monographs and nearly 111,000 pages of newspapers in Finno-Ugric languages will be digitised and made available in the Fenno-Ugrica digital collection. This material was published in the Soviet Union in the 1920s and 1930s, and users have previously had only sporadic access to it. The publication of open-access, searchable materials from this period is a goldmine for researchers. Historians, social scientists and laymen with an interest in specific local publications can now find text materials pertinent to their studies. The linguistically oriented population can also find writings to delight them: (1) lexical items specific to a given publication, and (2) orthographically documented specifics of phonetics. In addition to the open-access collection, we developed an open-source OCR editor that enables the editing of machine-encoded text for the benefit of linguistic research. This tool was necessary because these rare and peripheral prints often include archaic characters, which are neglected by modern OCR software developers but belong to the historical context of the kindred languages and are thus an essential part of the linguistic heritage. When modelling the OCR editor, it was essential to consider both the needs of researchers and the capabilities of lay citizens, and to have them participate in the planning and execution of the project from the very beginning. By implementing the feedback from both groups iteratively, it was possible to turn the requested changes into tools for research that not only supported the work of linguists but also encouraged citizen scientists to take up the challenge and work with the crowdsourcing tools for the benefit of research. This presentation will deal not only with the technical aspects, developments and achievements of the infrastructure, but will also highlight the way in which the user groups, researchers and lay citizens, were engaged as an active and communicative group of users, and how their contributions were made to mutual benefit.
Abstract:
Presentation at Open Repositories 2014, Helsinki, Finland, June 9-13, 2014
Abstract:
This article is a transcription of an electronic symposium in which active researchers were invited by the Brazilian Society for Neuroscience and Behavior (SBNeC) to discuss the last decade's advances in the neurobiology of learning and memory. The way different parts of the brain are recruited during the storage of different kinds of memory (e.g., short-term vs long-term memory, declarative vs procedural memory), and even the validity of these divisions, were discussed. It was pointed out that the brain does not really store memories, but stores traces of information that are later used to create memories, which do not always express a completely veridical picture of the past experienced reality. To perform this process, different parts of the brain act as important nodes of the neural network that encodes, stores and retrieves the information that will be used to create memories. Some brain regions are recognizably active during the activation of short-term working memory (e.g., prefrontal cortex), the storage of information retrieved as long-term explicit memories (e.g., hippocampus and related cortical areas), or the modulation of the storage of memories related to emotional events (e.g., amygdala). This does not mean that a separate neural structure completely supports the storage of each kind of memory, but rather that these memories critically depend on the functioning of these neural structures. The current view is that there is no sense in talking about hippocampus-based or amygdala-based memory, since this implies a one-to-one correspondence. The question now to be solved is how these systems interact in memory. The pertinence of attributing a critical role to cellular processes such as synaptic tagging and protein kinase A activation in explaining memory storage at the cellular level was also discussed.
Abstract:
At the present time, protein folding is an extremely active field of research including aspects of biology, chemistry, biochemistry, computer science and physics. The fundamental principles have practical applications in the exploitation of the advances in genome research, in the understanding of different pathologies and in the design of novel proteins with special functions. Although the detailed mechanisms of folding are not completely known, significant advances have been made in the understanding of this complex process through both experimental and theoretical approaches. In this review, the evolution of concepts from Anfinsen's postulate to the "new view" emphasizing the concept of the energy landscape of folding is presented. The main rules of protein folding have been established from in vitro experiments. It has long been accepted that the in vitro refolding process is a good model for understanding the mechanisms by which a nascent polypeptide chain reaches its native conformation in the cellular environment. Indeed, many denatured proteins, even those whose disulfide bridges have been disrupted, are able to refold spontaneously. Although this assumption was challenged by the discovery of molecular chaperones, from the amount of both structural and functional information now available it has been clearly established that the main rules of protein folding deduced from in vitro experiments are also valid in the cellular environment. This modern view of protein folding permits a better understanding of the aggregation processes that play a role in several pathologies, including those induced by prions and Alzheimer's disease. Drug design and de novo protein design, with the aim of creating proteins with novel functions by application of protein folding rules, are making significant progress and offer perspectives for practical applications in the development of pharmaceuticals and medical diagnostics.
Abstract:
Escherichia coli K-12 (pEGFPluxABCDEAmp) (E. coli-lux), constitutively emitting bioluminescence (BL), was constructed and its BL-emitting properties were tested under different growth and killing conditions. The BL emission correlated directly with the number of viable E. coli-lux cells, and when the cells were subjected to an antimicrobial agent, the decrease in the BL signal was linked directly to the number of killed bacterial cells. The method proved very convenient, especially when compared to conventional plate-counting assays. This novel real-time method was utilized in both immunological and toxicological assessments. Parameters such as the activation phase, the lytic phase and the killing capacity of the serum complement system were specified not only in humans but also in other species. E. coli-lux was also successfully used to study the antimicrobial activities of insect haemolymph. The mechanisms of neutrophil activity, such as the myeloperoxidase (MPO)-H2O2-halide system, were studied using the E. coli-lux approach. The fundamental role of MPO was challenged, since during the actual killing under the described circumstances in the phagolysosome, the MPO system was inactivated and chlorination halted. A toxicological test system assessing total indoor air toxicity, particularly suitable for suspected mold damage, was designed based on the E. coli-lux method. Susceptibility to a large number of toxins, including pure chemicals, dust samples from buildings and extracts from molds, was investigated. The E. coli-lux application was found to possess high sensitivity and specificity. Alongside the analysis system, a sampling kit for indoor dust was engineered, based on a swipe stick and a container. The combination of a practical specimen collector and a convenient analysis system provided accurate toxicity data from a dust sample within hours. Neutrophils are good indicators of the pathophysiological state of the individual, and they can be utilized as a toxicological probe owing to their ability to emit chemiluminescence (CL). Neutrophils can either be used as probe cells, directly exposed to the agent studied, or act as indicators of a whole biological system exposed to the agent. Human neutrophils were exposed to the same toxins as tested with the E. coli-lux system and measured as luminol-amplified CL emission. The influence of the toxins on individuals was investigated by exposing rats to moniliformin, a mycotoxin commonly present in Finnish grains. The activity of the rat neutrophils was found to decrease significantly during the 28 days of exposure.
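As a toy illustration of the reported linear BL-to-viable-cell relation, the sketch below converts a luminescence reading into a viable-count estimate and infers the killed fraction from the drop in signal; the calibration constant is a hypothetical placeholder, not a value from the study.

```python
# Toy use of a linear BL-to-viable-cell calibration; the slope below is a
# hypothetical placeholder, not a calibration value from the study.
def viable_cells(bl_signal_rlu: float, rlu_per_cell: float = 2.0e-4) -> float:
    """Estimate viable cell count, assuming signal = rlu_per_cell * count."""
    return bl_signal_rlu / rlu_per_cell

def fraction_killed(bl_before: float, bl_after: float) -> float:
    """Killed fraction inferred from the drop in BL signal."""
    return 1.0 - bl_after / bl_before

print(f"{viable_cells(1.5e4):.2e} viable cells")      # 7.50e+07
print(f"{fraction_killed(1.5e4, 3.0e3):.0%} killed")  # 80%
```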
Abstract:
Human cytomegalovirus (CMV) infection is common but nearly asymptomatic in immunocompetent individuals. After primary infection the virus persists throughout life in a latent form in a variety of tissues, particularly in precursor cells of the monocytic lineage. CMV reinfection and occurrence of disease are associated with immunosuppressive conditions. Solid organ and bone marrow transplant patients are at high risk for CMV disease as they undergo immunosuppression. Antiviral treatment is effective in controlling viremia, but 10-15% of infected patients can experience CMV disease by the time the drug is withdrawn. In addition, long-term antiviral treatment leads to bone marrow ablation and renal toxicity. Furthermore, control of chronic CMV infection in transplant recipients appears to depend on the proper recovery of cellular immunity. Recent advances in the characterization of T-cell functions and the identification of distinct functional signatures of T-cell antiviral responses have opened new perspectives for monitoring transplant patients at risk of developing CMV disease.
Abstract:
The National Library of Finland is implementing the Digitization Project of Kindred Languages in 2012–16. Within the project we will digitize materials in the Uralic languages as well as develop tools to support linguistic research and citizen science. Through this project, researchers will gain access to new corpora to which all users will have open access regardless of their place of residence. Our objective is to make sure that the new corpora are made available for the open and interactive use of both the academic community and the language societies as a whole. The project seeks to digitize and publish approximately 1,200 monograph titles and more than 100 newspaper titles in various Uralic languages. The digitization will be completed by early 2015, when the Fenno-Ugrica collection will contain around 200,000 pages of editable text. Researchers cannot spend enough time with the material to retrieve a satisfactory amount of edited words, so the participation of a crowd in the editing work is needed. Often the targets in crowdsourcing have been split into several microtasks that do not require any special skills from the anonymous people, a faceless crowd. This way of crowdsourcing may produce quantitative results, but from the research point of view there is a danger that the needs of linguistic research are not met. Also, the number of pages is too high for researchers to deal with alone. A notable downside is the lack of a shared goal or social affinity: there is no reward in traditional methods of crowdsourcing. Nichesourcing is a specific type of crowdsourcing in which tasks are distributed amongst a small crowd of citizen scientists (communities). Although communities provide smaller pools from which to draw resources, their specific richness in skill suits them for the complex tasks with high-quality product expectations found in nichesourcing. Communities have purpose and identity, and their regular interactions engender social trust and reputation. These communities can correspond to research needs more precisely. Instead of repetitive and rather trivial tasks, we are trying to utilize the knowledge and skills of citizen scientists to produce qualitative results. Some selection must be made, since we are not aiming to correct all 200,000 pages we have digitized, but rather to give citizen scientists assignments that precisely fill the gaps in linguistic research. A typical task would be editing and collecting words in those fields of vocabulary where researchers require more information. For instance, there is a lack of Hill Mari words in anatomy. We have digitized books in medicine, and we could try to track the words related to human organs by assigning citizen scientists to edit and collect words with the OCR editor. From the nichesourcing perspective, it is essential that altruism plays a central role when the language communities are involved. Through nichesourcing, our goal is to reach a level of interplay in which the language communities benefit from the results. For instance, the corrected words in Ingrian will be added to the online dictionary, which is made freely available to the public, so society benefits too. This objective of interplay can be understood as an aspiration to support the endangered languages and the maintenance of lingual diversity, but also as serving "two masters": research and society.
Abstract:
y+LAT1 is a transmembrane protein that, together with the 4F2hc cell surface antigen, forms a transporter for cationic amino acids in the basolateral plasma membrane of epithelial cells. It is mainly expressed in the kidney and small intestine, and to a lesser extent in other tissues, such as the placenta and immunoactive cells. Mutations in y+LAT1 lead to a defect of the y+LAT1/4F2hc transporter, which impairs intestinal absorption and renal reabsorption of lysine, arginine and ornithine, causing lysinuric protein intolerance (LPI), a rare, recessively inherited aminoaciduria with severe multi-organ complications. This thesis examines the consequences of the LPI-causing mutations on two levels: the transporter structure and the Finnish patients' gene-expression profiles. Using fluorescence resonance energy transfer (FRET) confocal microscopy, optimised for this work, subunit dimerisation was discovered to be a primary phenomenon occurring regardless of mutations in y+LAT1. In flow cytometric and confocal microscopic FRET analyses, the y+LAT1 molecules exhibited a strong tendency towards homodimerisation both in the presence and absence of 4F2hc, suggesting a heterotetramer as the transporter's functional form. Gene-expression analysis of the Finnish patients, clinically variable but homogeneous for the LPI-causing mutation in SLC7A7, revealed 926 differentially expressed genes and a disturbance of amino acid homeostasis affecting several transporters. However, despite the expression changes in individual patients, no overall compensatory effect of y+LAT2, the sister y+L transporter, was detected. The functional annotations of the altered genes included biological processes such as inflammatory response, immune system processes and apoptosis, indicating a strong immunological involvement in LPI.
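For context on why FRET can report subunit dimerisation, the sketch below evaluates the standard Förster relation, in which transfer efficiency falls off with the sixth power of the donor-acceptor distance; the Förster radius used is a typical assumed value, not one measured in this work.

```python
# FRET efficiency vs. donor-acceptor distance (standard Forster relation).
# R0 = 5 nm is a typical assumed Forster radius, not a measured value.
def fret_efficiency(r_nm: float, r0_nm: float = 5.0) -> float:
    return 1.0 / (1.0 + (r_nm / r0_nm) ** 6)

print(fret_efficiency(3.0))   # ~0.96: fluorophores in molecular contact
print(fret_efficiency(10.0))  # ~0.02: effectively no interaction
```

Because efficiency is negligible beyond roughly twice the Förster radius, a measurable FRET signal between tagged y+LAT1 subunits implies they sit within molecular-contact distance, i.e. that they dimerise.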
Abstract:
Traditional psychometric theory and practice classify people according to broad ability dimensions but do not examine how these mental processes occur. Hunt and Lansman (1975) proposed a 'distributed memory' model of cognitive processes with emphasis on how to describe individual differences, based on the assumption that each individual possesses the same components; it is in the quality of these components that individual differences arise. Carroll (1974) expands Hunt's model to include a production system (after Newell and Simon, 1973) and a response system. He developed a framework of factor analytic (FA) factors for the purpose of describing how individual differences may arise from them. This scheme is to be used in the analysis of psychometric tests. Recent advances in the field of information processing are examined and include: 1) Hunt's development of differences between subjects designated as high or low verbal; 2) Miller's pursuit of the magic number seven, plus or minus two; 3) Ferguson's examination of transfer and abilities; and 4) Brown's discoveries concerning strategy teaching and retardates. In order to examine possible sources of individual differences arising from cognitive tasks, traditional psychometric tests were searched for a suitable perceptual task which could be varied slightly and administered to gauge learning effects produced by controlling independent variables. It also had to be suitable for analysis using Carroll's framework. The Coding Task (a symbol substitution test) found in the Performance Scale of the WISC was chosen. Two experiments were devised to test the following hypotheses: 1) High verbals should be able to complete significantly more items on the Symbol Substitution Task than low verbals (Hunt, Lansman, 1975). 2) Having previous practice on a task, where strategies involved in the task may be identified, increases the amount of output on a similar task (Carroll, 1974). 3) There should be a substantial decrease in the amount of output as the load on STM is increased (Miller, 1956). 4) Repeated measures should produce an increase in output over trials, and where individual differences in previously acquired abilities are involved, these should differentiate individuals over trials (Ferguson, 1956). 5) Teaching slow learners a rehearsal strategy would improve their learning such that it would resemble that of normals on the same task (Brown, 1974). In the first experiment 60 subjects were divided into high and low verbal groups, each further divided randomly into a practice group and a non-practice group. Five subjects in each group were assigned randomly to work on a five-, seven- or nine-digit code throughout the experiment. The practice group was given three trials of two minutes each on the practice code (designed to eliminate transfer effects due to symbol similarity) and then three trials of two minutes each on the actual SST task. The non-practice group was given three trials of two minutes each on the same actual SST task. Results were analyzed using a four-way analysis of variance. In the second experiment 18 slow learners were divided randomly into two groups, one group receiving planned strategy practice, the other receiving random practice. Both groups worked on the actual code to be used later in the actual task. Within each group, subjects were randomly assigned to work on a five-, seven- or nine-digit code throughout. Both practice and actual tests consisted of three trials of two minutes each. Results were analyzed using a three-way analysis of variance. In the first experiment it was found that: 1) High or low verbal ability by itself did not produce significantly different results; however, in interaction with the other independent variables, a difference in performance was noted. 2) The previous-practice variable was significant over all segments of the experiment: those who received previous practice scored significantly higher than those without it. 3) Increasing the size of the load on STM severely restricts performance. 4) The effect of repeated trials proved to be beneficial; generally, gains were made on each successive trial within each group. 5) In the second experiment, slow learners who were allowed to practice randomly performed better on the actual task than subjects who were taught the code by means of a planned strategy. Upon analysis using the Carroll scheme, individual differences were noted in the ability to develop strategies for storing, searching and retrieving items from STM, and in adopting the rehearsals necessary for retention in STM. While these strategies may benefit some, it was found that for others they may be harmful. Temporal aspects and perceptual speed were also found to be sources of variance within individuals. Generally, the largest single factor influencing learning on this task was the repeated measures. What enables gains to be made varies with individuals. Environmental factors, specific abilities, strategy development, previous learning, amount of load on STM, and perceptual and temporal parameters all influence learning, and these have serious implications for educational programs.
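To make the first experiment's design concrete, the following sketch generates synthetic data for the verbal ability x practice x digit-load layout with three timed trials and runs a four-way ANOVA; group sizes, effect sizes and noise are invented, and treating trials as a between-subjects factor is a simplification of the repeated-measures design, not the thesis's actual analysis.

```python
# Synthetic re-creation of the first experiment's 2x2x3 layout with three
# two-minute trials, analysed as a four-way ANOVA. Illustrative only.
import numpy as np
import pandas as pd
import statsmodels.api as sm
from statsmodels.formula.api import ols

rng = np.random.default_rng(1)
rows = []
for verbal in ("high", "low"):
    for practice in ("yes", "no"):
        for load in (5, 7, 9):               # digit-code length (STM load)
            for subject in range(5):         # 5 subjects per cell
                base = 60.0 - 4.0 * load + (8.0 if practice == "yes" else 0.0)
                for trial in (1, 2, 3):      # three timed trials
                    rows.append(dict(verbal=verbal, practice=practice,
                                     load=load, trial=trial,
                                     output=base + 3.0 * trial
                                            + rng.normal(0.0, 5.0)))
df = pd.DataFrame(rows)

model = ols("output ~ C(verbal) * C(practice) * C(load) * C(trial)", df).fit()
print(sm.stats.anova_lm(model, typ=2))
```

A faithful reanalysis would model the repeated trials explicitly, for example with a mixed-effects model including a per-subject random intercept.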
Abstract:
The psycholinguistic approach suggests that verbal short-term retention and language depend on common mechanisms. It predicts (1) that the linguistic characteristics of verbal items (e.g., phonological, lexical, semantic) influence immediate recall, and (2) that the contribution of the different levels of linguistic representation depends on the recall context, with certain experimental conditions (e.g., stimulus format) favouring the use of specific codes. These predictions are evaluated in two empirical studies conducted with a brain-damaged patient presenting a phonological processing deficit (I.R.) and with control participants. The first study (Article 1) tests the impact of presentation and recall modes on the effects of phonological similarity and semantic category in word lists. The second study (Article 2) assesses the contribution of the orthographic code in verbal short-term memory (STM) by testing the effect of orthographic neighbourhood density on the immediate serial recall of visually presented words. Given the key role of the phonological code in STM and the nature of I.R.'s deficit, distinct linguistic effects were expected in her and in the controls. Depending on the recall context, larger semantic (Article 1) and orthographic (Article 2) effects were predicted for I.R., and more marked phonological effects were expected in the control participants. In I.R., recall is influenced by the semantic and orthographic characteristics of words but little by their phonological characteristics, and the recall context modulates the use of different levels of linguistic representation. In the controls, a relatively more stable contribution of phonological representations is observed. The data support a psycholinguistic approach positing that common mechanisms govern verbal short-term retention and language. The theoretical and clinical implications of the results are discussed in light of current psycholinguistic models.
Abstract:
Tissue engineering is an interdisciplinary field that applies the principles of engineering and the life sciences (notably stem cell science) to regenerate and repair damaged tissues and organs. In other words, rather than replacing tissues and organs, we repair them. Research in tissue engineering is considerable and its ambitions are great, notably that of putting an end to organ-donation waiting lists. Tissue engineering has already begun to deliver therapeutic products for simple applications, notably skin and cartilage. Questions about how to regulate therapeutic products derived from tissue engineering are raised with each new product. To date, these questions have received little attention compared with the ethical questions associated with stem cell research and the risks posed by biological products. It is therefore important to examine whether the normative framework governing the marketing of tissue-engineered products is appropriate, since such products are already available on the market and several others are on the way. Our analysis reveals that the current Canadian framework is not appropriate and that the time for reform has come. The United States and the European Union each have distinctive approaches that are instructive. We review the regulatory texts governing the marketing of tissue-engineered products in Canada, the United States and the European Union, and offer some suggestions for reform.
Abstract:
We propose an approach based on interactive query formulation. Our approach is intended to facilitate source-code analysis and comprehension tasks. In this approach, the analyst uses a set of basic filters (linguistic, structural, quantitative, and interactivity filters) to define complex queries. These queries are built through an interactive and iterative process in which basic filters are selected and executed, and their results are visualised, modified and combined using predefined operators. We evaluated our approach by implementing recent contributions in design-defect detection as well as feature location in code. Our results show that, in addition to being generic, our approach helps reimplement existing solutions provided by automatic tools.
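A minimal sketch of how such basic filters might be composed with predefined operators is given below; the filter names, class metrics, threshold and example classes are illustrative assumptions, not the tool's actual API.

```python
# Sketch of interactive query composition over code elements: basic
# filters (here linguistic and quantitative) combined through predefined
# operators. All names, metrics and thresholds are illustrative.
from dataclasses import dataclass

@dataclass(frozen=True)
class CodeClass:
    name: str
    methods: int
    fan_out: int

def linguistic(pattern):
    """Name-based (linguistic) filter."""
    return lambda classes: {c for c in classes if pattern in c.name.lower()}

def quantitative(attr, threshold):
    """Metric-based (quantitative) filter."""
    return lambda classes: {c for c in classes if getattr(c, attr) > threshold}

def AND(f, g):
    """Predefined operator combining two filters by intersection."""
    return lambda classes: f(classes) & g(classes)

corpus = {CodeClass("DataManager", 45, 30), CodeClass("Logger", 5, 2)}
god_class_query = AND(linguistic("manager"), quantitative("methods", 40))
print(god_class_query(corpus))   # candidate design defects
```

In an interactive session, the analyst would inspect the result set after each filter, adjust thresholds, and recombine filters iteratively rather than writing the full query up front.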