503 results for algorithmic skeletons
Abstract:
A systematic assessment of global neural network connectivity through direct electrophysiological assays has remained technically infeasible, even in simpler systems like dissociated neuronal cultures. We introduce an improved algorithmic approach based on Transfer Entropy to reconstruct structural connectivity from network activity monitored through calcium imaging. In this study we focus on the inference of excitatory synaptic links. Based on information theory, our method requires no prior assumptions on the statistics of neuronal firing and neuronal connections. The performance of our algorithm is benchmarked on surrogate time series of calcium fluorescence generated by the simulated dynamics of a network with known ground-truth topology. We find that the functional network topology revealed by Transfer Entropy depends qualitatively on the time-dependent dynamic state of the network (bursting or non-bursting). Thus, by conditioning on the global mean activity, we improve the performance of our method. This allows us to focus the analysis on specific dynamical regimes of the network in which the inferred functional connectivity is shaped by monosynaptic excitatory connections, rather than by collective synchrony. Our method can discriminate between actual causal influences between neurons and spurious non-causal correlations due to light-scattering artifacts, which inherently affect the quality of fluorescence imaging. Compared to other reconstruction strategies such as cross-correlation or Granger causality methods, our method based on improved Transfer Entropy is remarkably more accurate. In particular, it provides a good estimation of the excitatory network clustering coefficient, allowing for discrimination between weakly and strongly clustered topologies.
Finally, we demonstrate the applicability of our method to analyses of real recordings of in vitro disinhibited cortical cultures where we suggest that excitatory connections are characterized by an elevated level of clustering compared to a random graph (although not extreme) and can be markedly non-local.
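The quantity at the core of this abstract can be illustrated with a minimal plug-in estimator of Transfer Entropy for binary spike trains. This is a generic sketch of the information-theoretic definition only, not the authors' conditioned, calcium-fluorescence-aware pipeline; all function and variable names are illustrative:

```python
from collections import Counter
from math import log2

def transfer_entropy(target, source):
    """Plug-in Transfer Entropy TE(source -> target) with history length 1:
    how much knowing source[t] improves the prediction of target[t+1]
    beyond what target[t] already tells us (in bits)."""
    n = len(target) - 1
    triples = Counter(zip(target[1:], target[:-1], source[:-1]))
    pair_ts = Counter(zip(target[:-1], source[:-1]))
    pair_tt = Counter(zip(target[1:], target[:-1]))
    single = Counter(target[:-1])
    te = 0.0
    for (t1, t0, s0), c in triples.items():
        p_joint = c / n                          # p(t1, t0, s0)
        p_cond_full = c / pair_ts[(t0, s0)]      # p(t1 | t0, s0)
        p_cond_self = pair_tt[(t1, t0)] / single[t0]  # p(t1 | t0)
        te += p_joint * log2(p_cond_full / p_cond_self)
    return te
```

For two neurons where one drives the other with a one-step delay, the estimator is asymmetric: the driver-to-target value is close to one bit, while the reverse direction stays near zero, which is the asymmetry the reconstruction exploits.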
Abstract:
The major processes discussed below are protein turnover (degradation and synthesis), degradation into urea, or conversion into glucose (gluconeogenesis, Figure 1). Daily protein turnover is a dynamic process characterized by a double flux of amino acids: the amino acids released by endogenous (body) protein breakdown can be reutilized and reconverted to protein synthesis, with very little loss. Daily rates of protein turnover in humans (300 to 400 g per day) are largely in excess of the level of protein intake (50 to 80 g per day). A fast growth rate, as in premature babies or in children recovering from malnutrition, leads to a high protein turnover rate and a high protein and energy requirement. Protein metabolism (synthesis and breakdown) is an energy-requiring process, dependent upon endogenous ATP supply. The contribution made by whole-body protein turnover to the resting metabolic rate is important: it represents about 20% in adults and more in growing children. Metabolism of proteins cannot be disconnected from that of energy, since energy balance influences net protein utilization, and since protein intake has an important effect on postprandial thermogenesis - more important than that of fats or carbohydrates. The metabolic need for amino acids is essentially to maintain stores of endogenous tissue proteins within an appropriate range, allowing protein homeostasis to be maintained. Thanks to a dynamic, free amino acid pool, this demand for amino acids can be continuously supplied. The size of the free amino acid pool remains limited and is regulated within narrow limits. The supply of amino acids to cover physiological needs can be derived from three sources: 1. Exogenous proteins that release amino acids after digestion and absorption 2. Tissue protein breakdown during protein turnover 3. De novo synthesis, including amino acids (as well as ammonia) derived from the process of urea salvage, following hydrolysis and microflora metabolism in the hindgut.
When protein intake surpasses the physiological need for amino acids, the excess amino acids are disposed of by three major processes: 1. Increased oxidation, with terminal end products such as CO₂ and ammonia 2. Enhanced ureagenesis, i.e. synthesis of urea linked to protein oxidation, which eliminates the nitrogen 3. Gluconeogenesis, i.e. de novo synthesis of glucose. Most of the amino groups of the excess amino acids are converted into urea through the urea cycle, whereas their carbon skeletons are transformed into other intermediates, mostly glucose. This is one of the mechanisms, essential for life, developed by the body to maintain blood glucose within a narrow range (i.e. glucose homeostasis). It includes the process of gluconeogenesis, i.e. de novo synthesis of glucose from non-glycogenic precursors; in particular certain specific amino acids (for example, alanine), as well as glycerol (derived from fat breakdown) and lactate (derived from muscles). The gluconeogenic pathway progressively takes over when the supply of glucose from exogenous or endogenous sources (glycogenolysis) becomes insufficient. This process becomes vital during periods of metabolic stress, such as starvation.
Abstract:
Résumé Following recent technological advances, digital image archives have experienced unprecedented qualitative and quantitative growth. Despite the enormous possibilities they offer, these advances raise new questions about the processing of the captured masses of data. This question is at the root of this Thesis: the problems of processing digital information at very high spatial and/or spectral resolution are considered here using statistical learning approaches, namely kernel methods. This Thesis studies image classification problems, i.e. the categorization of pixels into a reduced number of classes reflecting the spectral and contextual properties of the objects they represent. The emphasis is placed on the efficiency of the algorithms, as well as on their simplicity, so as to increase their potential for adoption by users. Moreover, the challenge of this Thesis is to remain close to the concrete problems of satellite-image users without losing sight of the interest of the proposed methods for the machine learning community from which they originate. In this sense, this work embraces transdisciplinarity by maintaining a strong link between the two sciences in all the developments proposed. Four models are proposed: the first addresses the problem of high dimensionality and data redundancy with a model that optimizes classification performance by adapting to the particularities of the image. This is made possible by a ranking system for the variables (the bands) that is optimized jointly with the base model: in this way, only the variables that are important for solving the problem are used by the classifier.
The lack of labeled information, and the uncertainty about its relevance to the problem, are at the root of the next two models, based respectively on active learning and on semi-supervised methods: the former improves the quality of a training set through direct interaction between the user and the machine, while the latter uses the unlabeled pixels to improve the description of the available data and the robustness of the model. Finally, the last model proposed considers the more theoretical question of structure among the outputs: the integration of this source of information, never before considered in remote sensing, opens up new research challenges.
Advanced kernel methods for remote sensing image classification
Devis Tuia
Institut de Géomatique et d'Analyse du Risque
September 2009
Abstract The technical developments in recent years have brought the quantity and quality of digital information to an unprecedented level, as enormous archives of satellite images are available to users. However, even if these advances open more and more possibilities in the use of digital imagery, they also raise several problems of storage and processing. The latter is considered in this Thesis: the processing of very high spatial and spectral resolution images is treated with approaches based on data-driven algorithms relying on kernel methods. In particular, the problem of image classification, i.e. the categorization of the image's pixels into a reduced number of classes reflecting spectral and contextual properties, is studied through the different models presented. The emphasis is put on algorithmic efficiency and on the simplicity of the proposed approaches, to avoid overly complex models that users would not adopt.
The major challenge of the Thesis is to remain close to concrete remote sensing problems without losing the methodological interest from the machine learning viewpoint: in this sense, this work aims at building a bridge between the machine learning and remote sensing communities, and all the models proposed have been developed keeping in mind the need for such a synergy. Four models are proposed: first, an adaptive model learning the relevant image features has been proposed to solve the problem of high dimensionality and collinearity of the image features. This model automatically provides an accurate classifier and a ranking of the relevance of the single features. The scarcity and unreliability of labeled information were the common root of the second and third models proposed: when confronted with such problems, the user can either construct the labeled set iteratively by direct interaction with the machine or use the unlabeled data to increase the robustness and quality of the description of the data. Both solutions have been explored, resulting in two methodological contributions, based respectively on active learning and semi-supervised learning. Finally, the more theoretical issue of structured outputs has been considered in the last model, which, by integrating output similarity into the model, opens new challenges and opportunities for remote sensing image processing.
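The active learning model described here rests on the machine asking the user to label the pixels it is least sure about. A minimal, classifier-agnostic sketch of uncertainty sampling, the simplest such query strategy (function names are illustrative; the thesis' own query criteria may differ):

```python
def least_confident(probs):
    """Uncertainty of one prediction: 1 minus the top class probability."""
    return 1.0 - max(probs)

def query_indices(unlabeled_probs, batch_size):
    """Indices of the batch_size most uncertain unlabeled pixels,
    i.e. the ones the user is asked to photo-interpret next."""
    ranked = sorted(range(len(unlabeled_probs)),
                    key=lambda i: least_confident(unlabeled_probs[i]),
                    reverse=True)
    return ranked[:batch_size]
```

Each iteration, the selected pixels are labeled by the user, added to the training set, and the classifier is retrained, so labeling effort concentrates where the current model is weakest.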
Abstract:
RÉSUMÉ This thesis deals with the development of algorithmic methods for automatically discovering the morphological structure of the words of a corpus. We consider in particular the case of languages approaching the introflectional type, such as Arabic or Hebrew. The linguistic tradition describes the morphology of these languages in terms of discontinuous units: consonantal roots and vocalic patterns. This kind of structure is a challenge for current machine learning systems, which generally operate with continuous units. The strategy adopted here consists in treating the problem as a sequence of two subproblems. The first is phonological in nature: it consists in dividing the symbols (phonemes, letters) of the corpus into two groups corresponding as closely as possible to the phonetic consonants and vowels. The second is morphological in nature and builds on the results of the first: it consists in establishing the inventory of roots and patterns of the corpus and determining their rules of combination. We examine the scope and limits of an approach based on two hypotheses: (i) the distinction between consonants and vowels can be inferred from their tendency to alternate in the speech chain; (ii) roots and patterns can be identified with the sequences of consonants and vowels discovered previously, respectively. The proposed algorithm uses a purely distributional method to partition the symbols of the corpus. It then applies analogical principles to identify a set of strong root and pattern candidates, and to enlarge this set progressively. This extension is subject to an evaluation procedure based on the minimum description length principle, in the spirit of LINGUISTICA (Goldsmith, 2001).
The algorithm is implemented as a computer program named ARABICA and evaluated on a corpus of Arabic nouns with respect to its ability to describe the plural system. This study shows that complex linguistic structures can be discovered while making only a minimum of a priori assumptions about the phenomena under consideration. It illustrates the possible synergy between learning mechanisms operating at distinct levels of linguistic description, and seeks to determine when and why this cooperation fails. It concludes that the tension between the universality of the consonant-vowel distinction and the specificity of root-and-pattern structuring is crucial for explaining the strengths and weaknesses of such an approach. ABSTRACT This dissertation is concerned with the development of algorithmic methods for the unsupervised learning of natural language morphology, using a symbolically transcribed wordlist. It focuses on the case of languages approaching the introflectional type, such as Arabic or Hebrew. The morphology of such languages is traditionally described in terms of discontinuous units: consonantal roots and vocalic patterns. Inferring this kind of structure is a challenging task for current unsupervised learning systems, which generally operate with continuous units. In this study, the problem of learning root-and-pattern morphology is divided into a phonological and a morphological subproblem. The phonological component of the analysis seeks to partition the symbols of a corpus (phonemes, letters) into two subsets that correspond well with the phonetic definition of consonants and vowels; building around this result, the morphological component attempts to establish the list of roots and patterns in the corpus, and to infer the rules that govern their combinations.
We assess the extent to which this can be done on the basis of two hypotheses: (i) the distinction between consonants and vowels can be learned by observing their tendency to alternate in speech; (ii) roots and patterns can be identified as sequences of the previously discovered consonants and vowels respectively. The proposed algorithm uses a purely distributional method for partitioning symbols. Then it applies analogical principles to identify a preliminary set of reliable roots and patterns, and gradually enlarge it. This extension process is guided by an evaluation procedure based on the minimum description length principle, in line with the approach to morphological learning embodied in LINGUISTICA (Goldsmith, 2001). The algorithm is implemented as a computer program named ARABICA; it is evaluated with regard to its ability to account for the system of plural formation in a corpus of Arabic nouns. This thesis shows that complex linguistic structures can be discovered without recourse to a rich set of a priori hypotheses about the phenomena under consideration. It illustrates the possible synergy between learning mechanisms operating at distinct levels of linguistic description, and attempts to determine where and why such a cooperation fails. It concludes that the tension between the universality of the consonant-vowel distinction and the specificity of root-and-pattern structure is crucial for understanding the advantages and weaknesses of this approach.
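The purely distributional consonant/vowel partition described above can be illustrated with Sukhotin's classic algorithm, which exploits exactly the alternation tendency stated in hypothesis (i). This is a textbook stand-in for the idea, not necessarily the exact method implemented in ARABICA:

```python
from collections import defaultdict

def sukhotin_vowels(words):
    """Sukhotin's algorithm: all symbols start as consonants; repeatedly
    promote to 'vowel' the symbol whose adjacency count with the remaining
    consonants is highest, as long as that count is positive."""
    adj = defaultdict(int)
    symbols = set()
    for w in words:
        symbols.update(w)
        for a, b in zip(w, w[1:]):
            if a != b:                 # doubled symbols are ignored
                adj[(a, b)] += 1
                adj[(b, a)] += 1
    row = {s: sum(adj[(s, t)] for t in symbols) for s in symbols}
    vowels = set()
    while symbols - vowels:
        cand = max(symbols - vowels, key=row.get)
        if row[cand] <= 0:
            break
        vowels.add(cand)
        for s in symbols - vowels:     # discount links to the new vowel
            row[s] -= 2 * adj[(s, cand)]
    return vowels
```

On a toy wordlist such as ["banana", "papa", "mama", "nana"], the symbol that alternates with everything else, "a", is the one classified as a vowel.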
Abstract:
BACKGROUND: Since the emergence of diffusion tensor imaging, a lot of work has been done to better understand the properties of diffusion MRI tractography. However, the validation of the reconstructed fiber connections remains problematic in many respects. For example, it is difficult to assess whether a connection is the result of the diffusion coherence contrast itself or simply of other uncontrolled parameters such as noise, brain geometry and algorithmic characteristics. METHODOLOGY/PRINCIPAL FINDINGS: In this work, we propose a method to estimate the respective contributions of diffusion coherence versus other effects to a tractography result by comparing data sets with and without diffusion coherence contrast. We use this methodology to assign a confidence level to every gray matter to gray matter connection and add this new information directly in the connectivity matrix. CONCLUSIONS/SIGNIFICANCE: Our results demonstrate that whereas we can have a strong confidence in mid- and long-range connections obtained by a tractography experiment, it is difficult to distinguish short connections traced due to diffusion coherence contrast from those produced by chance due to the other uncontrolled factors of the tractography methodology.
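The comparison of data sets with and without diffusion coherence contrast suggests a simple per-connection confidence score: how much of a connection's streamline count survives in the coherence-free (null) data. The following sketch is only an illustration of that idea; the score and all names are assumptions, not the paper's exact measure:

```python
def connection_confidence(real_counts, null_counts):
    """Naive per-connection confidence: the fraction of streamlines in the
    real data that the coherence-free (null) data cannot account for."""
    confidence = {}
    for edge, n_real in real_counts.items():
        n_null = null_counts.get(edge, 0)
        confidence[edge] = max(0.0, 1.0 - n_null / n_real) if n_real else 0.0
    return confidence
```

Long-range connections, rarely reproduced by chance in the null data, score near 1; short connections that the null data reproduces almost as often score near 0, mirroring the paper's conclusion.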
Abstract:
PURPOSE: Effective cancer treatment generally requires combination therapy. The combination of external beam therapy (XRT) with radiopharmaceutical therapy (RPT) requires accurate three-dimensional dose calculations to avoid toxicity and evaluate efficacy. We have developed and tested a treatment planning method, using the patient-specific three-dimensional dosimetry package 3D-RD, for sequentially combined RPT/XRT therapy designed to limit toxicity to organs at risk. METHODS AND MATERIALS: The biologic effective dose (BED) was used to translate voxelized RPT absorbed dose (D(RPT)) values into a normalized total dose (or equivalent 2-Gy-fraction XRT absorbed dose), NTD(RPT) map. The BED was calculated numerically using an algorithmic approach, which enabled a more accurate calculation of BED and NTD(RPT). A treatment plan from the combined Samarium-153 and external beam was designed that would deliver a tumoricidal dose while delivering no more than 50 Gy of NTD(sum) to the spinal cord of a patient with a paraspinal tumor. RESULTS: The average voxel NTD(RPT) to tumor from RPT was 22.6 Gy (range, 1-85 Gy); the maximum spinal cord voxel NTD(RPT) from RPT was 6.8 Gy. The combined therapy NTD(sum) to tumor was 71.5 Gy (range, 40-135 Gy) for a maximum voxel spinal cord NTD(sum) equal to the maximum tolerated dose of 50 Gy. CONCLUSIONS: A method that enables real-time treatment planning of combined RPT-XRT has been developed. By implementing a more generalized conversion between the dose values from the two modalities and an activity-based treatment of partial volume effects, the reliability of combination therapy treatment planning has been expanded.
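The translation of absorbed dose into BED and NTD rests on the linear-quadratic model. The sketch below uses the textbook uniform-fraction formulas; the paper computes BED numerically for the time-varying RPT dose rate, which this simplification deliberately omits:

```python
def bed(total_dose_gy, dose_per_fraction_gy, alpha_beta_gy):
    """Biologic effective dose under the linear-quadratic model:
    BED = D * (1 + d / (alpha/beta))."""
    return total_dose_gy * (1.0 + dose_per_fraction_gy / alpha_beta_gy)

def ntd(bed_gy, alpha_beta_gy):
    """Normalized total dose: the equivalent dose delivered in 2-Gy
    fractions, NTD = BED / (1 + 2 / (alpha/beta))."""
    return bed_gy / (1.0 + 2.0 / alpha_beta_gy)
```

By construction a course already delivered in 2-Gy fractions maps onto itself, while larger fractions yield an NTD above the physical dose; summing voxelwise NTD maps from the two modalities gives a combined value comparable to an organ tolerance such as the 50 Gy spinal cord limit used here.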
Abstract:
BACKGROUND: The proportion of surgery performed as a day case varies greatly between countries. Low rates suggest a large growth potential in many countries. Measuring the potential development of one-day surgery should be grounded on a comprehensive list of eligible procedures, based on a priori criteria, independent of local practices. We propose an algorithmic method, using only routinely available hospital data, to identify surgical hospitalizations that could have been performed as one-day treatment. METHODS: Moving inpatient surgery to one-day surgery was considered feasible if at least one surgical intervention was eligible for one-day surgery and if none of the following criteria were present: an intervention or condition requiring an inpatient stay, a patient transferred or deceased, or a length of stay greater than four days. The eligibility of a procedure to be treated as a day case was mainly established on three a priori criteria: the surgical access (endoscopic or not), the invasiveness of the procedure and the size of the operated organ. A few overrides of these criteria occurred when procedures were associated with a risk of immediate complications, slow physiological recovery or pain treatment requiring hospital infrastructure. The algorithm was applied to a random sample of one million US inpatient stays and more than 600 thousand Swiss inpatient stays from the year 2002. RESULTS: The validity of our method is supported by the few discrepancies between the a priori criteria-based list of eligible procedures and a state list used for reimbursement purposes, by the low proportion of hospitalizations eligible for one-day care found in the US sample (4.9% versus 19.4% in the Swiss sample), and by the distribution of the elective procedures found eligible in Swiss hospitals, which is well supported by the literature. There were large variations in the proportion of candidates for one-day surgery among elective surgical hospitalizations between Swiss hospitals (3 to 45.3%).
CONCLUSION: The proposed approach allows the monitoring of the proportion of inpatient stays that are candidates for one-day surgery. It could be used for infrastructure planning, resource negotiation and the surveillance of appropriate resource utilization.
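The exclusion rules listed in METHODS amount to a simple filter over routinely available discharge records. A sketch with illustrative field names (the actual algorithm also encodes the full a priori list of eligible procedures, which is elided here):

```python
def day_surgery_candidate(stay):
    """Decide whether an inpatient stay could have been a day case,
    applying the exclusion rules from METHODS (field names illustrative)."""
    if not stay.get("eligible_procedures"):   # no procedure on the a priori eligibility list
        return False
    if stay.get("requires_inpatient_stay"):   # intervention or condition needing admission
        return False
    if stay.get("transferred_or_died"):
        return False
    if stay.get("length_of_stay_days", 0) > 4:
        return False
    return True
```

Running such a filter over a year of discharge abstracts yields the hospital-level proportions of candidates reported in RESULTS.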
Abstract:
INTRODUCTION: Acute painful diabetic neuropathy (APDN) is a distinctive diabetic polyneuropathy consisting of two subtypes: treatment-induced neuropathy (TIN) and diabetic neuropathic cachexia (DNC). The characteristics of APDN are (1) small-fibre involvement, (2) paradoxical occurrence after rapid achievement of good glycaemic control, (3) intense pain sensation and (4) eventual recovery. In the face of current recommendations to achieve glycaemic targets quickly, it appears necessary to recognise and understand this neuropathy. METHODS AND RESULTS: Between 2009 and 2012, we reported four cases of APDN. Four patients (three males and one female) were identified, with a mean age at onset of TIN of 47.7 years (±6.99 years). Mean baseline HbA1c was 14.2% (±1.42) and 7.0% (±3.60) after treatment. Mean estimated time to correct HbA1c was 4.5 months (±3.82 months). Three patients presented with a mean time to symptom resolution of 12.7 months (±1.15 months). One patient had an initially normal electroneuromyogram (ENMG) despite the presence of neuropathic symptoms, and a second, abnormal ENMG showing axonal and myelin neuropathy. One patient had a peroneal nerve biopsy showing loss of large myelinated fibres as well as unmyelinated fibres, and signs of microangiopathy. CONCLUSIONS: Given the current recommendations of promptly achieving glycaemic targets, it appears necessary to recognise and understand this neuropathy. Based on our observations and data from the literature, we propose an algorithmic approach for the differential diagnosis and therapeutic management of APDN patients.
Abstract:
Legumes such as alfalfa (Medicago sativa L.) are vital N2-fixing crops accounting for a global N2 fixation of ~35 Mt N year-1. Although enzymatic and molecular mechanisms of nodule N2 fixation are now well documented, some uncertainty remains as to whether N2 fixation is strictly coupled with photosynthetic carbon fixation. That is, the metabolic origin and redistribution of carbon skeletons used to incorporate nitrogen are still relatively undefined. Here, we conducted isotopic labelling with both 15N2 and 13C-depleted CO2 on alfalfa plants grown under controlled conditions and took advantage of isotope ratio mass spectrometry to investigate the relationship between carbon and nitrogen turn-over in respired CO2, total organic matter and amino acids. Our results indicate that CO2 evolved by respiration had an isotopic composition similar to that in organic matter regardless of the organ considered, suggesting that the turn-over of respiratory pools strictly followed photosynthetic input. However, carbon turn-over was nearly three times greater than N turn-over in total organic matter, suggesting that the newly synthesised organic material was less N-rich than pre-existing organic material (due to progressive nitrogen elemental dilution) or that N remobilisation occurred to sustain growth. This pattern was not consistent with the total commitment into free amino acids, where the input of new C and N appeared to be stoichiometric. The labelling pattern in Asn was complex, with contrasting C and N commitments in different organs, suggesting that neosynthesis and redistribution of new Asn molecules required metabolic remobilisation. We conclude that the production of new organic material during alfalfa growth depends on both C and N remobilisation in different organs. At the plant level, this remobilisation is complicated by allocation and metabolism in the different organs. Additional keywords: carbon exchange, carbon isotopes, nitrogen fixation, nitrogen-15 isotope
Abstract:
The aim of this Master's thesis was to study noise removal from spectral images using soft morphological filters, with emphasis on filtering impulse-like noise. Filter performance was evaluated numerically by the mean absolute error, the mean squared error and the signal-to-noise ratio, and visually by inspecting the filtered images and their individual spectral planes. The filtering methods used were pixelwise filtering along the spectral dimension, filtering over the whole spectrum, a cube-based method and component-wise filtering. The test images contained either salt-and-pepper or bit-error noise. The best filtering results, by both the numerical error criteria and visual inspection, were obtained with the component-wise and pixelwise methods. The methods used in the thesis are presented in algorithmic form. The filter algorithms and the filtering experiments were implemented with Matlab.
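A minimal example of a soft morphological operation of the kind studied in the thesis, in one dimension: the centre sample enters the window multiset with weight k, which makes the filter less destructive than its standard (hard) counterpart while still suppressing isolated impulses. The thesis' spectral-image variants extend this idea per pixel, per band, or per cube; names here are illustrative:

```python
def soft_morph_1d(signal, radius, k, erode=True):
    """1-D soft morphological filter: the centre sample enters the window
    multiset k times; soft erosion returns the k-th smallest value of the
    multiset, soft dilation the k-th largest."""
    n = len(signal)
    out = []
    for i in range(n):
        window = list(signal[max(0, i - radius): i + radius + 1])
        multiset = sorted(window + [signal[i]] * (k - 1), reverse=not erode)
        out.append(multiset[k - 1])
    return out
```

A soft erosion removes positive impulses (as in salt noise); pairing it with the dual dilation, as in a soft opening or closing, restores genuine structure that a single operation would flatten.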
Abstract:
The project is based on the creation of an application for Android mobile devices that uses the microphone to capture the sound generated by the user and can determine whether the user is breathing and at which point of the respiratory cycle they are. A user-centred design (UCD) philosophy was followed, so the first step was to produce a prototype and a sketch. Next, 10 test applications were developed, each one extending the functionality, until a base application was obtained that approximates the initial design generated with the prototype. The most important aspect of the algorithmic designs developed for the application is the ability to process the signal in real time: it was even possible to apply the fast Fourier transform (FFT) in real time without the application's performance being affected. This was made possible by a processing design with a double buffer and a dedicated execution thread independent of the program's main execution thread (the UI thread).
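The double-buffer design described above can be sketched as a producer/consumer pair: the capture thread hands filled buffers to a dedicated worker that runs the FFT, so analysis never blocks the UI thread. A minimal desktop-Python sketch of the same structure (the actual app is an Android application; all names are illustrative):

```python
import cmath
import queue
import threading

def fft(x):
    """Radix-2 Cooley-Tukey FFT; len(x) must be a power of two."""
    n = len(x)
    if n == 1:
        return list(x)
    even, odd = fft(x[0::2]), fft(x[1::2])
    out = [0j] * n
    for k in range(n // 2):
        twiddle = cmath.exp(-2j * cmath.pi * k / n) * odd[k]
        out[k] = even[k] + twiddle
        out[k + n // 2] = even[k] - twiddle
    return out

def analysis_worker(filled, spectra):
    """Dedicated analysis thread: consumes filled audio buffers and posts
    their spectra, leaving the capture/UI thread free."""
    while True:
        buf = filled.get()
        if buf is None:        # sentinel: shut down cleanly
            break
        spectra.put(fft(buf))
```

While the worker transforms one buffer, the capture side fills the other; swapping the two roles each period is what keeps the pipeline real-time.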
Abstract:
The degree of fusion at the anterior aspect of the sacral vertebrae has been scored in 242 male and female skeletons from the Lisbon documented collection, ranging in age from 16 to 59 years old. Statistical tests indicate a sex difference towards earlier fusion in young females compared with young males, as well as a clear association between degree of fusion and age. Similar results have been found in documented skeletal samples from Coimbra and Sassari, and the recommendations stated by these authors regarding age estimation have been positively tested in the Lisbon collection. Although more research from geographically diverse samples is required, a general picture of the pattern of sacral fusion and its associations with age and sex is emerging. We also provide a practical example of the usefulness of the sacrum in age estimation in a forensic setting, a mass grave from the Spanish Civil War. It is concluded that the scoring of the degree of fusion of the sacral vertebrae, especially of S1-2, can be a simple tool for assigning skeletons to broad age groups, and it should be implemented as another resource for age estimation in the study of human skeletal remains.
Abstract:
The genetic impact associated with the Neolithic spread in Europe has been widely debated over the last 20 years. Within this context, ancient DNA studies have provided a more reliable picture by directly analyzing the protagonist populations at different regions in Europe. However, the lack of available data from the original Near Eastern farmers has limited the conclusions achieved so far, preventing the formulation of continental models of Neolithic expansion. Here we address this issue by presenting mitochondrial DNA data of the original Near Eastern Neolithic communities, with the aim of providing the adequate background for the interpretation of Neolithic genetic data from European samples. Sixty-three skeletons from the Pre-Pottery Neolithic B (PPNB) sites of Tell Halula, Tell Ramad and Dja'de El Mughara, dating between 8,700 and 6,600 cal. B.C., were analyzed, and 15 validated mitochondrial DNA profiles were recovered. In order to estimate the demographic contribution of the first farmers to both Central European and Western Mediterranean Neolithic cultures, haplotype and haplogroup diversities in the PPNB sample were compared, using phylogeographic and population genetic analyses, to available ancient DNA data from human remains belonging to the Linearbandkeramik-Alföldi Vonaldiszes Kerámia and Cardial/Epicardial cultures. We also searched for possible signatures of the original Neolithic expansion over the modern Near Eastern and South European genetic pools, and tried to infer possible routes of expansion by comparing the obtained results to a database of 60 modern populations from both regions. Comparisons performed among the three ancient datasets allowed us to identify K and N-derived mitochondrial DNA haplogroups as potential markers of the Neolithic expansion, whose genetic signature would have reached both the Iberian coasts and the Central European plain.
Moreover, the observed genetic affinities between the PPNB samples and the modern populations of Cyprus and Crete seem to suggest that the Neolithic was first introduced into Europe through pioneer seafaring colonization.
Concerted changes in N and C primary metabolism in alfalfa (Medicago sativa) under water restriction
Abstract:
Although the mechanisms of nodule N2 fixation in legumes are now well documented, some uncertainty remains about the metabolic consequences of water deficit. In most cases, little consideration is given to other organs and, therefore, the coordinated changes in metabolism in leaves, roots, and nodules are not well known. Here, the effect of water restriction on exclusively N2-fixing alfalfa (Medicago sativa L.) plants was investigated, and proteomic, metabolomic, and physiological analyses were carried out. It is shown that the inhibition of nitrogenase activity caused by water restriction was accompanied by concerted alterations in metabolic pathways in nodules, leaves, and roots. The data suggest that nodule metabolism and metabolic exchange between plant organs nearly reached homeostasis in asparagine synthesis and partitioning, as well as in the N demand from leaves. Typically, there was (i) a stimulation of the anaplerotic pathway to sustain the provision of C skeletons for amino acid (e.g. glutamate and proline) synthesis; (ii) re-allocation of glycolytic products to alanine and serine/glycine; and (iii) subtle changes in redox metabolites suggesting the involvement of mild oxidative stress. Furthermore, water restriction caused little change in both photosynthetic efficiency and the respiratory cost of N2 fixation by nodules. In other words, the results suggest that under water stress, nodule metabolism follows a compromise between physiological imperatives (N demand, oxidative stress) and the lower input available to sustain catabolism.
Abstract:
The sambaquis are archaeological sites with remains of prehistoric Brazilian populations. They look like small hills containing different kinds of shells, animal and fish bones, small artifacts and even human skeletons. Since the sambaqui sites in the Rio de Janeiro state are younger than 6,000 years, the applicability of CO2 absorption in Carbo-Sorb® and 14C determination by counting on a low-background liquid scintillation counter was tested. The International Atomic Energy Agency standard reference material IAEA-C2 was used in order to standardize the method. Nine sambaqui samples from five different archaeological sites found in the Rio de Janeiro state were analyzed, and 14C ages between 2100 and 3600 years BP were observed. The same samples were sent to the 14C Laboratory of the Centro de Energia Nuclear na Agricultura (CENA/USP), where similar results were obtained.
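The conversion from measured 14C activity to a conventional age uses the Libby mean-life. A one-line sketch of the standard formula (laboratory corrections such as isotopic fractionation and background subtraction, which a dating lab applies, are omitted here):

```python
from math import log

LIBBY_MEAN_LIFE = 8033.0   # years; conventional ages use the Libby half-life (5568 a)

def conventional_age_bp(fraction_modern):
    """Conventional radiocarbon age in years BP from the sample's 14C
    activity expressed as a fraction of the modern standard."""
    return -LIBBY_MEAN_LIFE * log(fraction_modern)
```

A sample retaining half its modern activity dates to one Libby half-life, about 5,568 years BP; the 2100-3600 years BP ages reported here correspond to roughly 64-77% of modern activity.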