974 results for Content processing
Abstract:
We present a case study of the redesign of the organizational presentation and content of the Virtual Library website at the Universitat Oberta de Catalunya (Open University of Catalonia, UOC), based on a user-centered design strategy. The aim of the redesign was to provide users with more intuitive, usable and understandable content (textual content, resources and services) by implementing criteria of customization, transparency and proximity. The study also presents a selection of best practices for applying these criteria to the design of other library websites.
Abstract:
The effect of dissolved nutrients on growth, nutrient content and uptake rates of Chaetomorpha linum in a Mediterranean coastal lagoon (Tancada, Ebro delta, NE Spain) was studied in laboratory experiments. Water was enriched with phosphorus and with distinct forms of nitrogen, such as nitrate or ammonium. Enrichment with N, P or both nutrients resulted in a significant increase in the tissue content of these nutrients. N-enrichment was followed by an increase in chlorophyll content after 4 days of treatment, although the difference was only significant when nitrate was added without P. P-enrichment had no significant effect on chlorophyll content. In all the treatments an increase in biomass was observed after 10 days; this increase was higher in the N+P treatments. In all the treatments the uptake rate was significantly higher when nutrients were added than in control jars. The uptake rates of N (as ammonium) and P were significantly higher when they were added alone, whereas that of N as nitrate was higher in the N+P treatment. In the P-enriched cultures, the final P-content of macroalgal tissues was ten-fold the initial tissue concentration, indicating luxury P-uptake. Moreover, at the end of the incubation the N:P ratio increased to 80, showing that P rather than N was the limiting factor for C. linum in the Tancada lagoon. The relatively high availability of N is related to N inputs from the rice fields that surround the lagoon and to P binding in sediments.
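The N:P ratio quoted above follows from the tissue nutrient contents by simple stoichiometry. A minimal sketch; the tissue values used here are illustrative placeholders, not the study's measurements:

```python
# Molar N:P ratio from tissue N and P contents (mg per g dry weight).
N_ATOMIC, P_ATOMIC = 14.007, 30.974     # g/mol

def np_molar_ratio(n_mg_per_g, p_mg_per_g):
    """Molar N:P ratio from tissue N and P contents."""
    return (n_mg_per_g / N_ATOMIC) / (p_mg_per_g / P_ATOMIC)

# Ratios well above ~30 are often read as P limitation in macroalgae.
# Hypothetical tissue contents chosen to reproduce the quoted ratio of ~80:
print(f"N:P = {np_molar_ratio(35.0, 0.97):.0f}")
```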
Abstract:
The starting point of this profitability study was that Yhtyneet Sahat Oy's Kaukas sawmill and the Luumäki further-processing plant wanted to determine the profitability of a pellet plant in the current market situation. This work is a techno-economic analysis, i.e., a feasibility study. The pelletizing process is technically simple and does not require high-technology equipment. The industry is quite new worldwide. In Finland the pellet market is still small and undeveloped, but it has grown in recent years, and the majority of domestic production is exported. The initial production values obtained in the investment calculation process, together with the definition of the cost structure, form the basis for the actual profitability calculations. The most common financial indicators associated with investments were derived from these calculations, and the most sensitive variables were examined and discussed with the aid of a sensitivity analysis.
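The abstract refers to standard investment indicators and a sensitivity analysis. The sketch below shows how NPV, IRR and a one-at-a-time sensitivity check are typically computed; the investment, pellet price, volume and unit cost are purely hypothetical figures, not values from the study:

```python
# Minimal sketch of the profitability indicators mentioned above: NPV, IRR
# and a one-at-a-time sensitivity analysis. All figures are hypothetical.

def npv(rate, cashflows):
    """Net present value; cashflows[0] occurs at t=0."""
    return sum(cf / (1 + rate) ** t for t, cf in enumerate(cashflows))

def irr(cashflows, lo=-0.99, hi=1.0, tol=1e-6):
    """Internal rate of return by bisection (assumes one sign change)."""
    while hi - lo > tol:
        mid = (lo + hi) / 2
        if npv(mid, cashflows) > 0:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2

def pellet_cashflows(investment=2.0e6, price=120.0, volume=20_000,
                     unit_cost=90.0, years=10):
    """Hypothetical pellet-plant cashflows: outlay, then annual margin."""
    annual = (price - unit_cost) * volume
    return [-investment] + [annual] * years

base = pellet_cashflows()
print(f"NPV @ 8%: {npv(0.08, base):,.0f}  IRR: {irr(base):.1%}")

# One-at-a-time sensitivity: vary each driver by 10% and watch the NPV.
for name, kwargs in [("price -10%", dict(price=108.0)),
                     ("volume -10%", dict(volume=18_000)),
                     ("unit cost +10%", dict(unit_cost=99.0))]:
    cf = pellet_cashflows(**kwargs)
    print(f"{name:>15}: NPV @ 8% = {npv(0.08, cf):,.0f}")
```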
Abstract:
INTRODUCTION: Deficits in decision making (DM) are commonly associated with prefrontal cortical damage, but may also occur with multiple sclerosis (MS). There are no data concerning the impact of MS on tasks evaluating DM under explicit risk, where different emotional and cognitive components can be distinguished. METHODS: We assessed 72 relapsing-remitting MS (RRMS) patients with mild to moderate disease and 38 healthy controls on two DM tasks involving risk with explicit rules: (1) the Wheel of Fortune (WOF), which probes how the anticipated affective impact of decision outcomes shapes future choices; and (2) the Cambridge Gamble Task (CGT), which measures risk taking. Participants also underwent a neuropsychological and emotional assessment, and skin conductance responses (SCRs) were recorded. RESULTS: In the WOF, RRMS patients showed deficits in integrating positive counterfactual information (p<0.005) and greater risk aversion (p<0.001). They reported less negative affect than controls (disappointment: p = 0.007; regret: p = 0.01), although their implicit emotional reactions as measured by post-choice SCRs did not differ. In the CGT, RRMS patients differed from controls in quality of DM (p = 0.01) and deliberation time (p = 0.0002), the latter difference being correlated with attention scores. These changes did not result in an overall decrease in performance (total gains). CONCLUSIONS: The quality of DM under risk was modified by MS in both tasks. The reduced expression of disappointment coexisted with increased risk aversion in the WOF and with alexithymia features. These concomitant emotional alterations may have implications for better understanding the components of explicit DM and for the clinical support of MS patients.
Abstract:
Bakery products such as biscuits, cookies, and pastries are a good medium for iron fortification, since they are consumed by a large proportion of the population at risk of developing iron deficiency anemia, mainly children. The drawback, however, is that iron fortification can promote oxidation. To assess the extent of this, palm oil supplemented with heme iron and different antioxidants was used as a model for evaluating the oxidative stability of bakery products such as baked goods containing chocolate. The palm oil samples were heated at 220°C for 10 min to mimic the conditions found during a typical baking process. The selected antioxidants were a free radical scavenger (tocopherol extract (TE), 0 and 500 mg/kg), an oxygen scavenger (ascorbyl palmitate (AP), 0 and 500 mg/kg), and a chelating agent (citric acid (CA), 0 and 300 mg/kg). These antioxidants were combined in a factorial design and compared to a control sample without added antioxidants. Primary (peroxide value and lipid hydroperoxide content) and secondary (p-anisidine value, p-AnV) oxidation parameters were monitored over 200 days of storage at room temperature. The combination of AP and CA was the most effective treatment in delaying the onset of oxidation, whereas TE was not effective in preventing oxidation. The p-AnV did not increase during the storage period, indicating that this marker was not suitable for monitoring oxidation in this model.
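The antioxidant combinations form a 2x2x2 factorial design. A minimal sketch enumerating the eight treatments; the dose levels come from the abstract, while the treatment labels are our own:

```python
# Enumerate every combination of the three antioxidant levels (2x2x2 design).
from itertools import product

TE = (0, 500)   # tocopherol extract, mg/kg (free radical scavenger)
AP = (0, 500)   # ascorbyl palmitate, mg/kg (oxygen scavenger)
CA = (0, 300)   # citric acid, mg/kg (chelating agent)

for te, ap, ca in product(TE, AP, CA):
    label = "control" if (te, ap, ca) == (0, 0, 0) else \
            "+".join(n for n, v in (("TE", te), ("AP", ap), ("CA", ca)) if v)
    print(f"TE={te:3} AP={ap:3} CA={ca:3} mg/kg -> {label}")
```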
Abstract:
Increasing evidence suggests that working memory and perceptual processes are dynamically interrelated due to modulating activity in overlapping brain networks. However, the direct influence of working memory on the spatio-temporal brain dynamics of behaviorally relevant intervening information remains unclear. To investigate this issue, subjects performed a visual proximity grid perception task under three different visual-spatial working memory (VSWM) load conditions. VSWM load was manipulated by asking subjects to memorize the spatial locations of 6 or 3 disks. The grid was always presented between the encoding and recognition of the disk pattern. As a baseline condition, grid stimuli were presented without a VSWM context. VSWM load altered both perceptual performance and neural networks active during intervening grid encoding. Participants performed faster and more accurately on a challenging perceptual task under high VSWM load as compared to the low load and the baseline condition. Visual evoked potential (VEP) analyses identified changes in the configuration of the underlying sources in one particular period occurring 160-190 ms post-stimulus onset. Source analyses further showed an occipito-parietal down-regulation concurrent to the increased involvement of temporal and frontal resources in the high VSWM context. Together, these data suggest that cognitive control mechanisms supporting working memory may selectively enhance concurrent visual processing related to an independent goal. More broadly, our findings are in line with theoretical models implicating the engagement of frontal regions in synchronizing and optimizing mnemonic and perceptual resources towards multiple goals.
Abstract:
An efficient high-resolution three-dimensional (3-D) seismic reflection system for small-scale targets in lacustrine settings was developed. In Lake Geneva, near the city of Lausanne, Switzerland, past high-resolution two-dimensional (2-D) investigations revealed a complex fault zone (the Paudèze thrust zone), which was subsequently chosen for testing our system. Observed structures include a thin (<40 m) layer of subhorizontal Quaternary sediments that unconformably overlie southeast-dipping Tertiary Molasse beds and the Paudèze thrust zone, which separates Plateau and Subalpine Molasse units. Two complete 3-D surveys have been conducted over this test site, covering an area of about 1 km2. In 1999, a pilot survey (Survey I), comprising 80 profiles, was carried out in 8 days with a single-streamer configuration. In 2001, a second survey (Survey II) used a newly developed three-streamer system with optimized design parameters, which provided an exceptionally high-quality data set of 180 common midpoint (CMP) lines in 9 days. The main improvements include a navigation and shot-triggering system with in-house navigation software that automatically fires the gun, combined with real-time control of navigation quality using differential GPS (dGPS) onboard and a reference base near the lake shore. Shots were triggered at 5-m intervals with a maximum non-cumulative error of 25 cm. Whereas the single 48-channel streamer system of Survey I required extrapolation of receiver positions from the boat position, for Survey II they could be accurately calculated (error <20 cm) with the aid of three additional dGPS antennas mounted on rafts attached to the end of each of the 24-channel streamers. Towed at a distance of 75 m behind the vessel, they allow the determination of feathering due to cross-line currents or small course variations. Furthermore, two retractable booms hold the three streamers at a distance of 7.5 m from each other, which is the same distance as the sail line interval for Survey I. With a receiver spacing of 2.5 m, the bin dimension of the 3-D data of Survey II is 1.25 m in the in-line direction and 3.75 m in the cross-line direction. The greater cross-line versus in-line spacing is justified by the known structural trend of the fault zone perpendicular to the in-line direction. The data from Survey I showed some reflection discontinuity as a result of insufficiently accurate navigation and positioning and the resulting binning errors. Aliasing observed in the 3-D migration was due to insufficient lateral sampling combined with the relatively high-frequency (<2000 Hz) content of the water gun source (operated at 140 bars and 0.3 m depth). These results motivated the use of a double-chamber bubble-canceling air gun for Survey II. A 15/15 Mini G.I air gun, operated at 80 bars and 1 m depth, proved to be better adapted for imaging the complexly faulted target area, which has reflectors dipping up to 30°. Although its frequencies do not exceed 650 Hz, this air gun combines penetration of non-aliased signal to depths of 300 m below the water bottom (versus 145 m for the water gun) with a maximum vertical resolution of 1.1 m. While Survey I was shot in patches of alternating directions, the optimized surveying time of the new three-streamer system allowed acquisition in parallel geometry, which is preferable when using an asymmetric configuration (single source and receiver array); otherwise, the resulting stacks differ between the opposite shooting directions. However, the shorter streamer configuration of Survey II reduced the nominal fold from 12 to 6.
A conventional 3-D processing flow was adapted to the high sampling rates and complemented by two computer programs that convert the unconventional navigation data to industry standards. Processing included trace editing, geometry assignment, bin harmonization (to compensate for uneven fold due to boat/streamer drift), spherical divergence correction, bandpass filtering, velocity analysis, 3-D DMO correction, stacking and 3-D time migration. A detailed semblance velocity analysis was performed on the 12-fold data set for every second in-line and every 50th CMP, i.e., on a total of 600 spectra. According to this velocity analysis, interval velocities range from 1450-1650 m/s for the unconsolidated sediments and from 1650-3000 m/s for the consolidated sediments. The delineation of several horizons and fault surfaces reveals the potential for small-scale geologic and tectonic interpretation in three dimensions. Five major seismic facies and their detailed 3-D geometries can be distinguished in vertical and horizontal sections: lacustrine sediments (Holocene), glaciolacustrine sediments (Pleistocene), Plateau Molasse, Subalpine Molasse and its thrust fault zone. Dips of beds within the Plateau and Subalpine Molasse are ~8° and ~20°, respectively. Within the fault zone, many highly deformed structures with dips around 30° are visible. Preliminary tests with 3-D preserved-amplitude prestack depth migration demonstrate that the excellent data quality of Survey II allows the application of such sophisticated techniques even to high-resolution seismic surveys. In general, the adaptation of the 3-D marine seismic reflection method, which to date has been used almost exclusively by the oil exploration industry, to a smaller geographical as well as financial scale has helped pave the way for applying this technique to environmental and engineering purposes.
Seismic reflection is a subsurface investigation method with very high resolving power. It consists of sending vibrations into the ground and recording the waves that are reflected off geological discontinuities at different depths and then travel back to the surface. The recorded signals provide information not only on the nature and geometry of the layers present, but also allow a geological interpretation of the subsurface. For example, in the case of sedimentary rocks, seismic reflection profiles make it possible to determine their mode of deposition, their possible deformations or fractures, and hence their tectonic history. Seismic reflection is the principal method of petroleum exploration. For a long time, seismic reflection data were acquired along individual profiles that provide a two-dimensional image of the subsurface. The images obtained in this way are only partially accurate, since they do not take into account the three-dimensional nature of geological structures. Over the past few decades, three-dimensional (3-D) seismics has brought new impetus to the study of the subsurface. While it is now fully mastered for imaging large geological structures both onshore and offshore, its adaptation to the lacustrine or fluvial scale has so far been the subject of only a few studies. This thesis consisted of developing a seismic acquisition system similar to that used for offshore petroleum prospecting, but adapted to lakes: it is smaller, lighter to deploy, and above all delivers final images of much higher resolution. Whereas the petroleum industry is often limited to a resolution of the order of ten metres, the instrument developed in this work can resolve details of the order of one metre. The new system relies on the ability to record seismic reflections simultaneously on three seismic cables (streamers) of 24 channels each. To obtain 3-D data, it is essential to position the instruments on the water (the source and the receivers of the seismic waves) with great precision. Software was specially developed to control navigation and to trigger the shots of the seismic source using differential GPS (dGPS) receivers on the boat and at the end of each streamer, which makes it possible to position the instruments with a precision of about 20 cm. To test our system, we chose an area of Lake Geneva, near the city of Lausanne, crossed by the Paudèze fault, which separates the Plateau Molasse and Subalpine Molasse units. Two 3-D seismic surveys were carried out there over an area of about 1 km2. The seismic records were then processed to transform them into interpretable images, using a 3-D processing sequence specially adapted to our data, in particular as regards positioning. After processing, the data reveal several main seismic facies corresponding to the lacustrine sediments (Holocene), the glaciolacustrine sediments (Pleistocene), the Plateau Molasse, the Subalpine Molasse of the fault zone, and the Subalpine Molasse south of this zone. The detailed 3-D geometry of the faults is visible on vertical and horizontal seismic sections. The excellent data quality and the interpretation of several horizons and fault surfaces demonstrate the potential of this technique for small-scale three-dimensional investigations, opening the way to its application in environmental and civil engineering domains.
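The acquisition figures quoted above (bin size, nominal fold) follow from textbook marine-seismic rules of thumb, sketched below. The formulas are standard approximations, not the authors' code, and the ~340 Hz dominant frequency in the last step is an assumption chosen to match the quoted 1.1 m vertical resolution:

```python
# Worked geometry figures for the 3-D acquisition described above.

def bin_size(receiver_spacing_m, line_spacing_m):
    """CMP bin dimensions: half the receiver spacing in-line,
    half the effective line spacing cross-line."""
    return receiver_spacing_m / 2, line_spacing_m / 2

def nominal_fold(n_channels, receiver_spacing_m, shot_interval_m):
    """Single-cable nominal fold for an end-on geometry."""
    return n_channels * receiver_spacing_m / (2 * shot_interval_m)

# Survey II: 24-channel streamers, 2.5 m receiver spacing, shots every 5 m,
# streamers (and effective lines) 7.5 m apart.
print(bin_size(2.5, 7.5))          # -> (1.25, 3.75), as quoted
print(nominal_fold(24, 2.5, 5.0))  # -> 6.0 (12.0 for the 48-channel cable)

# Quarter-wavelength vertical resolution; ~1500 m/s near-surface velocity
# and an assumed ~340 Hz dominant frequency give roughly the quoted 1.1 m.
v, f_dom = 1500.0, 340.0
print(v / (4 * f_dom))             # ~1.1 m
```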
Abstract:
The fast development of new technologies such as digital medical imaging has brought about the expansion of brain functional studies. One of the methodological key issues in brain functional studies is to compare neuronal activation between individuals. In this context, the great variability of brain size and shape is a major problem. Current methods allow inter-individual comparisons by normalising subjects' brains in relation to a standard brain. Widely used standard brains are the proportional grid of Talairach and Tournoux and the Montreal Neurological Institute (MNI) standard brain (SPM99). However, these registration methods are not precise enough to superpose the more variable portions of the cerebral cortex (e.g., the neocortex and the perisylvian zone) or brain regions that are highly asymmetric between the two cerebral hemispheres (e.g., the planum temporale). The aim of this thesis is to evaluate a new image-processing technique based on non-linear, model-based registration. In contrast to intensity-based registration, model-based registration uses spatial rather than intensity information to fit one image to another. We extract identifiable anatomical features (point landmarks) in both the deforming and the target images, and from their correspondence we determine the appropriate deformation in 3D. As landmarks, we use six control points situated on Heschl's gyrus, on the motor hand area, and on the sylvian fissure, bilaterally. The evaluation of this model-based approach is performed on MRI and fMRI images of nine of the eighteen subjects who participated in a previous study by Maeder et al. Results on anatomical (MRI) images show the movement of the deforming brain's control points to the locations of the reference brain's control points: the distance between the deforming brain and the reference brain is smaller after registration than before. Registration of the functional (fMRI) images does not show significant variation; the small number of landmarks (six control points) is clearly not sufficient to produce significant modification of the fMRI statistical maps. This thesis opens the way to a new computational technique for cortex registration, whose main direction will be improvement of the registration algorithm by using not a single point as a landmark but many points representing one particular sulcus.
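The thesis uses a non-rigid, model-based registration driven by six anatomical landmarks. Purely as an illustration of the landmark-correspondence idea, the sketch below fits a 3-D affine transform (simpler than the thesis's non-rigid model) to made-up control points by least squares:

```python
# Landmark-based registration sketch: least-squares 3-D affine transform
# from matched control points. Coordinates are invented, not study data.
import numpy as np

def fit_affine(src, dst):
    """Least-squares affine A (3x4) mapping src -> dst, both (N, 3)."""
    n = src.shape[0]
    X = np.hstack([src, np.ones((n, 1))])      # homogeneous coordinates
    A, *_ = np.linalg.lstsq(X, dst, rcond=None)
    return A.T                                  # (3, 4)

def apply_affine(A, pts):
    X = np.hstack([pts, np.ones((pts.shape[0], 1))])
    return X @ A.T

# Six hypothetical landmarks (e.g. Heschl's gyrus, hand motor area,
# sylvian fissure, bilaterally) in the deforming and reference brains.
src = np.array([[40., 20., 10.], [-40., 22., 11.], [35., -30., 25.],
                [-36., -31., 24.], [50., 0., -5.], [-49., 1., -4.]])
dst = src * 1.05 + np.array([2.0, -1.0, 0.5])   # simulated target positions

A = fit_affine(src, dst)
residual = np.abs(apply_affine(A, src) - dst).max()
print(f"max landmark misfit after registration: {residual:.2e} mm")
```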
Abstract:
Irrigated agriculture in the São Francisco River Valley, Northeast Brazil, shows an increasing production of grapes for winemaking. Among the wines produced there, the one obtained from Vitis vinifera L., cultivar Syrah, stands out due to its adaptation to the climatic conditions of the region. However, little is known about the carbohydrate metabolism of vines cultivated in this region. The objective of this work was to evaluate sugar and starch contents and invertase activity in vine leaves during two consecutive growing seasons. The experiment was carried out at Embrapa Semi-Árido and at the Santa Maria Winery, located in Petrolina and Lagoa Grande, Pernambuco, Brazil, respectively. Leaves were collected weekly from January to December of 2003 and assessed for reducing sugars, total soluble sugars and starch contents, as well as for acid invertase (AI) and neutral invertase (NI) activities. The results showed that reducing sugars, total soluble sugars and starch contents increased during fruit maturation and were influenced by variations in temperature, radiation and insolation. The second growing season showed higher reducing sugar and total soluble sugar contents and lower starch content in the leaves than the first. AI activity was higher than NI activity, and both varied according to weather conditions. During berry ripening, leaves showed higher sugar content and invertase activity, suggesting greater sugar metabolism and transport during this phase.
Abstract:
BACKGROUND: Selection for increased intramuscular fat content would definitively improve the palatability and juiciness of pig meat as well as the sensorial and organoleptic properties of cured products. However, evidence obtained in humans and model organisms suggests that high levels of intramuscular fat might alter muscle lipid and carbohydrate metabolism. We have analysed this issue by determining the transcriptomic profiles of Duroc pigs with divergent phenotypes for 13 fatness traits. The strong aptitude of Duroc pigs to deposit high levels of intramuscular fat makes them a valuable model for analysing the mechanisms that regulate muscle lipid metabolism, an issue with evident implications for elucidating the genetic basis of human metabolic diseases such as obesity and insulin resistance. RESULTS: Muscle gene expression profiles of 68 Duroc pigs belonging to two groups (HIGH and LOW) with extreme phenotypes for lipid deposition and composition traits were analysed. Microarray and quantitative PCR analyses showed that genes related to fatty acid uptake, lipogenesis and triacylglycerol synthesis were upregulated in the muscle tissue of HIGH pigs, which are fatter and have higher amounts of intramuscular fat than their LOW counterparts. Paradoxically, lipolytic genes also showed increased mRNA levels in the HIGH group, suggesting the existence of a cycle in which triacylglycerols are continuously synthesized and degraded. Several genes related to the insulin-signalling pathway, which is usually impaired in obese humans, were also upregulated. Finally, genes related to antigen processing and presentation were downregulated in the HIGH group. CONCLUSION: Our data suggest that selection for increased intramuscular fat content in pigs would lead to a shift in, but not a disruption of, the metabolic homeostasis of muscle cells. Future studies on the post-translational changes affecting protein activity or expression, as well as information about protein location within the cell, will be needed to elucidate the effects of lipid deposition on muscle metabolism in pigs.
Abstract:
Wheat yield and grain nitrogen concentration (GNC; mg N/g grain) are frequently negatively correlated. In most growing conditions this is mainly due to a feedback process between GNC and the number of grains/m2. In Mediterranean conditions, breeders may have produced cultivars with conservative grain set. The present study aimed to clarify the main physiological determinants of grain nitrogen accumulation (GNA) in Mediterranean wheat and to analyse how breeding has affected them. Five field experiments were carried out in north-eastern Spain in the 2005/06 and 2006/07 growing seasons with three cultivars released at different times and an advanced line. Depending on the experiment, source-sink ratios during grain filling were altered by reducing grain number/m2, either through pre-anthesis shading (unshaded control or 0.75 shading only between jointing and anthesis) or by directly trimming the spikes after anthesis and before the onset of the effective grain-filling period (untrimmed control or spikes halved 7-10 days after anthesis). Grain nitrogen content (GN content; mg N/grain) decreased with the year of release of the genotypes. As the number of grains/m2 was also increased by breeding, there was a clear dilution effect on the amount of nitrogen allocated to each grain. However, the higher GN content of old genotypes did not compensate for the loss in grain nitrogen yield (GNY) due to their lower number of grains/m2. GN content of all genotypes increased (by 0.13 to 0.40 mg N/grain, depending on experiment and genotype) in response to post-anthesis spike trimming or pre-anthesis shading. The degree of source limitation for GNA increased with the year of release of the genotypes (and thus with increases in grain number/m2), from 0.22 (mean of the four manipulative experiments) in the oldest cultivar to 0.51 in the most modern line. Final GN content depended strongly on the source-sink ratio established at anthesis between the number of grains set and the amount of nitrogen absorbed by that stage. Thus, Mediterranean wheat breeding that improved yield through increases in grain number/m2 reduced GN content by diluting a rather limited nitrogen source into more grains. This dilution effect was further confirmed by the reversal produced by reductions in grain number/m2 due to either pre-anthesis shading or post-anthesis spike trimming.
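The dilution effect described above is simple arithmetic: with a roughly fixed pool of grain nitrogen per unit area, nitrogen per grain falls as grain number rises. A sketch with illustrative numbers (not the experiment's data):

```python
# Dilution of a fixed nitrogen source over increasing grain numbers.
n_source_g_per_m2 = 18.0   # grain nitrogen yield, g N/m2 (assumed fixed)

for grains_per_m2 in (12_000, 15_000, 18_000):   # old -> modern genotypes
    gn_content_mg = 1000 * n_source_g_per_m2 / grains_per_m2
    print(f"{grains_per_m2:>6} grains/m2 -> {gn_content_mg:.2f} mg N/grain")
```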
Abstract:
The relationship between yield, carbon isotope discrimination and ash content in mature kernels was examined for a set of 13 barley (Hordeum vulgare) cultivars. Plants were grown under rainfed and well-irrigated conditions in a Mediterranean area. Water deficit caused a decrease in both grain yield and carbon isotope discrimination (Δ). Across genotypes within each treatment, yield was positively related to Δ and negatively related to ash content. However, whereas the correlation between yield and Δ was higher under well-irrigated (r=0.70, P<0.01) than under rainfed (r=0.42) conditions, the opposite occurred for yield and ash content, i.e., r=-0.38 under well-irrigated and r=-0.73 (P<0.01) under rainfed conditions. Carbon isotope discrimination and ash content together accounted for almost 60% of the variation in yield under both conditions. There was no significant relationship (r=-0.15) between carbon isotope discrimination and ash content in well-irrigated plants, whereas in rainfed plants this relationship, although significant (r=-0.54, P<0.05), was only weakly negative. The concentration of several mineral elements was measured in the same kernels. The mineral that correlated best with ash content, yield and Δ was K. For yield and Δ, although the relationships with K followed the same pattern as those with ash content, the correlation coefficients were lower. Thus, mineral accumulation in mature kernels seems to be independent of transpiration efficiency; indeed, grain filling takes place through the phloem pathway. The ash content of kernels is proposed as a complementary criterion, in addition to kernel Δ, for assessing genotype differences in barley grain yield under rainfed conditions.
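The claim that Δ and ash content together account for almost 60% of the yield variation corresponds to the R² of a two-predictor least-squares regression. A sketch of that computation on simulated stand-in data; the coefficients and noise level are arbitrary, not estimates from the study:

```python
# R^2 of yield regressed on two predictors (Delta and ash content).
import numpy as np

rng = np.random.default_rng(0)
n = 13                                   # cultivars, as in the study
delta = rng.normal(18, 1.0, n)           # carbon isotope discrimination
ash = rng.normal(2.5, 0.3, n)            # kernel ash content (%)
yield_ = 0.8 * delta - 1.2 * ash + rng.normal(0, 0.5, n)  # simulated yield

X = np.column_stack([np.ones(n), delta, ash])             # design matrix
beta, *_ = np.linalg.lstsq(X, yield_, rcond=None)
pred = X @ beta
r2 = 1 - ((yield_ - pred) ** 2).sum() / ((yield_ - yield_.mean()) ** 2).sum()
print(f"R^2 of yield on (delta, ash): {r2:.2f}")
```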