938 results for batch processing
Abstract:
INTRODUCTION: Deficits in decision making (DM) are commonly associated with prefrontal cortical damage, but may also occur in multiple sclerosis (MS). There are no data concerning the impact of MS on tasks evaluating DM under explicit risk, where different emotional and cognitive components can be distinguished. METHODS: We assessed 72 relapsing-remitting MS (RRMS) patients with mild to moderate disease and 38 healthy controls on two DM tasks involving risk with explicit rules: (1) the Wheel of Fortune (WOF), which probes the anticipated affect of decision outcomes on future choices; and (2) the Cambridge Gamble Task (CGT), which measures risk taking. Participants also underwent a neuropsychological and emotional assessment, and skin conductance responses (SCRs) were recorded. RESULTS: In the WOF, RRMS patients showed deficits in integrating positive counterfactual information (p<0.005) and greater risk aversion (p<0.001). They reported less negative affect than controls (disappointment: p = 0.007; regret: p = 0.01), although their implicit emotional reactions as measured by post-choice SCRs did not differ. In the CGT, RRMS patients differed from controls in quality of DM (p = 0.01) and deliberation time (p = 0.0002), the latter difference being correlated with attention scores. These changes did not result in an overall decrease in performance (total gains). CONCLUSIONS: The quality of DM under risk was modified by MS in both tasks. The reduced expression of disappointment coexisted with increased risk aversion in the WOF and with alexithymia features. These concomitant emotional alterations may have implications for better understanding the components of explicit DM and for the clinical support of MS patients.
Abstract:
Increasing evidence suggests that working memory and perceptual processes are dynamically interrelated due to modulating activity in overlapping brain networks. However, the direct influence of working memory on the spatio-temporal brain dynamics of behaviorally relevant intervening information remains unclear. To investigate this issue, subjects performed a visual proximity grid perception task under three different visual-spatial working memory (VSWM) load conditions. VSWM load was manipulated by asking subjects to memorize the spatial locations of 6 or 3 disks. The grid was always presented between the encoding and recognition of the disk pattern. As a baseline condition, grid stimuli were presented without a VSWM context. VSWM load altered both perceptual performance and neural networks active during intervening grid encoding. Participants performed faster and more accurately on a challenging perceptual task under high VSWM load as compared to the low load and the baseline condition. Visual evoked potential (VEP) analyses identified changes in the configuration of the underlying sources in one particular period occurring 160-190 ms post-stimulus onset. Source analyses further showed an occipito-parietal down-regulation concurrent to the increased involvement of temporal and frontal resources in the high VSWM context. Together, these data suggest that cognitive control mechanisms supporting working memory may selectively enhance concurrent visual processing related to an independent goal. More broadly, our findings are in line with theoretical models implicating the engagement of frontal regions in synchronizing and optimizing mnemonic and perceptual resources towards multiple goals.
Abstract:
Abstract: The fast development of new technologies such as digital medical imaging has led to an expansion of brain functional studies. A key methodological issue in such studies is comparing neuronal activation between individuals. In this context, the great variability of brain size and shape is a major problem. Current methods allow inter-individual comparisons by normalising subjects' brains to a standard brain; the most widely used standards are the proportional grid of Talairach and Tournoux and the Montreal Neurological Institute (MNI) standard brain (SPM99). However, these methods are not precise enough to superpose the more variable portions of the cerebral cortex (e.g., the neocortex and the perisylvian zone) or brain regions that are highly asymmetric between the two cerebral hemispheres (e.g., the planum temporale). The aim of this thesis is to evaluate a new image-processing technique based on non-linear model-based registration. In contrast to intensity-based registration, model-based registration uses spatial rather than intensity information to fit one image to another. We extract identifiable anatomical features (point landmarks) in both the deforming and the target images, and from their correspondence we determine the appropriate deformation in 3D. As landmarks, we use six control points situated bilaterally: one on Heschl's gyrus, one on the motor hand area, and one on the sylvian fissure.
The evaluation of this model-based approach is performed on MRI and fMRI images of nine of the eighteen subjects who participated in the study by Maeder et al. Results on the anatomical (MRI) images show the movement of the deforming brain's control points to the locations of the reference brain's control points; the distance between the deforming brain and the reference brain is smaller after registration than before. Registration of the functional (fMRI) images does not show a significant variation: the small number of landmarks (six) is clearly not sufficient to produce significant modifications of the fMRI statistical maps. This thesis opens the way to a new computational technique for cortex registration, whose main direction will be to improve the registration algorithm by using not a single point as a landmark but many points representing one particular sulcus.
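The landmark-correspondence step described above can be illustrated with a minimal sketch. This is not the thesis's non-rigid algorithm; as a simplification it fits a single global affine transform to six corresponding 3D control points by least squares, and all coordinates below are hypothetical:

```python
import numpy as np

def fit_affine_3d(src, dst):
    """Least-squares affine transform mapping src landmarks onto dst.
    src, dst: (N, 3) arrays of corresponding 3D control points (N >= 4)."""
    n = src.shape[0]
    src_h = np.hstack([src, np.ones((n, 1))])      # homogeneous coordinates
    # Solve src_h @ M = dst for the (4, 3) affine matrix M.
    M, *_ = np.linalg.lstsq(src_h, dst, rcond=None)
    return M

def apply_affine_3d(pts, M):
    pts_h = np.hstack([pts, np.ones((pts.shape[0], 1))])
    return pts_h @ M

# Six bilateral landmarks (hypothetical coordinates, in mm).
src = np.array([[42., 10., 5.], [-40., 12., 6.], [30., -20., 40.],
                [-31., -18., 41.], [55., 0., 15.], [-54., 2., 14.]])
# Simulated target brain: a scaled and shifted copy of the source landmarks.
dst = src * 1.05 + np.array([1.0, -2.0, 0.5])
M = fit_affine_3d(src, dst)
residual = np.abs(apply_affine_3d(src, M) - dst).max()
```

Because the simulated target is an exact affine image of the source, the fitted transform moves the deforming landmarks onto the reference landmarks with negligible residual; with real brains a non-rigid model is needed, which is precisely the thesis's point.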
Abstract:
Regarding process control of batch cooling crystallization, the present work focused on the cooling profile and seeding technique. Secondly, the influence of additives on a batch-wise precipitation process was investigated. Moreover, a Computational Fluid Dynamics (CFD) model for the simulation of controlled batch cooling crystallization was developed. A novel cooling model to control the supersaturation level during batch-wise cooling crystallization was introduced. The crystallization kinetics, together with the operating conditions, i.e. seed loading, cooling rate and batch time, were taken into account in the model. In particular, supersaturation- and suspension-density-dependent secondary nucleation was included in the model. The interaction between the operating conditions and their influence on the control target, i.e. a constant level of supersaturation, was studied with the aid of a numerical solution of the cooling model. Further, batch cooling crystallization was simulated with the ideal mixing model and the CFD model. The moment transformation of the population balance, together with the mass and heat balances, was solved numerically in the simulation. In order to clarify the relationship between the operating conditions and product sizes, a system chart was developed for an ideal mixing condition. The use of the system chart to determine the appropriate operating conditions to meet a required product size was introduced. With CFD simulation, batch crystallization operated following a specified cooling mode was studied in crystallizers having different geometries and scales. The introduced cooling model and simulation results were verified experimentally for potassium dihydrogen phosphate (KDP), and the novelty of the proposed control policies was demonstrated using potassium sulfate by comparison with results published in the literature.
The study on the batch-wise precipitation showed that immiscible additives could promote the agglomeration of a derivative of benzoic acid, which facilitated the filterability of the crystal product.
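The cooling-profile idea can be illustrated with a common textbook simplification rather than the thesis's own supersaturation model: for a seeded, growth-dominated batch, a roughly cubic cooling curve, slow at first and faster near the end of the batch, is often used to hold supersaturation approximately constant. A minimal sketch, with all numbers hypothetical:

```python
import numpy as np

def cubic_cooling_profile(T0, Tf, batch_time, n=50):
    """Cubic cooling curve T(t) = T0 - (T0 - Tf) * (t / tau)**3.
    Cooling slowly early (little crystal surface area) and faster later
    keeps supersaturation roughly constant in growth-dominated batches."""
    t = np.linspace(0.0, batch_time, n)
    T = T0 - (T0 - Tf) * (t / batch_time) ** 3
    return t, T

# Hypothetical batch: cool from 50 degC to 20 degC over one hour.
t, T = cubic_cooling_profile(50.0, 20.0, 3600.0)
```

The profile starts at 50 degC, ends at 20 degC, and decreases monotonically; a linear cooling ramp, by contrast, generates its largest supersaturation peak early in the batch, when there is little seed surface to consume it.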
Abstract:
We performed a number of tests with the aim of developing an effective extraction method for the analysis of carotenoid content in maize seed. Mixtures of methanol–ethyl acetate (6:4, v/v) and methanol–tetrahydrofuran (1:1, v/v) were the most effective solvent systems for carotenoid extraction from maize endosperm under the conditions assayed. In addition, we also addressed sample preparation prior to the analysis of carotenoids by liquid chromatography (LC). The LC response of extracted carotenoids and standards in several solvents was evaluated, and the results were related to the degree of solubility of these pigments. Three key factors were found to be important when selecting a suitable injection solvent: compatibility between the mobile phase and the injection solvent, carotenoid polarity, and carotenoid content in the matrix.
Abstract:
Aim of study: To identify species of wood samples based on common names and anatomical analyses of their transversal surfaces (without microscopic preparations). Area of study: Spain and South America. Material and methods: The test was carried out on a batch of 15 lumber samples deposited in the Royal Botanical Garden in Madrid, from the expedition by Ruiz and Pavon (1777-1811). The first stage of the methodology is to search and critically analyse the databases that list common nomenclature alongside scientific nomenclature. A geographic filter was then applied to the resulting information for the samples with a more restricted distribution. Finally, an anatomical verification was carried out with a pocket microscope with a magnification of x40, equipped with a scale of 50-micrometre resolution. Main results: Identification of the wood based exclusively on the common name is not useful due to the high number of alternative possibilities (14 for “naranjo”, 10 for “ébano”, etc.). The common name of one of the samples (“huachapelí mulato”) enabled the geographic origin of the samples to be accurately located to the shipyard area in Guayaquil (Ecuador). Given that Ruiz and Pavon did not travel to Ecuador, the specimens must have been obtained by Tafalla. It was possible to determine correctly 67% of the lumber samples from the batch. In 17% of the cases the methodology did not provide a reliable identification. Research highlights: It was possible to determine correctly 67% of the lumber samples from the batch and their geographic provenance. Identification of the wood based exclusively on the common name is not useful.
Abstract:
The synthesis of a membrane-bound MalE–β-galactosidase hybrid protein, when induced by growth of Escherichia coli on maltose, leads to inhibition of cell division and eventually a reduced rate of mass increase. In addition, the relative rate of synthesis of outer membrane proteins, but not that of inner membrane proteins, was reduced by about 50%. Kinetic experiments demonstrated that this reduction coincided with the period of maximum synthesis of the hybrid protein (and another maltose-inducible protein, LamB). The accumulation of this abnormal protein in the envelope therefore appeared specifically to inhibit the synthesis, the assembly of outer membrane proteins, or both, indicating that the hybrid protein blocks some export site or causes the sequestration of some limiting factor(s) involved in the export process. Since the MalE protein is normally located in the periplasm, the results also suggest that the synthesis of periplasmic and outer membrane proteins may involve some steps in common. The reduced rate of synthesis of outer membrane proteins was also accompanied by the accumulation in the envelope of at least one outer membrane protein and at least two inner membrane proteins as higher-molecular-weight forms, indicating that processing (removal of the N-terminal signal sequence) was also disrupted by the presence of the hybrid protein. These results may indicate that the assembly of these membrane proteins is blocked at a relatively late step rather than at the level of primary recognition of some site by the signal sequence. In addition, the results suggest that some step common to the biogenesis of quite different kinds of envelope protein is blocked by the presence of the hybrid protein.
Abstract:
Micronization techniques based on supercritical fluids (SCFs) are promising for the production of particles with controlled size and distribution. The pharmaceutical field's interest in developing SCF techniques is increasing due to the need for clean processes, reduced energy consumption, and their many possible applications. The food field is still far from applying SCF micronization techniques, but interest is growing, mainly for the processing of products with high added value. The aim of this study is to use SCF micronization techniques for the production of particles of pharmaceuticals and food ingredients with controlled particle size and morphology, and to examine their production on a semi-industrial scale. The results obtained are also used to understand the processes from the perspective of broader application within the pharmaceutical and food industries. Certain pharmaceuticals, a biopolymer and a food ingredient have been tested using supercritical antisolvent micronization (SAS) or supercritical assisted atomization (SAA) techniques. The reproducibility of the SAS technique has been studied using physically different apparatuses on both laboratory and semi-industrial scales. Moreover, a comparison between semi-continuous and batch modes has been performed. The behaviour of the system during the SAS process has been observed using a windowed precipitation vessel. The micronized powders have been characterized by particle size and distribution, morphology and crystallinity. Several analyses have been performed to verify whether the SCF process modified the structure of the compound or caused degradation or contamination of the product. The different powder morphologies obtained have been linked to the position of the process operating point with respect to the vapour-liquid equilibrium (VLE) of the systems studied, that is, mainly to the position of the mixture critical point (MCP).
Spherical micro-, submicro- and nanoparticles, expanded microparticles (balloons) and crystals were obtained by SAS. The particles obtained were amorphous or showed different degrees of crystallinity and, in some cases, different pseudo-polymorphic or polymorphic forms. A compound that could not be processed using SAS was micronized by SAA, and amorphous particles were obtained that remained stable in vials at room temperature. The SCF micronization techniques studied proved to be effective and versatile for the production of particles for several uses. Furthermore, the findings of this study and the acquired knowledge of the proposed processes can allow a more conscious application of SCF techniques to obtain products with the desired characteristics and enable the use of their principles in broader applications.
Abstract:
The sol-gel synthesis of bulk silica-based luminescent materials using innocuous hexaethoxydisilane and hexamethoxydisilane monomers, followed by one-hour thermal annealing in an inert atmosphere at 950 °C-1150 °C, is reported. As-synthesized hexamethoxydisilane-derived samples exhibit an intense blue photoluminescence band, whereas thermally treated ones emit stronger photoluminescence radiation peaking below 600 nm. For the hexaethoxydisilane-based material, annealed at or above 1000 °C, a less intense photoluminescence band, peaking between 780 nm and 850 nm and attributed to nanocrystalline silicon, is observed. Mixtures of both precursors lead to composite spectra, suggesting the possibility of obtaining pre-designed spectral behaviours by varying the mixture composition.
Abstract:
Psychophysical studies suggest that humans preferentially use a narrow band of low spatial frequencies for face recognition. Here we asked whether artificial face recognition systems show improved recognition performance at the same spatial frequencies as humans. To this end, we estimated recognition performance over a large database of face images by computing three discriminability measures: Fisher Linear Discriminant Analysis, Non-Parametric Discriminant Analysis, and Mutual Information. In order to address frequency dependence, discriminabilities were measured as a function of (filtered) image size. All three measures revealed a maximum at the same image sizes, where the spatial frequency content corresponds to the psychophysically determined frequencies. Our results therefore support the notion that the critical band of spatial frequencies for face recognition in humans and machines follows from inherent properties of face images, and that the use of these frequencies is associated with optimal face recognition performance.
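The Fisher-type discriminability used above can be sketched with a toy computation. This is a simplified per-feature Fisher criterion (between-class scatter over within-class scatter) on synthetic data, not the study's actual pipeline; all names and parameters are illustrative:

```python
import numpy as np

def fisher_ratio(X, y):
    """Fisher criterion summed across features: between-class variance
    over within-class variance (a simple stand-in for LDA discriminability).
    X: (n_samples, n_features) feature matrix, y: integer class labels."""
    classes = np.unique(y)
    overall = X.mean(axis=0)
    sb = sum((y == c).sum() * (X[y == c].mean(axis=0) - overall) ** 2
             for c in classes)                       # between-class scatter
    sw = sum(((X[y == c] - X[y == c].mean(axis=0)) ** 2).sum(axis=0)
             for c in classes)                       # within-class scatter
    return float((sb / (sw + 1e-12)).sum())

rng = np.random.default_rng(0)
# Two well-separated "identities": discriminability should be high.
X = np.vstack([rng.normal(0, 1, (50, 8)), rng.normal(5, 1, (50, 8))])
y = np.repeat([0, 1], 50)
sep = fisher_ratio(X, y)
# Same labels on structureless noise: discriminability near zero.
low = fisher_ratio(rng.normal(0, 1, (100, 8)), y)
```

In the study's setting, the feature matrix would come from face images filtered to a given band, and the curve of this ratio over image size locates the most discriminative spatial frequencies.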
Abstract:
Providing operational data to end users for analytical examination causes problems for many companies. This thesis aims to solve that problem at Teleste Oyj. The work is divided into three main chapters. Chapter 2 clarifies the concept of On-Line Analytical Processing (OLAP). Chapter 3 introduces some OLAP vendors and their architectures, typical application areas, and issues to consider when deploying OLAP. Chapter 4 presents the actual solution. The technical architecture plays a significant role in the structure of the solution; Microsoft's data warehousing framework is applied here. As Chapter 4 proceeds, transaction-processing data is transformed into information and further into end-user knowledge. End users are equipped with an efficient, real-time analysis tool in a multidimensional environment. Although inventory turnover is taken as the application example, the work does not attempt to find the optimal level for Teleste's inventories; nevertheless, some improvement suggestions are given.
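The kind of multidimensional roll-up such an OLAP tool performs can be sketched in a few lines. This is a toy stand-in for a cube aggregation, not Microsoft's data warehousing framework, and the fact table below is hypothetical:

```python
from collections import defaultdict

def rollup(facts, dims, measure):
    """Aggregate a fact table along the chosen dimensions: a tiny
    stand-in for an OLAP cube roll-up. facts: list of dict rows."""
    out = defaultdict(float)
    for row in facts:
        key = tuple(row[d] for d in dims)   # cell of the cube
        out[key] += row[measure]
    return dict(out)

# Hypothetical sales facts with product and quarter dimensions.
facts = [
    {"product": "modem", "quarter": "Q1", "units": 120},
    {"product": "modem", "quarter": "Q2", "units": 80},
    {"product": "amp",   "quarter": "Q1", "units": 60},
]
by_product = rollup(facts, ["product"], "units")
by_cell = rollup(facts, ["product", "quarter"], "units")
```

Slicing and dicing then amount to choosing different `dims` lists over the same fact table; a real OLAP engine precomputes and indexes these aggregates so end users get interactive response times.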
Abstract:
This thesis deals with the measurement of paper surface roughness, one of the central problems in the study of paper materials. The measurement methods used in the paper industry have several drawbacks, such as inaccuracy and unsuitability for measuring smooth papers, strict laboratory-condition requirements, and slowness. The work investigates methods based on optical scattering for determining surface roughness. Machine vision and image-processing techniques were studied on rough paper surfaces; the algorithms used in the study were implemented in Matlab®. The results obtained demonstrate the possibility of measuring surface roughness by imaging. The best agreement between the traditional method and the imaging method was given by a method based on the fractal dimension.
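The fractal-dimension idea mentioned above can be illustrated with a standard box-counting estimate on a binary image, a generic sketch rather than the thesis's Matlab algorithm. For a completely filled image the estimate should come out near the topological dimension 2:

```python
import numpy as np

def box_count_dimension(mask, sizes=(2, 4, 8, 16)):
    """Estimate the fractal dimension of a binary image by box counting:
    count occupied s-by-s boxes for several s, then fit the slope of
    log(count) versus log(1/s)."""
    counts = []
    for s in sizes:
        h = mask.shape[0] // s * s          # crop to a multiple of s
        w = mask.shape[1] // s * s
        blocks = mask[:h, :w].reshape(h // s, s, w // s, s)
        counts.append(blocks.any(axis=(1, 3)).sum())
    slope, _ = np.polyfit(np.log(1.0 / np.asarray(sizes)), np.log(counts), 1)
    return slope

# A filled square should have dimension close to 2.
dim = box_count_dimension(np.ones((64, 64), dtype=bool))
```

Applied to thresholded images of a paper surface, a rougher texture yields a higher estimated dimension, which is the property such an imaging method correlates against traditional roughness measurements.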
Abstract:
This thesis gives an overview of the use of level set methods in the field of image science. The related fast marching method is discussed for comparison, and the narrow band and particle level set methods are also introduced. The level set method is a numerical scheme for representing, deforming and recovering structures in arbitrary dimensions. It approximates and tracks moving interfaces, dynamic curves and surfaces. The level set method does not define how or why a boundary is advancing the way it is; it simply represents and tracks the boundary. The principal idea of the level set method is to represent an N-dimensional boundary in N+1 dimensions, which gives the generality to represent even complex boundaries. Level set methods can be powerful tools for representing dynamic boundaries, but they can require a lot of computing power; the basic level set method in particular carries a considerable computational burden. This burden can be alleviated with more sophisticated versions of the level set algorithm, such as the narrow band level set method, or with programmable hardware implementations; a parallel approach can also be used in suitable applications. It is concluded that these methods can be used in quite a broad range of image applications, such as computer vision and graphics, scientific visualization, and solving problems in computational physics. Level set methods, and methods derived from and inspired by them, will remain at the front line of image processing in the future.
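The core idea, representing an N-dimensional boundary as the zero level set of an (N+1)-dimensional function and moving it by a PDE, can be sketched with a minimal first-order upwind scheme. This is a generic illustration (constant outward speed on a circle), not a specific method from the thesis:

```python
import numpy as np

def evolve_constant_speed(phi, speed, dt, steps, h=1.0):
    """Move the interface {phi = 0} with constant normal speed by solving
    phi_t + speed * |grad phi| = 0 with first-order upwind differences."""
    for _ in range(steps):
        dx_m = (phi - np.roll(phi, 1, axis=0)) / h    # backward differences
        dx_p = (np.roll(phi, -1, axis=0) - phi) / h   # forward differences
        dy_m = (phi - np.roll(phi, 1, axis=1)) / h
        dy_p = (np.roll(phi, -1, axis=1) - phi) / h
        # Godunov upwinding for an outward-moving front (speed > 0).
        grad = np.sqrt(np.maximum(dx_m, 0) ** 2 + np.minimum(dx_p, 0) ** 2
                       + np.maximum(dy_m, 0) ** 2 + np.minimum(dy_p, 0) ** 2)
        phi = phi - dt * speed * grad
    return phi

n = 64
x, y = np.meshgrid(np.arange(n) - n / 2, np.arange(n) - n / 2, indexing="ij")
phi0 = np.sqrt(x ** 2 + y ** 2) - 10.0   # signed distance to a radius-10 circle
phi1 = evolve_constant_speed(phi0, speed=1.0, dt=0.5, steps=10)
area0 = int((phi0 < 0).sum())            # cells inside the initial circle
area1 = int((phi1 < 0).sum())            # cells inside after expansion
```

The circle expands without any explicit tracking of marker points on the curve, which is exactly the representational advantage the thesis describes; the narrow band variant would restrict these updates to grid points near the zero level set.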