985 results for Matrices polymères


Relevance:

10.00%

Publisher:

Abstract:

Extreme prematurity and pregnancy conditions leading to intrauterine growth restriction (IUGR) affect thousands of newborns every year and increase their risk for poor higher-order cognitive and social skills at school age. However, little is known about the brain structural basis of these disabilities. To compare the structural integrity of neural circuits between prematurely born controls and children born extremely preterm (EP) or with IUGR at school age, long-range and short-range connections were noninvasively mapped across the cortical hemispheres using connection matrices derived from diffusion tensor tractography. Brain connectivity was modeled along fiber bundles connecting 83 brain regions by a weighted characterization of structural connectivity (SC). EP and IUGR subjects, when compared with controls, had decreased fractional anisotropy-weighted SC (FAw-SC) of cortico-basal ganglia-thalamo-cortical loop connections, while cortico-cortical association connections showed both decreased and increased FAw-SC. FAw-SC strength of these connections was associated with poorer socio-cognitive performance in both EP and IUGR children.
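As an illustration of the connection-matrix construction described in this abstract (not the authors' pipeline), a minimal Python sketch could weight each region pair by the mean fractional anisotropy of the streamlines linking it; the inputs streamline_regions and streamline_mean_fa are hypothetical placeholders standing in for tractography output.

import numpy as np

N_REGIONS = 83  # cortical/subcortical parcels, as in the abstract

def fa_weighted_sc(streamline_regions, streamline_mean_fa, n_regions=N_REGIONS):
    """Build a fractional-anisotropy-weighted structural connectivity matrix.

    streamline_regions : list of (i, j) parcel indices connected by each streamline
    streamline_mean_fa : list of mean FA values sampled along each streamline
    Returns an (n_regions x n_regions) symmetric matrix whose entries are the
    mean FA over all streamlines linking the two parcels (0 if unconnected).
    """
    fa_sum = np.zeros((n_regions, n_regions))
    count = np.zeros((n_regions, n_regions))
    for (i, j), fa in zip(streamline_regions, streamline_mean_fa):
        fa_sum[i, j] += fa
        fa_sum[j, i] += fa
        count[i, j] += 1
        count[j, i] += 1
    return np.where(count > 0, fa_sum / np.maximum(count, 1), 0.0)

# Toy data: two streamlines linking parcels 0-5 and one linking 2-3
sc = fa_weighted_sc([(0, 5), (0, 5), (2, 3)], [0.42, 0.38, 0.51])
print(sc[0, 5], sc[2, 3])  # ~0.40 and 0.51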

Relevance:

10.00%

Publisher:

Abstract:

The purpose of this work is to compile information on all test facilities around the world that have been used to study the blowdown phase of a large-break LOCA. The work also aims to provide a basis for deciding whether it is necessary to build a new test facility for validating the calculations of fluid-structure interaction codes. Before constructing the actual test facility, it would also be appropriate to build a smaller pilot facility with which the measurement methods to be used could be tested. Suitable measurement data are needed for validating the coupled calculation of new CFD codes and structural analysis codes. These codes can be used, for example, to assess the structural integrity of reactor internals during the blowdown phase of a large-break LOCA. The report focuses on the test facilities found around the world, the design bases of a new test facility, and general issues related to the topic. The report does not replace existing validation matrices, but it can be used as an aid when searching for a large-break LOCA blowdown-phase test facility suitable for validation purposes.

Relevance:

10.00%

Publisher:

Abstract:

The main objective of this study is to investigate whether a Finnish investor's country-specific strategy of concentrating on emerging markets provides diversification benefits. We also analyze whether the benefits of international diversification have diminished after periods of high volatility caused by different market crises. The objective is investigated with three methods: correlation coefficients, rolling correlations complemented with OLS trend lines, and Box's M statistic. All the empirical tests are calculated from logarithmic returns of weekly time-series data based on Friday closing values between January 1995 and December 2007. The number of weekly observations is 678. The data are total return indices of the different countries, collected from DataStream and provided by Datastream Financial. The countries investigated are Finland, Argentina, Brazil, Chile, China, India, Mexico, Poland, Russia, South Africa, South Korea, Thailand and Turkey. The data are quoted both in U.S. dollars and in local currencies. The empirical results of this thesis show that the correlation coefficients between Finland and the 12 emerging market countries are time-varying. Although the correlations have risen from 1995 to 2007, sub-periods can be found in which correlation declined relative to the preceding period. The results also indicate that a Finnish investor constructing a portfolio of emerging market countries cannot rely on correlation coefficients estimated from historical data because of the instability of the correlation matrices.
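The rolling-correlation part of the methodology can be sketched as follows; the 52-week window, the market names and the random price series are illustrative assumptions, not data or choices taken from the thesis.

import numpy as np
import pandas as pd

def weekly_log_returns(prices: pd.DataFrame) -> pd.DataFrame:
    """Log returns from Friday closing total-return index levels."""
    return np.log(prices / prices.shift(1)).dropna()

def rolling_correlation(returns: pd.DataFrame, home: str, other: str,
                        window: int = 52) -> pd.Series:
    """Rolling correlation between the home market and one emerging market."""
    return returns[home].rolling(window).corr(returns[other])

def ols_trend(series: pd.Series) -> np.ndarray:
    """Fit corr_t = a + b*t by ordinary least squares and return the fitted line."""
    y = series.dropna().to_numpy()
    t = np.arange(len(y))
    b, a = np.polyfit(t, y, deg=1)
    return a + b * t

# Illustrative usage with random data standing in for total-return indices
rng = np.random.default_rng(0)
prices = pd.DataFrame(np.exp(np.cumsum(rng.normal(0, 0.02, (678, 2)), axis=0)),
                      columns=["Finland", "Brazil"])
rets = weekly_log_returns(prices)
corr = rolling_correlation(rets, "Finland", "Brazil", window=52)
trend = ols_trend(corr)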

Relevance:

10.00%

Publisher:

Abstract:

This review presents the evolution of steroid analytical techniques, including gas chromatography coupled to mass spectrometry (GC-MS), immunoassay (IA) and targeted liquid chromatography coupled to mass spectrometry (LC-MS), and it evaluates the potential of extended steroid profiles obtained by a metabolomics-based approach, namely steroidomics. Steroids regulate essential biological functions including growth and reproduction, and perturbations of steroid homeostasis can generate serious physiological issues; therefore, specific and sensitive methods have been developed to measure steroid concentrations. GC-MS, which measures several steroids simultaneously, was historically the first standard method of analysis. Steroids were then quantified by immunoassay, allowing a higher throughput; however, major drawbacks included the measurement of a single compound instead of a panel and cross-reactivity reactions. Targeted LC-MS methods with selected reaction monitoring (SRM) were then introduced to quantify a small subset of steroids without the problems of cross-reactivity. The next step was the integration of metabolomic approaches in the context of steroid analyses. As metabolomics aims to identify and quantify all the metabolites (i.e., the metabolome) in a specific system, appropriate strategies were proposed for discovering new biomarkers. Steroidomics, defined as the untargeted analysis of the steroid content of a sample, has been implemented in several fields, including doping analysis, clinical studies, and in vivo or in vitro toxicology assays, among others. This review discusses the current analytical methods for assessing steroid changes and compares them to steroidomics. Steroids, their pathways, their implications in diseases and the biological matrices in which they are analysed are first described. The different analytical strategies are then presented, with a focus on their ability to obtain relevant information on the steroid pattern. Future technical requirements for improving steroid analysis are also presented.
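As a hedged illustration of the untargeted (steroidomics) idea, the sketch below annotates measured LC-MS feature m/z values against a small steroid library within a ppm tolerance; the library entries and the 5 ppm tolerance are assumptions made for the example, not values from the review.

# Minimal sketch: annotate untargeted LC-MS features against a steroid m/z library.
STEROID_LIBRARY = {            # illustrative [M+H]+ values, not a curated database
    "testosterone": 289.2162,
    "cortisol": 363.2166,
    "progesterone": 315.2319,
}

def annotate_features(feature_mzs, tolerance_ppm=5.0):
    """Return candidate steroid annotations for each measured m/z."""
    hits = []
    for mz in feature_mzs:
        for name, ref in STEROID_LIBRARY.items():
            if abs(mz - ref) / ref * 1e6 <= tolerance_ppm:
                hits.append((mz, name))
    return hits

print(annotate_features([289.2159, 401.1234, 363.2170]))
# [(289.2159, 'testosterone'), (363.217, 'cortisol')]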

Relevance:

10.00%

Publisher:

Abstract:

The present work describes the development of a fast and robust analytical method for the determination of 53 antibiotic residues, covering various chemical groups and some of their metabolites, in environmental matrices that are considered important sources of antibiotic pollution, namely hospital and urban wastewaters, as well as in river waters. The method is based on automated off-line solid phase extraction (SPE) followed by ultra-high-performance liquid chromatography coupled to quadrupole linear ion trap tandem mass spectrometry (UHPLC–QqLIT). For unequivocal identification and confirmation, and in order to fulfill EU guidelines, two selected reaction monitoring (SRM) transitions per compound are monitored (the most intense one is used for quantification and the second one for confirmation). Quantification of target antibiotics is performed by the internal standard approach, using one isotopically labeled compound for each chemical group, in order to correct for matrix effects. The main advantages of the method are the automation and speed-up of sample preparation through reduced extraction volumes for all matrices, the fast separation of a wide spectrum of antibiotics by ultra-high-performance liquid chromatography, its sensitivity (limits of detection in the low ng/L range) and its selectivity (due to the use of tandem mass spectrometry). The inclusion of β-lactam antibiotics (penicillins and cephalosporins), which are difficult to analyze in multi-residue methods due to their instability in water matrices, and of some antibiotic metabolites are other important benefits of the method developed. As part of the validation procedure, the method was applied to the analysis of antibiotic residues in hospital wastewaters, urban influent and effluent wastewaters, as well as in river water samples.
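The internal-standard quantification step can be sketched with the generic textbook relation below; the peak areas, concentration and response factor are illustrative placeholders, not calibration data from the paper.

def internal_standard_concentration(area_analyte, area_is, conc_is, response_factor):
    """Quantify an analyte against an isotopically labelled internal standard (IS).

    response_factor : slope of (area_analyte/area_IS) vs (conc_analyte/conc_IS)
                      obtained from a calibration curve.
    Ratioing to the co-extracted IS corrects for matrix effects and SPE losses.
    """
    area_ratio = area_analyte / area_is
    return (area_ratio / response_factor) * conc_is

# Illustrative numbers only (ng/L range, as reported for the method's LODs)
print(internal_standard_concentration(area_analyte=15400, area_is=12800,
                                      conc_is=50.0, response_factor=1.1))
# ~54.7 ng/L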

Relevance:

10.00%

Publisher:

Abstract:

Connectivity analysis on diffusion MRI data of the whole brain suffers from distortions caused by the standard echo-planar imaging acquisition strategies. These images show characteristic geometrical deformations and signal destruction that are an important drawback limiting the success of tractography algorithms. Several retrospective correction techniques are readily available. In this work, we use a digital phantom designed for the evaluation of connectivity pipelines. We subject the phantom to a "theoretically correct" and plausible deformation that resembles the artifact under investigation. We then correct the data back with three standard methodologies (namely fieldmap-based, reversed encoding-based, and registration-based). Finally, we rank the methods based on their geometrical accuracy, the dropout compensation, and their impact on the resulting connectivity matrices.
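A minimal sketch of the kind of ranking criterion described here, assuming the phantom's ground-truth connectivity matrix and the corrected matrices are already available as NumPy arrays; the agreement metric and the method labels are illustrative, not the authors' exact scoring.

import numpy as np

def connectivity_agreement(truth: np.ndarray, corrected: np.ndarray) -> float:
    """Pearson correlation between the upper triangles of two connectivity matrices."""
    iu = np.triu_indices_from(truth, k=1)
    return float(np.corrcoef(truth[iu], corrected[iu])[0, 1])

def rank_methods(truth, corrected_by_method):
    """Rank correction methods by agreement with the ground-truth phantom matrix."""
    scores = {m: connectivity_agreement(truth, c) for m, c in corrected_by_method.items()}
    return sorted(scores.items(), key=lambda kv: kv[1], reverse=True)

# Toy example with three hypothetical corrections of a 10-node phantom
rng = np.random.default_rng(1)
truth = rng.random((10, 10)); truth = (truth + truth.T) / 2
corrections = {m: truth + rng.normal(0, s, truth.shape)
               for m, s in [("fieldmap", 0.05), ("reversed", 0.10), ("registration", 0.20)]}
print(rank_methods(truth, corrections))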

Relevance:

10.00%

Publisher:

Abstract:

Occupational exposure to airborne manufactured nanomaterials presents potential health risks for workers in the nanotechnology sectors. It is important to understand the release scenarios of nanoparticle aerosols in the processes and activities associated with human exposure. The release mechanisms, including the release rates and the physico-chemical properties of the nanoparticles, determine their transport behaviour as well as their adverse biological effects. The particle size distribution of the aerosols is one of the most important parameters in these processes. The mechanical stability of nanoparticle agglomerates affects their size distributions, and the deagglomeration potential of these agglomerates determines whether they can be deformed by external energy inputs. This makes changes in their size distribution and number concentration possible, which ultimately modify the exposure risks. Environmental conditions, such as relative humidity, can influence deagglomeration processes through the adhesion caused by capillary condensation of moisture. The general objective of this thesis was to evaluate the release scenarios of manufactured nanomaterials from processes and activities in the workplace. The sub-objectives were: 1. to study the deagglomeration potential of nanoparticles under varied environmental conditions; 2. to study the release of nano-objects from polymer nanocomposites; 3. to evaluate the release of nanoparticles in real workplace situations.

We compared different laboratory systems that applied different energy levels to the aerosolization of powders. TiO2 nanopowders with distinct surface hydrophilicities were tested. A scanning mobility particle sizer (SMPS), an aerodynamic particle sizer (APS) and an optical particle counter (OPC) were used to measure the particle number concentration and size distribution, and transmission electron microscopy (TEM) was used for the morphological analysis of airborne particle samples. The aerosol properties (size distribution and number concentration) differed depending on the method employed. The aerosolization air flow velocities were used to estimate the energy level of these systems, and the modal particle sizes were shown to be inversely proportional to the applied velocity. In general, hydrophilic particles had larger diameters and lower number concentrations than hydrophobic ones, although this also depended on the method used. Air velocity can therefore be an effective parameter for ranking the process energy of similar aerosolization systems.

We developed a laboratory system for testing the deagglomeration potential of airborne nanoparticles using critical orifices and a humidifier, and its performance was compared with a similar system at a partner institute. A variety of nanopowders was tested while the applied energy level and the humidity were varied. The SMPS and OPC were used to measure particle number concentration and size distribution, and TEM was used for the morphological analysis of airborne particle samples. The mean particle diameter decreased and the number concentration increased when external energy was applied: the number of particles below 100 nm increased and the number above 350 nm decreased. Humid conditions did exactly the opposite, especially for the smaller particles, and they also reduced the effect of the pressure difference across the orifice. The results suggest that deagglomeration of airborne nanoparticle agglomerates is possible within the applied energy range. However, a humid atmosphere can promote their agglomeration and enhance their stability, reducing the release of nanoparticles into the environment. We propose using our system for routine testing and ranking of the deagglomeration potential of manufactured nanomaterials; such a ranking would facilitate the prioritization of exposure and risk according to the ENM level.

An automated drilling system and a manual sawing system were developed to study the release of nanoparticles from different types of nanocomposites. The drilling speed and the drill-bit size were varied in the experiments. The particle size distribution and number concentration were measured with an SMPS and a miniature diffusion size classifier (DISCmini), and the distributions of nanoparticles within the composites and in the released particles were analysed by TEM and scanning electron microscopy (SEM). The drilling tests released larger numbers of particles than sawing, and faster drilling speeds and larger drill bits increased particle generation. The loading of manufactured nanoparticles in the composites did not modify their release behaviour in the drilling experiments; sawing, however, differentiated the release levels between the composites and the blank samples. In addition, polymer fumes were generated by the heat of sawing. Most of the released particles were polymer particles containing nanoparticles or carrying them on their surfaces. The results underline the importance of the process type and parameters in determining the release of nanoparticles from composites, and secondary emissions such as polymer fumes call for exposure and risk assessments of such scenarios.

A systematic review of the literature on airborne nanoparticle release in industrial sectors and research laboratories was carried out. Strategies for retrieving and storing the relevant information were developed. Release characteristics, such as aerosol particle size and number concentration, were compared across different activities, and the availability of contextual information relevant to the estimation of human exposure was evaluated. It was found that exposure-relevant data are not always available in the current literature. The properties of the released aerosols appear to depend on the nature of the activities: high-energy processes tend to generate higher particle concentrations in the smaller size ranges. The results may be useful for prioritizing industrial processes for the assessment of the associated risks in a tiered approach. For exposure assessment, the availability of information could be improved by developing better data reporting practices.

Relevance:

10.00%

Publisher:

Abstract:

A version of cascaded systems analysis was developed specifically with the aim of studying quantum noise propagation in x-ray detectors. Signal and quantum noise propagation was then modelled for the different types of x-ray detectors used in six digital mammography systems: four flat panel (FP) systems, one computed radiography system and one slot-scan silicon-wafer-based photon-counting device. As required inputs to the model, the two-dimensional (2D) modulation transfer function (MTF), noise power spectra (NPS) and detective quantum efficiency (DQE) were measured for the six mammography systems that utilized these different detectors. A new method to reconstruct anisotropic 2D presampling MTF matrices from 1D radial MTFs measured along different angular directions across the detector is described; an image of a sharp, circular disc was used for this purpose. The effective pixel fill factor for the FP systems was determined from the axial 1D presampling MTFs measured with a square sharp edge along the two orthogonal directions of the pixel lattice. Expectation MTFs (EMTFs) were then calculated by averaging the radial MTFs over all possible phases, and the 2D EMTF was formed with the same reconstruction technique used for the 2D presampling MTF. The quantum NPS was then established by noise decomposition from homogeneous images acquired as a function of detector air kerma. This was further decomposed into correlated and uncorrelated quantum components by fitting the radially averaged quantum NPS with the square of the radially averaged EMTF. This whole procedure allowed a detailed analysis of the influence of aliasing, signal and noise decorrelation, x-ray capture efficiency and global secondary gain on the NPS and detector DQE. The influence of noise statistics, pixel fill factor and additional electronic and fixed-pattern noises on the DQE was also studied. The 2D cascaded model and the decompositions performed on the acquired images also helped explain the observed quantum NPS and DQE anisotropy.
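The end point of such a cascaded analysis is the standard detective quantum efficiency relation, DQE(f) = MTF²(f) / (q · NNPS(f)), with q the incident photon fluence; a minimal sketch of that final computation is shown below with purely illustrative inputs (placeholder MTF, NPS and fluence values, not measurements from the six systems).

import numpy as np

def dqe(mtf, nps, mean_signal, air_kerma_uGy, photons_per_mm2_per_uGy):
    """Detective quantum efficiency from measured MTF and NPS.

    mtf  : presampling MTF sampled on a spatial-frequency axis
    nps  : noise power spectrum on the same axis (signal units^2 * mm^2)
    mean_signal : large-area mean pixel value of the homogeneous images
    The NPS is first normalized by the squared mean signal (NNPS), then
    DQE(f) = MTF^2(f) / (q * NNPS(f)), with q the incident photon fluence.
    """
    q = photons_per_mm2_per_uGy * air_kerma_uGy   # photons / mm^2
    nnps = nps / mean_signal**2
    return mtf**2 / (q * nnps)

# Illustrative numbers only
f = np.linspace(0.1, 5.0, 50)                 # cycles/mm
mtf = np.exp(-0.4 * f)                        # placeholder MTF
nps = 4.0 * (0.3 + np.exp(-0.8 * f))          # placeholder NPS
print(dqe(mtf, nps, mean_signal=1200.0, air_kerma_uGy=100.0,
          photons_per_mm2_per_uGy=5000.0)[:3])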

Relevance:

10.00%

Publisher:

Abstract:

Focal epilepsy is increasingly recognized as the result of an altered brain network, at both the structural and the functional level, and the characterization of these widespread brain alterations is crucial for our understanding of the clinical manifestations of seizures and cognitive deficits, as well as for the management of candidates for epilepsy surgery. Tractography based on diffusion tensor imaging allows non-invasive mapping of white matter tracts in vivo. Recently, diffusion spectrum imaging (DSI), based on an increased number of diffusion directions and intensities, has improved the sensitivity of tractography, notably with respect to the problem of fiber crossing, and recent developments allow acquisition times compatible with clinical application. We used DSI and parcellation of the gray matter into regions of interest to build whole-brain connectivity matrices describing the mutual connections between cortical and subcortical regions in patients with focal epilepsy and in healthy controls. In addition, the high angular and radial resolution of DSI allowed us to evaluate some biophysical compartment models, to better understand the cause of the changes in diffusion anisotropy. Global connectivity, hub architecture and regional connectivity patterns were altered in temporal lobe epilepsy (TLE) patients and showed different characteristics in right-sided TLE (RTLE) versus left-sided TLE (LTLE), with stronger abnormalities in RTLE. The microstructural analysis suggested that disturbed axonal density contributed more than fiber orientation to the connectivity changes affecting the temporal lobes, whereas fiber orientation changes were more involved in extratemporal changes. Our study provides further structural evidence that RTLE and LTLE are not symmetrical entities, and DSI-based imaging could help investigate the microstructural correlates of these imaging abnormalities.
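A hedged sketch of the kind of network measures mentioned above (global connectivity and hub architecture) applied to a whole-brain connectivity matrix, using NetworkX; the threshold, node count and random matrix are illustrative and unrelated to the study's data.

import numpy as np
import networkx as nx

def graph_from_connectivity(sc: np.ndarray, threshold: float = 0.0) -> nx.Graph:
    """Weighted undirected graph from a symmetric connectivity matrix."""
    g = nx.Graph()
    n = sc.shape[0]
    g.add_nodes_from(range(n))
    for i in range(n):
        for j in range(i + 1, n):
            if sc[i, j] > threshold:
                g.add_edge(i, j, weight=float(sc[i, j]))
    return g

def hubs_by_strength(g: nx.Graph, top_k: int = 8):
    """Rank nodes by weighted degree (strength) and return the top candidates as hubs."""
    strength = dict(g.degree(weight="weight"))
    return sorted(strength, key=strength.get, reverse=True)[:top_k]

rng = np.random.default_rng(2)
sc = rng.random((20, 20)); sc = (sc + sc.T) / 2; np.fill_diagonal(sc, 0)
g = graph_from_connectivity(sc, threshold=0.6)
# global_efficiency in NetworkX is hop-count based (weights ignored)
print(nx.global_efficiency(g), hubs_by_strength(g, top_k=5))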

Relevance:

10.00%

Publisher:

Abstract:

Reversed-phase liquid chromatography (RPLC) coupled to mass spectrometry (MS) is the gold-standard technique in bioanalysis. However, hydrophilic interaction chromatography (HILIC) could represent a viable alternative to RPLC for the analysis of polar and/or ionizable compounds, as it often provides higher MS sensitivity and alternative selectivity. Nevertheless, this technique can also be prone to matrix effects (ME). ME are one of the major issues in quantitative LC-MS bioanalysis. To ensure acceptable method performance (i.e., trueness and precision), a careful evaluation and minimization of ME is required. In the present study, the incidence of ME in HILIC-MS/MS and RPLC-MS/MS was compared for plasma and urine samples using two representative sets of 38 pharmaceutical compounds and 40 doping agents, respectively. The optimal generic chromatographic conditions in terms of selectivity with respect to interfering compounds were established in both chromatographic modes by testing three different stationary phases in each mode with different mobile phase pH values. A second step involved the assessment of ME in RPLC and HILIC under the best generic conditions, using the post-extraction addition method. Biological samples were prepared using two different sample pre-treatments: a non-selective sample clean-up procedure (protein precipitation and simple dilution for plasma and urine samples, respectively) and a selective sample preparation, i.e., solid phase extraction for both matrices. The non-selective pretreatments led to significantly less ME in RPLC than in HILIC, regardless of the matrix. On the contrary, HILIC appeared to be a valuable alternative to RPLC for plasma and urine samples treated by a selective sample preparation. Indeed, in the case of selective sample preparation, the compounds influenced by ME were different in HILIC and RPLC, and lower and similar ME occurrence was generally observed in RPLC versus HILIC for urine and plasma samples, respectively. The complementarity of the two chromatographic modes was also demonstrated, as ME was only scarcely observed for urine and plasma samples when the most appropriate chromatographic mode was selected.
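The post-extraction addition assessment of ME reduces to a simple peak-area ratio; the sketch below shows the commonly used formulation, with illustrative areas rather than data from this study.

def matrix_effect_percent(area_post_extraction_spike, area_neat_standard):
    """Matrix effect by the post-extraction addition method.

    100%  -> no matrix effect; <100% -> ion suppression; >100% -> enhancement.
    """
    return 100.0 * area_post_extraction_spike / area_neat_standard

# Illustrative peak areas for one compound in plasma after protein precipitation
print(matrix_effect_percent(area_post_extraction_spike=8.2e5,
                            area_neat_standard=1.1e6))   # ~74.5% -> suppression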

Relevance:

10.00%

Publisher:

Abstract:

The extensional theory of arrays is one of the most important theories for applications of SAT Modulo Theories (SMT) to hardware and software verification. Here we present a new T-solver for arrays in the context of the DPLL(T) approach to SMT. The main characteristics of our solver are: (i) no translation of writes into reads is needed, (ii) there is no axiom instantiation, and (iii) the T-solver interacts with the Boolean engine by asking it to split on equality literals between indices. As far as we know, this is the first accurate description of an array solver integrated in a state-of-the-art SMT solver and, unlike most state-of-the-art solvers, it is not based on a lazy instantiation of the array axioms. Moreover, it is very competitive in practice, especially on problems that require heavy reasoning on array literals.
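As a hedged illustration of the theory being decided (not the paper's DPLL(T) array solver), the read-over-write and extensionality behaviour of arrays can be exercised with Z3's Python bindings:

# Illustrates the extensional theory of arrays (read-over-write, extensionality)
# with Z3's Python API; this is not the DPLL(T) array solver of the paper.
from z3 import Array, IntSort, Ints, Int, Store, Select, Solver, Not, ForAll

a = Array("a", IntSort(), IntSort())
i, j, v = Ints("i j v")

s = Solver()
# Read-over-write: reading the written index returns the written value ...
s.add(Not(Select(Store(a, i, v), i) == v))
print(s.check())  # unsat: the axiom cannot be violated

s = Solver()
# ... while reading a different index falls through to the original array.
s.add(i != j, Not(Select(Store(a, i, v), j) == Select(a, j)))
print(s.check())  # unsat

s = Solver()
# Extensionality: arrays that agree on every index are equal, so asking for
# two distinct arrays with identical contents is unsatisfiable.
b = Array("b", IntSort(), IntSort())
x = Int("x")
s.add(ForAll(x, Select(a, x) == Select(b, x)), a != b)
print(s.check())  # unsat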

Relevance:

10.00%

Publisher:

Abstract:

Fourier analyses make it possible to characterize the tooth contour from a given number of points and to extract a series of parameters for subsequent multivariate analysis. However, the great complexity of some conformations makes it necessary to check how many points are required for a correct representation. The objective of this work is to apply and validate Fourier analyses (polar and elliptic) in the study of dental shape from different numbers of contour points, and to explore morphometric variability in different genera. Digital photographs of the occlusal surface of lower second molars (M2s) were obtained from four primate species (Hylobates moloch, Gorilla beringei graueri, Pongo pygmaeus pygmaeus and Pan troglodytes schweinfurthii), and their contours were defined with 30, 40, 60, 80, 100 and 120 points and formally represented with 10 harmonics. The analysis of morphometric variability was carried out by applying discriminant analyses and an NP-MANOVA on distance matrices to determine the variability and the percentages of correct classification, at both the methodological and the taxonomic level. The results indicated that shape analyses with Fourier series allow the morphometric variability of M2s in Hominoidea genera to be analysed regardless of the number of contour points (30 to 120). The classification percentages are more variable and lower with the polar series (≈60-90%) than with the elliptic series (75-100%). Between 60 and 100 contour points with the elliptic method guarantees a correct description of tooth shape.
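A minimal sketch of the Fourier-descriptor idea underlying the paper: the simplified complex-Fourier variant below (closely related to, but simpler than, full elliptic Fourier analysis) computes size-normalized harmonic amplitudes of a closed contour; the synthetic contour, the point count and the normalization are illustrative choices, not the paper's data.

import numpy as np

def fourier_descriptors(x, y, n_harmonics=10):
    """Complex Fourier descriptors of a closed contour (simplified variant of
    the elliptic Fourier analysis used in the paper).

    x, y : contour coordinates sampled around the closed outline
    Returns the first n_harmonics amplitudes, normalized for position and size
    so that only shape information remains.
    """
    z = np.asarray(x) + 1j * np.asarray(y)
    z = z - z.mean()                      # remove position
    coeffs = np.fft.fft(z) / len(z)
    coeffs = coeffs[1:n_harmonics + 1]    # drop the DC term, keep 10 harmonics
    return np.abs(coeffs) / np.abs(coeffs[0])   # size-invariant amplitudes

# Synthetic lobed outline standing in for a molar contour, sampled with 60 points
t = np.linspace(0, 2 * np.pi, 60, endpoint=False)
r = 1.0 + 0.15 * np.cos(4 * t) + 0.05 * np.cos(7 * t)
fd = fourier_descriptors(r * np.cos(t), r * np.sin(t), n_harmonics=10)
print(fd.round(3))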

Relevance:

10.00%

Publisher:

Abstract:

Capillary electrophoresis (CE) encompasses a number of characteristics quite suitable for the simultaneous analysis of small ions, such as high efficiency and resolving power, directly associated with its impressively high peak capacity, and short analysis time. Under appropriate conditions, it is possible to separate approximately 36 anions in less than 3 minutes. In this work, the mechanisms by which anion analysis is performed were critically discussed, and a thorough review of the literature of the past 5 years, focusing mostly on applications of CE to anion analysis in real matrices, was presented.

Relevance:

10.00%

Publisher:

Abstract:

Several hundred artificial radionuclides are produced as a result of human activities, such as the operation of nuclear reactors and particle accelerators, the testing of nuclear weapons and nuclear accidents. Many of these radionuclides are short-lived and decay quickly after their production, but some are longer-lived and are released into the environment. From the radiological point of view, the most important radionuclides are cesium-137, strontium-90 and plutonium-239, owing to their chemical and nuclear characteristics. The first two radioisotopes have long half-lives (30 and 28 years), high fission yields and chemical behaviour similar to that of potassium and calcium, respectively. There is no stable analogue element for plutonium-239, which presents high radiotoxicity and a long half-life (24,000 years), and some marine organisms accumulate plutonium at high levels. The radionuclides introduced into the marine environment undergo various physical, chemical and biological processes taking place in the sea. These processes may involve physical dispersion or complicated chemical and biological interactions of the radionuclides with inorganic and organic suspended matter, a variety of living organisms, bottom sediments, etc. The behaviour of radionuclides in the sea depends primarily on their chemical properties, but it may also be influenced by the properties of the interacting matrices and other environmental factors. The major route of human radiation exposure to artificial radionuclides occurring in the marine environment is through the ingestion of radiologically contaminated marine organisms. This paper summarizes the main sources of contamination in the marine environment and presents an overview of the oceanic distribution of anthropogenic radionuclides in the FAO regions. A great number of measurements of artificial radionuclides have been carried out on various marine environmental samples in different oceans around the world, with cesium-137 being the most widely measured radionuclide. Radionuclide concentrations vary from region to region, according to the specific sources of contamination. In some regions, such as the Irish Sea, the Baltic Sea and the Black Sea, the concentrations depend on inputs due to discharges from reprocessing facilities and from the Chernobyl accident. In Brazil, the artificial radioactivity is low and corresponds to typical fallout deposition values for the Southern Hemisphere.

Relevance:

10.00%

Publisher:

Abstract:

This work describes the selective hydrolysis of the carboxyamide groups of asparagine and glutamine residues of collagen matrices for the preparation of negatively charged collagen biomaterials. The reaction was performed in the presence of chloride and sulfate salts of alkali and alkaline earth metals in aqueous dimethylsulfoxide solution, and selective hydrolysis of the carboxyamide groups of the collagen matrices was achieved without cleavage of peptide bonds. The result is a new collagen material with a controlled increase in negative charge content. Although the triple-helix secondary structure of tropocollagen was preserved, significant changes in thermal stability were observed, in association with a new pattern of tropocollagen macromolecular association, particularly with respect to microfibril assembly, thus providing, at physiological pH, a new type of collagen structure for biomaterial preparation, characterized by different charge and structural contents.