935 results for Two dimensional infrared spectroscopy correlation
Abstract:
Introduction ICM+ software encapsulates our 20 years' experience in brain monitoring. It collects data from a variety of bedside monitors and produces time trends of parameters defined using configurable mathematical formulae. To date it is being used in nearly 40 clinical research centres worldwide. We present its application for continuous monitoring of cerebral autoregulation using near-infrared spectroscopy (NIRS). Methods Data from multiple bedside monitors are processed by ICM+ in real time using a large selection of signal processing methods. These include various time-domain and frequency-domain analysis functions as well as fully customisable digital filters. The final results are displayed in a variety of ways, including simple time trends as well as time-window-based histograms, cross-histograms, correlations, and so forth. All this allows complex information from bedside monitors to be summarised concisely and presented to medical and nursing staff in a simple way that alerts them to the development of various pathological processes. Results One hundred and fifty patients monitored continuously with NIRS, arterial blood pressure (ABP) and intracranial pressure (ICP), where available, were included in this study. There were 40 severely head-injured adult patients and 27 SAH patients (NCCU, Cambridge); 60 patients undergoing cardiopulmonary bypass (Johns Hopkins Hospital, Baltimore); and 23 patients with sepsis (University Hospital, Basel). In addition, MCA flow velocity (FV) was monitored intermittently using transcranial Doppler. FV-derived and ICP-derived pressure reactivity indices (PRx, Mx), as well as NIRS-derived reactivity indices (Cox, Tox, Thx), were calculated and showed significant correlations with each other in all cohorts. Error-bar charts showing the reactivity index PRx versus CPP (optimal CPP chart), as well as similar curves for NIRS indices versus CPP and ABP, were also demonstrated.
Conclusions ICM+ software is proving to be a very useful tool for enhancing the battery of available means for monitoring cerebral vasoreactivity and potentially facilitating autoregulation-guided therapy. The complexity of data analysis is hidden inside loadable profiles, allowing investigators to take full advantage of validated protocols, including advanced processing formulas.
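Reactivity indices such as PRx, Mx, Cox, Tox and Thx are, in essence, moving correlation coefficients between slow waves of two bedside signals. The sketch below computes such an index with NumPy; the window and step sizes and the synthetic signals are illustrative assumptions, not ICM+ defaults.

```python
import numpy as np

def moving_correlation(x, y, window=30, step=6):
    """Pearson correlation of two signals over sliding windows.

    PRx-style reactivity indices correlate slow waves of arterial
    blood pressure with ICP (or a NIRS signal); window/step sizes
    here are illustrative only.
    """
    x, y = np.asarray(x, float), np.asarray(y, float)
    out = []
    for start in range(0, len(x) - window + 1, step):
        xs, ys = x[start:start + window], y[start:start + window]
        out.append(np.corrcoef(xs, ys)[0, 1])
    return np.array(out)

# Synthetic intact-reactivity example: ICP falls as ABP rises,
# so the index stays strongly negative.
abp = np.linspace(60, 100, 60) + np.random.default_rng(0).normal(0, 1, 60)
icp = 10 - 0.05 * (abp - 80)
prx = moving_correlation(abp, icp)
```

A persistently positive index would instead suggest pressure-passive (impaired) vasoreactivity.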
Abstract:
We prove a characterization of the support of the law of the solution of a stochastic wave equation with two-dimensional space variable, driven by a noise that is white in time and correlated in space. The result is a consequence of an approximation theorem, in the sense of convergence in probability, for equations obtained by smoothing the random noise. For some particular classes of coefficients, approximation in the Lp-norm for p ≥ 1 is also proved.
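The equation studied can be written schematically as follows (a reconstruction from the abstract's wording; the coefficients b, σ and the spatial covariance f are generic placeholders):

```latex
\frac{\partial^2 u}{\partial t^2}(t,x) - \Delta u(t,x)
  = \sigma\bigl(u(t,x)\bigr)\,\dot F(t,x) + b\bigl(u(t,x)\bigr),
  \qquad t \ge 0,\; x \in \mathbb{R}^2,
```

where F is a Gaussian noise, white in time and correlated in space, with covariance E[ Ḟ(t,x) Ḟ(s,y) ] = δ(t−s) f(x−y).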
Abstract:
Quantifying the spatial configuration of hydraulic conductivity (K) in heterogeneous geological environments is essential for accurate predictions of contaminant transport, but is difficult because of the inherent limitations in resolution and coverage associated with traditional hydrological measurements. To address this issue, we consider crosshole and surface-based electrical resistivity geophysical measurements, collected in time during a saline tracer experiment. We use a Bayesian Markov-chain-Monte-Carlo (McMC) methodology to jointly invert the dynamic resistivity data, together with borehole tracer concentration data, to generate multiple posterior realizations of K that are consistent with all available information. We do this within a coupled inversion framework, whereby the geophysical and hydrological forward models are linked through an uncertain relationship between electrical resistivity and concentration. To minimize computational expense, a facies-based subsurface parameterization is developed. The Bayesian-McMC methodology allows us to explore the potential benefits of including the geophysical data into the inverse problem by examining their effect on our ability to identify fast flowpaths in the subsurface, and their impact on hydrological prediction uncertainty. Using a complex, geostatistically generated, two-dimensional numerical example representative of a fluvial environment, we demonstrate that flow model calibration is improved and prediction error is decreased when the electrical resistivity data are included. The worth of the geophysical data is found to be greatest for long spatial correlation lengths of subsurface heterogeneity with respect to wellbore separation, where flow and transport are largely controlled by highly connected flowpaths.
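The accept/reject core of a Metropolis-type McMC sampler, as used for the joint inversion, can be sketched as below. The linear "forward models" and the Gaussian prior are simple stand-ins (the actual study couples full resistivity and flow/transport simulators through an uncertain petrophysical relationship):

```python
import numpy as np

rng = np.random.default_rng(42)

def log_posterior(theta, obs_tracer, obs_resist):
    """Toy joint log-posterior: two squared-misfit log-likelihoods
    (hydrological + geophysical data) plus an N(0,1) prior. The linear
    forward models are illustrative stand-ins for the real simulators."""
    pred_tracer = 2.0 * theta            # stand-in hydrological forward model
    pred_resist = -1.5 * theta + 1.0     # stand-in geophysical forward model
    ll = -0.5 * np.sum((pred_tracer - obs_tracer) ** 2)
    ll += -0.5 * np.sum((pred_resist - obs_resist) ** 2)
    return ll - 0.5 * np.sum(theta ** 2)

def metropolis(logp, theta0, n_steps=2000, step=0.3):
    """Random-walk Metropolis: propose, accept with prob min(1, ratio)."""
    theta = np.asarray(theta0, float)
    lp = logp(theta)
    samples = []
    for _ in range(n_steps):
        prop = theta + step * rng.normal(size=theta.shape)
        lp_prop = logp(prop)
        if np.log(rng.uniform()) < lp_prop - lp:   # accept/reject step
            theta, lp = prop, lp_prop
        samples.append(theta.copy())
    return np.array(samples)

truth = np.array([0.8])
obs_t = 2.0 * truth
obs_r = -1.5 * truth + 1.0
chain = metropolis(lambda th: log_posterior(th, obs_t, obs_r), np.zeros(1))
posterior_mean = chain[1000:].mean()   # discard burn-in
```

The posterior realizations (here, the chain after burn-in) play the role of the multiple K-field realizations consistent with all data.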
Abstract:
In common with many other plasma membrane glycoproteins of eukaryotic origin, the promastigote surface protease (PSP) of the protozoan parasite Leishmania contains a glycosyl-phosphatidylinositol (GPI) membrane anchor. The GPI anchor of Leishmania major PSP was purified following proteolysis of the PSP and analyzed by two-dimensional 1H-1H NMR, compositional and methylation linkage analyses, chemical and enzymatic modifications, and amino acid sequencing. From these results, the structure of the GPI-containing peptide was found to be Asp-Gly-Gly-Asn-ethanolamine-PO4-6Man alpha 1-6Man alpha 1-4GlcN alpha 1-6myo-inositol-1-PO4-(1-alkyl-2-acyl-glycerol). The glycan structure is identical to the conserved glycan core regions of the GPI anchors of Trypanosoma brucei variant surface glycoprotein and rat brain Thy-1 antigen, supporting the notion that this portion of GPIs is highly conserved. The phosphatidylinositol moiety of the PSP anchor is unusual, containing a fully saturated, unbranched 1-O-alkyl chain (mainly C24:0) and a mixture of fully saturated unbranched 2-O-acyl chains (C12:0, C14:0, C16:0, and C18:0). This lipid composition differs significantly from those of the GPIs of T. brucei variant surface glycoprotein and mammalian erythrocyte acetylcholinesterase but is similar to that of a family of glycosylated phosphoinositides found uniquely in Leishmania.
Abstract:
Near-infrared spectroscopy (NIRS) was used to analyse the crude protein content of dried and milled samples of wheat and to discriminate samples according to their stage of growth. A calibration set of 72 samples from three growth stages of wheat (tillering, heading and harvest) and a validation set of 28 samples were collected for this purpose. Principal components analysis (PCA) of the calibration set discriminated groups of samples according to the growth stage of the wheat. Based on these differences, a classification procedure (SIMCA) showed a very accurate classification of the validation set samples: all of them were successfully classified in each group when both the residual and the leverage were used in the classification criteria. Looking only at the residuals, all samples were also correctly classified, except one from the tillering stage that was assigned to both the tillering and heading stages. Finally, the determination of the crude protein content of these samples was considered in two ways: building a global model for all the growth stages, and building local models for each stage separately. The best prediction results for crude protein were obtained using a global model for samples in the first two growth stages (tillering and heading), and using a local model for the harvest-stage samples.
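The residual-based class membership test at the heart of SIMCA can be sketched as a per-class PCA model plus a reconstruction-error comparison. The toy "spectra" and single-component class models below are illustrative assumptions, not the wheat data set:

```python
import numpy as np

def pca_model(X, n_components=1):
    """Fit a per-class PCA model (class mean + leading loadings) via SVD."""
    mean = X.mean(axis=0)
    _, _, vt = np.linalg.svd(X - mean, full_matrices=False)
    return mean, vt[:n_components]

def residual(x, model):
    """Squared reconstruction residual of sample x under a class model --
    the core of SIMCA's residual-based membership test."""
    mean, load = model
    centred = x - mean
    recon = (centred @ load.T) @ load
    return float(np.sum((centred - recon) ** 2))

# Toy 'spectra': two growth stages with different band intensities.
rng = np.random.default_rng(1)
stage_a = rng.normal(0, 0.05, (20, 8)) + np.array([1, 1, 0, 0, 0, 0, 1, 1])
stage_b = rng.normal(0, 0.05, (20, 8)) + np.array([0, 0, 1, 1, 1, 1, 0, 0])
models = {"A": pca_model(stage_a), "B": pca_model(stage_b)}

unknown = np.array([1, 1, 0, 0, 0, 0, 1, 1]) + 0.02
assigned = min(models, key=lambda k: residual(unknown, models[k]))
```

A full SIMCA implementation would also test the leverage (distance within the model plane), which is what resolved the one ambiguous tillering sample in the study.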
Abstract:
We consider a renormalizable two-dimensional model of dilaton gravity coupled to a set of conformal fields as a toy model for quantum cosmology. We discuss the cosmological solutions of the model and study the effect of including the back-reaction due to quantum corrections. As a result, when the matter density is below some threshold, new singularities form in a weak-coupling region, which suggests that they will not be removed in the full quantum theory. We also solve the Wheeler-DeWitt equation. Depending on the quantum state of the Universe, the singularities may appear in a quantum region where the wave function is not oscillatory, i.e., where there is no well-defined notion of classical spacetime.
Abstract:
The most general black M5-brane solution of eleven-dimensional supergravity (with a flat R^4 spacetime in the brane and a regular horizon) is characterized by charge, mass and two angular momenta. We use this metric to construct general dual models of large-N QCD (at strong coupling) that depend on two free parameters. The mass spectrum of scalar particles is determined analytically (in the WKB approximation) and numerically over the whole two-dimensional parameter space. We compare the mass spectrum with analogous results from lattice calculations, and find that the supergravity predictions are close to the lattice results everywhere on the two-dimensional parameter space except along a special line. We also examine the mass spectrum of the supergravity Kaluza-Klein (KK) modes and find that the KK modes along the compact D-brane coordinate decouple from the spectrum for large angular momenta. There are, however, KK modes charged under a U(1)×U(1) global symmetry which do not decouple anywhere on the parameter space. General formulas for the string tension and action are also given.
Abstract:
We clarify some issues related to the evaluation of the mean value of the energy-momentum tensor for quantum scalar fields coupled to the dilaton field in two-dimensional gravity. Because of this coupling, the energy-momentum tensor for matter is not conserved and therefore it is not determined by the trace anomaly. We discuss different approximations for the calculation of the energy-momentum tensor and show how to obtain the correct amount of Hawking radiation. We also compute cosmological particle creation and quantum corrections to the Newtonian potential.
Abstract:
In traffic accidents involving motorcycles, paint traces can be transferred from the rider's helmet or smeared onto its surface. These traces are usually in the form of chips or smears and are frequently collected for comparison purposes. This research investigates the physical and chemical characteristics of the coatings found on motorcycle helmets. An evaluation of the similarities between helmet and automotive coating systems was also performed. Twenty-seven helmet coatings from 15 different brands and 22 models were considered. One sample per helmet was collected and observed using optical microscopy. FTIR spectroscopy was then used, and seven replicate measurements per layer were carried out to study the variability of each coating system (intravariability). Principal Component Analysis (PCA) and Hierarchical Cluster Analysis (HCA) were also performed on the infrared spectra of the clearcoats and basecoats of the data set. The most common systems were composed of two or three layers, consistently involving a clearcoat and a basecoat. The coating systems of helmets with composite shells systematically contained a minimum of three layers. FTIR spectroscopy results showed that acrylic urethane and alkyd urethane were the most frequent binders used for clearcoats and basecoats. A high proportion of the coatings (more than 95%) were differentiated based on microscopic examinations. The chemical and physical characteristics of the coatings allowed the differentiation of all but one pair of helmets of the same brand, model and color. Chemometrics (PCA and HCA) corroborated the classification based on visual comparisons of the spectra and allowed the whole data set to be studied at once (i.e., all spectra of the same layer). Thus, the intravariability of each helmet and its proximity to the others (intervariability) could be more readily assessed. It was also possible to determine the most discriminative chemical variables based on the study of the PCA loadings.
Chemometrics could therefore be used as a complementary decision-making tool when many spectra and replicates have to be taken into account. Similarities between automotive and helmet coating systems were highlighted, in particular with regard to automotive coating systems on plastic substrates (microscopy and FTIR). However, the primer layer of helmet coatings was shown to differ from the automotive primer. If the paint trace contains this layer, the risk of misclassification (i.e., helmet versus vehicle) is reduced. Nevertheless, a paint examiner should pay close attention to these similarities when analyzing paint traces, especially regarding smears or paint chips presenting an incomplete layer system.
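A minimal sketch of the HCA step, assuming SciPy's hierarchical clustering: replicate spectra of each helmet should cluster together (low intravariability) while different helmets separate (intervariability). The synthetic spectra are placeholders for real FTIR data:

```python
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster

# Toy replicate 'clearcoat spectra': 3 replicates for each of 2 helmets.
rng = np.random.default_rng(7)
helmet1 = rng.normal(0, 0.02, (3, 10)) + np.linspace(0, 1, 10)
helmet2 = rng.normal(0, 0.02, (3, 10)) + np.linspace(1, 0, 10)
spectra = np.vstack([helmet1, helmet2])

# Ward linkage on Euclidean distances, a common choice in chemometric HCA.
Z = linkage(spectra, method="ward")
labels = fcluster(Z, t=2, criterion="maxclust")

# Replicates of the same helmet should receive the same cluster label.
same1 = len(set(labels[:3])) == 1
same2 = len(set(labels[3:])) == 1
```

Inspecting the dendrogram heights at which replicates merge, versus those at which different helmets merge, gives the intra- versus inter-variability comparison described above.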
Abstract:
Infrared spectroscopy was used to characterize three series of a-Si:H/a-Si1-xCx:H multilayers in which the geometrical parameters were varied. The infrared-active vibrational groups in their spectra and the interference fringes in their absorption-free zone were studied to analyze the interfaces and the changes produced in very thin layers. Our results show that hydrogen is bonded to silicon only in monohydride groups. No additional hydrogen could be detected at these interfaces. The deposition of very thin a-Si1-xCx:H layers seems to affect their porous structure, making them denser.
Abstract:
"How old is this fingermark?" This question is relatively often raised in trials when suspects admit that they have left their fingermarks on a crime scene but allege that the contact occurred at a time different from that of the crime and for legitimate reasons. However, no answer can currently be given to this question, because no fingermark dating methodology has been validated and accepted by the forensic community as a whole. Nevertheless, the review of past American cases conducted in this research highlighted that experts do give testimony in court about the age of fingermarks, even if it is mostly based on subjective and poorly documented parameters. It was relatively easy to access fully described American cases, which explains the origin of the given examples; however, fingermark dating issues are encountered worldwide, and the lack of consensus among the answers given underlines the necessity of conducting research on the subject. The present work thus aims at studying the possibility of developing an objective fingermark dating method. As the questions surrounding the development of dating procedures are not new, different attempts have already been described in the literature. This research proposes a critical review of these attempts and highlights that most of the reported methodologies still suffer from limitations preventing their use in actual practice.
Nevertheless, some approaches based on the evolution over time of intrinsic compounds detected in fingermark residue appear promising. Thus, an exhaustive review of the literature was conducted in order to identify the compounds present in fingermark residue and the analytical techniques capable of detecting them. It was chosen to concentrate on sebaceous compounds analysed using gas chromatography coupled with mass spectrometry (GC/MS) or Fourier transform infrared spectroscopy (FTIR). GC/MS analyses were conducted in order to characterize the initial variability of target lipids among fresh fingermarks of the same donor (intra-variability) and between fingermarks of different donors (inter-variability). As a result, several molecules were identified and quantified for the first time in fingermark residue. Furthermore, it was determined that the intra-variability of the fingermark residue was significantly lower than the inter-variability, but that both kinds of variability could be reduced using different statistical pre-treatments inspired by the field of drug profiling. It was also possible to propose an objective donor classification model allowing donors to be grouped into two main classes based on the initial lipid composition of their fingermarks. These classes correspond to what is currently, and relatively subjectively, called "good" or "bad" donors. The potential of such a model is high for fingermark research, as it allows the selection of representative donors based on compounds of interest.
Using GC/MS and FTIR, an in-depth study was conducted of the effects of different influence factors on the initial composition and aging of target lipid molecules found in fingermark residue. It was determined that univariate and multivariate models could be built to describe the aging of target compounds (transformed into aging parameters through pre-processing), but that some influence factors affected these models more seriously than others. In fact, the donor, the substrate and the application of enhancement techniques seemed to hinder the construction of reproducible models. The other factors tested (deposition moment, pressure, temperature and illumination) also affected the residue and its aging, but models combining different values of these factors still proved robust in well-defined situations. Furthermore, test fingermarks were analysed by GC/MS in order to be dated using some of the generated models. Correct estimations were obtained for more than 60% of the dated test fingermarks, and for up to 100% when the storage conditions were known. These results are interesting, but further research should be conducted to evaluate whether these models could be applied in uncontrolled casework conditions.
In a more fundamental perspective, a pilot study was also conducted on the use of infrared spectroscopy combined with chemical imaging (FTIR-CI) in order to gain information about fingermark composition and aging. More precisely, the ability of this technique to highlight aging and the effects of influence factors over large areas of fingermarks was investigated. This information was then compared with that given by individual FTIR spectra. It was concluded that, while FTIR-CI is a powerful tool, its use for studying natural fingermark residue for forensic purposes has to be carefully considered. In fact, in this study, the technique did not yield more information than traditional FTIR spectra and also suffered from major drawbacks, such as long analysis and processing times, particularly when large fingermark areas need to be covered.
Finally, the results obtained in this research allowed the proposition and discussion of a formal and pragmatic framework for approaching fingermark dating questions. This framework identifies which type of information the scientist is currently able to provide to investigators and/or the courts. It also describes the different iterative development steps that research should follow in order to achieve the validation of an objective fingermark dating methodology whose capacities and limits are well known and properly documented.
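As an illustration of the kind of univariate aging model the abstract describes, the sketch below fits a calibration line relating a synthetic lipid "aging parameter" to log(age) and inverts it to date a questioned mark. The linear-in-log form and all numbers are assumptions for illustration, not values from the thesis:

```python
import numpy as np

# Synthetic calibration data: an aging parameter assumed to decay
# linearly with log(age). Real models are built from GC/MS measurements.
ages_days = np.array([1, 3, 7, 14, 28])
aging_param = (1.0 - 0.3 * np.log(ages_days)
               + np.random.default_rng(3).normal(0, 0.01, 5))

# Least-squares fit: aging_param ~ a + b * log(age)
A = np.vstack([np.ones_like(ages_days, dtype=float),
               np.log(ages_days)]).T
(a, b), *_ = np.linalg.lstsq(A, aging_param, rcond=None)

def estimate_age(measured):
    """Invert the calibration line to date a questioned mark (days)."""
    return float(np.exp((measured - a) / b))

# Date a 'questioned mark' whose true age is 10 days.
est = estimate_age(1.0 - 0.3 * np.log(10))
```

In practice the uncertainty of such an inverse estimate, and its sensitivity to donor, substrate and storage conditions, is exactly what limits casework use.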
Abstract:
This work presents an analysis of the assessment tools used by professors at the Universitat Politècnica de Catalunya to assess the generic competencies introduced in the Bachelor's Degrees in Engineering. In order to conduct this study, a survey was designed and administered anonymously to a sample of the professors most receptive to educational innovation at their own university. In total, 80 professors responded to this survey, of whom 26% turned out to be members of the university's own evaluation innovation group, GRAPA (https://www.upc.edu/rima/grups/grapa). This percentage represents 47% of the total GRAPA membership, meaning that nearly half of the professors most concerned about evaluation at the university chose to participate. The analysis of the variables, carried out using the statistical program SPSS v19, shows that for practically 49% of those surveyed, rubrics are the tools most commonly used to assess generic competencies integrated in more specific ones. Of those surveyed, 60% use them either frequently or always. The most frequently evaluated generic competencies were teamwork (28%), problem solving (26%), effective oral and written communication (24%) and autonomous learning (13%), all of which constitute commonly recognized competencies in the engineering profession. A two-dimensional crosstabs analysis with SPSS v19 shows a significant correlation (Asymp. Sig. 0.001) between the type of tool used and the competencies assessed. However, no significant correlation was found between the type of assessment tool used and the type of subject, type of evaluation (formative or summative), frequency of feedback given to the students, or the degree of student satisfaction; thus none of these variables can be considered to have an influence on the kind of assessment tool used. In addition, the results also indicate that there are no significant differences between the instructors belonging to GRAPA and the rest of those surveyed.
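The crosstabs significance test reported above corresponds to a chi-square test of independence; a minimal sketch with SciPy is shown below. The contingency counts are invented for illustration, not the survey's data:

```python
import numpy as np
from scipy.stats import chi2_contingency

# Toy contingency table: rows = assessment tool, cols = competency assessed.
# Counts are invented for illustration only.
table = np.array([
    [20,  5,  3],   # rubric
    [ 4, 15,  2],   # oral exam
    [ 3,  2, 12],   # portfolio
])
chi2, p, dof, expected = chi2_contingency(table)
dependent = p < 0.05   # reject independence => tool and competency related
```

A small p-value (like the study's Asymp. Sig. 0.001) indicates that tool choice and assessed competency are not independent.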
Abstract:
The catalytic dehydrogenation of ethylbenzene in the presence of steam is the main commercial route to produce styrene. The industrial catalysts are potassium- and chromia-doped hematite, which shows low surface area, leading to poor performance and a short life. In order to develop catalysts with high surface areas, the effect of beryllium on the textural properties and on the catalytic performance of this iron oxide was studied. The influence of the amount of dopant, the starting material and the calcination temperature was also studied. In sample preparations, iron and beryllium salts (nitrate or sulfate) were hydrolyzed with ammonia and then calcined. The experiments followed a factorial design with two variables at two levels (Fe/Be = 3 and 7; calcination temperature = 500 and 700 °C). Solids without any dopant were also prepared. Samples were characterized by elemental analysis, infrared spectroscopy, surface area and porosity measurements, X-ray diffraction, DSC and TG. The catalysts were tested in a microreactor at 524 °C and 1 atm, using a steam/ethylbenzene mole ratio of 10. The selectivity was measured by monitoring styrene, benzene and toluene formation. It was found that the effect of beryllium on the characteristics of hematite and on its catalytic performance depends on the starting material and on the amount of dopant. Surface areas increased due to the dopant as well as the nature of the precursor; samples produced from beryllium sulfate showed higher areas. Beryllium-doped solids showed a higher catalytic activity when compared to pure hematite, but no significant influence of the anion of the starting material was noted. It can be concluded that beryllium acts as both a textural and a structural promoter. Samples with Fe/Be = 3, heated at 500 °C, led to the highest conversion and were the most selective.
However, catalysts prepared from beryllium sulfate are the most promising for ethylbenzene dehydrogenation due to their high surface area, which could lead to a longer life.
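The two-level, two-variable factorial design described above can be analysed by contrasting mean responses at each factor level. A minimal sketch, with invented conversion values chosen only to mirror the stated trend (Fe/Be = 3 at 500 °C best):

```python
import numpy as np

# 2x2 factorial design: factor 0 = Fe/Be ratio (-1 -> 3, +1 -> 7),
# factor 1 = calcination temperature (-1 -> 500 C, +1 -> 700 C).
# Conversion values (%) are invented for illustration only.
runs = {(-1, -1): 62.0, (+1, -1): 55.0,
        (-1, +1): 48.0, (+1, +1): 45.0}

def effect(runs, index):
    """Main effect: mean response at the +1 level minus mean at -1."""
    hi = np.mean([y for k, y in runs.items() if k[index] == +1])
    lo = np.mean([y for k, y in runs.items() if k[index] == -1])
    return hi - lo

effect_ratio = effect(runs, 0)   # Fe/Be main effect
effect_temp = effect(runs, 1)    # calcination-temperature main effect
```

Negative main effects for both factors correspond to the reported optimum at the low levels (Fe/Be = 3, 500 °C).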
Abstract:
This thesis presents experimental studies of rare earth (RE) metal induced structures on Si(100) surfaces. Two divalent RE metal adsorbates, Eu and Yb, are investigated on nominally flat Si(100) and on vicinal, stepped Si(100) substrates. Several experimental methods have been applied, including scanning tunneling microscopy/spectroscopy (STM/STS), low energy electron diffraction (LEED), synchrotron radiation photoelectron spectroscopy (SR-PES), Auger electron spectroscopy (AES), thermal desorption spectroscopy (TDS), and work function change measurements (Δφ). Two stages can be distinguished in the initial growth of the RE/Si interface: the formation of a two-dimensional (2D) adsorbed layer at submonolayer coverage and the growth of a three-dimensional (3D) silicide phase at higher coverage. The 2D phase is studied for both adsorbates in order to discover whether they produce common reconstructions or reconstructions common to the other RE metals. For studies of the 3D phase, Yb is chosen due to its ability to crystallize in a hexagonal AlB2-type lattice, which is the structure of RE silicide nanowires, therefore allowing for the possibility of the growth of one-dimensional (1D) wires. It is found that despite their similar electronic configuration, Eu and Yb do not form similar 2D reconstructions on Si(100). Instead, a wealth of 2D structures is observed and atomic models are proposed for the 2×3-type reconstructions. In addition, adsorbate-induced modifications of surface morphology and orientational symmetry are observed. The formation of the Yb silicide phase follows the Stranski-Krastanov growth mode. Nanowires with the hexagonal lattice are observed on the flat Si(100) substrate, and moreover, an unexpectedly large variety of growth directions is revealed. On the vicinal substrate, the growth of the silicide phase as 3D islands and wires depends drastically on the growth conditions.
The conditions under which wires with high aspect ratio and single orientation parallel to the step edges can be formed are demonstrated.
Abstract:
This article presents the results of a study of the efficiency of the silanation process of calcium phosphate glass particles and its effect on the bioactivity behavior of glass-poly(methyl methacrylate) (PMMA) composites. Two different calcium phosphate glasses, 44.5CaO-44.5P2O5-11Na2O (BV11) and 44.5CaO-44.5P2O5-6Na2O-5TiO2 (G5), were synthesized and treated with a silane coupling agent. The glasses obtained were characterized by microprobe and BET analyses, while the efficiency of the silanation process was determined using Fourier Transform Infrared Spectroscopy (FTIR), X-ray Photoelectron Spectroscopy (XPS) and thermal analysis (DTA and TG) techniques. The content of coupling agent chemically bonded to the silanated glasses amounted to 1.69 ± 0.02 wt % for the BV11sil glass and 0.93 ± 0.01 wt % for the G5sil glass. The in vitro bioactivity test carried out in Simulated Body Fluid (SBF) revealed a certain bioactive performance when both silanated glasses were used at 30% (by weight) as filler of the PMMA composites, owing to the superficial deposition of an apatite-like layer with a low content of CO3^2- and HPO4^2- in its structure after soaking for 30 days. © 2013 Wiley Periodicals, Inc. J Biomed Mater Res Part B: Appl Biomater 00B: 000-000, 2013.