975 results for Matrix Analytic Methods
Abstract:
SPECT (Single Photon Emission Computed Tomography) systems are part of a medical image acquisition technology that has gained prominence because the resulting images are functional and thus complementary to those that provide anatomical information, such as X-ray CT, giving them high diagnostic value. This equipment acquires, in a non-invasive way, images of the interior of the human body through tomographic mapping of radioactive material administered to the patient. SPECT systems are based on the Gamma Camera detection system, and a single Gamma Camera mounted on a rotating gantry is enough to obtain the data needed for a tomographic image. The images obtained from a SPECT system consist of a set of planar images that describe the radioactive distribution in the patient. The transaxial slices are obtained by tomographic reconstruction techniques. There are analytic and iterative methods for tomographic reconstruction. The analytic methods are based on the Fourier Slice Theorem (FST), while the iterative methods search for numerical solutions to the equations derived from the projections. Among the analytic methods, filtered backprojection (FBP) is perhaps the simplest of all tomographic reconstruction techniques. This paper's goal is to present the operation of the SPECT system, the Gamma Camera detection system, some tomographic reconstruction techniques, and the requirements for the implementation of such a system in a Nuclear Medicine service.
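For readers unfamiliar with FBP, the sketch below illustrates the basic idea under simplifying assumptions (parallel-beam geometry, an ideal ramp filter applied via the Fourier Slice Theorem, nearest-neighbour backprojection); the names `sinogram` and `angles_deg` are illustrative and not taken from the paper.

```python
# Minimal sketch of filtered backprojection (FBP), assuming a parallel-beam
# sinogram of shape (n_detectors, n_angles) and projection angles in degrees.
import numpy as np

def fbp_reconstruct(sinogram, angles_deg):
    n_det, n_ang = sinogram.shape
    # Ramp filter applied in the Fourier domain (Fourier Slice Theorem).
    ramp = np.abs(np.fft.fftfreq(n_det))
    filtered = np.real(np.fft.ifft(np.fft.fft(sinogram, axis=0) * ramp[:, None], axis=0))

    # Backproject each filtered projection over the image grid.
    recon = np.zeros((n_det, n_det))
    center = n_det // 2
    xs = np.arange(n_det) - center
    X, Y = np.meshgrid(xs, xs)
    for i, theta in enumerate(np.deg2rad(angles_deg)):
        # Detector coordinate of every pixel for this projection angle.
        t = X * np.cos(theta) + Y * np.sin(theta) + center
        t_idx = np.clip(np.round(t).astype(int), 0, n_det - 1)
        recon += filtered[t_idx, i]
    return recon * np.pi / (2 * n_ang)
```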
Abstract:
This qualitative research investigated the contributions of theoretical and methodological perspectives to the literacy process, with the aim of training proficient writers of texts with a communicative function. The goals established for this research were: to take a retrospective look at the synthetic and analytic methods of teaching literacy; to verify the possibilities and limits of theoretical and methodological approaches to acquiring the ability to write; to identify the teachers' ideas concerning basic concepts (such as literacy, text, and textual genres) involved in the teaching of writing; and to establish which activities they develop in the course of teaching and whether the pedagogical practices of the observed teachers contributed to the formation of students capable of producing meaningful texts of diverse genres, suited to their objectives, to their readers, and to the context in which the texts circulate. The research was carried out in a public elementary school, in a 2nd grade class in the city of Jaú. The research subjects were four teachers and twenty-eight children. Two instruments were used for data collection: a questionnaire and direct observation, recorded in research diaries. Among the observations made were that, although there are many studies on literacy, the practices actually adopted in class are empirical in nature and do not contribute at all to the development of individual writing or reading skills; rather, only the ability to establish relations between phonemes and graphemes is valued; and that literacy teachers lack adequate knowledge of the concepts of literacy, teaching, text, and textual genres, valuing a single methodological theory and disregarding other contributions in the field of text production.
Abstract:
Conselho Nacional de Desenvolvimento Científico e Tecnológico (CNPq)
Abstract:
The development of more selective and sensitive analytic methods is of great importance for better quality in the determination of chemical species, thereby increasing the reliability of the results. In this context, the optimization of separation/preconcentration steps is still necessary. Molecularly Imprinted Polymers (MIPs) have proved to be an efficient analytical tool with great potential for minimizing the limitations of traditionally employed separation/preconcentration techniques. In general, MIPs are obtained by polymerization in the presence of a template molecule, so that a polymeric skeleton is formed around the future analyte. In the present work, the template used was estradiol valerate (EV), a compound used in hormone replacement therapy (HRT) during the climacteric. After bulk polymerization in an anaerobic environment using MAA, EGDMA, AIBN, acetonitrile, and EV, the resulting MIP was ground, sieved (<120 μm), and placed in a Soxhlet system containing ethanol at 60 °C in order to remove the imprinted molecule through six successive washes of 24 hours each. The wash solutions were analyzed by HPLC and UV/Vis spectrophotometry. The MIP was then dried at room temperature and 150 mg was packed into SPE cartridges in order to evaluate the polymer's efficiency in the pre-concentration and extraction of the analyte. To do so, 100.0 mL of EV standard solution (2 mg L-1) were pre-concentrated at 4.0 mL min-1 and eluted with 10.0 mL of ethanol at 1.0 mL min-1, giving recoveries of 53%. Additionally, a NIP (non-imprinted polymer) was prepared for comparison, for which the recovery was 80%. Likewise, studies conducted with commercial Strata™-X cartridges gave 53% recovery. Since the results did not reflect what was expected regarding the efficiency of the MIP in the recovery, a computational ...
Abstract:
Analytic methods were applied and validated to measure residues of chlorfenvinphos, fipronil, and cypermethrin in bovine meat and fat, using the QuEChERS method and gas chromatography-mass spectrometry. For the meat, 2 g of sample, 4 mL of acetonitrile, 1.6 g of MgSO4, and 0.4 g of NaCl were used in the liquid-liquid partition, while 80 mg of C18, 80 mg of primary and secondary amine, and 150 mg of MgSO4 were employed in the dispersive solid-phase extraction. For the fat, 1 g of sample, 5 mL of hexane, 10 mL of water, 10 mL of acetonitrile, 4 g of MgSO4, and 0.5 g of NaCl were used in the liquid-liquid partition, and 50 mg of primary and secondary amine and 150 mg of MgSO4 were used in the dispersive solid-phase extraction. The recovery percentages obtained for the pesticides in meat at different concentrations ranged from 81 to 129%, with relative standard deviations below 27%. The corresponding results for the fat ranged from 70 to 123%, with relative standard deviations below 25%. The methods showed sensitivity, precision, and accuracy according to EPA standards, and quantification limits below the maximum residue limits established by the European Union, except for chlorfenvinphos in fat.
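As a small illustration of how the recovery and precision figures quoted above are typically derived from replicate analyses of fortified samples, the sketch below computes percent recovery and relative standard deviation; the function name and inputs are assumptions for illustration, not the study's actual data-processing code.

```python
# Recovery (%) and relative standard deviation (%) from replicate measurements
# of a sample spiked at a known nominal concentration (illustrative only).
import numpy as np

def recovery_and_rsd(measured_conc, spiked_conc):
    """measured_conc: replicate measured concentrations; spiked_conc: nominal spike level."""
    measured = np.asarray(measured_conc, dtype=float)
    recovery_pct = 100.0 * measured.mean() / spiked_conc
    rsd_pct = 100.0 * measured.std(ddof=1) / measured.mean()
    return recovery_pct, rsd_pct
```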
Abstract:
Nuclear Magnetic Resonance (NMR) is a branch of spectroscopy based on the fact that many atomic nuclei can be oriented by a strong magnetic field and will absorb radiofrequency radiation at characteristic frequencies. The parameters that can be measured on the resulting spectral lines (line positions, intensities, line widths, multiplicities, and transients in time-dependent experiments) can be interpreted in terms of molecular structure, conformation, molecular motion, and other rate processes. High-resolution (HR) NMR thus allows qualitative and quantitative analysis of samples in solution, in order to determine the structure of molecules in solution and more. In the past, high-field NMR spectroscopy was mainly concerned with the elucidation of chemical structure in solution, but today it is emerging as a powerful exploratory tool for probing biochemical and physical processes. It represents a versatile tool for the analysis of foods. Many NMR studies have been reported in the literature on different types of food, such as wine, olive oil, coffee, fruit juices, milk, meat, egg, starch granules, flour, etc., using different NMR techniques. Traditionally, univariate analytical methods have been used to explore spectroscopic data. Such a method measures or selects a single descriptive variable from the whole spectrum and, in the end, only this variable is analyzed. Applied to HR-NMR data, this univariate approach leads to several problems, owing especially to the complexity of an NMR spectrum. The spectrum is composed of different signals belonging to different molecules, and the same molecule can also be represented by several, generally strongly correlated, signals. Univariate methods, in this case, take into account only one or a few variables, causing a loss of information. Thus, when dealing with complex samples such as foodstuffs, univariate analysis of spectral data is not powerful enough. Spectra need to be considered as a whole and, to analyze them, the whole data matrix must be taken into consideration: chemometric methods are designed to treat such multivariate data. Multivariate data analysis is used for a number of distinct purposes, and its aims can be divided into three main groups: • data description (explorative data-structure modelling of any generic n-dimensional data matrix, e.g., PCA); • regression and prediction (PLS); • classification and prediction of class membership for new samples (LDA, PLS-DA, and ECVA). The aim of this PhD thesis was to verify the possibility of identifying and classifying plants or foodstuffs into different classes, based on the concerted variation in metabolite levels detected by NMR spectra, using multivariate data analysis as a tool to interpret the NMR information. It is important to underline that the results obtained are useful for pointing out the metabolic consequences of a specific modification of foodstuffs, avoiding the use of targeted analysis for the different metabolites. The data analysis is performed by applying chemometric multivariate techniques to the dataset of acquired NMR spectra. The research work presented in this thesis is the result of a three-year PhD study.
This thesis reports the main results obtained from two main activities: A1) evaluation of a data pre-processing system in order to minimize unwanted sources of variation due to different instrumental set-ups, manual spectrum processing, and sample-preparation artefacts; A2) application of multivariate chemometric models to the data analysis.
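As a small illustration of the first group of chemometric aims listed above (exploratory data description with PCA), the sketch below applies PCA to a binned NMR data matrix; the variable names and preprocessing choices are illustrative assumptions, not the pipeline used in the thesis.

```python
# Minimal sketch of PCA-based exploration of an NMR data matrix, assuming
# `spectra` is an (n_samples, n_variables) array of binned/aligned 1H-NMR
# intensities.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.preprocessing import StandardScaler

def explore_spectra(spectra, n_components=3):
    # Mean-center the variables before PCA (a common pre-processing step
    # for spectroscopic data; scaling is left out here).
    X = StandardScaler(with_std=False).fit_transform(spectra)
    pca = PCA(n_components=n_components)
    scores = pca.fit_transform(X)       # sample coordinates in PC space
    loadings = pca.components_          # contribution of each spectral variable
    explained = pca.explained_variance_ratio_
    return scores, loadings, explained
```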
Abstract:
In many areas of mathematics it is desirable to understand the monodromy group of a homogeneous linear differential equation. Only a few analytic methods for computing this group are known, so in the first part of this thesis we develop a numerical method to approximate its generators. In the second part we summarize the foundations of the theory of uniformization of Riemann surfaces and of arithmetic Fuchsian groups, and we explain how our numerical method can be of use in determining uniformizing differential equations. For arithmetic Fuchsian groups with two generators we obtain local data and free parameters of Lamé equations that uniformize the associated Riemann surfaces. In the third part we give a short outline of homological mirror symmetry and introduce the $\widehat{\Gamma}$-class. We explain how it can be used to prove a Hodge-theoretic version of mirror symmetry for toric varieties. From this we derive conjectures about the monodromy group $M$ of Picard-Fuchs equations of certain families $f:\mathcal{X}\rightarrow\mathbb{P}^1$ of $n$-dimensional Calabi-Yau varieties. These state, first, that with respect to a natural basis the monodromy matrices in $M$ have entries in the field $\mathbb{Q}(\zeta(2j+1)/(2\pi i)^{2j+1},\ j=1,\ldots,\lfloor (n-1)/2 \rfloor)$, and second, that topological invariants of the mirror partner of a generic fiber of $f:\mathcal{X}\rightarrow\mathbb{P}^1$ can be reconstructed from a special element of $M$. Finally, we use the methods developed in the first part to verify these conjectures, primarily in dimension three. In addition, we compile a list of candidate topological invariants of conjecturally existing three-dimensional Calabi-Yau varieties with $h^{1,1}=1$.
Abstract:
Surgical repair of the rotator cuff is one of the most common procedures in orthopedic surgery. Despite being the focus of much research, the physiological tendon-bone insertion is not recreated following repair, and there is an anatomic non-healing rate of up to 94%. During the healing phase, several growth factors are upregulated that induce cellular proliferation and matrix deposition; this provisional matrix is subsequently replaced by the definitive matrix. Leukocyte- and platelet-rich fibrin (L-PRF) contains growth factors and has a stable, dense fibrin matrix, which makes the use of L-PRF in rotator cuff repair theoretically attractive. The aims of the present study were to determine 1) the optimal protocol to achieve the highest leukocyte content; 2) whether L-PRF releases growth factors in a sustained manner over 28 days; and 3) whether the standard/gelatinous or the dry/compressed matrix preparation method results in higher growth factor concentrations. 1) The standard L-PRF centrifugation protocol at 400 x g showed the highest concentration of platelets and leukocytes. 2) L-PRF clots cultured in medium showed a continuous slow release, with an increase in the absolute release of the growth factors TGF-β1, VEGF, and MPO in the first 7 days, and of IGF-1, PDGF-AB, and platelet activity (PF4 = CXCL4) in the first 8 hours, followed by a decrease to close to zero at 28 days. Significantly higher levels of growth factors were expressed relative to the control values of normal blood at each culture time point. 3) Except for MPO and TGF-β1, there was always a tendency towards higher release of growth factors (i.e., CXCL4, IGF-1, PDGF-AB, and VEGF) in the standard/gelatinous group compared to the dry/compressed group. L-PRF in its optimal standard/gelatinous-type matrix can store and deliver specific healing growth factors locally for up to 28 days and may be a useful adjunct in rotator cuff repair.
Abstract:
In an accelerated exclusion process (AEP), each particle can "hop" to its adjacent site if that site is empty, and it "kicks" the frontmost particle when joining a cluster of size ℓ ⩽ ℓ_max. With various choices of the interaction range ℓ_max, we find that the steady state of the AEP can be a homogeneous phase with augmented currents (AC) or a segregated phase with holes moving at unit velocity (UV). Here we present a detailed study of the emergence of these novel phases from two perspectives: the AEP and a mass transport process (MTP). In the latter picture, the system in the UV phase is composed of a condensate coexisting with a fluid, while the transition from AC to UV can be regarded as condensation. Using Monte Carlo simulations, exact results for special cases, and analytic methods in a mean-field approach (within the MTP), we focus on steady-state currents and cluster sizes. Excellent agreement between data and theory is found, providing an insightful picture for understanding this model system.
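A minimal Monte Carlo sketch of the hop-and-kick rule as described in the abstract is given below; the ring geometry, random-sequential updating, and all names are assumptions for illustration rather than the authors' simulation code.

```python
# Toy simulation step for an accelerated exclusion process (AEP) on a ring:
# a particle hops right into an empty neighboring site and, if it thereby
# joins a cluster of size <= l_max, the frontmost particle of that cluster
# is kicked forward into the next empty site.
import numpy as np

def aep_step(occ, l_max, rng):
    """One random-sequential update. occ: 0/1 array of occupations."""
    L = occ.size
    i = rng.integers(L)
    if occ[i] and not occ[(i + 1) % L]:
        # Ordinary hop to the empty right neighbor.
        occ[i] = 0
        p = (i + 1) % L
        occ[p] = 1
        # Size of the cluster the hopping particle just joined (counted
        # forward from p, stopping once it exceeds l_max).
        size = 0
        while occ[(p + size) % L] and size <= l_max:
            size += 1
        # If an existing cluster of size <= l_max was joined, kick its
        # frontmost particle into the empty site ahead of it.
        if 2 <= size <= l_max:
            front = (p + size - 1) % L
            occ[front] = 0
            occ[(front + 1) % L] = 1
    return occ

# Example usage: half-filled ring of 100 sites, l_max = 3.
rng = np.random.default_rng(0)
occ = (np.arange(100) % 2).astype(int)
for _ in range(10000):
    occ = aep_step(occ, l_max=3, rng=rng)
```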
Abstract:
PURPOSE: The low diagnostic yield of vitrectomy specimen analysis in chronic idiopathic uveitis (CIU) has been related, in older studies, to the complex nature of the underlying disease and to methodologic and tissue-immanent factors. In an attempt to evaluate the impact of recently acquired analytic methods, the authors assessed the current diagnostic yield in CIU. METHODS: Retrospective analysis of consecutive vitrectomy specimens from patients with chronic endogenous uveitis (n = 56) in whom an extensive systemic workup had not revealed a specific diagnosis (idiopathic) and medical treatment had not produced a satisfactory clinical situation. Patients with acute postoperative endophthalmitis served as a basis for methodologic comparison (Group 2; n = 21). RESULTS: In CIU, a specific diagnosis was provided in 17.9% and a specific diagnosis was excluded in 21.4%; in 60.7% the laboratory investigations were inconclusive. In postoperative endophthalmitis, microbiological culture established the infectious agent in 47.6%. In six of eight randomly selected cases, eubacterial PCR identified bacterial DNA, confirming the culture results in three, remaining negative in two with a positive culture, and being positive in three specimens with no growth. A double-negative result never occurred, suggesting a very high detection rate when both tests were applied. CONCLUSIONS: In contrast to the significantly improved sensitivity of combined standardized culture and PCR analysis in endophthalmitis, the diagnostic yield of vitrectomy specimen analysis in CIU has not been improved in recent years by the methods now routinely applied. Consequently, the low diagnostic yield in CIU has to be attributed to insufficient understanding of the underlying pathophysiologic mechanisms.
Abstract:
An international graduate teaching assistant's (IGTA's) way of speaking may pose a challenge for college students enrolled in STEM courses at American universities. Students commonly complain that unfamiliar accents interfere with their ability to comprehend the IGTA or that they have difficulty making sense of the IGTA's use of words or phrasing. These frustrations are echoed by parents who pay tuition bills. The issue has provoked state and national legislative debates over universities' use of IGTAs. However, potentially productive debates and interventions have been stalemated by the failure to confront deeply embedded myths and cultural models that devalue otherness and privilege dominant peoples, processes, and knowledge. My research implements a method of inquiry designed to identify and challenge these cultural frameworks in order to create an ideological/cultural context that will facilitate rather than impede the valuable efforts already in place. Discourse theorist Paul Gee's concepts of master myth, cultural models, and meta-knowledge offer analytical tools that I have adapted in a research approach emphasizing triangulation of both analytic methods and data sites. I examine debates over IGTAs' use of language in the classroom among policy-makers, parents of college students, and scholars and teachers. First, the article "Teach Impediment" provides a particularly lucid account of the public debate over IGTAs; my analysis evidences the cultural hold of the master myth of monolingualism in public policy-making. Second, Michigan Technological University's email listserv Parentnet is analyzed to identify cultural models supporting monolingualism implicit in everyday conversation. Third, a Chronicle of Higher Education colloquy forum is analyzed to explore whether scholars and teachers who draw on communication and linguistic research overcome the ideological biases identified in earlier chapters. My analysis indicates that a persistent ideological bias plays out across these data sites, despite explicit claims to the contrary by invested speakers. This bias is a key reason why monolingualism remains so tenaciously a part of educational practice. Because irrational expectations and derogatory assumptions have gone unchallenged, little progress has been made despite decades of earnest work and good intentions. Therefore, my recommendations focus on what we say, not what we intend.
Abstract:
Background: Recently, Cipriani and colleagues examined the relative efficacy of 12 new-generation antidepressants for major depression using network meta-analytic methods. They found that some of these medications outperformed others in patient response to treatment. However, several methodological criticisms have been raised about network meta-analysis in general and Cipriani's analysis in particular, creating the concern that the stated superiority of some antidepressants over others may be unwarranted. Materials and Methods: A Monte Carlo simulation was conducted that replicated Cipriani's network meta-analysis under the null hypothesis (i.e., no true differences between antidepressants). The simulation strategy was as follows: (1) 1000 datasets were generated under the null hypothesis (i.e., under the assumption that there were no differences among the 12 antidepressants), (2) each of the 1000 datasets was network meta-analyzed, and (3) the total number of false positive results from the network meta-analyses was calculated. Findings: More than 7 times out of 10, the network meta-analysis produced one or more comparisons indicating the superiority of at least one antidepressant when no such true differences existed. Interpretation: Based on our simulation study, under conditions identical to those of the 117 RCTs with 236 treatment arms contained in Cipriani et al.'s meta-analysis, one or more false claims about the relative efficacy of antidepressants will be made over 70% of the time. As others have also shown, there is little evidence in these trials that any antidepressant is more effective than another. The tendency of network meta-analyses to generate false positive results should be considered when conducting multiple-comparison analyses.
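The sketch below illustrates the null-simulation logic in a deliberately simplified form: all treatments share one true response rate, every pairwise contrast is tested, and the fraction of simulations with at least one spurious "significant" difference is recorded. It is a toy multiple-comparisons example, not the authors' network meta-analysis pipeline, and all names and parameters are assumptions.

```python
# Familywise false-positive rate under the null: simulate trials in which all
# treatments are truly identical, test all pairwise contrasts, and count how
# often at least one comparison reaches nominal significance by chance.
import numpy as np
from scipy import stats

def familywise_false_positive_rate(n_sim=1000, n_treatments=12,
                                   n_per_arm=100, p_true=0.5,
                                   alpha=0.05, seed=0):
    rng = np.random.default_rng(seed)
    hits = 0
    for _ in range(n_sim):
        # Number of responders per arm, identical true response rate.
        responses = rng.binomial(n_per_arm, p_true, size=n_treatments)
        any_sig = False
        for a in range(n_treatments):
            for b in range(a + 1, n_treatments):
                table = [[responses[a], n_per_arm - responses[a]],
                         [responses[b], n_per_arm - responses[b]]]
                _, p = stats.fisher_exact(table)
                if p < alpha:
                    any_sig = True
        if any_sig:
            hits += 1
    return hits / n_sim
```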
Abstract:
Staphylococcus aureus is globally one of the most important pathogens causing contagious mastitis in cattle. Previous studies using ribosomal spacer (RS)-PCR in Swiss cows, however, demonstrated that Staph. aureus isolated from bovine intramammary infections is genetically heterogeneous, with Staph. aureus genotype B (GTB) and GTC being the most prominent genotypes. Furthermore, Staph. aureus GTB was found to be contagious, whereas Staph. aureus GTC and all the remaining genotypes were involved in individual cow disease. In addition to RS-PCR, other methods for subtyping Staph. aureus are known, including spa typing and multilocus sequence typing (MLST), which are based on sequencing the spa gene and various housekeeping genes, respectively. The aim of the present study was to compare the 3 analytic methods using 456 strains of Staph. aureus isolated from milk of bovine intramammary infections and from bulk tanks obtained from 12 European countries. Furthermore, the phylogeny of animal Staph. aureus was inferred and the zoonotic transfer of Staph. aureus between cattle and humans was studied. The analyzed strains could be grouped into 6 genotypic clusters, with CLB, CLC, and CLR being the most prominent ones. Comparing the 3 subtyping methods, RS-PCR showed the highest resolution, followed by spa typing and MLST. We found associations among the methods, but in many cases they were unsatisfactory, except for CLB and CLC. Cluster CLB was positive for clonal complex (CC)8 in 99% of the cases and typically positive for t2953; it is the cattle-adapted form of CC8. Cluster CLC was always positive for t529 and typically positive for CC705. For CLR and the remaining subtypes, links among the 3 methods were generally poor. Bovine Staph. aureus is highly clonal and a few clones predominate. Animal Staph. aureus always evolves from human strains, such that every human strain may be the ancestor of a novel animal-adapted strain. The zoonotic transfer of IMI- and milk-associated strains of Staph. aureus between cattle and humans seems to be very limited, and different hosts are not considered a source of mutual, spontaneous infections. Spillover events, however, may happen.
Abstract:
Sequential insertion of different dyes into the 1D channels of zeolite L (ZL) leads to supramolecular sandwich structures and allows the formation of sophisticated antenna composites for light harvesting, transport, and trapping. The synthesis and properties of dye molecules, host materials, composites, and composites embedded in polymer matrices, including two- and three-color antenna systems, are described. Perylene diimide (PDI) dyes are an important class of chromophores and are of great interest for the synthesis of artificial antenna systems. They are especially well suited to advancing our understanding of the structure–transport relationship in ZL because their core fits tightly through the 12-ring channel opening. The substituents at both ends of the PDIs can be varied to a large extent without influencing their electronic absorption and fluorescence spectra. The intercalation/insertion of 17 PDIs, 2 terrylenes, and 1 quaterrylene into ZL are compared and their interactions with the inner surface of the ZL nanochannels discussed. ZL crystals of about 500 nm in size have been used because they meet the criteria that must be respected for the preparation of antenna composites for light harvesting, transport, and trapping. The photostability of dyes is considerably improved by inserting them into the ZL channels because the guests are protected by being confined. Plugging the channel entrances, so that the guests cannot escape into the environment, is a prerequisite for achieving long-term stability of composites embedded in an organic matrix. Successful methods to achieve this goal are described. Finally, the embedding of dye–ZL composites in polymer matrices, while maintaining optical transparency, is reported. These results facilitate the rational design of advanced dye–zeolite composite materials and provide powerful tools for further developing and understanding artificial antenna systems, which are among the most fascinating subjects of current photochemistry and photophysics.
Abstract:
With hundreds of single nucleotide polymorphisms (SNPs) in a candidate gene and millions of SNPs across the genome, selecting an informative subset of SNPs to maximize the ability to detect genotype-phenotype association is of great interest and importance. In addition, with a large number of SNPs, analytic methods are needed that allow investigators to control the false positive rate resulting from large numbers of SNP genotype-phenotype analyses. This dissertation uses simulated data to explore methods for selecting SNPs for genotype-phenotype association studies. I examined the pattern of linkage disequilibrium (LD) across a candidate gene region and used this pattern to aid in localizing a disease-influencing mutation. The results indicate that the r2 measure of linkage disequilibrium is preferred over the common D′ measure for use in genotype-phenotype association studies. Using step-wise linear regression, the best predictor of the quantitative trait was not usually the single functional mutation; rather, it was a SNP in high linkage disequilibrium with the functional mutation. Next, I compared three strategies for selecting SNPs for phenotype association studies: selection based on measures of linkage disequilibrium, selection based on a measure of haplotype diversity, and random selection. The results demonstrate that SNPs selected for maximum haplotype diversity are more informative and yield higher power than randomly selected SNPs or SNPs selected on the basis of low pair-wise LD. The data also indicate that, for genes with a small contribution to the phenotype, it is more prudent for investigators to increase their sample size than to continually increase the number of SNPs in order to improve statistical power. When typing large numbers of SNPs, researchers face the challenge of using an appropriate statistical method that controls the type I error rate while maintaining adequate power. We show that an empirical genotype-based multi-locus global test, which uses permutation testing to investigate the null distribution of the maximum test statistic, maintains the desired overall type I error rate without overly sacrificing statistical power. The results also show that when the penetrance model is simple, the multi-locus global test does as well as or better than the haplotype analysis; for more complex models, however, haplotype analyses offer advantages. The results of this dissertation will be of utility to human geneticists designing large-scale multi-locus genotype-phenotype association studies.
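As a brief illustration of the two LD measures compared in the dissertation, the sketch below computes D′ and r2 for a pair of biallelic SNPs from phased haplotype data; the input format and function name are illustrative assumptions, not the dissertation's own code.

```python
# Pairwise linkage disequilibrium measures D' and r^2 for two biallelic SNPs,
# computed from phased haplotypes coded as 0/1 alleles.
import numpy as np

def ld_measures(haplotypes):
    """haplotypes: (n_haplotypes, 2) array of 0/1 alleles at the two SNPs."""
    pA = haplotypes[:, 0].mean()            # frequency of allele 1 at SNP 1
    pB = haplotypes[:, 1].mean()            # frequency of allele 1 at SNP 2
    pAB = np.mean((haplotypes[:, 0] == 1) & (haplotypes[:, 1] == 1))
    D = pAB - pA * pB
    # D' normalizes D by its maximum possible magnitude given allele frequencies.
    if D >= 0:
        d_max = min(pA * (1 - pB), (1 - pA) * pB)
    else:
        d_max = min(pA * pB, (1 - pA) * (1 - pB))
    d_prime = D / d_max if d_max > 0 else 0.0
    # r^2 is the squared correlation between the two allele indicators.
    denom = pA * (1 - pA) * pB * (1 - pB)
    r2 = D**2 / denom if denom > 0 else 0.0
    return d_prime, r2
```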