36 results for "Using concept maps"
in the Biblioteca Digital da Produção Intelectual da Universidade de São Paulo (BDPI/USP)
Abstract:
Scientific literacy can be considered a new demand of post-industrial society. It seems necessary in order to foster education for sustainability throughout students' academic careers. Universities striving to teach sustainability are being challenged to integrate a holistic perspective into a traditional undergraduate curriculum, which aims at specialization. This new integrative, inter- and transdisciplinary epistemological approach is necessary to cultivate autonomous citizenship, i.e., to prepare each citizen to understand and participate in discussions about the complex contemporary issues posed by post-industrial society. This paper presents an epistemological framework to show the role of scientific literacy in fostering education for sustainability. We present a set of 26 collaborative concept maps (CCmaps) to illustrate an instance of theory becoming practice. During a required course for first-year undergraduate students (ACH 0011, Natural Sciences), climate change was presented and discussed in a broad perspective using CCmaps. We present students' CCmaps to show how they use concepts from quantitative and literacy disciplines to deal with the challenges posed by the need to achieve sustainable development. (C) 2009 Elsevier Ltd. All rights reserved.
Abstract:
The understanding of complex physiological processes requires information from many different areas of knowledge. This interdisciplinary scenario demands the ability to integrate and articulate information. The difficulty of such an approach arises because, more often than not, information is fragmented throughout undergraduate education in the Health Sciences. Shifting from a fragmented, in-depth view of many topics to joining them horizontally in a global view is not a trivial task for teachers to implement. To attain that objective, we proposed the course described herein, "Biochemistry of the envenomation response," aimed at integrating previous contents of Health Sciences courses, following international recommendations for interdisciplinary models. The contents were organized in modules of increasing topic complexity. The full understanding of the envenoming pathophysiology in each module would be attained by integrating knowledge from different disciplines. An active-learning strategy centered on concept map drawing was employed. Evaluation was obtained with a 30-item Likert-type survey answered by ninety students; 84% of the students considered that the number of relations they were able to establish, as seen in their concept maps, increased throughout the course. Similarly, 98% considered that both the theme and the strategy adopted in the course contributed to developing an interdisciplinary view.
Abstract:
Radiation dose calculations in nuclear medicine depend on quantification of activity via planar and/or tomographic imaging methods. However, both methods have inherent limitations, and the accuracy of activity estimates varies with object size, background levels, and other variables. The goal of this study was to evaluate the limitations of quantitative imaging with planar and single photon emission computed tomography (SPECT) approaches, with a focus on activity quantification for use in calculating absorbed dose estimates for normal organs and tumors. To do this we studied a series of phantoms of varying geometric complexity, with three radionuclides whose decay schemes varied from simple to complex. Four aqueous concentrations of (99m)Tc, (131)I, and (111)In (74, 185, 370, and 740 kBq mL(-1)) were placed in spheres of four different sizes in a water-filled phantom, with three different levels of activity in the surrounding water. Planar and SPECT images of the phantoms were obtained on a modern SPECT/computed tomography (CT) system. These radionuclide and concentration/background studies were repeated using a cardiac phantom and a modified torso phantom with liver and "tumor" regions containing the radionuclide concentrations and with the same varying background levels. Planar quantification was performed using the geometric mean approach with attenuation correction (AC), with and without scatter correction (SC and NSC). SPECT images were reconstructed using attenuation maps (AM) for AC; scatter windows were used to perform SC during image reconstruction. For spherical sources with corrected data, good accuracy was generally observed (within +/- 10% of known values) for the largest sphere (11.5 mL) with both planar and SPECT methods for (99m)Tc and (131)I; accuracy was poorest, with estimates deviating from known values, for smaller objects, most notably with (111)In.
SPECT quantification was affected by the partial volume effect in smaller objects and generally showed larger errors than the planar results in these cases for all radionuclides. For the cardiac phantom, results were the most accurate of all the experiments for all radionuclides. Background subtraction was an important factor influencing these results. The contribution of scattered photons was important in quantification with (131)I; if scatter was not accounted for, activity tended to be overestimated with planar quantification methods. For the torso phantom experiments, the results show a clear underestimation of activity compared to the previous experiments with spherical sources for all radionuclides. Despite some variations observed as the level of background increased, the SPECT results were more consistent across different activity concentrations. Planar or SPECT quantification on state-of-the-art gamma cameras with appropriate quantitative processing can provide accuracies better than 10% for large objects and modest target-to-background concentrations; however, for smaller objects, in the presence of higher background, and for nuclides with more complex decay schemes, SPECT quantification methods generally produce better results. Health Phys. 99(5):688-701; 2010
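The geometric-mean conjugate-view method with attenuation correction described above can be sketched as follows; the counts, attenuation coefficient, source depth, and camera sensitivity below are hypothetical values chosen only for illustration, not data from the study:

```python
import math

def geometric_mean_activity(counts_ant, counts_post, mu_cm, thickness_cm,
                            sensitivity_cps_per_mbq):
    """Estimate source activity (MBq) from anterior/posterior planar counts.

    Conjugate-view geometric mean with attenuation correction:
        A = sqrt(C_ant * C_post) * exp(mu * d / 2) / sensitivity
    where d is the body thickness along the projection axis.
    """
    geo_mean = math.sqrt(counts_ant * counts_post)
    attenuation_correction = math.exp(mu_cm * thickness_cm / 2.0)
    return geo_mean * attenuation_correction / sensitivity_cps_per_mbq

# Hypothetical 99mTc example: mu ~ 0.15 cm^-1 in water at 140 keV
activity_mbq = geometric_mean_activity(
    counts_ant=1200.0, counts_post=800.0,
    mu_cm=0.15, thickness_cm=20.0,
    sensitivity_cps_per_mbq=90.0)
```

Scatter correction, which the study shows matters especially for (131)I, would be applied to the counts before this step; without it the method tends to overestimate activity.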
Abstract:
OBJECTIVES This study aimed at analyzing the association between myocardial perfusion changes and the progression of left ventricular systolic dysfunction in patients with chronic Chagas` cardiomyopathy (CCC). BACKGROUND Pathological and experimental studies have suggested that coronary microvascular derangement, and consequent myocardial perfusion disturbance, may cause myocardial damage in CCC. METHODS Patients with CCC (n = 36, ages 57 +/- 10 years, 17 males), previously having undergone myocardial perfusion single-photon emission computed tomography and 2-dimensional echocardiography, prospectively underwent a new evaluation after an interval of 5.6 +/- 1.5 years. Stress and rest myocardial perfusion defects were quantified using polar maps and normal database comparison. RESULTS Between the first and final evaluations, a significant reduction of left ventricular ejection fraction was observed (55 +/- 11% and 50 +/- 13%, respectively; p = 0.0001), as well as an increase in the area of the perfusion defect at rest (18.8 +/- 14.1% and 26.5 +/- 19.1%, respectively; p = 0.0075). The individual increase in the perfusion defect area at rest was significantly correlated with the reduction in left ventricular ejection fraction (R = 0.4211, p = 0.0105). Twenty patients with normal coronary arteries (56%) showed reversible perfusion defects involving 10.2 +/- 9.7% of the left ventricle. A significant topographic correlation was found between reversible defects and the appearance of new rest perfusion defects at the final evaluation. Of the 47 segments presenting reversible perfusion defects in the initial study, 32 (68%) progressed to perfusion defects at rest, and of the 469 segments not showing reversibility in the initial study, only 41 (8.7%) had the same progression (p < 0.0001, Fisher exact test). 
CONCLUSIONS In CCC patients, the progression of left ventricular systolic dysfunction was associated with both the presence of reversible perfusion defects and the increase in perfusion defects at rest. These results support the notion that myocardial perfusion disturbances participate in the pathogenesis of myocardial injury in CCC. (J Am Coll Cardiol Img 2009;2:164-72) (c) 2009 by the American College of Cardiology Foundation
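The segment-progression comparison above (32 of 47 reversible vs. 41 of 469 non-reversible segments) can be reproduced with a Fisher exact test on the 2x2 table; the stdlib sketch below computes the one-sided variant for simplicity, not the exact software the authors used:

```python
from math import comb

def fisher_exact_one_sided(a, b, c, d):
    """One-sided Fisher exact test for a 2x2 table [[a, b], [c, d]].

    Under the null, the count in cell `a` follows a hypergeometric
    distribution; returns P(X >= a).
    """
    n_row1 = a + b              # first-row total (reversible segments)
    n_col1 = a + c              # first-column total (progressed segments)
    n_total = a + b + c + d     # grand total
    p = 0.0
    for x in range(a, min(n_row1, n_col1) + 1):
        p += (comb(n_col1, x) * comb(n_total - n_col1, n_row1 - x)
              / comb(n_total, n_row1))
    return p

# Table from the abstract: progressed / not progressed by reversibility
p_value = fisher_exact_one_sided(32, 15, 41, 428)
```

The resulting p-value is far below 0.0001, consistent with the significance reported in the abstract.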
Abstract:
Some factors complicate comparisons between linkage maps from different studies. This problem can be resolved if measures of precision, such as confidence intervals and frequency distributions, are associated with markers. We examined the precision of distances and ordering of microsatellite markers in the consensus linkage maps of chromosomes 1, 3 and 4 from two F2 reciprocal Brazilian chicken populations, using bootstrap sampling. Single and consensus maps were constructed. The consensus map was compared with the International Consensus Linkage Map and with the whole genome sequence. Some loci showed segregation distortion and missing data, but this did not affect the analyses negatively. Several inversions and position shifts were detected, based on 95% confidence intervals and frequency distributions of loci. Some discrepancies in distances between loci and in ordering were due to chance, whereas others could be attributed to other effects, including reciprocal crosses, sampling error of the founder animals from the two populations, F2 population structure, number of and distance between microsatellite markers, number of informative meioses, loci segregation patterns, and sex. In the Brazilian consensus GGA1, locus LEI1038 was in a position closer to the true genome sequence than in the International Consensus Map, whereas for GGA3 and GGA4, no such differences were found. Extending these analyses to the remaining chromosomes should facilitate comparisons and the integration of several available genetic maps, allowing meta-analyses for map construction and quantitative trait loci (QTL) mapping. The precision of the estimates of QTL positions and their effects would be increased with such information.
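The bootstrap idea used above to attach confidence intervals to map estimates can be sketched with a percentile bootstrap; the distance values below are hypothetical, and the real analysis resamples meioses and re-estimates the whole map rather than averaging numbers:

```python
import random

def bootstrap_ci(values, n_boot=2000, alpha=0.05, seed=42):
    """Percentile bootstrap confidence interval for the mean of `values`.

    Resamples with replacement n_boot times and takes the alpha/2 and
    1 - alpha/2 quantiles of the resampled means.
    """
    rng = random.Random(seed)
    n = len(values)
    means = sorted(sum(rng.choices(values, k=n)) / n for _ in range(n_boot))
    lo = means[int((alpha / 2) * n_boot)]
    hi = means[int((1 - alpha / 2) * n_boot) - 1]
    return lo, hi

# Hypothetical distance estimates (cM) between two microsatellite loci
distances = [18.2, 19.5, 17.8, 20.1, 18.9, 19.2, 18.5, 19.8, 18.0, 19.1]
ci_lo, ci_hi = bootstrap_ci(distances)
```

Overlap (or lack of it) between such intervals from different populations is what makes marker order and distance comparisons across maps meaningful.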
Abstract:
PURPOSE: The main goal of this study was to develop and compare two different techniques for classification of specific types of corneal shapes when Zernike coefficients are used as inputs. A feed-forward artificial neural network (NN) and discriminant analysis (DA) techniques were used. METHODS: The inputs for both the NN and DA were the first 15 standard Zernike coefficients for 80 previously classified corneal elevation data files from an Eyesys System 2000 Videokeratograph (VK), installed at the Departamento de Oftalmologia of the Escola Paulista de Medicina, São Paulo. The NN had 5 output neurons which were associated with 5 typical corneal shapes: keratoconus, with-the-rule astigmatism, against-the-rule astigmatism, "regular" or "normal" shape, and post-PRK. RESULTS: The NN and DA responses were statistically analyzed in terms of accuracy ([true positive + true negative]/total number of cases). Mean overall results for all cases for the NN and DA techniques were, respectively, 94% and 84.8%. CONCLUSION: Although we used a relatively small database, the results obtained in the present study indicate that Zernike polynomials as descriptors of corneal shape may be reliable input data for diagnostic automation of VK maps, using either NN or DA.
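At prediction time, a feed-forward network of the kind described reduces to matrix-vector products followed by a nonlinearity. The sketch below shows one forward pass from 15 Zernike coefficients to 5 class scores; the weights and inputs are random placeholders, whereas a real classifier would use weights trained on labeled VK data:

```python
import math
import random

def forward(zernike_coeffs, w_hidden, w_out):
    """One forward pass of a minimal one-hidden-layer feed-forward net.

    Maps 15 Zernike coefficients to 5 class scores (keratoconus,
    with-the-rule astigmatism, against-the-rule astigmatism,
    normal, post-PRK); the class with the highest score is predicted.
    """
    hidden = [math.tanh(sum(w * x for w, x in zip(row, zernike_coeffs)))
              for row in w_hidden]
    return [sum(w * h for w, h in zip(row, hidden)) for row in w_out]

rng = random.Random(0)
coeffs = [rng.gauss(0, 1) for _ in range(15)]                      # placeholder input
w_hidden = [[rng.gauss(0, 0.1) for _ in range(15)] for _ in range(8)]  # 8 hidden units
w_out = [[rng.gauss(0, 0.1) for _ in range(8)] for _ in range(5)]      # 5 classes
scores = forward(coeffs, w_hidden, w_out)
```

Discriminant analysis replaces the hidden layer with linear (or quadratic) decision boundaries fitted directly to the coefficient vectors, which is why the two methods are natural competitors on the same 15-dimensional input.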
Abstract:
We describe finite sets of points, called sentinels, which allow us to decide if isometric copies of polygons, convex or not, intersect. As an example of the applicability of the concept of sentinel, we explain how they can be used to formulate an algorithm based on the optimization of differentiable models to pack polygons in convex sets. Mathematical subject classification: 90C53, 65K05.
Abstract:
Oscillator networks have been developed to perform specific tasks related to image processing. Here we analytically investigate the existence of synchronism in a pair of phase oscillators that are short-range dynamically coupled. Then, we use these analytical results to design a network able to detect the borders of black-and-white figures. Each unit composing this network is a pair of such phase oscillators and is assigned to a pixel in the image. The couplings among the units forming the network are also dynamical. Border detection emerges from the network activity.
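The building block of such a network, a pair of coupled phase oscillators, can be simulated directly. The sketch below uses a Kuramoto-type sinusoidal coupling with illustrative frequencies and coupling strength (not the paper's exact model); for this coupling the pair phase-locks when |omega1 - omega2| <= 2k:

```python
import math

def simulate_pair(omega1, omega2, k, dt=0.01, steps=5000):
    """Euler-integrate two coupled phase oscillators

        d(theta_i)/dt = omega_i + k * sin(theta_j - theta_i)

    and return the final phase difference modulo 2*pi. A constant
    difference indicates phase locking (synchronism).
    """
    th1, th2 = 0.0, 1.0
    for _ in range(steps):
        d1 = omega1 + k * math.sin(th2 - th1)
        d2 = omega2 + k * math.sin(th1 - th2)
        th1 += d1 * dt
        th2 += d2 * dt
    return (th2 - th1) % (2 * math.pi)

# |omega1 - omega2| = 0.2 <= 2k = 1.0, so the pair locks with
# phase lag asin((omega2 - omega1) / (2k)) = asin(0.2)
phase_diff = simulate_pair(omega1=1.0, omega2=1.2, k=0.5)
```

In the border-detection network, units over uniform image regions synchronize while units straddling a black/white edge do not, and that desynchronization marks the border.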
Abstract:
Nucleoside hydrolases (NHs) show homology among parasitic protozoa, fungi, and bacteria. They are vital protagonists in the establishment of early infection and, therefore, are excellent candidates for pathogen recognition by adaptive immune responses. Immune protection against NHs would prevent disease at the early stage of infection by several pathogens. We have identified the domain of the NH of L. donovani (NH36) responsible for its immunogenicity and protective efficacy against murine visceral leishmaniasis (VL). Using recombinant peptides covering the whole NH36 sequence, together with saponin, we demonstrate that protection against L. chagasi is related to its C-terminal domain (amino acids 199-314) and is mediated mainly by a CD4+ T cell driven response with a lower contribution of CD8+ T cells. Immunization with this peptide exceeds the protective response induced by the cognate NH36 protein by 36.73 +/- 12.33%. Increases in IgM, IgG2a, IgG1 and IgG2b antibodies, CD4+ T cell proportions, IFN-gamma secretion, ratios of IFN-gamma/IL-10 producing CD4+ and CD8+ T cells, and percentages of antibody binding inhibition by synthetic predicted epitopes were detected in F3-vaccinated mice. The increases in DTH and in ratios of TNF-alpha/IL-10 producing CD4+ cells were, however, the strongest correlates of protection, which was confirmed by in vivo depletion with monoclonal antibodies, algorithm-predicted CD4 and CD8 epitopes, and a pronounced, long-lasting decrease in parasite load (90.5-88.23%; p = 0.011). No decrease in parasite load was detected after vaccination with the N-domain of NH36, in spite of the induction of IFN-gamma/IL-10 expression by CD4+ T cells after challenge. Both peptides reduced the size of footpad lesions, but only the C-domain reduced the parasite load of mice challenged with L. amazonensis.
The identification of the target of the immune response to NH36 provides a basis for the rational development of a bivalent vaccine against leishmaniasis and of multivalent vaccines against NH-dependent pathogens.
Abstract:
The HR Del nova remnant was observed with the IFU-GMOS at Gemini North. The spatially resolved spectral data cube was used in the kinematic, morphological, and abundance analysis of the ejecta. The line maps show a very clumpy shell with two main symmetric structures. The first one is the outer part of the shell seen in H alpha, which forms two rings projected in the sky plane. These ring structures correspond to a closed hourglass shape, first proposed by Harman & O'Brien. The equatorial emission enhancement is caused by the superimposed hourglass structures in the line of sight. The second structure seen only in the [O III] and [N II] maps is located along the polar directions inside the hourglass structure. Abundance gradients between the polar caps and equatorial region were not found. However, the outer part of the shell seems to be less abundant in oxygen and nitrogen than the inner regions. Detailed 2.5-dimensional photoionization modeling of the three-dimensional shell was performed using the mass distribution inferred from the observations and the presence of mass clumps. The resulting model grids are used to constrain the physical properties of the shell as well as the central ionizing source. A sequence of three-dimensional clumpy models including a disk-shaped ionization source is able to reproduce the ionization gradients between polar and equatorial regions of the shell. Differences between shell axial ratios in different lines can also be explained by aspherical illumination. A total shell mass of 9 x 10(-4) M(circle dot) is derived from these models. We estimate that 50%-70% of the shell mass is contained in neutral clumps with density contrast up to a factor of 30.
Abstract:
Motivation: Understanding the patterns of association between polymorphisms at different loci in a population (linkage disequilibrium, LD) is of fundamental importance in various genetic studies. Many coefficients have been proposed for measuring the degree of LD, but they provide only a static view of the current LD structure. Generative models (GMs) were proposed to go beyond these measures, giving not only a description of the actual LD structure but also a tool to help understand the process that generated it. GMs based on coalescent theory have been the most appealing because they link LD to evolutionary factors. Nevertheless, inference and parameter estimation for such models remain computationally challenging. Results: We present a more practical method to build GMs that describe LD. The method is based on learning weighted Bayesian network structures from haplotype data, extracting equivalence structure classes, and using them to model LD. The results obtained on public data from the HapMap database showed that the method is a promising tool for modeling LD. The associations represented by the learned models are correlated with the traditional LD measure D'. The method was able to represent LD blocks found by standard tools. The granularity of the association blocks and the readability of the models can be controlled in the method. The results suggest that the causality information gained by our method can be useful for assessing the conservation of genetic markers and for guiding the selection of a subset of representative markers.
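The traditional LD measure D' against which the learned models are compared is computed directly from haplotype and allele frequencies. A minimal sketch for two biallelic markers, with hypothetical frequencies:

```python
def d_prime(p_ab, p_a, p_b):
    """Normalized linkage-disequilibrium coefficient D' for two biallelic loci.

    D = P(AB) - P(A) * P(B); D' = D / D_max, where D_max is the largest
    |D| compatible with the allele frequencies. D' = 1 means complete LD,
    D' = 0 means no association.
    """
    d = p_ab - p_a * p_b
    if d >= 0:
        d_max = min(p_a * (1 - p_b), (1 - p_a) * p_b)
    else:
        d_max = min(p_a * p_b, (1 - p_a) * (1 - p_b))
    return d / d_max if d_max > 0 else 0.0

# Hypothetical frequencies: haplotype AB at 0.40, allele A at 0.50, allele B at 0.60
ld = d_prime(p_ab=0.40, p_a=0.50, p_b=0.60)
```

Unlike such a static coefficient, the Bayesian-network GM described above also models which loci are directly associated, not just how strongly each pair co-occurs.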
Abstract:
This paper describes the modeling of a weed infestation risk inference system that implements a collaborative inference scheme based on rules extracted from two Bayesian network classifiers. The first Bayesian classifier infers a categorical value for weed-crop competitiveness using as input categorical variables for the total density of weeds and the corresponding proportions of narrow- and broad-leaved weeds. The inferred values for weed-crop competitiveness, along with three other categorical variables extracted from estimated maps of weed seed production and weed coverage, are then used as input for a second Bayesian network classifier to infer categorical values for the risk of infestation. Weed biomass and yield loss data samples are used to learn, in a supervised fashion, the probability relationships among the nodes of the first and second Bayesian classifiers, respectively. For comparison purposes, two types of Bayesian network structures are considered, namely an expert-based Bayesian classifier and a naive Bayes classifier. The inference system focuses on knowledge interpretation by translating a Bayesian classifier into a set of classification rules. The results obtained for risk inference in a corn-crop field are presented and discussed. (C) 2009 Elsevier Ltd. All rights reserved.
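A categorical naive Bayes classifier of the kind compared in the paper can be sketched with plain counting and Laplace smoothing; the feature categories and labels below are hypothetical stand-ins for the weed-density and risk variables, not the paper's trained model:

```python
from collections import Counter, defaultdict

def train(samples):
    """samples: list of (feature_tuple, label). Returns class priors and
    per-(feature, class) value counts for a categorical naive Bayes model."""
    priors = Counter(label for _, label in samples)
    cond = defaultdict(Counter)  # (feature_index, label) -> Counter of values
    for feats, label in samples:
        for i, v in enumerate(feats):
            cond[(i, label)][v] += 1
    return priors, cond

def predict(priors, cond, feats, alpha=1.0, n_values=2):
    """Pick the class maximizing P(class) * prod_i P(feat_i | class),
    with Laplace smoothing; n_values = categories per feature (assumed 2)."""
    total = sum(priors.values())
    best, best_score = None, float("-inf")
    for label, count in priors.items():
        score = count / total
        for i, v in enumerate(feats):
            c = cond[(i, label)]
            score *= (c[v] + alpha) / (sum(c.values()) + alpha * n_values)
        if score > best_score:
            best, best_score = label, score
    return best

# Hypothetical training data: (weed density, broad-leaf share) -> risk class
data = [(("high", "high"), "high-risk"), (("high", "low"), "high-risk"),
        (("low", "low"), "low-risk"), (("low", "high"), "low-risk")]
priors, cond = train(data)
risk = predict(priors, cond, ("high", "high"))
```

Because each conditional table is a simple count ratio, such a classifier translates naturally into the human-readable classification rules the inference system emphasizes.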
Abstract:
Despite modern weed control practices, weeds continue to be a threat to agricultural production. Considering the variability of weeds, a classification methodology for the risk of infestation in agricultural zones using fuzzy logic is proposed. The inputs for the classification are attributes extracted from estimated maps for weed seed production and weed coverage using kriging and map analysis and from the percentage of surface infested by grass weeds, in order to account for the presence of weed species with a high rate of development and proliferation. The output for the classification predicts the risk of infestation of regions of the field for the next crop. The risk classification methodology described in this paper integrates analysis techniques which may help to reduce costs and improve weed control practices. Results for the risk classification of the infestation in a maize crop field are presented. To illustrate the effectiveness of the proposed system, the risk of infestation over the entire field is checked against the yield loss map estimated by kriging and also with the average yield loss estimated from a hyperbolic model.
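Fuzzy classification of this kind starts from membership functions over the input attributes; a minimal sketch with hypothetical triangular sets over the weed coverage percentage (the actual system combines several attributes through fuzzy rules):

```python
def tri(x, a, b, c):
    """Triangular membership function with support (a, c) and peak at b."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

def risk_memberships(weed_coverage_pct):
    """Degree of membership of a coverage value in each hypothetical
    risk fuzzy set; a crisp value can belong partially to several sets."""
    return {
        "low": tri(weed_coverage_pct, -1, 0, 40),
        "medium": tri(weed_coverage_pct, 20, 50, 80),
        "high": tri(weed_coverage_pct, 60, 100, 101),
    }

memberships = risk_memberships(70)  # partially "medium", partially "high"
```

Graded memberships like these are what let the method express the continuous transition between risk zones that a crisp threshold classification would hide.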
Abstract:
In this study, the innovation approach is used to estimate the total measurement error associated with power system state estimation. This is required because the power system equations are highly correlated with each other and, as a consequence, part of the measurement errors is masked. For that purpose, an innovation index (II), which quantifies the new information a measurement contains, is proposed. A critical measurement is the limiting case of a measurement with low II: it has a zero II, and its error is totally masked. In other words, such a measurement does not bring any innovation to the gross error test. Using the II of a measurement, the gross error masked by the state estimation is recovered, and the total gross error of that measurement is then composed. Instead of the classical normalised measurement residual amplitude, the corresponding normalised composed measurement residual amplitude is used in the gross error detection and identification test, but with m degrees of freedom. The gross error processing turns out to be very simple to implement, requiring only a few adaptations to existing state estimation software. The IEEE 14-bus system is used to validate the proposed gross error detection and identification test.
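The classical normalised-residual test that the composed residual extends can be sketched as follows; the residual values, variances, and 3-sigma threshold below are illustrative, not data from the IEEE 14-bus validation:

```python
import math

def normalized_residuals(residuals, residual_variances):
    """Largest-normalized-residual screening used in state-estimation
    gross-error detection: r_N = r / sqrt(var(r)). A measurement is
    flagged as suspect when |r_N| exceeds a chosen threshold."""
    return [r / math.sqrt(v) for r, v in zip(residuals, residual_variances)]

# Hypothetical residuals (p.u.) and residual variances for three measurements
r_n = normalized_residuals([0.02, -0.15, 0.01], [0.0004, 0.0004, 0.0004])
suspect = [i for i, v in enumerate(r_n) if abs(v) > 3.0]
```

The paper's point is that for measurements with low innovation index this residual understates the true error, which is why the composed residual, recovering the masked part, is used instead.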
Abstract:
Multifunctional structures are pointed out as an important technology for the design of aircraft with volume, mass, and energy source limitations such as unmanned air vehicles (UAVs) and micro air vehicles (MAVs). In addition to its primary function of bearing aerodynamic loads, the wing/spar structure of a UAV or MAV with embedded piezoceramics can provide an extra electrical energy source, based on the concept of vibration energy harvesting, to power small and wireless electronic components. Aeroelastic vibrations of a lifting surface can be converted into electricity using piezoelectric transduction. In this paper, frequency-domain piezoaeroelastic modeling and analysis of a cantilevered plate-like wing with embedded piezoceramics is presented for energy harvesting. The electromechanical finite-element plate model is based on the thin-plate (Kirchhoff) assumptions, while the unsteady aerodynamic model uses the doublet-lattice method. The electromechanical and aerodynamic models are combined to obtain the piezoaeroelastic equations, which are solved using a p-k scheme that accounts for the electromechanical coupling. The evolution of the aerodynamic damping and the frequency of each mode are obtained with changing airflow speed for a given electrical circuit. Expressions for piezoaeroelastically coupled frequency response functions (voltage, current, and electrical power, as well as the vibratory motion) are also defined by combining flow excitation with harmonic base excitation. Hence, piezoaeroelastic evolution can be investigated in the frequency domain for different airflow speeds and electrical boundary conditions. [DOI:10.1115/1.4002785]