861 results for Curricular Support Data Analysis


Relevance:

100.00%

Publisher:

Abstract:

In this study we investigate how schools have been addressing environmental education as a cross-cutting, interdisciplinary theme. We gathered information on how environmental education is developed in the pedagogical practice of a state public school in the city of Mossoró/RN, Brazil, identifying the perceptions of the actors involved in the environmental education process: the level of ecological awareness expressed by the students and their practices regarding the environmental problems they experience; the teachers' approach to the theme; and the perceptions of institutional representatives, namely the teacher, the school board, the state secretary of education and the environment manager of the municipality of Mossoró/RN. The theoretical framework draws on authors such as Saviani (2008), Dias (2004), Gadotti (2008), Paulo Freire (1991), Sato (2012), Loureiro (2004) and Leff (2010), among others. The investigation combined qualitative and quantitative approaches: four interviews were conducted with the institutional representatives and a questionnaire was administered to the school's students. The responses were tabulated in Microsoft Excel spreadsheets to build a database, which was then exported to SPSS version 13.0, where the statistical analysis was carried out. The analysis computed observed frequencies and percentages of the students' perceptions regarding their judgements, the procedures used by the school, and associated items, problems and themes related to the environment, and charts were produced for each distribution. The qualitative content analysis, in turn, relied on the qualitative interpretation of paired data. Our subjects believe that environmental education is an instrument for changing human behaviour: it is through education that attitudes change and the population becomes aware of the care our planet requires. In this investigation, 83.1% of the students reported being quite aware of environmental problems, and 71.8% said they are highly motivated to develop environmental education projects at their school. However, this was not corroborated by the institutional representatives, who stated that the students do not have the level of ecological awareness they claim; the representatives show a more critical view of the theme, whereas the students say they are aware even though their practices do not match that claim. We believe that if environmental education were introduced as a compulsory curricular component it could be addressed in a more direct and decisive way, forming citizens who are genuinely aware of environmental issues. Environmental education must go beyond the school walls: the environmental question is also a social question, and interventions at the global level are needed so that everyone can contribute meaningfully to sustainability.
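
The frequency analysis described above (observed counts and percentage distributions for each questionnaire item, produced in Excel and SPSS 13.0) can be reproduced with any statistics package. The sketch below shows the idea in Python with pandas; the column names and response labels are hypothetical placeholders, since the original instrument is not reproduced here.

```python
# Minimal sketch of the frequency/percentage analysis described above.
# Column names and response labels are hypothetical placeholders for the
# actual questionnaire items.
import pandas as pd

responses = pd.DataFrame({
    "awareness":  ["quite aware", "quite aware", "somewhat aware", "quite aware"],
    "motivation": ["highly motivated", "indifferent", "highly motivated", "highly motivated"],
})

for item in responses.columns:
    counts = responses[item].value_counts()        # observed frequencies
    percentages = counts / counts.sum() * 100      # percentage distribution
    table = pd.DataFrame({"n": counts, "%": percentages.round(1)})
    print(f"\n{item}\n{table}")
```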

Relevance:

100.00%

Publisher:

Abstract:

Cutaneous microcirculation has emerged in recent years as a practical and accessible alternative for studying the peripheral circulation. Non-invasive techniques such as laser Doppler flowmetry (LDF), evaporimetry and transcutaneous gasimetry, combined with provocation tests, have turned the cutaneous circulation into an attractive research model. This study was applied to a group of healthy young female volunteers (n = 8, 21.6 ± 2.6 years) breathing a 100% oxygen atmosphere for 10 minutes. The test allowed us to evaluate the circulatory response of the lower-limb microcirculation. The measurement techniques included local blood flow by LDF, transcutaneous pO2 (tc-pO2) and transepidermal water loss (TEWL) by evaporimetry. Data analysis shows that tc-pO2 and LDF changed significantly during the test. A reciprocal evolution profile was recorded for LDF and TEWL, which seems to support earlier data suggesting that changes in local blood flow can influence the epidermal "barrier" function. This model appears suitable for characterizing the lower-limb microcirculation in vivo.
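
The abstract reports that tc-pO2 and LDF changed significantly during the 100% oxygen challenge but does not state which statistical test was used. Purely as an illustration of that kind of before/after comparison, the sketch below runs a paired t-test on invented baseline and challenge LDF values for eight subjects.

```python
# Illustrative paired comparison of baseline vs. 100% O2 phase measurements.
# Values are invented; the original study's test statistic is not specified.
from scipy import stats

ldf_baseline = [12.4, 10.8, 15.1, 11.2, 13.9, 9.7, 14.3, 12.0]   # arbitrary units
ldf_oxygen   = [9.1, 8.2, 11.7, 8.8, 10.5, 7.9, 10.9, 9.4]

t_stat, p_value = stats.ttest_rel(ldf_baseline, ldf_oxygen)  # paired t-test
print(f"paired t = {t_stat:.2f}, p = {p_value:.4f}")
```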

Relevance:

100.00%

Publisher:

Abstract:

Virtual globe technology holds many exciting possibilities for environmental science. These easy-to-use, intuitive systems provide means for simultaneously visualizing four-dimensional environmental data from many different sources, enabling the generation of new hypotheses and driving greater understanding of the Earth system. Through the use of simple markup languages, scientists can publish and consume data in interoperable formats without the need for technical assistance. In this paper we give, with examples from our own work, a number of scientific uses for virtual globes, demonstrating their particular advantages. We explain how we have used Web Services to connect virtual globes with diverse data sources and enable more sophisticated usage such as data analysis and collaborative visualization. We also discuss the current limitations of the technology, with particular regard to the visualization of subsurface data and vertical sections.
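
The "simple markup languages" mentioned above are, in practice, formats such as KML that virtual globes consume directly. As a minimal sketch of publishing an observation in such an interoperable format (the coordinates and observation value are invented for illustration):

```python
# Minimal sketch: write a single observation as a KML placemark that a
# virtual globe (e.g. Google Earth) can load. The coordinates and the
# observation value are invented for illustration.
kml = """<?xml version="1.0" encoding="UTF-8"?>
<kml xmlns="http://www.opengis.net/kml/2.2">
  <Document>
    <Placemark>
      <name>Sea surface temperature observation</name>
      <description>SST = 14.2 C at 2006-06-01T12:00Z</description>
      <Point>
        <coordinates>-24.5,60.1,0</coordinates>  <!-- lon,lat,alt -->
      </Point>
    </Placemark>
  </Document>
</kml>
"""

with open("observation.kml", "w") as f:
    f.write(kml)
```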

Relevance:

100.00%

Publisher:

Abstract:

While over-dispersion in capture–recapture studies is well known to lead to poor estimation of population size, current diagnostic tools to detect the presence of heterogeneity have not been specifically developed for capture–recapture studies. To address this, a simple and efficient method of testing for over-dispersion in zero-truncated count data is developed and evaluated. The proposed method generalizes an over-dispersion test previously suggested for un-truncated count data and may also be used for testing residual over-dispersion in zero-inflated data. Simulations suggest that the asymptotic distribution of the test statistic is standard normal and that this approximation is also reasonable for small sample sizes. The method is also shown to be more efficient than an existing test for over-dispersion adapted for the capture–recapture setting. Studies with zero-truncated and zero-inflated count data are used to illustrate the test procedures.
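
The zero-truncated test itself is not reproduced here. As an illustration of the general idea being generalized, the sketch below implements a classical score-type over-dispersion test for ordinary (un-truncated) Poisson counts, whose statistic is likewise asymptotically standard normal; whether this is the exact un-truncated test the paper builds on is not stated in the abstract.

```python
# Illustration of a classical over-dispersion score test for un-truncated
# Poisson counts (the paper generalizes this idea to zero-truncated data).
# Under the Poisson null the statistic is asymptotically standard normal.
import numpy as np
from scipy import stats

def overdispersion_test(counts):
    x = np.asarray(counts, dtype=float)
    n, mean = len(x), x.mean()
    # Score-type statistic: sum of (x - mean)^2 - x, scaled by its null s.d.
    t = ((x - mean) ** 2 - x).sum() / (mean * np.sqrt(2 * n))
    p = 1 - stats.norm.cdf(t)          # one-sided: large t suggests over-dispersion
    return t, p

# Example: counts simulated from a negative binomial (over-dispersed).
rng = np.random.default_rng(1)
sample = rng.negative_binomial(n=2, p=0.3, size=200)
print(overdispersion_test(sample))
```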

Relevance:

100.00%

Publisher:

Abstract:

In survival analysis, frailty is often used to model heterogeneity between individuals or correlation within clusters. Typically frailty is taken to be a continuous random effect, yielding a continuous mixture distribution for survival times. A Bayesian analysis of a correlated frailty model is discussed in the context of inverse Gaussian frailty. An MCMC approach is adopted and the deviance information criterion is used to compare models. As an illustration of the approach, a bivariate data set of corneal graft survival times is analysed.
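
Model comparison here relies on the deviance information criterion. A minimal sketch of computing DIC from MCMC output is shown below; it assumes we already have the deviance evaluated at each posterior draw and at the posterior mean of the parameters, and does not reproduce the frailty model itself.

```python
# Minimal sketch: deviance information criterion (DIC) from MCMC output.
# `deviance_samples` holds D(theta) for each posterior draw; `deviance_at_mean`
# is D evaluated at the posterior mean of the parameters.
import numpy as np

def dic(deviance_samples, deviance_at_mean):
    d_bar = np.mean(deviance_samples)        # posterior mean deviance
    p_d = d_bar - deviance_at_mean           # effective number of parameters
    return d_bar + p_d                       # DIC = D_bar + p_D

# Toy example with made-up deviance values.
samples = np.array([210.3, 208.7, 212.1, 209.5, 211.0])
print(dic(samples, deviance_at_mean=207.9))
```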

Relevance:

100.00%

Publisher:

Abstract:

A wireless sensor network (WSN) is a group of sensors linked by a wireless medium to perform distributed sensing tasks. WSNs have attracted wide interest from academia and industry alike due to their diversity of applications, including home automation, smart environments, and emergency services in various buildings. The primary goal of a WSN is to collect the data sensed by its sensors. These data are typically heavily noisy and exhibit temporal and spatial correlation. To extract useful information from such data, as this paper demonstrates, a variety of analysis techniques is needed. Data mining is a process in which a wide spectrum of data analysis methods is used. It is applied in this paper to analyse data collected from WSNs monitoring an indoor environment in a building. A case study demonstrates how data mining can be used to optimise the use of office space in a building.
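
As one concrete example of the kind of data mining mentioned above, clustering readings from office sensors can reveal occupancy patterns that inform space usage. The sketch below groups hypothetical (temperature, CO2) readings with k-means; all values are invented and the actual analysis in the paper may use different sensors and methods.

```python
# Illustrative sketch: cluster sensor readings from an office WSN to separate
# occupied from unoccupied periods. All readings are invented.
import numpy as np
from sklearn.cluster import KMeans

# Columns: temperature (deg C), CO2 (ppm) -- proxies for room occupancy.
readings = np.array([
    [21.0, 420], [21.2, 430], [24.5, 900], [24.8, 950],
    [25.1, 980], [21.1, 440], [24.9, 930], [21.3, 425],
])

model = KMeans(n_clusters=2, n_init=10, random_state=0).fit(readings)
for reading, label in zip(readings, model.labels_):
    print(reading, "-> cluster", label)
```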

Relevance:

100.00%

Publisher:

Abstract:

The ability to display and inspect powder diffraction data quickly and efficiently is a central part of the data analysis process. Whilst many computer programs are capable of displaying powder data, their focus is typically on advanced operations such as structure solution or Rietveld refinement. This article describes a lightweight software package, Jpowder, whose focus is fast and convenient visualization and comparison of powder data sets in a variety of formats from computers with network access. Jpowder is written in Java and uses its associated Web Start technology to allow ‘single-click deployment’ from a web page, http://www.jpowder.org. Jpowder is open source, free and available for use by anyone.
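
Jpowder itself is a Java Web Start application and its code is not shown here. Purely as an illustration of the kind of quick overlay comparison it provides, the Python sketch below plots two synthetic powder patterns (two-theta vs. intensity); in practice one would load two-column .xy data files instead of generating toy Gaussian peaks.

```python
# Not Jpowder: a minimal Python illustration of overlaying two powder
# diffraction patterns (2-theta vs. intensity) for quick visual comparison.
# Synthetic Gaussian peaks stand in for real .xy data files here.
import numpy as np
import matplotlib.pyplot as plt

two_theta = np.linspace(5, 60, 2000)

def pattern(peaks):
    """Sum of Gaussian peaks on a flat background (toy powder pattern)."""
    y = np.full_like(two_theta, 50.0)
    for centre, height in peaks:
        y += height * np.exp(-((two_theta - centre) ** 2) / (2 * 0.05 ** 2))
    return y

plt.plot(two_theta, pattern([(20.5, 800), (29.3, 400), (41.0, 250)]), label="sample A")
plt.plot(two_theta, pattern([(20.7, 750), (29.3, 420), (44.2, 300)]), label="sample B")
plt.xlabel("2θ (degrees)")
plt.ylabel("Intensity (a.u.)")
plt.legend()
plt.show()
```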

Relevance:

100.00%

Publisher:

Abstract:

Growing interest in bilingual education in sub-Saharan Africa has highlighted an urgent need for reading material in African languages. In this paper, we focus on authors, one of several groups of stakeholders with responsibility for meeting this demand. We address three main issues: the nature and extent of African language publishing for children; the challenges for authors; and the available support. Our analysis is based on interviews and focus group discussions with publishers, authors, translators, educationalists, and representatives of book promotion organisations from nine African countries and documentary data on children's books in African languages in South Africa. Although there is evidence of a growing interest in producing books in local languages, the number of titles is constrained by funding. The challenges for authors include the need to understand the ingredients for successful children's books and for the sensitivity necessary to negotiate the linguistic challenges associated with a newly emergent genre in African languages. Support, in the form of competitions and workshops, relies on external funding and expertise and offers only temporary solutions. We finish with suggestions for more sustainable ways forward.

Relevance:

100.00%

Publisher:

Abstract:

Value chain studies, including production system and market chain studies, are essential to value chain analysis, which, when coupled with disease risk analysis, is a powerful tool for identifying key constraints and opportunities for risk-based disease control in a livestock production and marketing system. Several production system and market chain studies have been conducted to support disease control interventions in South East Asia. This practical aid summarizes experiences and lessons learned from the implementation of such value chain studies in South East Asia. Based on these experiences, it prioritizes the data required for each purpose of a value chain study and recommends tools for data collection and data analysis. It is intended as an adjunct to the FAO value chain approach and animal diseases risk management guidelines document. Further practical advice is provided for more effective use of value chain studies in South and South East Asia as part of animal health decision support.

Relevance:

100.00%

Publisher:

Abstract:

The organization of non-crystalline polymeric materials at a local level, namely on a spatial scale between a few and 100 Å, is still unclear in many respects. The determination of the local structure in terms of the configuration and conformation of the polymer chain and of the packing characteristics of the chain in the bulk material represents a challenging problem. Data from wide-angle diffraction experiments are very difficult to interpret due to the very large amount of information that they carry, that is, the large number of correlations present in the diffraction patterns. We describe new approaches that permit a detailed analysis of the complex neutron diffraction patterns characterizing polymer melts and glasses. The coupling of different computer modelling strategies with neutron scattering data over a wide Q range allows the extraction of detailed quantitative information on the structural arrangements of the materials of interest. Proceeding from modelling routes as diverse as force field calculations, single-chain modelling and reverse Monte Carlo, we show the successes and pitfalls of each approach in describing model systems, which illustrate the need to attack the data analysis problem simultaneously from several fronts.
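
Of the modelling routes named above, reverse Monte Carlo is the most self-contained to sketch. The toy example below adjusts a one-dimensional configuration so that its pair-distance histogram matches a target "experimental" histogram, using the standard RMC accept/reject rule; it is a schematic of the idea only, not the authors' code, and real RMC refinements work against S(Q) or g(r) in periodic three-dimensional boxes.

```python
# Toy reverse Monte Carlo sketch: adjust particle positions so that the
# pair-distance histogram matches a target "experimental" histogram.
# Purely schematic; real RMC works against S(Q)/g(r) with periodic boxes.
import numpy as np

rng = np.random.default_rng(0)
n, box = 50, 10.0
bins = np.linspace(0, box, 21)

def histogram(positions):
    d = np.abs(positions[:, None] - positions[None, :])
    d = d[np.triu_indices(n, k=1)]                 # unique pair distances
    return np.histogram(d, bins=bins)[0].astype(float)

target = histogram(rng.uniform(0, box, n))         # stand-in for experiment
positions = rng.uniform(0, box, n)
chi2 = ((histogram(positions) - target) ** 2).sum()

for _ in range(20000):
    trial = positions.copy()
    i = rng.integers(n)
    trial[i] = np.clip(trial[i] + rng.normal(0, 0.3), 0, box)  # random move
    chi2_new = ((histogram(trial) - target) ** 2).sum()
    # Metropolis-style acceptance on the chi-squared misfit.
    if chi2_new < chi2 or rng.random() < np.exp(-(chi2_new - chi2) / 2):
        positions, chi2 = trial, chi2_new

print("final chi2:", chi2)
```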

Relevance:

100.00%

Publisher:

Abstract:

Background: Microarray-based comparative genomic hybridisation (CGH) experiments have been used to study numerous biological problems, including understanding genome plasticity in pathogenic bacteria. Typically such experiments produce large data sets that are difficult for biologists to handle. Although there are some programmes available for interpretation of bacterial transcriptomics data and of CGH microarray data for examining genetic stability in oncogenes, there are none designed specifically to understand the mosaic nature of bacterial genomes. Consequently, a bottleneck still persists in the accurate processing and mathematical analysis of these data. To address this shortfall we have produced a simple and robust CGH microarray data analysis process, which may be automated in the future, to understand bacterial genomic diversity. Results: The process involves five steps: cleaning, normalisation, estimating gene presence and absence or divergence, validation, and analysis of data from test strains against three reference strains simultaneously. Each stage of the process is described, and we have compared a number of methods available for characterising bacterial genomic diversity and for calculating the cut-off between gene presence and absence or divergence, showing that a simple dynamic approach using a kernel density estimator performed better than established methods as well as a more sophisticated mixture modelling technique. We have also shown that current methods commonly used for CGH microarray analysis in tumour and cancer cell lines are not appropriate for analysing our data. Conclusion: After carrying out the analysis and validation for three sequenced Escherichia coli strains, CGH microarray data from 19 E. coli O157 pathogenic test strains were used to demonstrate the benefits of applying this simple and robust process to CGH microarray studies using bacterial genomes.
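
The paper's kernel-density step is not given in detail in the abstract. As an illustration of the general idea of a KDE-derived cut-off, the sketch below fits a Gaussian KDE to simulated log-ratio values and takes the density minimum in the valley between the "present" and "absent/divergent" modes as the threshold; the actual procedure in the paper may differ in its details.

```python
# Illustration of a KDE-derived cut-off between "present" and "absent/divergent"
# genes from CGH log-ratios. The log-ratio values are simulated; the actual
# procedure in the paper may differ.
import numpy as np
from scipy.stats import gaussian_kde

rng = np.random.default_rng(42)
# Simulated log2 ratios: a "present" mode near 0 and an "absent" mode near -2.
log_ratios = np.concatenate([rng.normal(0.0, 0.3, 900),
                             rng.normal(-2.0, 0.4, 100)])

kde = gaussian_kde(log_ratios)
grid = np.linspace(log_ratios.min(), log_ratios.max(), 1000)
density = kde(grid)

# Cut-off: the density minimum in the valley between the two modes.
valley = (grid > -1.8) & (grid < -0.2)            # assumed search window
cutoff = grid[valley][np.argmin(density[valley])]
print("estimated presence/absence cut-off:", round(cutoff, 2))
```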

Relevance:

100.00%

Publisher:

Abstract:

The impending threat of global climate change and its regional manifestations is among the most important and urgent problems facing humanity. Society needs accurate and reliable estimates of changes in the probability of regional weather variations to develop science-based adaptation and mitigation strategies. Recent advances in weather prediction and in our understanding and ability to model the climate system suggest that it is both necessary and possible to revolutionize climate prediction to meet these societal needs. However, the scientific workforce and the computational capability required to bring about such a revolution are not available in any single nation. Motivated by the success of internationally funded infrastructure in other areas of science, this paper argues that, because of the complexity of the climate system, and because the regional manifestations of climate change are expressed mainly through changes in the statistics of regional weather variations, the scientific and computational requirements for predicting its behavior reliably are so enormous that the nations of the world should create a small number of multinational high-performance computing facilities dedicated to the grand challenges of developing the capabilities to predict climate variability and change on both global and regional scales over the coming decades. Such facilities will play a key role in the development of next-generation climate models, build global capacity in climate research, nurture a highly trained workforce, and engage the global user community, policy-makers, and stakeholders. We recommend the creation of a small number of multinational facilities, each with a computing capability of about 20 petaflops in the near term, about 200 petaflops within five years, and 1 exaflop by the end of the next decade. Each facility should have a scientific workforce sufficient to develop and maintain the software and data analysis infrastructure. Such facilities will make it possible to determine what horizontal and vertical resolution in atmospheric and ocean models is necessary for more confident predictions at the regional and local level; current limits on computing power have placed severe constraints on such an investigation, which is now badly needed. These facilities will also provide the world's scientists with computational laboratories for fundamental research on weather–climate interactions using 1-km resolution models and on atmospheric, terrestrial, cryospheric, and oceanic processes at even finer scales. Each facility should have enabling infrastructure, including hardware, software, and data analysis support, and the scientific capacity to interact with the national centers and other visitors. This will accelerate our understanding of how the climate system works and how to model it. It will ultimately enable the climate community to provide society with climate predictions based on our best knowledge of science and the most advanced technology.

Relevance:

100.00%

Publisher:

Abstract:

In recent years, the area of data mining has been experiencing considerable demand for technologies that extract knowledge from large and complex data sources. There has been substantial commercial interest, as well as active research, aimed at developing new and improved approaches for extracting information, relationships, and patterns from large datasets. Artificial neural networks (NNs) are popular biologically inspired intelligent methodologies whose classification, prediction, and pattern recognition capabilities have been used successfully in many areas, including science, engineering, medicine, business, banking, telecommunications, and many other fields. This paper highlights, from a data mining perspective, the implementation of NNs with supervised and unsupervised learning for pattern recognition, classification, prediction, and cluster analysis, and focuses the discussion on their use in bioinformatics and financial data analysis tasks.
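
As a concrete illustration of the supervised side of what is described above, the sketch below trains a small feed-forward neural network classifier on a synthetic dataset with scikit-learn; it is not tied to any particular bioinformatics or financial dataset from the paper.

```python
# Minimal supervised neural-network classification sketch (scikit-learn MLP).
# The synthetic dataset stands in for the bioinformatics/financial data discussed.
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier

X, y = make_classification(n_samples=500, n_features=10, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

clf = MLPClassifier(hidden_layer_sizes=(16,), max_iter=1000, random_state=0)
clf.fit(X_train, y_train)
print("test accuracy:", clf.score(X_test, y_test))
```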