999 results for Statistical Literacy
Abstract:
This paper proposes a template for modelling complex datasets that integrates traditional statistical modelling approaches with more recent advances in statistics and modelling through an exploratory framework. Our approach builds on the well-known and long-standing tradition of 'good practice in statistics' by establishing a comprehensive modelling framework that focuses on exploration, prediction, interpretation and reliability assessment, the last of these a relatively new idea that allows individual predictions to be assessed. The integrated framework we present comprises two stages. The first uses exploratory methods to visually understand the data and identify a parsimonious set of explanatory variables. The second encompasses a two-step modelling process, in which non-parametric methods such as decision trees and generalized additive models are used to identify important variables and their relationship with the response before a final predictive model is considered. We focus on fitting the predictive model using parametric, non-parametric and Bayesian approaches. The paper is motivated by a medical problem in which interest centres on developing a risk stratification system for morbidity among 1,710 cardiac patients, given a suite of demographic, clinical and preoperative variables. Although the methods we use are applied to this specific case study, they can be applied in any field, irrespective of the type of response.
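To make the second stage concrete, here is a minimal sketch of fitting one parametric choice of predictive model (a logistic regression trained by plain gradient descent) and producing per-patient risk predictions. The predictors, coefficients and data are all invented stand-ins, not the paper's cardiac dataset.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-in for a patient dataset: two illustrative
# predictors (e.g. age, a preoperative score) and a binary
# morbidity outcome -- names and numbers are hypothetical.
n = 500
age = rng.normal(65, 10, n)
score = rng.normal(0, 1, n)
logit_true = 0.08 * (age - 65) + 1.2 * score - 0.5
y = (rng.random(n) < 1 / (1 + np.exp(-logit_true))).astype(float)

# Second-stage predictive model: logistic regression fitted by
# gradient descent on the negative log-likelihood.
X = np.column_stack([np.ones(n), (age - age.mean()) / age.std(), score])
w = np.zeros(X.shape[1])
for _ in range(2000):
    p = 1 / (1 + np.exp(-X @ w))
    w -= 0.1 * X.T @ (p - y) / n

# Individual risk predictions, one probability per patient
risk = 1 / (1 + np.exp(-X @ w))
```

In the framework above, these individual probabilities are exactly what the reliability-assessment step would then scrutinise prediction by prediction.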
Abstract:
The effect of the number of samples and of the selection of data for analysis on the calculation of surface motor unit potential (SMUP) size in the statistical method of motor unit number estimation (MUNE) was determined in 10 normal subjects and 10 subjects with amyotrophic lateral sclerosis (ALS). We recorded 500 sequential compound muscle action potentials (CMAPs) at three different stable stimulus intensities (10–50% of maximal CMAP). Estimated mean SMUP sizes were calculated using Poisson statistical assumptions from the variance of the 500 sequential CMAPs obtained at each stimulus intensity. The results with the 500 data points were compared with smaller subsets of the same data set: subsets spanning 50–80% of the 500 data points were compared with the full 500. The effect of restricting analysis to data between 5–20% of the CMAP and to standard deviation limits was also assessed. No differences in mean SMUP size were found with stimulus intensity or with the use of different ranges of data. Consistency improved with greater sample numbers. Restricting data to within 5% of CMAP size gave both increased consistency and a reduced mean SMUP size in many subjects, but excluded valid responses present at that stimulus intensity. These changes were more prominent in ALS patients, in whom the presence of isolated SMUP responses was a striking difference from normal subjects. Noise, spurious data, and large SMUPs limited the Poisson assumptions. When these factors are considered, consistent statistical MUNE can be calculated from a continuous sequence of data points. A 2 to 2.5 SD or 10% window is a reasonable way of limiting data for analysis. Muscle Nerve 27: 320–331, 2003
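The Poisson reasoning behind the statistical method can be sketched numerically: if the number of motor units responding to each stimulus is Poisson-distributed and units contribute similar amplitudes, the variance-to-mean ratio of the CMAP amplitudes recovers the mean SMUP size. The simulation below uses invented amplitudes and firing rates, not the recorded data.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical single-unit size and mean number of units firing per
# stimulus at a fixed submaximal intensity (illustrative values).
true_smup = 50.0   # uV, assumed identical across units
lam = 8.0          # mean units activated per stimulus

# 500 sequential CMAP amplitudes, as in the protocol above
cmap = true_smup * rng.poisson(lam, 500)

# Under the Poisson assumption, var/mean estimates the SMUP size
est_smup = cmap.var() / cmap.mean()
```

With 500 samples the estimate sits close to the true 50 uV; with the smaller subsets discussed above, the same ratio is simply noisier, which is the consistency effect the study measures.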
Abstract:
The aim of this research is to describe the scenario of German immigration in the nineteenth century, also tracing the path of education through educators and communities in the Colony of Santa Leopoldina, founded in 1857 in the Province of Espírito Santo, covering the fifty years from the start of immigration (1857-1907). The historical research drew on official documents of the Empire, official documents and reports of the Presidents of the Province of Espírito Santo, newspapers in circulation at the time, photographs, geographical maps, statistical maps, old textbooks, and books by authors who addressed the immigration process in Espírito Santo. Dialogue with Marc Bloch helped in understanding this multiplicity of documents within the complex relationship between the past and the present of education in Espírito Santo. The work traces the trajectory of the German immigrant from Europe to the former colony and shows how the provincial administration handled immigration and education. Primary schooling began through both public and private initiatives. The first schools and the names of many teachers active in the period studied were identified. The books used in the schools of the German-Brazilian communities originally came from Germany and were later produced in southern Brazil. Among the books written in German, two stand out: the literacy primer "Lese – Schule I für Deutsche Kinder in Brasilien", published in Germany at the end of the nineteenth century and written by Albert Richard Dietze, director of a private school in Santa Leopoldina, and the first volume of the mathematics textbook "Rechenbuch für Deutsch-Brasilianische Volksschulen", by Ferdinand Hackbart, Konrad Glaus and Hermann Lange. The mathematics book was analysed in terms of its physical and formal aspects, its teaching approach, and its contents.
The analysis showed that the teaching proposal rests on both mental and written arithmetic, with repetition of content and intensive drill. Mathematical concepts are introduced through problems drawn from concrete situations in the pupil's life, aimed at building the skills needed to integrate the pupil into his or her community. Considering that the book was published in 1906, it is a relevant work whose teaching approach has remained present in mathematics textbooks to the present day.
Abstract:
A growing number of corporate failure prediction models has emerged since the 1960s. The economic and social consequences of business failure can be dramatic, so it is no surprise that the issue has attracted growing interest in academic research as well as in business practice. The main purpose of this study is to compare the predictive ability of five models: three based on statistical techniques (Discriminant Analysis, Logit and Probit) and two based on Artificial Intelligence (Neural Networks and Rough Sets). The five models were applied to a dataset of 420 non-bankrupt and 125 bankrupt firms in the textile and clothing industry over the period 2003–09. Results show that all the models performed well, with an overall correct classification rate above 90% and a type II error always below 2%. The type I error increases as we move away from the year prior to failure. Our models contribute to the discussion of the causes of corporate financial distress. Moreover, they can be used to assist the decisions of creditors, investors and auditors. Additionally, this research can be of great value to devisers of national economic policies that aim to reduce industrial unemployment.
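For readers unfamiliar with the error terminology used above, a small invented example showing how the type I error (a bankrupt firm classified as healthy), the type II error (a healthy firm classified as bankrupt) and the overall classification rate are computed from predicted labels; the counts are illustrative, not the study's data.

```python
# Hypothetical truth vs. predictions for 50 firms (1 = bankrupt).
# 10 bankrupt firms: 9 caught, 1 missed; 40 healthy firms: 1 false alarm.
truth = [1] * 10 + [0] * 40
pred  = [1] * 9 + [0] * 1 + [0] * 39 + [1] * 1

# Type I error: share of bankrupt firms classified as healthy
type1 = sum(t == 1 and p == 0 for t, p in zip(truth, pred)) / truth.count(1)
# Type II error: share of healthy firms classified as bankrupt
type2 = sum(t == 0 and p == 1 for t, p in zip(truth, pred)) / truth.count(0)
# Overall correct classification rate
accuracy = sum(t == p for t, p in zip(truth, pred)) / len(truth)

print(type1, type2, accuracy)  # 0.1 0.025 0.96
```

In this vocabulary, the study's models keep the type II error below 2% throughout, while the type I error grows as the prediction horizon lengthens.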
Abstract:
Low-noise surfaces have been increasingly considered as a viable and cost-effective alternative to acoustical barriers. However, road planners and administrators frequently lack information on the correlation between the type of road surface and the resulting noise emission profile. To address this problem, a method to identify and classify different types of road pavements was developed, whereby near-field road noise is analyzed using statistical learning methods. The vehicle rolling sound signal near the tires and close to the road surface was acquired by two microphones in a special arrangement implementing the Close-Proximity method. A set of features characterizing the properties of the road pavement was extracted from the corresponding sound profiles. A feature selection method was used to automatically select those most relevant in predicting the type of pavement, while reducing the computational cost. A set of road segments with different pavement types was tested and the performance of the classifier evaluated. Results of pavement classification performed during a road journey are presented on a map, together with geographical data. This procedure leads to a considerable improvement in the quality of road pavement noise data, thereby increasing the accuracy of road traffic noise prediction models.
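As an illustration of the kind of feature extraction such a pipeline involves, the sketch below computes band-energy features from a synthetic tire/road noise signal; the signal, band edges and feature set are stand-ins chosen for the example, not the paper's actual CPX features.

```python
import numpy as np

rng = np.random.default_rng(2)

# One second of synthetic rolling noise: a dominant 800 Hz tonal
# component plus broadband noise (all values illustrative).
fs = 8000
t = np.arange(fs) / fs
signal = np.sin(2 * np.pi * 800 * t) + 0.3 * rng.normal(size=fs)

# Energy in a few octave-style frequency bands as candidate features
spectrum = np.abs(np.fft.rfft(signal)) ** 2
freqs = np.fft.rfftfreq(fs, 1 / fs)
bands = [(250, 500), (500, 1000), (1000, 2000), (2000, 4000)]
features = [spectrum[(freqs >= lo) & (freqs < hi)].sum() for lo, hi in bands]

# The 800 Hz tone concentrates energy in the 500-1000 Hz band (index 1)
dominant_band = int(np.argmax(features))
```

A feature selection step, as described above, would then rank such band energies (and other descriptors) by their usefulness for discriminating pavement types.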
Abstract:
Wyner-Ziv (WZ) video coding is a particular case of distributed video coding (DVC), the recent video coding paradigm based on the Slepian-Wolf and Wyner-Ziv theorems, which exploits the source temporal correlation at the decoder rather than at the encoder, as in predictive video coding. Although some progress has been made in recent years, WZ video coding still falls short of the compression performance of predictive video coding, especially for high and complex motion content. The WZ video codec adopted in this study is based on a transform-domain WZ video coding architecture with feedback-channel-driven rate control, whose modules have been improved with several recent coding tools. This study proposes a novel motion learning approach that successively improves the rate-distortion (RD) performance of the WZ video codec as decoding proceeds, using the already decoded transform bands to improve the decoding of the remaining transform bands. The results reveal gains of up to 2.3 dB in the RD curves over the same codec without the proposed motion learning approach, for high-motion sequences and long group of pictures (GOP) sizes.
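The distortion axis of the RD curves cited above is conventionally PSNR, so a "2.3 dB gain" means a lower reconstruction error at the same bit rate. A minimal sketch of the PSNR computation for 8-bit frames (the frames here are tiny synthetic arrays, not codec output):

```python
import numpy as np

def psnr(ref, dec, peak=255.0):
    """PSNR in dB between a reference frame and its decoded version."""
    mse = np.mean((ref.astype(float) - dec.astype(float)) ** 2)
    return 10 * np.log10(peak ** 2 / mse)

# Toy 4x4 frames: one decoded pixel differs from the reference by 2
ref = np.full((4, 4), 128, dtype=np.uint8)
dec = ref.copy()
dec[0, 0] = 130

quality = psnr(ref, dec)  # about 54.15 dB for this toy case
```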
Abstract:
This article analyses the relationship between the level of phonological awareness, letter knowledge, and the strategies used to read and write among five-year-old children taught in Catalan. Sixty-nine children from three different classes participated. Each of their teachers used a different teaching method: analytic, synthetic, or analytic-synthetic. The children were assessed at the beginning and end of the school year on letter recognition, oral word segmentation, word reading, reading of a short text, and a dictation task. Fine-grained analyses of the children's responses were carried out to identify specific strategies and patterns. The qualitative analysis indicates that the ability to segment a word orally into syllables seems to be sufficient for children to begin reading in a conventional way. Moreover, phonological awareness and letter knowledge are used in relatively different ways depending on the type of text being read. The teachers' instructional approaches appear to influence the children's results.