977 results for "statistical techniques"
Abstract:
Doctoral thesis, Marine, Earth and Environmental Sciences, Faculdade de Ciências e Tecnologia, Universidade do Algarve, 2015
Abstract:
The aim of this study was to analyse and interpret the curriculum delivered in secondary schools in the Lisbon Region, and how the operational curriculum relates to teachers' educational orientations (EO). On the one hand, we wanted to know how EO relate to the different levels of the curriculum; on the other, how the curricular goal of promoting active lifestyles among students is perceived by teachers. This general aim gave rise to five specific research objectives, studied at different stages of the investigation through a combined quantitative/qualitative approach using a range of statistical techniques. In the extensive stage, the EO of 352 Physical Education (PE) teachers from 79 general secondary schools were studied with the VOI-SF (value orientation inventory – short form), validated through a cross-cultural technique that yielded reference EO values for Portugal. Teachers gave high priority to Ecological Integration and Self-actualisation and low priority to the remaining EO. The operational PE curriculum proved essentially sport-centred, and statistically significant differences were found for the independent variables studied (age and professional experience). Multiple linear regressions confirmed a relationship between EO and the curricular offer. In the intensive stage, 14 teachers with representative EO profiles were studied across ten analytical dimensions capturing the interpretation and operationalisation of the national PE curriculum (CNEF), using content analysis. EO were found to influence how the curriculum is read, interpreted and operationalised (e.g., coeducation in PE teaching). From the 14 teachers investigated, two with opposite EO profiles were selected to examine how EO behave in the classroom.
After interviews, observation and analysis of their planning, it was found that the two teachers operationalise knowledge and teaching/learning in different ways, consistent with their EO.
Abstract:
For climate risk management, cumulative distribution functions (CDFs) are an important source of information. They are ideally suited to compare probabilistic forecasts of primary (e.g. rainfall) or secondary data (e.g. crop yields). Summarised as CDFs, such forecasts allow an easy quantitative assessment of possible, alternative actions. Although the degree of uncertainty associated with CDF estimation could influence decisions, such information is rarely provided. Hence, we propose Cox-type regression models (CRMs) as a statistical framework for making inferences on CDFs in climate science. CRMs were designed for modelling probability distributions rather than just mean or median values. This makes the approach appealing for risk assessments where probabilities of extremes are often more informative than central tendency measures. CRMs are semi-parametric approaches originally designed for modelling risks arising from time-to-event data. Here we extend this original concept beyond time-dependent measures to other variables of interest. We also provide tools for estimating CDFs and surrounding uncertainty envelopes from empirical data. These statistical techniques intrinsically account for non-stationarities in time series that might be the result of climate change. This feature makes CRMs attractive candidates to investigate the feasibility of developing rigorous global circulation model (GCM)-CRM interfaces for provision of user-relevant forecasts. To demonstrate the applicability of CRMs, we present two examples for El Niño/Southern Oscillation (ENSO)-based forecasts: the onset date of the wet season (Cairns, Australia) and total wet season rainfall (Quixeramobim, Brazil). This study emphasises the methodological aspects of CRMs rather than discussing merits or limitations of the ENSO-based predictors.
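The uncertainty envelopes mentioned above can be approximated, well short of a full Cox-type regression model, by bootstrapping the empirical CDF. A minimal sketch, assuming hypothetical wet-season rainfall totals (these are not data from the study):

```python
import numpy as np

def ecdf_with_envelope(sample, grid, n_boot=1000, alpha=0.10, seed=0):
    """Empirical CDF on `grid` plus a bootstrap (1 - alpha) uncertainty envelope."""
    rng = np.random.default_rng(seed)
    sample = np.asarray(sample, dtype=float)
    # Point estimate: fraction of observations <= each grid value.
    cdf = np.array([(sample <= g).mean() for g in grid])
    # Bootstrap resamples give an envelope around the estimate.
    boot = np.empty((n_boot, len(grid)))
    for b in range(n_boot):
        rs = rng.choice(sample, size=sample.size, replace=True)
        boot[b] = [(rs <= g).mean() for g in grid]
    lo = np.quantile(boot, alpha / 2, axis=0)
    hi = np.quantile(boot, 1 - alpha / 2, axis=0)
    return cdf, lo, hi

# Hypothetical wet-season rainfall totals (mm) and evaluation grid
rain = np.array([310., 420., 365., 500., 275., 610., 330., 450., 390., 480.])
grid = np.linspace(250, 650, 9)
cdf, lo, hi = ecdf_with_envelope(rain, grid)
```

Unlike a CRM, this simple bootstrap assumes a stationary sample; it only illustrates what a CDF-with-envelope output looks like.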
Abstract:
Purpose: To compare oral bioavailability and pharmacokinetic parameters of different lornoxicam formulations and to assess similarity in plasma level profiles by statistical techniques. Methods: An open-label, two-period crossover trial was conducted in 24 healthy Pakistani volunteers (22 males, 2 females). Each participant received a single dose of lornoxicam controlled release (CR) microparticles and two doses (morning and evening) of a conventional lornoxicam immediate release (IR) tablet formulation. The microparticles were prepared by the spray-drying method. The formulations were administered again in an alternate manner after a washout period of one week. Pharmacokinetic parameters were determined by Kinetica 4.0 software using plasma concentration-time data. Moreover, data were statistically analyzed at the 90 % confidence interval (CI) and by Schuirmann’s two one-sided t-test procedure. Results: Peak plasma concentration (Cmax) was 20.2 % lower for the CR formulation compared to the IR formulation (270.90 ng/ml vs 339.44 ng/ml, respectively), while time taken to attain Cmax (tmax) was 5.25 and 2.08 h, respectively. Area under the plasma drug level versus time (AUC) curve was comparable for both CR and IR formulations. The 90 % confidence interval (CI) values computed for Cmax, AUC0-24, and AUC0-∞, after log transformation, were 87.21, 108.51 and 102.74 %, respectively, and were within the predefined bioequivalence range (80 - 125 %). Conclusion: The findings suggest that the CR formulation did not change the overall pharmacokinetic properties of lornoxicam in terms of the rate and extent of absorption.
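The bioequivalence criterion used here (90 % CI of the log-transformed ratio within 80–125 %) can be sketched as follows. The subject values are hypothetical, and this simplified paired analysis ignores the period and sequence effects a full crossover model would include:

```python
import numpy as np
from scipy import stats

def bioequivalence_ci(test_vals, ref_vals, ci=0.90):
    """CI for the geometric-mean ratio test/ref from paired (crossover) data."""
    # Work on log scale: the ratio becomes a difference of means.
    d = np.log(np.asarray(test_vals, float)) - np.log(np.asarray(ref_vals, float))
    n = d.size
    m, se = d.mean(), d.std(ddof=1) / np.sqrt(n)
    t = stats.t.ppf(1 - (1 - ci) / 2, df=n - 1)
    # Back-transform to percentages of the reference formulation.
    return 100 * np.exp(m - t * se), 100 * np.exp(m + t * se)

# Hypothetical AUC values for 8 subjects (test = CR, ref = IR)
test = [95, 102, 99, 110, 97, 105, 101, 98]
ref  = [100, 100, 101, 108, 95, 104, 99, 100]
lo, hi = bioequivalence_ci(test, ref)
bioequivalent = 80.0 <= lo and hi <= 125.0
```

The 80–125 % acceptance window is the standard regulatory range the abstract cites.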
Abstract:
The purpose of this research was to examine the relationship between teaching readiness and teaching excellence and three variables of preparedness of adjunct professors teaching career technical education courses, through student surveys, using a correlational design with two statistical techniques: least-squares regression and one-way analysis of variance (ANOVA). That is, the research tested the relationship between teacher readiness and teacher excellence and the number of years teaching, the number of years of experience in the professional field, and exposure to teaching-related professional development, referred to as variables of preparedness. The results provided insight into the relationship between the variables of preparedness and student assessment of their adjunct professors. Concerning years of teaching experience, this research found an inverse (negative) relationship with how students rated their professors’ teaching readiness and excellence. The research also found no relationship between years of professional experience and the students’ assessment. Lastly, the research found a significant positive relationship between the amount of teaching-related professional development taken by an adjunct professor and the students’ assessment of teaching readiness and excellence. This research suggests that college policies and practices should address the professional development needs of adjunct professors, design a model that supports the inclusion of adjunct faculty, make professional development a priority within the organization, and implement that model over time to prepare adjuncts in readiness and excellence.
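The two techniques named, least-squares regression and one-way ANOVA, can be illustrated with SciPy. All numbers below are hypothetical, not the study's data:

```python
import numpy as np
from scipy import stats

# Hypothetical data: mean student ratings vs. hours of professional development
pd_hours = np.array([0, 2, 4, 6, 8, 10, 12, 14], dtype=float)
ratings  = np.array([3.1, 3.3, 3.2, 3.6, 3.8, 3.7, 4.1, 4.3])

# Least-squares regression: does more professional development predict higher ratings?
res = stats.linregress(pd_hours, ratings)

# One-way ANOVA: do mean ratings differ across (hypothetical) experience groups?
novice, mid, veteran = [3.9, 4.1, 4.0], [3.6, 3.5, 3.8], [3.2, 3.4, 3.3]
f_stat, p_anova = stats.f_oneway(novice, mid, veteran)
```

`res.slope` and `res.pvalue` give the regression estimate and its significance; `f_oneway` tests whether the group means differ.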
Abstract:
Tropical Rainfall Measuring Mission (TRMM) rainfall retrieval algorithms are evaluated in tropical cyclones (TCs). Differences between the Precipitation Radar (PR) and TRMM Microwave Imager (TMI) retrievals are found to be related to the storm region (inner core vs. rainbands) and the convective nature of the precipitation as measured by radar reflectivity and ice scattering signature. In landfalling TCs, the algorithms perform differently depending on whether the rainfall is located over ocean, land, or coastal surfaces. Various statistical techniques are applied to quantify these differences and identify the discrepancies in rainfall detection and intensity. Ground validation is accomplished by comparing the landfalling storms over the Southeast US to the NEXRAD Multisensor Precipitation Estimates (MPE) Stage-IV product. Numerous recommendations are given to algorithm users and developers for applying and interpreting these algorithms in areas of heavy and widespread tropical rainfall such as tropical cyclones.
Abstract:
The main purpose of this paper is to propose and test a model to assess how favorable conditions are for the adoption of agile methods to develop software where traditional methods predominate. To this end, a survey was administered to software developers at a Brazilian public retail bank. Two statistical techniques were used to assess the quantitative data from the closed questions in the survey: first, exploratory factor analysis validated the structure of the perspectives related to the agile model of the proposed assessment; second, frequency distribution analysis was used to categorize the answers. Qualitative data from the survey's open-ended question were analyzed with the technique of qualitative thematic content analysis. As a result, the paper proposes a model to assess the degree of favorability of conditions for the adoption of Agile practices within the context of the proposed study.
Abstract:
Purpose – The paper examines from a practitioner’s perspective the European Quality in Social Services (EQUASS) Assurance standard, a certification programme for European social service organisations to implement a sector-specific Quality Management System. In particular, it analyses adoption motives, internalisation of the standard, impacts, satisfaction and renewal intentions. Design/methodology/approach – This study uses a cross-sectional, questionnaire-based survey methodology. Of the 381 organisations emailed, 196 responses, coming from eight different European countries, were considered valid (51.4%). Data from closed-ended questions were analysed using simple descriptive statistical techniques. Content analysis was employed to analyse practitioners’ comments to open-ended questions. Findings – It shows that social service providers typically implement the certification for internal reasons, and internalise EQUASS principles and practices in daily usage. EQUASS Assurance produces benefits mainly at the operational and customer levels, whereas its main pitfalls include increased workload and bureaucracy. The majority of respondents (85.2%) are very satisfied or satisfied with the certification, suggesting that it meets their expectations. Certification renewal intentions are also high, but some respondents report that the final decision depends on several factors. The insights gained through the qualitative data are also described. Practical implications – It can be helpful to managers, consultants and Local License Holders working (or planning to work) with this standard. It can inform the work of the EQUASS Technical Working Group in the forthcoming revision of the standard. Originality/value – This is the largest survey conducted so far about EQUASS Assurance in terms of number of respondents, participating countries and topics covered.
Abstract:
Business angels provide both financing and managerial experience, which increase the likelihood of survival of innovative start-ups. Over recent years, European countries with developing informal venture capital markets have seen governments support the creation of business angel networks (BANs) to increase and consolidate these markets. Using the Portuguese context to carry out the empirical work, this paper provides an assessment of the value added by angel networks. A total of 88 usable responses were received and analysed using non-parametric statistical techniques. The paper demonstrates that there is evidence of a positive contribution of BANs in terms of bringing together investors and linking them with entrepreneurs seeking finance. BANs played an important role in financing innovative start-ups, including in peripheral regions. The results lead us to conclude that government support for BANs would appear to be an effective mechanism to stimulate the angel market in developing informal venture capital markets. The conclusions of this paper are likely to have relevance for countries where there is growing interest in the potential of business angels as a means of financing innovative start-ups.
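A non-parametric comparison of the kind this study reports might look like the following; the Likert scores and grouping are invented for illustration, not taken from the 88 responses:

```python
from scipy import stats

# Hypothetical value-added scores (1-7 Likert) from two respondent groups:
# start-ups that raised finance through a BAN vs. those that did not.
via_ban = [6, 5, 7, 6, 5, 6, 7, 4, 6, 5]
without = [4, 3, 5, 4, 2, 5, 3, 4, 3, 4]

# Mann-Whitney U: a non-parametric test suited to ordinal Likert data,
# making no normality assumption about the score distributions.
u_stat, p_value = stats.mannwhitneyu(via_ban, without, alternative='two-sided')
```

Non-parametric tests are the natural choice here because survey scores are ordinal and the sample is small.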
Abstract:
Today’s data are increasingly complex, and classical statistical techniques need ever more refined mathematical tools to model and investigate them. Paradigmatic situations are represented by data which need to be considered up to some kind of transformation, and all those circumstances in which the analyst needs to define a general concept of shape. Topological Data Analysis (TDA) is a field which is fundamentally contributing to such challenges by extracting topological information from data with a plethora of interpretable and computationally accessible pipelines. We contribute to this field by developing a series of novel tools, techniques and applications to work with a particular topological summary called the merge tree. To analyze sets of merge trees, we introduce a novel metric structure along with an algorithm to compute it, define a framework to compare different functions defined on merge trees, and investigate the metric space obtained with the aforementioned metric. Different geometric and topological properties of the space of merge trees are established, with the aim of obtaining a deeper understanding of such trees. To showcase the effectiveness of the proposed metric, we develop an application in the field of Functional Data Analysis, working with functions up to homeomorphic reparametrization, and in the field of radiomics, where each patient is represented via a clustering dendrogram.
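A clustering dendrogram of the sort used in the radiomics application can be built with SciPy's hierarchical clustering; a single-linkage dendrogram records a sequence of merges, closely related to the merge tree of connected components. The feature vectors below are hypothetical stand-ins, not the thesis's metric or data:

```python
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster

# Hypothetical 1-D "radiomic" feature values for six patients
X = np.array([[0.1], [0.15], [0.8], [0.85], [2.0], [2.1]])

# Single-linkage hierarchical clustering: each row of Z records one merge
# (the two clusters joined, the merge height, and the new cluster's size).
Z = linkage(X, method='single')

# Cutting the tree at height 0.5 recovers the three well-separated groups.
labels = fcluster(Z, t=0.5, criterion='distance')
```

Comparing two such patients then amounts to comparing their trees `Z`, which is where a metric on merge trees comes in.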
Abstract:
In modern society, security issues of IT systems are intertwined with interdisciplinary aspects, from social life to sustainability, and threats endanger many aspects of everyone’s daily life. To address the problem, it is important that the systems we use guarantee a certain degree of security; but to achieve this, it is necessary to be able to measure the amount of security. Measuring security is not an easy task, but many initiatives, including European regulations, aim to make this possible. One method of measuring security is based on security metrics: ways of assessing, from various angles, vulnerabilities, methods of defense, risks and impacts of successful attacks, as well as the efficacy of reactions, giving precise results using mathematical and statistical techniques. I have carried out a literature review to provide an overview of the meaning, the effects, the problems, the applications and the overall current situation of security metrics, with particular emphasis on giving practical examples. This thesis starts with a summary of the state of the art in the field of security metrics and application examples, to outline the gaps in the current literature and the difficulties found when the application context changes, and then advances research questions aimed at fostering the discussion towards the definition of a more complete and applicable view of the subject. Finally, it stresses the lack of security metrics that consider interdisciplinary aspects, giving some potential starting points for developing security metrics that cover all the aspects involved, taking the field to a new level of formal soundness and practical usability.
Abstract:
The present study evaluates the performance of four methods for estimating regression coefficients used to make statistical decisions regarding intervention effectiveness in single-case designs. Ordinary least squares estimation is compared to two correction techniques dealing with general trend and one eliminating autocorrelation whenever it is present. Type I error rates and statistical power are studied for experimental conditions defined by the presence or absence of treatment effect (change in level or in slope), general trend, and serial dependence. The results show that empirical Type I error rates do not approximate the nominal ones in the presence of autocorrelation or general trend when ordinary and generalized least squares are applied. The techniques controlling trend show lower false alarm rates, but prove to be insufficiently sensitive to existing treatment effects. Consequently, the use of the statistical significance of the regression coefficients for detecting treatment effects is not recommended for short data series.
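The inflation of Type I error under serial dependence reported above can be reproduced in a small Monte Carlo simulation. This sketch uses a simple level-change dummy with OLS, not the study's exact designs or correction techniques:

```python
import numpy as np
from scipy import stats

def rejection_rate(rho, n_per_phase=20, reps=2000, alpha=0.05, seed=1):
    """Share of null AR(1) series where OLS flags a (non-existent) level change."""
    rng = np.random.default_rng(seed)
    # Phase dummy: 0 = baseline, 1 = "treatment" (which has no true effect here)
    phase = np.r_[np.zeros(n_per_phase), np.ones(n_per_phase)]
    rejections = 0
    for _ in range(reps):
        e = rng.standard_normal(2 * n_per_phase)
        y = np.empty_like(e)
        y[0] = e[0]
        for t in range(1, y.size):          # AR(1) errors, no treatment effect
            y[t] = rho * y[t - 1] + e[t]
        p = stats.linregress(phase, y).pvalue
        rejections += p < alpha
    return rejections / reps

independent = rejection_rate(rho=0.0)   # should sit near the nominal 5 %
autocorr    = rejection_rate(rho=0.6)   # inflated false-alarm rate
```

With independent errors the empirical rate tracks the nominal 5 %; with positive autocorrelation OLS rejects far too often, which is exactly the pattern the study describes.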
Abstract:
Learning Disability (LD) is a general term that describes specific kinds of learning problems. It is a neurological condition that affects a child's brain and impairs the ability to carry out one or many specific tasks. Children with learning disabilities are neither slow learners nor intellectually disabled. The disorder can make it difficult for a child to learn as quickly, or in the same way, as a child who is not affected by a learning disability. An affected child can have normal or above-average intelligence, but may have difficulty paying attention, with reading or letter recognition, or with mathematics. This does not mean that children who have learning disabilities are less intelligent; in fact, many are more intelligent than the average child. Learning disabilities vary from child to child: one child with LD may not have the same kind of learning problems as another. There is no cure for learning disabilities and they are life-long; however, children with LD can be high achievers and can be taught ways to work around the learning disability. In this research work, data mining using machine learning techniques is used to analyze the symptoms of LD, establish interrelationships between them and evaluate the relative importance of these symptoms. To increase the diagnostic accuracy of learning disability prediction, a knowledge-based tool built on statistical machine learning (data mining) techniques, with high accuracy according to the knowledge obtained from the clinical information, is proposed. The basic idea of the developed knowledge-based tool is to increase the accuracy of learning disability assessment and reduce the time it requires. Different statistical machine learning techniques in data mining are used in the study.
Identifying the important parameters of LD prediction using data mining techniques, identifying the hidden relationships between the symptoms of LD, and estimating the relative significance of each symptom are also objectives of this research work. The developed tool has many advantages over the traditional method of using checklists to determine learning disabilities. To improve the performance of the various classifiers, we developed some preprocessing methods for the LD prediction system. A new system based on fuzzy and rough set models is also developed for LD prediction, and the importance of pre-processing is studied here as well. A Graphical User Interface (GUI) is designed to provide an integrated knowledge-based tool for predicting LD as well as its degree. The designed tool stores the details of the children in a student database and retrieves their LD reports as and when required. The present study demonstrates the effectiveness of the tool developed, based on various machine learning techniques. It also identifies the important parameters of LD and accurately predicts learning disability in school-age children. This thesis makes several major contributions in technical, general and social areas. The results are found to be very beneficial to parents, teachers and institutions, who can diagnose a child's problem at an early stage and seek proper treatment or counselling at the right time, avoiding academic and social losses.
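One simple way to establish an interrelationship between a symptom and LD status, in the spirit of the data-mining analysis described, is a chi-square test of independence; the contingency table below is hypothetical, not clinical data from the thesis:

```python
import numpy as np
from scipy.stats import chi2_contingency

# Hypothetical contingency table: rows = symptom present / absent,
# columns = LD diagnosed / not diagnosed (counts of children)
table = np.array([[45, 15],   # symptom present
                  [10, 50]])  # symptom absent

# Chi-square test of independence: a small p-value indicates the symptom
# and the LD diagnosis are associated, i.e. the symptom is informative.
chi2, p, dof, expected = chi2_contingency(table)
associated = p < 0.05
```

Ranking symptoms by such association strength (or by feature-importance scores from a trained classifier) is one way to estimate their relative significance.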
Abstract:
In Statistical Machine Translation from English to Malayalam, an unseen English sentence is translated into its equivalent Malayalam translation using statistical models: a translation model, a language model and a decoder. A parallel English-Malayalam corpus is used in the training phase. Word-to-word alignments have to be set up among the sentence pairs of the source and target languages before subjecting them to training. This paper deals with techniques which can be adopted to improve the alignment model of SMT. Incorporating part-of-speech information into the bilingual corpus eliminated many of the insignificant alignments. Identifying the named entities and cognates present in the sentence pairs also proved advantageous while setting up the alignments. Moreover, the reduction of unwanted alignments brought better training results. Experiments conducted on a sample corpus generated reasonably good Malayalam translations, and the results were verified with the F-measure, BLEU and WER evaluation metrics.
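Of the evaluation metrics named, WER (word error rate) is the simplest to compute: it is the word-level Levenshtein edit distance divided by the reference length. A self-contained sketch:

```python
def word_error_rate(reference, hypothesis):
    """WER = word-level edit distance / reference length (Levenshtein DP)."""
    ref, hyp = reference.split(), hypothesis.split()
    # dp[i][j]: edits to turn the first i reference words into the first j
    # hypothesis words
    dp = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        dp[i][0] = i
    for j in range(len(hyp) + 1):
        dp[0][j] = j
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            sub = 0 if ref[i - 1] == hyp[j - 1] else 1
            dp[i][j] = min(dp[i - 1][j] + 1,        # deletion
                           dp[i][j - 1] + 1,        # insertion
                           dp[i - 1][j - 1] + sub)  # substitution / match
    return dp[len(ref)][len(hyp)] / len(ref)
```

For example, a hypothesis differing from a three-word reference in one word has WER 1/3; a lower WER means a closer translation.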
Abstract:
Abstract taken from the publication.