906 results for Exploratory statistical data analysis
Abstract:
Systematic, high-quality observations of the atmosphere, oceans and terrestrial environments are required to improve understanding of climate characteristics and the consequences of climate change. The overall aim of this report is to carry out a comparative assessment of the approaches taken to address the state of European observation systems and related data analysis by some leading actors in the field. This research reports on approaches to climate observations and analyses in Ireland, Switzerland, Germany, the Netherlands and Austria and explores options for a more coordinated approach to national responses to climate observations in Europe. The key aspects addressed are: an assessment of approaches to developing GCOS and the provision of analysis of GCOS data; an evaluation of how these countries report the development of GCOS; highlighting best practice in advancing GCOS implementation, including the analysis of Essential Climate Variables (ECVs); a comparative summary of the differences and synergies in the reporting of climate observations; and an overview of relevant European initiatives and recommendations on how identified gaps might be addressed in the short to medium term.
Abstract:
Energy efficiency and user comfort have recently become priorities in the Facility Management (FM) sector. This has resulted in the use of innovative building components, such as thermal solar panels and heat pumps, as they have the potential to provide better performance, energy savings and increased user comfort. However, as the complexity of components increases, so does the requirement for maintenance management. The standard routine for building maintenance is inspection, which results in repairs or replacement when a fault is found. This routine leads to unnecessary inspections, which carry a cost in terms of component downtime and work hours. This research proposes an alternative routine: performing building maintenance at the point in time when the component is degrading and requires maintenance, thus reducing the frequency of unnecessary inspections. This thesis demonstrates that statistical techniques can be used as part of a maintenance management methodology to invoke maintenance before failure occurs. The proposed FM process is presented through a scenario utilising current Building Information Modelling (BIM) technology and innovative contractual and organisational models. This FM scenario supports a Degradation-based Maintenance (DbM) scheduling methodology, implemented using two statistical techniques, Particle Filters (PFs) and Gaussian Processes (GPs). DbM consists of extracting and tracking a degradation metric for a component. Limits for the degradation metric are identified based on one of a number of proposed processes; these processes determine the limits based on the maturity of the historical information available. DbM is implemented for three case study components: a heat exchanger, a heat pump, and a set of bearings. The identified degradation points for each case study, from a PF, a GP and a hybrid (PF and GP combined) DbM implementation, are assessed against known degradation points. The GP implementations are successful for all components. For the PF implementations, the results presented in this thesis find that the extracted metrics and limits identify degradation occurrences accurately for components which are in continuous operation. For components which have seasonal operational periods, the PF may wrongly identify degradation. The GP performs more robustly than the PF, but the PF, on average, results in fewer false positives. The hybrid implementations, which combine GP and PF results, are successful for 2 of 3 case studies and are not affected by seasonal data. Overall, DbM is effectively applied for the three case study components. The accuracy of the implementations depends on the relationships modelled by the PF and GP, and on the type and quantity of data available. This novel maintenance process can improve equipment performance and reduce energy wastage from BSC operation.
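A minimal sketch of the GP side of the DbM idea described above, assuming a synthetic efficiency-like degradation metric, an RBF kernel and an arbitrary limit of 0.55 (none of these come from the thesis), with scikit-learn's GaussianProcessRegressor standing in for the thesis's GP implementation:

```python
# Hedged sketch: track a degradation metric with a Gaussian Process and flag
# maintenance when the lower credible bound crosses an assumed limit.
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel

rng = np.random.default_rng(0)
t = np.arange(0.0, 200.0)                                   # e.g. days in service
metric = 1.0 - 0.002 * t + rng.normal(0, 0.02, t.size)      # synthetic efficiency-like metric

kernel = RBF(length_scale=30.0) + WhiteKernel(noise_level=1e-3)
gp = GaussianProcessRegressor(kernel=kernel, normalize_y=True)
gp.fit(t.reshape(-1, 1), metric)

t_future = np.arange(200.0, 300.0).reshape(-1, 1)
mean, std = gp.predict(t_future, return_std=True)

limit = 0.55                                                # assumed degradation limit
crossing = t_future[mean - 2 * std < limit]                 # conservative: lower credible bound
if crossing.size:
    print(f"Schedule maintenance around t = {crossing[0, 0]:.0f}")
```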
Abstract:
A compositional multivariate approach is used to analyse regional-scale soil geochemical data obtained as part of the Tellus Project generated by the Geological Survey Northern Ireland (GSNI). The multi-element total concentration data presented comprise XRF analyses of 6862 rural soil samples collected at 20 cm depth on a non-aligned grid at one site per 2 km². Censored data were imputed using published detection limits. Using these imputed values for 46 elements (including LOI), each soil sample site was assigned to the regional geology map provided by GSNI, initially using the dominant lithology for the map polygon. Northern Ireland includes a diversity of geology representing a stratigraphic record from the Mesoproterozoic up to and including the Palaeogene. However, the advance of ice sheets and their meltwaters over the last 100,000 years has left at least 80% of the bedrock covered by superficial deposits, including glacial till and post-glacial alluvium and peat. The question is to what extent the soil geochemistry reflects the underlying geology or the superficial deposits. To address this, the geochemical data were transformed using centered log ratios (clr) to observe the requirements of compositional data analysis and avoid closure issues. Following this, compositional multivariate techniques, including compositional Principal Component Analysis (PCA) and minimum/maximum autocorrelation factor (MAF) analysis, were used to determine the influence of the underlying geology on the soil geochemistry signature. PCA showed that 72% of the variation was accounted for by the first four principal components (PCs), implying “significant” structure in the data. Analysis of variance showed that only 10 PCs were necessary to classify the soil geochemical data. To consider an improvement over PCA that uses the spatial relationships of the data, a classification based on MAF analysis was undertaken using the first 6 dominant factors. Understanding the relationship between soil geochemistry and superficial deposits is important for environmental monitoring of fragile ecosystems such as peat. To explore whether peat cover could be predicted from the classification, the lithology designation was adapted to include the presence of peat, based on GSNI superficial deposit polygons, and linear discriminant analysis (LDA) was undertaken. Prediction accuracy for the LDA classification improved from 60.98% based on PCA using 10 principal components to 64.73% using MAF based on the 6 most dominant factors. The misclassification of peat may reflect degradation of peat-covered areas since the creation of the superficial deposit classification. Further work will examine the influence of underlying lithologies on elemental concentrations in peat composition and the effect of this on the classification analysis.
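A minimal sketch of the clr-then-PCA step described above; the four-part toy composition stands in for the 46-element Tellus data, and the numbers are invented:

```python
# Hedged sketch: centered log-ratio (clr) transform followed by PCA, used to
# avoid closure effects in compositional geochemical data.
import numpy as np
from sklearn.decomposition import PCA

X = np.array([[52.1, 14.3, 8.2, 25.4],      # rows: soil samples
              [48.7, 16.1, 9.0, 26.2],      # columns: element/oxide proportions (toy values)
              [55.3, 12.9, 7.5, 24.3]])
X = X / X.sum(axis=1, keepdims=True)        # close the composition to a constant sum

clr = np.log(X) - np.log(X).mean(axis=1, keepdims=True)   # centered log-ratio

pca = PCA()
scores = pca.fit_transform(clr)
print(pca.explained_variance_ratio_.cumsum())  # e.g. check how many PCs account for ~72% of the variance
```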
Abstract:
Thesis (Ph.D.)--University of Washington, 2016-08
Abstract:
In recent decades the public sector has come under pressure to improve its performance, and Information Technology (IT) has been an increasingly important tool for reaching that goal. It has therefore become important for public organizations, particularly higher education institutions, to determine which factors influence the acceptance and use of technology, since these affect the success of its implementation and the desired organizational results. The Technology Acceptance Model (TAM), built around the constructs of perceived usefulness and perceived ease of use, was used as the basis for this study. However, because integrated management systems are complex to implement, organizational factors were added to the model to better explain the acceptance of such systems. Five constructs related to critical success factors in implementing ERP systems were therefore added to TAM: top management support, communication, training, cooperation, and technological complexity (Bueno and Salmerón, 2008). This leads to the following research problem: what factors influence the acceptance and use of the SIE academic module at the Federal University of Pará (UFPA), from the perspective of its teacher and technician users? The purpose of this study was to identify the influence of organizational factors and behavioral antecedents on the behavioral intention to use the SIE academic module at UFPA, from the perspective of teacher and technical users. This is applied, exploratory, descriptive and quantitative research carried out as a survey; data were collected through a structured questionnaire applied to a sample of 229 teachers and 30 technical and administrative staff. Data analysis was carried out through descriptive statistics and structural equation modeling using the partial least squares (PLS) technique. The measurement model was assessed first, verifying reliability and convergent and discriminant validity for all indicators and constructs. The structural model was then analyzed using bootstrap resampling. In assessing statistical significance, all hypotheses were supported. The coefficient of determination (R²) was high or moderate for five of the six endogenous variables, and the model explains 47.3% of the variation in behavioral intention. Among the antecedents of behavioral intention (BI) analyzed in this study, perceived usefulness is the variable with the greatest effect on behavioral intention, followed by perceived ease of use (PEU) and attitude (AT). Among the organizational aspects (critical success factors) studied, technological complexity (TC) and training (ERT) had the greatest effect on behavioral intention to use, although these effects were lower than those produced by the behavioral factors (originating from TAM). Top management support (TMS) showed the smallest effect of all variables on intention to use (BI), followed by communication (COM) and cooperation (CO), which also exert a low effect on behavioral intention. As in other studies, the TAM constructs proved adequate for the present research. Thus, the study contributes evidence that the Technology Acceptance Model can be applied to predict the acceptance of integrated management systems, even in the public sector. Keywords: Technology
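The significance assessment above relies on bootstrap resampling of the PLS estimates. The sketch below illustrates only that resampling idea: scikit-learn's PLSRegression on synthetic indicator scores is a stand-in for the PLS-SEM path model actually used, and the predictor names in the comments are assumptions:

```python
# Hedged sketch: bootstrap standard errors for PLS coefficients on synthetic data.
import numpy as np
from sklearn.cross_decomposition import PLSRegression

rng = np.random.default_rng(1)
n = 259                                    # 229 teachers + 30 technicians, as in the abstract
X = rng.normal(size=(n, 5))                # stand-ins for PU, PEU, AT, TC, ERT scores (assumed)
y = X @ np.array([0.5, 0.3, 0.2, 0.1, 0.05]) + rng.normal(scale=1.0, size=n)   # behavioral intention

def pls_coefs(X, y):
    return PLSRegression(n_components=2).fit(X, y).coef_.ravel()

boot = []
for _ in range(500):                       # resample records with replacement
    idx = rng.integers(0, n, n)
    boot.append(pls_coefs(X[idx], y[idx]))
se = np.array(boot).std(axis=0)
print(np.round(pls_coefs(X, y) / se, 2))   # rough bootstrap t-statistics per predictor
```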
Abstract:
Given the growing internationalization of business, a good command of English is, most of the time, important for the development of technical (specific) competences. It is thus critical that professionals use accurate terminology to lay the grounds for successful communication. Furthermore, business communication is increasingly moving to ICT-mediated settings, and professionals have to be able to adjust promptly to these needs, resorting to trustworthy online information sources but also using technologies that better serve their business purposes. In this scenario, the main objective of this study is to find evidence as to the utility of concept mapping as a teaching and learning strategy for the appropriation of business English terminology, enabling students to use English more efficiently as a language of communication in a business context. This study was based on a case study methodology, mainly of an exploratory nature. Participants were students (n = 30) enrolled in the subject English Applied to Management II at Águeda School of Technology and Management – University of Aveiro (2013/14 edition). They were asked to create and peer review two concept maps (cmaps), one individually and another in pairs. The data gathered were treated and analysed using qualitative (content analysis) and quantitative (descriptive statistical analysis) techniques. The results of the data analysis reveal that the use of a collaborative concept mapping tool promotes the development not only of linguistic competences in the use of business terminology, but also of communication and collaboration competences. It was also a very important motivational element in the students' engagement with the subject content.
Abstract:
The graph Laplacian operator is widely studied in spectral graph theory, largely due to its importance in modern data analysis. Recently, the Fourier transform and other time-frequency operators have been defined on graphs using Laplacian eigenvalues and eigenvectors. We extend these results and prove that the translation operator to the i-th node is invertible if and only if all eigenvectors are nonzero on the i-th node. Because of this dependency on the support of eigenvectors, we study the characteristic set of Laplacian eigenvectors. We prove that the Fiedler vector of a planar graph cannot vanish on large neighborhoods and then explicitly construct a family of non-planar graphs that do exhibit this property. We then prove original results in modern analysis on graphs. We extend results on spectral graph wavelets to create vertex-dynamic spectral graph wavelets whose support depends on both scale and translation parameters. We prove that Spielman’s Twice-Ramanujan graph sparsifying algorithm cannot outperform his conjectured optimal sparsification constant. Finally, we present numerical results on graph conditioning, in which edges of a graph are rescaled to best approximate the complete graph and reduce average commute time.
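A small numerical illustration of the stated invertibility criterion, i.e. that translation to node i is invertible iff no Laplacian eigenvector vanishes at node i; the path graph and the zero tolerance are arbitrary choices, not from the dissertation:

```python
# Hedged sketch: check, node by node, whether any Laplacian eigenvector vanishes there.
import numpy as np
import networkx as nx

G = nx.path_graph(6)
L = nx.laplacian_matrix(G).toarray().astype(float)
eigvals, eigvecs = np.linalg.eigh(L)            # columns of eigvecs are eigenvectors

tol = 1e-10
for i in G.nodes:
    invertible = np.all(np.abs(eigvecs[i, :]) > tol)
    print(f"node {i}: translation invertible (per the criterion)? {invertible}")
```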
Abstract:
Undoubtedly, statistics has become one of the most important subjects in the modern world, where its applications are ubiquitous. The importance of statistics is not limited to statisticians, but also impacts upon non-statisticians who have to use statistics within their own disciplines. Several studies have indicated that most academic departments around the world have realized the importance of statistics to non-specialist students. Therefore, the number of students enrolled in statistics courses has vastly increased, coming from a variety of disciplines. Consequently, research within the scope of statistics education has been able to develop throughout the last few years. One important issue is how statistics is best taught to, and learned by, non-specialist students. This issue is controlled by several factors that affect the learning and teaching of statistics to non-specialist students, such as the use of technology, the role of the English language (especially for those whose first language is not English), the effectiveness of statistics teachers and their approach towards teaching statistics courses, students’ motivation to learn statistics and the relevance of statistics courses to the main subjects of non-specialist students. Several studies, focused on aspects of learning and teaching statistics, have been conducted in different countries around the world, particularly in Western countries. Conversely, the situation in Arab countries, especially in Saudi Arabia, is different; here, there is very little research in this scope, and what there is does not meet those countries’ needs for developing the learning and teaching of statistics to non-specialist students. This research was instituted in order to develop the field of statistics education. The purpose of this mixed methods study was to generate new insights into this subject by investigating how statistics courses are currently taught to non-specialist students in Saudi universities. Hence, this study will contribute towards filling the knowledge gap that exists in Saudi Arabia. This study used multiple data collection approaches, including questionnaire surveys of 1053 non-specialist students who had completed at least one statistics course in different colleges of the universities in Saudi Arabia. These surveys were followed up with qualitative data collected via semi-structured interviews with 16 teachers of statistics from colleges within all six universities where statistics is taught to non-specialist students in Saudi Arabia’s Eastern Region. The questionnaire data included several types, so different analysis techniques were used. Descriptive statistics were used to identify the demographic characteristics of the participants. The chi-square test was used to determine associations between variables. Based on the main issues raised in the literature review, the questions (item scales) were grouped into five key groups: 1) Effectiveness of Teachers; 2) English Language; 3) Relevance of Course; 4) Student Engagement; 5) Using Technology. Exploratory data analysis was used to explore these issues in more detail. Furthermore, given the clustering in the data (students within departments, within colleges, within universities), multilevel generalized linear models for dichotomous outcomes were used to clarify the effects of clustering at those levels.
Factor analysis was conducted, confirming the dimension reduction of the variables (item scales). The data from teachers’ interviews were analysed on an individual basis. The responses were assigned to one of eight themes that emerged from the data: 1) the lack of students’ motivation to learn statistics; 2) students' participation; 3) students’ assessment; 4) the effective use of technology; 5) the level of previous mathematical and statistical skills of non-specialist students; 6) the English language ability of non-specialist students; 7) the need for extra time for teaching and learning statistics; and 8) the role of administrators. All the data from students and teachers indicated that the situation of learning and teaching statistics to non-specialist students in Saudi universities needs to be improved in order to meet the needs of those students. The findings suggested a weakness in the use of statistical software applications in these courses: there is a lack of application of technology, such as statistical software programs, that would allow non-specialist students to consolidate their knowledge. The results also indicated that the English language is considered one of the main challenges in learning and teaching statistics, particularly in institutions where English is not used as the main language. Moreover, the weakness of students’ mathematical skills is considered another major challenge. Additionally, the results indicated that there is a need to tailor statistics courses to the needs of non-specialist students based on their main subjects. The findings indicate that statistics teachers need to choose appropriate methods when teaching statistics courses.
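A minimal sketch of the chi-square test of association mentioned above, applied to an invented 2×2 contingency table; the counts only echo the overall sample size of 1053 and are not the study's data:

```python
# Hedged sketch: chi-square test of association on a made-up contingency table.
import numpy as np
from scipy.stats import chi2_contingency

table = np.array([[120,  85],    # rows: used statistical software / did not (assumed)
                  [310, 538]])   # columns: e.g. science vs. humanities colleges (assumed)
chi2, p, dof, expected = chi2_contingency(table)
print(f"chi2 = {chi2:.2f}, dof = {dof}, p = {p:.4f}")
```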
Abstract:
Background: Understanding transcriptional regulation by genome-wide microarray studies can contribute to unravel complex relationships between genes. Attempts to standardize the annotation of microarray data include the Minimum Information About a Microarray Experiment (MIAME) recommendations, the MAGE-ML format for data interchange, and the use of controlled vocabularies or ontologies. The existing software systems for microarray data analysis implement the mentioned standards only partially and are often hard to use and extend. Integration of genomic annotation data and other sources of external knowledge using open standards is therefore a key requirement for future integrated analysis systems. Results: The EMMA 2 software has been designed to resolve shortcomings with respect to full MAGE-ML and ontology support and makes use of modern data integration techniques. We present a software system that features comprehensive data analysis functions for spotted arrays, and for the most common synthesized oligo arrays such as Agilent, Affymetrix and NimbleGen. The system is based on the full MAGE object model. Analysis functionality is based on R and Bioconductor packages and can make use of a compute cluster for distributed services. Conclusion: Our model-driven approach for automatically implementing a full MAGE object model provides high flexibility and compatibility. Data integration via SOAP-based web-services is advantageous in a distributed client-server environment as the collaborative analysis of microarray data is gaining more and more relevance in international research consortia. The adequacy of the EMMA 2 software design and implementation has been proven by its application in many distributed functional genomics projects. Its scalability makes the current architecture suited for extensions towards future transcriptomics methods based on high-throughput sequencing approaches which have much higher computational requirements than microarrays.
Abstract:
Sequences of timestamped events are currently being generated across nearly every domain of data analytics, from e-commerce web logging to the electronic health records used by doctors and medical researchers. Every day, this data type is reviewed by humans who apply statistical tests, hoping to learn everything they can about how these processes work, why they break, and how they can be improved upon. To further uncover how these processes work the way they do, researchers often compare two groups, or cohorts, of event sequences to find the differences and similarities between outcomes and processes. With temporal event sequence data, this task is complex because of the variety of ways single events and sequences of events can differ between the two cohorts of records: the structure of the event sequences (e.g., event order, co-occurring events, or frequencies of events), the attributes of the events and records (e.g., gender of a patient), or metrics about the timestamps themselves (e.g., duration of an event). Running statistical tests to cover all these cases and determining which results are significant becomes cumbersome. Current visual analytics tools for comparing groups of event sequences emphasize a purely statistical or purely visual approach for comparison. Visual analytics tools leverage humans' ability to easily see patterns and anomalies that they were not expecting, but are limited by uncertainty in findings. Statistical tools emphasize finding significant differences in the data, but often require researchers to have a concrete question in mind and do not facilitate more general exploration of the data. Combining visual analytics tools with statistical methods leverages the benefits of both approaches for quicker and easier insight discovery. Integrating statistics into a visualization tool presents many challenges on the frontend (e.g., displaying the results of many different metrics concisely) and in the backend (e.g., scalability challenges with running various metrics on multi-dimensional data at once). I begin by exploring the problem of comparing cohorts of event sequences and understanding the questions that analysts commonly ask in this task. From there, I demonstrate that combining automated statistics with an interactive user interface amplifies the benefits of both types of tools, thereby enabling analysts to conduct quicker and easier data exploration, hypothesis generation, and insight discovery. The direct contributions of this dissertation are: (1) a taxonomy of metrics for comparing cohorts of temporal event sequences, (2) a statistical framework for exploratory data analysis with a method I refer to as high-volume hypothesis testing (HVHT), (3) a family of visualizations and guidelines for interaction techniques that are useful for understanding and parsing the results, and (4) a user study, five long-term case studies, and five short-term case studies which demonstrate the utility and impact of these methods in various domains: four in the medical domain, one in web log analysis, two in education, and one each in social networks, sports analytics, and security. My dissertation contributes an understanding of how cohorts of temporal event sequences are commonly compared and the difficulties associated with applying and parsing the results of these metrics. It also contributes a set of visualizations, algorithms, and design guidelines for balancing automated statistics with user-driven analysis to guide users to significant, distinguishing features between cohorts.
This work opens avenues for future research in comparing two or more groups of temporal event sequences, opening traditional machine learning and data mining techniques to user interaction, and extending the principles found in this dissertation to data types beyond temporal event sequences.
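A minimal sketch of the high-volume hypothesis testing idea: one metric (per-record event counts) compared between two cohorts, one test per event type, with false-discovery-rate control. The event names, the choice of test and the data are illustrative assumptions, not the dissertation's actual HVHT metrics:

```python
# Hedged sketch: many per-event-type tests followed by Benjamini-Hochberg correction.
import numpy as np
from scipy.stats import mannwhitneyu
from statsmodels.stats.multitest import multipletests

rng = np.random.default_rng(2)
event_types = ["admit", "lab", "med", "discharge"]
cohort_a = {e: rng.poisson(3 + (e == "med"), 50) for e in event_types}   # counts per record
cohort_b = {e: rng.poisson(3, 60) for e in event_types}

pvals = [mannwhitneyu(cohort_a[e], cohort_b[e]).pvalue for e in event_types]
reject, p_adj, _, _ = multipletests(pvals, alpha=0.05, method="fdr_bh")
for e, p, r in zip(event_types, p_adj, reject):
    print(f"{e}: adjusted p = {p:.3f}, significant = {r}")
```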
Abstract:
Animal welfare issues have received much attention, not only to meet the requirements of farmed animals but also to address ethical and cultural public concerns. Daily collected information, as well as the systematic follow-up of production stages, produces important statistical data for production assessment and control, as well as for identifying improvement opportunities. In this scenario, this research study analyzed behavioral, production, and environmental data using principal component multivariate analysis, correlating observed behaviors, recorded using video cameras and electronic identification, with performance parameters of female broiler breeders. The aim was to start building a system to support decision-making in broiler breeder housing, based on bird behavioral parameters. Birds were housed in an environmental chamber, with three pens under different controlled environments. Bird sensitivity to environmental conditions was indicated by their behaviors, stressing the importance of behavioral observations for modern poultry management. A strong association was found between performance parameters and behavior at the nest, suggesting that this behavior may be used to predict productivity. The behaviors of ruffling feathers, opening wings, preening, and being at the drinker were negatively correlated with environmental temperature, suggesting that an increase in the frequency of these behaviors indicates improved thermal welfare.
Abstract:
Master's dissertation in Business Management (Gestão Empresarial), Faculdade de Economia, Universidade do Algarve, 2015
Abstract:
Health research and the use of its results have served as a basis for improving the quality of care, requiring from health professionals knowledge of the specific area in which they work and knowledge of research methodology, including observation techniques and techniques for data collection and analysis, so that they can more easily be competent readers of research results. Health professionals are privileged observers of human responses to health and disease and can contribute to the development and well-being of individuals, often in situations of great vulnerability. In child health and pediatrics the focus is on family-centred care, favouring the harmonious development of the child and young person and valuing measurable health outcomes that make it possible to determine the effectiveness of interventions and the quality of health and of life. In the pediatric context we highlight evidence-based practice, the importance given to research and to the application of research results in clinical practice, and the development of standardized measurement instruments, namely assessment scales in wide clinical use, which facilitate the appraisal and evaluation of the development and health of children and young people and result in health gains. Systematic observation of neonatal and pediatric populations with assessment scales has been increasing, allowing a more balanced assessment of children and an observation grounded in theory and in research results. Some of these aspects underpinned this work, which addresses three fundamental objectives. To address the first objective, “To identify in the scientific literature the statistical tests most frequently used by researchers in child health and pediatrics when using assessment scales”, a systematic literature review was carried out, whose aim was to analyse scientific articles whose data collection instruments were assessment scales in the area of child and adolescent health built with ordinal variables, and to identify the statistical tests applied to these variables. The exploratory analysis of the articles showed that researchers use different instruments with different ordinal measurement formats (with 3, 4, 5, 7 or 10 points) and apply parametric tests, non-parametric tests, or both simultaneously to this type of variable, whatever the sample size. The description of the methodology does not always make clear whether the assumptions of the tests are met. The articles consulted do not always report the frequency distribution of the variables (symmetry/asymmetry) or the magnitude of the correlations between items. This reading of the literature supported the writing of two articles, one a systematic literature review and the other a theoretical reflection. Although some answers were found to the doubts faced by researchers and professionals who work with these instruments, there is a need for simulation studies that confirm some real situations and some existing theory, and that address other aspects within which real scenarios can be framed, so as to ease decision-making for researchers and clinicians who use assessment scales.
To address the second objective, “To compare the performance, in terms of power and Type I error probability, of the 4 parametric MANOVA statistics with 2 non-parametric MANOVA statistics when using randomly generated correlated ordinal variables”, we developed a simulation study, using the Monte Carlo method, carried out in the R software. The design of the simulation study included a vector of 3 dependent variables, one independent variable (a factor with three groups), assessment scales with measurement formats of 3, 4, 5 and 7 points, different marginal probabilities (p1 for a symmetric distribution, p2 for a positively skewed distribution, p3 for a negatively skewed distribution and p4 for a uniform distribution) in each of the three groups, correlations of low, medium and high magnitude (r = 0.10, r = 0.40 and r = 0.70, respectively), and six sample sizes (n = 30, 60, 90, 120, 240, 300). The analysis of the results showed that Roy's largest root was the statistic with the highest estimates of Type I error probability and of test power. Test power behaves differently depending on the frequency distribution of the item responses, the magnitude of the correlations between items, the sample size and the measurement format of the scale. Based on the frequency distribution, we considered three distinct situations: in the first (with marginal probabilities p1,p1,p4 and p4,p4,p1), the power estimates were very low in the different scenarios; in the second (with marginal probabilities p2,p3,p4; p1,p2,p3 and p2,p2,p3), power is high in samples with 60 or more observations for scales with 3, 4 and 5 points, and somewhat lower for scales with 7 points, though of the same magnitude in samples of 120 observations, whatever the scenario; in the third situation (with marginal probabilities p1,p1,p2; p1,p2,p4; p2,p2,p1; p4,p4,p2 and p2,p2,p4), the stronger the correlations between items and the larger the number of scale points, and the smaller the sample size, the lower the power of the tests, with Wilks' lambda applied to the ranks being more powerful than all the other MANOVA statistics, with values immediately below Roy's largest root. Nevertheless, the power of the parametric and non-parametric tests is similar in samples with more than 90 observations (with correlations of low and medium magnitude between the dependent variables) for scales with 3, 4 and 5 points, and with more than 240 observations, for low-intensity correlations, for scales with 7 points. In the simulation study, and based on the frequency distribution, we concluded that in the first simulation situation, across the different scenarios, power is low because MANOVA does not detect differences between groups owing to their similarity. In the second simulation situation, across the different scenarios, power is high in all scenarios with sample sizes above 60 observations, so parametric tests can be applied.
In the third simulation situation, and across the different scenarios, the smaller the sample size and the stronger the correlations and the larger the number of scale points, the lower the power of the tests, with the highest power obtained by Wilks' test applied to the ranks, followed by Pillai's trace applied to the ranks. Nevertheless, the power of the parametric and non-parametric tests is similar in the larger samples with correlations of low and medium magnitude. To address the third objective, “To frame the results of applying parametric MANOVA and non-parametric MANOVA to real data from assessment scales with measurement formats of 3, 4, 5 and 7 points within the results of the statistical simulation study”, we used real data arising from the observation of newborns with the Early Feeding Skills (EFS) oral feeding assessment scale, the assessment of skin lesion risk with the Neonatal Skin Risk Assessment Scale (NSRAS), and the assessment of functional independence in children and young people with spina bifida with the Functional Independence Measure (FIM). To analyse these scales, 4 practical applications were carried out that fitted the scenarios of the simulation study. Age, weight and level of spinal cord lesion were the independent variables chosen to select the groups, with the newborns grouped into “gestational age classes” and “weight classes”, and the children and young people with spina bifida into “age classes” and “levels of spinal cord lesion”. The results with real data fitted well within the simulation study.
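For readers who want to reproduce the flavour of one simulation cell, here is a hedged sketch in Python rather than the R used in the study: correlated 5-point ordinal responses in three groups, statsmodels' parametric MANOVA statistics, and the same statistics on rank-transformed responses. The marginal distribution, correlation and sample size are one illustrative scenario, not the study's full design:

```python
# Hedged sketch: one Monte Carlo cell comparing MANOVA on raw ordinal scores
# with MANOVA on rank-transformed scores (a rank-based nonparametric analogue).
import numpy as np
import pandas as pd
from scipy.stats import norm, rankdata
from statsmodels.multivariate.manova import MANOVA

rng = np.random.default_rng(3)
n_per_group, r, n_points = 30, 0.40, 5
cov = np.full((3, 3), r) + np.eye(3) * (1 - r)          # equicorrelated dependent variables
cuts = norm.ppf(np.linspace(0, 1, n_points + 1)[1:-1])  # equal marginal probabilities (illustrative)

rows = []
for g, shift in enumerate([0.0, 0.0, 0.0]):             # null case (Type I error); use shifts > 0 for power
    z = rng.multivariate_normal([shift] * 3, cov, n_per_group)
    y = np.digitize(z, cuts) + 1                         # discretize to a 1..5 ordinal scale
    rows += [{"group": g, "y1": a, "y2": b, "y3": c} for a, b, c in y]
df = pd.DataFrame(rows)

def manova_pvals(data):
    res = MANOVA.from_formula("y1 + y2 + y3 ~ C(group)", data=data).mv_test()
    return res.results["C(group)"]["stat"]["Pr > F"]     # Wilks, Pillai, Hotelling-Lawley, Roy

print("raw ordinal scores:\n", manova_pvals(df))
df_rank = df.assign(**{c: rankdata(df[c]) for c in ["y1", "y2", "y3"]})
print("rank-transformed:\n", manova_pvals(df_rank))
```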