20 results for Statistical testing
at Instituto Politécnico do Porto, Portugal
Abstract:
This paper addresses the challenging task of computing multiple roots of a system of nonlinear equations. A repulsion algorithm that invokes the Nelder-Mead (N-M) local search method and uses a penalty-type merit function based on the error function, known as 'erf', is presented. In the N-M algorithm context, different strategies are proposed to enhance the quality of the solutions and improve the overall efficiency. The main goal of this paper is to use a two-level factorial design of experiments to analyze the statistical significance of the observed differences in selected performance criteria produced when testing different strategies in the N-M based repulsion algorithm.
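The kind of two-level factorial analysis described above can be sketched as follows. This is a minimal illustration with two hypothetical strategy factors and simulated performance data, not the paper's factors or results:

```python
# Minimal sketch of a two-level (2^2) factorial analysis of a solver's
# performance metric. Factors, response and data are hypothetical.
import numpy as np

# Coded factor levels (-1/+1) for two assumed N-M strategy options,
# 4 replicates of each of the 4 factor combinations.
A = np.array([-1, +1, -1, +1] * 4)
B = np.array([-1, -1, +1, +1] * 4)
rng = np.random.default_rng(0)
# Simulated response: factor A shifts the metric, B has no effect.
y = 10 + 2.0 * A + 0.0 * B + rng.normal(0, 0.5, A.size)

def effect(factor, response):
    """Main effect: average response at +1 minus average at -1."""
    return response[factor == 1].mean() - response[factor == -1].mean()

eff_A = effect(A, y)          # should be near 2 * 2.0 = 4
eff_B = effect(B, y)          # should be near 0
eff_AB = effect(A * B, y)     # interaction effect, should be near 0
print(f"A: {eff_A:.2f}, B: {eff_B:.2f}, AB: {eff_AB:.2f}")
```

Effects much larger than the replicate noise would then be flagged as statistically significant (e.g. via ANOVA on the factorial model).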
Abstract:
Master's degree in Electrical and Computer Engineering
Abstract:
GOAL: The manufacturing and distribution of strips of instant thin-layer chromatography with silica gel (ITLC-SG) (reference method) is currently discontinued, so there is a need for an alternative method for the determination of the radiochemical purity (RCP) of 99mTc-tetrofosmin. This study aims to compare five alternative methods proposed by the producer to determine the RCP of 99mTc-tetrofosmin. METHODS: Nineteen vials of tetrofosmin were radiolabelled with 99mTc and the RCP percentages were determined. Five different methods were compared with the standard RCP testing method (ITLC-SG, 2x20 cm): Whatman 3MM (1x10 cm) with acetone and dichloromethane (method 1); Whatman 3MM (1x10 cm) with ethyl acetate (method 2); aluminium oxide-coated plastic thin-layer chromatography (TLC) plate (1x10 cm) with ethanol (method 3); Whatman 3MM (2x20 cm) with acetone and dichloromethane (method 4); solid-phase extraction with a C18 cartridge (method 5). RESULTS: The average RCP values were 95.30% ± 1.28% (method 1), 93.95% ± 0.61% (method 2), 96.85% ± 0.93% (method 3), 92.94% ± 0.99% (method 4) and 96.25% ± 2.57% (method 5) (n=12 each), and 93.15% ± 1.13% for the standard method (n=19). There were statistically significant differences in the values obtained for methods 1 (P=0.001), 3 (P=0.000) and 5 (P=0.004), and no statistically significant differences in the values obtained for methods 2 (P=0.113) and 4 (P=0.327). CONCLUSION: From the results obtained, methods 2 and 4 showed the closest agreement with the standard method. Unlike method 4, method 2 is less time-consuming than the reference method and can overcome the problems associated with solvent toxicity. The remaining methods (1, 3 and 5) tended to overestimate the RCP value compared to the standard method.
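A method-versus-reference comparison of this kind can be illustrated with a two-sample test. The data below are merely simulated from the reported summary statistics (mean, SD, n), so the resulting p-value is illustrative only and not the study's:

```python
# Sketch: Welch's two-sample t-test comparing an alternative RCP method
# against the reference method, on data simulated from the reported
# summary statistics (illustrative, not the study's measurements).
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
reference = rng.normal(93.15, 1.13, 19)   # reference method, n=19
method2 = rng.normal(93.95, 0.61, 12)     # alternative method, n=12

t, p = stats.ttest_ind(method2, reference, equal_var=False)
print(f"t = {t:.2f}, p = {p:.3f}")
# If p > 0.05, we fail to reject equality of the two methods' means.
```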
Abstract:
The objective of this project is to analyse the wind-power potential of the urban built environment, considering the use of vertical-axis wind turbines for energy production in that context. This document aims to show that, although studies on vertical-axis turbines are still scarce compared with those on horizontal-axis turbines, this does not mean that they lack characteristics that, in certain scenarios, make them superior to horizontal-axis turbines. To analyse wind intensity in an urban built setting, the Instituto Superior de Engenharia do Porto (ISEP) was selected as the study site of this thesis, specifically buildings F and E. Building F was chosen because it is more easily accessible and because its north side, where the winds are strongest, could be reached. Building E, which already had an anemometer collecting data for the ISEP weather station, was also incorporated into the thesis and used in the illustrative geostatistical evaluation. After extensive data collection at these locations, several vertical-axis turbines were analysed in terms of their production profiles. An illustrative statistical and geostatistical analysis was then carried out to characterise the wind intensity in the area between buildings E and F. The document closes with a conclusion on the wind-power potential for electricity production in the urban built environment using vertical-axis wind turbines.
Abstract:
The mechanisms of speech production are complex and have been attracting attention from researchers in both the medical and computer vision fields. Within the speech production mechanism, the study of the articulators is a complex issue, since they have a high degree of freedom during this process, namely the tongue, which makes its control and observation difficult. In this work, the tongue's shape during the articulation of the oral vowels of European Portuguese is automatically characterized using statistical modeling on MR images. A point distribution model is built from a set of images collected during artificially sustained articulations of European Portuguese sounds, which can extract the main characteristics of the tongue's motion. The model built in this work allows a clearer understanding of the dynamic speech events involved in sustained articulations. The tongue shape model can also be useful for speech rehabilitation purposes, specifically to recognize compensatory movements of the articulators during speech production.
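A point distribution model of the kind mentioned above amounts to PCA over aligned landmark sets. The sketch below uses synthetic 2-D landmarks with a single built-in bending mode, purely for illustration; it is not the paper's data or pipeline:

```python
# Hedged sketch of a point distribution model (PDM): PCA over aligned
# landmark configurations. Landmarks are synthetic, for illustration only.
import numpy as np

rng = np.random.default_rng(5)
n_shapes, n_points = 30, 12

# Mean shape: a simple arc of 12 (x, y) landmarks.
t = np.linspace(0, np.pi, n_points)
mean_shape = np.column_stack([np.linspace(0, 1, n_points), np.sin(t)])
# One "bending" mode of variation applied with a random coefficient.
mode = np.column_stack([np.zeros(n_points), np.cos(t)])
shapes = np.stack([(mean_shape + rng.normal(0, 0.3) * mode).ravel()
                   for _ in range(n_shapes)])

# PCA: singular vectors of the centered data give the modes of variation.
X = shapes - shapes.mean(axis=0)
_, s, Vt = np.linalg.svd(X, full_matrices=False)
explained = s**2 / np.sum(s**2)
print(f"first mode explains {explained[0]:.0%} of the variance")
```

Because the synthetic data vary along one mode only, the first principal component captures essentially all of the shape variance; real tongue contours would spread variance over several modes.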
Abstract:
Background: Temporal lobe epilepsy (TLE) is a neurological disorder that directly affects cortical areas responsible for auditory processing. The resulting abnormalities can be assessed using event-related potentials (ERP), which have high temporal resolution. However, little is known about TLE in terms of dysfunction of early sensory memory encoding or possible correlations between EEGs, linguistic deficits, and seizures. Mismatch negativity (MMN) is an ERP component – elicited by introducing a deviant stimulus while the subject is attending to a repetitive behavioural task – which reflects pre-attentive sensory memory function as well as neuronal auditory discrimination and perceptual accuracy. Hypothesis: We propose an MMN protocol for future clinical application and research based on the hypothesis that children with TLE may have abnormal MMN for speech and non-speech stimuli. The MMN can be elicited with a passive auditory oddball paradigm, and the abnormalities might be associated with the location and frequency of epileptic seizures. Significance: The suggested protocol might contribute to a better understanding of the neuropsychophysiological basis of MMN. We suggest that in TLE the central representation of sound may be decreased for both speech and non-speech stimuli. Discussion: MMN arises as a differential response to speech and non-speech stimuli across electrode sites. TLE in childhood might be a good model for studying topographic and functional auditory processing and its neurodevelopment, pointing to MMN as a possible clinical tool for prognosis, evaluation, follow-up, and rehabilitation in TLE.
Abstract:
Introduction: The hearing of patients with head and neck cancer and brain tumours can be compromised by antineoplastic treatments. Cisplatin chemotherapy can cause conductive or sensorineural hearing loss, which may worsen when combined with radiotherapy (RT). The aim of this study was to analyse the relationship between combined therapy (cisplatin + RT) and RT alone and their adverse effects on hearing, taking into account the inclusion of ear structures in the RT treatment field. Methods: Ten patients undergoing combined therapy (CT group) and 11 undergoing RT alone (RT group) were followed. Audiological assessment was performed before the start (M1), at the end (M2) and one month after (M3) the treatments, and included audiological anamnesis, otoscopy and pure-tone audiometry. Results: In the CT group, 94.4% of patients showed a directly proportional relationship between the radiation dose to the cochlea and hearing loss. This relationship was observed in only 31% of patients in the RT group, with significant differences between groups (p < 0.001). Conclusions: A higher incidence of hearing loss was found in the CT group than in the RT group. Better RT treatment planning is suggested, reducing the dose to the cochlea in order to minimise irreversible sensorineural hearing loss, especially when both treatment modalities are used.
Abstract:
A novel enzymatic biosensor for carbamate pesticide detection was developed through the direct immobilization of Trametes versicolor laccase on a graphene-doped carbon paste electrode functionalized with Prussian blue films (LACC/PB/GPE). Graphene was prepared by graphite sonication-assisted exfoliation and characterized by transmission electron microscopy and X-ray photoelectron spectroscopy. The Prussian blue film electrodeposited onto the graphene-doped carbon paste electrode allowed a considerable reduction of the charge transfer resistance and of the capacitance of the device. The combined effects of pH, enzyme concentration and incubation time on the biosensor response were optimized using a 2³ full-factorial statistical design and response surface methodology. Based on the inhibition of laccase activity and using 4-aminophenol as redox mediator at pH 5.0, LACC/PB/GPE exhibited suitable characteristics in terms of sensitivity, intra- and inter-day repeatability (1.8–3.8% RSD), reproducibility (4.1 and 6.3% RSD), selectivity (13.2% bias at the highest interference:substrate ratios tested), accuracy and stability (ca. twenty days) for quantification of five carbamates widely applied on tomato and potato crops. The attained detection limits ranged between 5.2×10−9 mol L−1 (0.002 mg kg−1 w/w for ziram) and 1.0×10−7 mol L−1 (0.022 mg kg−1 w/w for carbofuran). Recovery values for the two tested spiking levels ranged from 90.2±0.1% (carbofuran) to 101.1±0.3% (ziram) for tomato samples and from 91.0±0.1% (formetanate) to 100.8±0.1% (ziram) for potato samples. The proposed methodology is appropriate for testing pesticide levels in food samples in order to comply with regulations and food inspections.
Abstract:
Modern real-time systems, with a more flexible and adaptive nature, demand approaches for timeliness evaluation based on probabilistic measures of meeting deadlines. In this context, simulation can emerge as an adequate solution to understand and analyze the timing behaviour of actual systems. However, care must be taken with the obtained outputs; otherwise the results may lack credibility. It is particularly important to consider that we are more interested in values from the tail of a probability distribution (near worst-case probabilities) than in deriving confidence on mean values. We approach this subject by considering the random nature of simulation output data. We start by discussing well-known approaches for estimating distributions from simulation output, and the confidence which can be attributed to their mean values. This is the basis for a discussion on the applicability of such approaches to derive confidence on the tail of distributions, where the worst case is expected to be.
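One standard, distribution-free way to attach confidence to a tail value of simulation output is via the binomial distribution of order statistics. The sketch below is our illustration of that idea, not the paper's method; the exponential "latency" data and the 99th-percentile target are assumptions:

```python
# Sketch (illustrative, not the paper's method): a distribution-free
# upper confidence bound on a tail quantile of simulation output,
# using the binomial distribution of order statistics.
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)
latencies = rng.exponential(scale=1.0, size=1000)  # simulated response times

p = 0.99                        # target: the 99th-percentile response time
x = np.sort(latencies)
# Smallest order-statistic index k whose value exceeds the true
# p-quantile with >= 95% probability: the 95th percentile of Binomial(n, p).
k = int(stats.binom.ppf(0.95, latencies.size, p))
upper_bound = x[min(k, latencies.size - 1)]
point_est = np.quantile(latencies, p)
print(f"point estimate {point_est:.2f}, 95% upper bound {upper_bound:.2f}")
```

Unlike a confidence interval on the mean, this bound targets exactly the near-worst-case region the text argues is of interest.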
Abstract:
A number of characteristics are boosting the eagerness to extend Ethernet to also cover factory-floor distributed real-time applications. Full-duplex links, non-blocking and priority-based switching, and bandwidth availability, just to mention a few, are characteristics upon which that eagerness is building. But will Ethernet technologies really manage to replace traditional Fieldbus networks? Ethernet technology, by itself, does not include features above the lower layers of the OSI communication model. In the past few years, a considerable amount of work has been devoted to the timing analysis of Ethernet-based technologies. However, the majority of those works are restricted to the analysis of subsets of the overall computing and communication system, thus without addressing timeliness at a holistic level. To this end, we are addressing a few inter-linked research topics with the purpose of setting a framework for the development of tools suitable to extract temporal properties of Commercial-Off-The-Shelf (COTS) Ethernet-based factory-floor distributed systems. This framework is being applied to a specific COTS technology, Ethernet/IP. In this paper, we reason about the modelling and simulation of Ethernet/IP-based systems, and about the use of statistical analysis techniques to provide usable results. Discrete event simulation models of a distributed system can be a powerful tool for the timeliness evaluation of the overall system, but particular care must be taken with the results provided by traditional statistical analysis techniques.
Abstract:
Submitted for the degree of Doctor at the Universidade de Vigo with international mention, Departamento de Informática
Abstract:
There is no single definition of a long-memory process. Such a process is generally defined as a series whose correlogram decays slowly or whose spectrum is infinite at frequency zero. It is also said that a series with this property is characterised by long-range dependence and by long non-periodic cycles, that this feature describes the correlation structure of a series at long lags, or that it is conventionally expressed in terms of a power-law decay of the autocovariance function. The growing interest of international research in this subject is justified by the search for a better understanding of the dynamic nature of the time series of financial asset prices. First, the lack of consistency among results calls for new studies and the use of several complementary methodologies. Second, confirming long-memory processes has relevant implications for (1) theoretical and econometric modelling (i.e., martingale price models and technical trading rules), (2) statistical tests of equilibrium and valuation models, (3) optimal consumption/saving and portfolio decisions, and (4) the measurement of efficiency and rationality. Third, empirical scientific questions remain about identifying the general theoretical market model best suited to modelling the diffusion of the series. Fourth, regulators and risk managers need to know whether there are persistent, and therefore inefficient, markets that may thus produce abnormal returns. The aim of this dissertation's research is twofold. On the one hand, it seeks to provide additional knowledge for the long-memory debate by examining the behaviour of the daily return series of the main EURONEXT stock indices.
On the other hand, it aims to contribute to refining the capital asset pricing model (CAPM) by considering an alternative risk measure capable of overcoming the constraints of the efficient market hypothesis (EMH) in the presence of financial series whose processes lack independent and identically distributed (i.i.d.) increments. The empirical study indicates the possibility of alternatively using treasury bonds (OTs) with long-term maturity in the calculation of market returns, since their behaviour in sovereign debt markets reflects investors' confidence in the financial condition of states and measures how investors assess the respective economies based on the performance of their assets in general. Although the price diffusion model defined by geometric Brownian motion (gBm) is claimed to provide a good fit for financial time series, its assumptions of normality, stationarity and independence of the residual innovations are contradicted by the empirical data analysed. Therefore, in the search for evidence of the long-memory property in the markets, rescaled-range analysis (R/S) and detrended fluctuation analysis (DFA) are used, under the fractional Brownian motion (fBm) approach, to estimate the Hurst exponent H for the full data series and to compute the "local" Hurst exponent H_t in moving windows. In addition, statistical hypothesis tests are performed using the rescaled-range test (R/S), the modified rescaled-range test (M-R/S) and the fractional differencing test (GPH). In terms of a single conclusion from all methods about the nature of dependence for the stock market in general, the empirical results are inconclusive. This means that the degree of long memory, and hence any classification, depends on each particular market.
Nevertheless, the mostly positive overall results support the presence of long memory, in the form of persistence, in the stock returns of Belgium, the Netherlands and Portugal. This suggests that these markets are more subject to predictability (the "Joseph effect"), but also to trends that can be unexpectedly interrupted by discontinuities (the "Noah effect"), and therefore tend to be riskier to trade. Although the evidence of fractal dynamics has weak statistical support, in line with most international studies, it refutes the random-walk hypothesis with i.i.d. increments, which underlies the weak form of the EMH. In view of this, contributions to refining the CAPM are proposed through a new fractal capital market line (FCML) and a new fractal security market line (FSML). The new proposal suggests that the risk element (for the market and for an asset) be given by the Hurst exponent H for long lags of stock returns. The exponent H measures the degree of long memory in stock indices, both when the return series follow an uncorrelated i.i.d. process, described by gBm (where H = 0.5, confirming the EMH and making the CAPM adequate), and when they follow a process with statistical dependence, described by fBm (where H differs from 0.5, rejecting the EMH and making the CAPM inadequate). The advantage of the FCML and the FSML is that the long-memory measure, defined by H, is the appropriate reference for translating risk in models applicable to data series that follow i.i.d. processes as well as processes with nonlinear dependence. These formulations thus include the EMH as a possible particular case.
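The rescaled-range estimation of the Hurst exponent used in the dissertation can be sketched as follows. This is a naive R/S implementation run on synthetic white noise (not the dissertation's code or data), shown only to illustrate that i.i.d. increments yield H near 0.5:

```python
# Illustrative sketch of rescaled-range (R/S) estimation of the Hurst
# exponent H. Naive estimator, no small-sample bias correction.
import numpy as np

def rs_hurst(series, min_chunk=8):
    """Estimate H as the slope of log(R/S) versus log(n)."""
    series = np.asarray(series, dtype=float)
    N = series.size
    sizes, rs_values = [], []
    n = min_chunk
    while n <= N // 2:
        rs = []
        for start in range(0, N - n + 1, n):
            chunk = series[start:start + n]
            dev = np.cumsum(chunk - chunk.mean())   # cumulative deviations
            R = dev.max() - dev.min()               # range
            S = chunk.std()                         # standard deviation
            if S > 0:
                rs.append(R / S)
        sizes.append(n)
        rs_values.append(np.mean(rs))
        n *= 2
    slope, _ = np.polyfit(np.log(sizes), np.log(rs_values), 1)
    return slope

rng = np.random.default_rng(7)
white = rng.normal(size=4096)   # i.i.d. increments: H should be near 0.5
h = rs_hurst(white)
print(f"H ≈ {h:.2f}")
```

Persistent series (H > 0.5) or anti-persistent series (H < 0.5) would shift this slope away from 0.5; in practice the naive estimator is biased for short windows, which is why the dissertation complements it with DFA and formal tests (M-R/S, GPH).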
Abstract:
The hidden-node problem has been shown to be a major source of Quality-of-Service (QoS) degradation in Wireless Sensor Networks (WSNs) due to factors such as the limited communication range of sensor nodes, link asymmetry and the characteristics of the physical environment. In wireless contention-based Medium Access Control protocols, if two nodes that are not visible to each other transmit to a third node that is visible to both, there will be a collision – usually called a hidden-node or blind collision. This problem greatly affects network throughput, energy-efficiency and message transfer delays, which might be particularly dramatic in large-scale WSNs. This technical report tackles the hidden-node problem in WSNs and proposes H-NAMe, a simple yet efficient distributed mechanism to overcome it. H-NAMe relies on a grouping strategy that splits each cluster of a WSN into disjoint groups of non-hidden nodes and then scales to multiple clusters via a cluster grouping strategy that guarantees no transmission interference between overlapping clusters. We also show that the H-NAMe mechanism can be easily applied to the IEEE 802.15.4/ZigBee protocols with only minor add-ons while ensuring backward compatibility with the standard specifications. We demonstrate the feasibility of H-NAMe via an experimental test-bed, showing that it increases network throughput and transmission success probability up to twice the values obtained without H-NAMe. We believe that the results in this technical report will be quite useful in efficiently enabling IEEE 802.15.4/ZigBee as a WSN protocol.
Abstract:
In this work, an experimental study was performed on the influence of plug-filling, loading rate and temperature on the tensile strength of single-strap (SS) and double-strap (DS) repairs on aluminium structures. Whilst the main purpose of this work was to evaluate the feasibility of plug-filling for the strength improvement of these repairs, a parallel study was carried out to assess the sensitivity of the adhesive to external features that can affect the repairs' performance, such as the rate of loading and the environmental temperature. The experimental programme included repairs with different values of overlap length (LO = 10, 20 and 30 mm), with and without plug-filling, whose results were interpreted in light of experimental evidence of the fracture modes and typical stress distributions for bonded repairs. The influence of the testing speed on the repairs' strength was also addressed (considering 0.5, 5 and 25 mm/min). Regarding temperature effects, tests were carried out at room temperature (≈23°C), 50°C and 80°C. This permitted a comparative evaluation of the adhesive tested below and above the glass transition temperature (Tg), established by the manufacturer as 67°C. The combined influence of these two parameters on the repairs' strength was also analysed. According to the results obtained in this work, design guidelines for repairing aluminium structures were proposed.
Abstract:
Beyond the classical statistical approaches (determination of basic statistics, regression analysis, ANOVA, etc.), a new set of applications of different statistical techniques has increasingly gained relevance in the analysis, processing and interpretation of data concerning the characteristics of forest soils. This can be seen in some recent publications in the context of Multivariate Statistics. These new methods require additional care that is not always taken or referred to in some approaches. In the particular case of geostatistical applications it is necessary, besides geo-referencing all the data acquisition, to collect the samples in regular grids and in sufficient quantity so that the variograms can reflect the spatial distribution of soil properties in a representative manner. As for the great majority of Multivariate Statistics techniques (Principal Component Analysis, Correspondence Analysis, Cluster Analysis, etc.), although in most cases they do not require the assumption of a normal distribution, they nevertheless need a proper and rigorous strategy for their utilization. In this work, some reflections on these methodologies are presented, in particular on the main constraints that often occur during the information-collecting process and on the various possibilities of linking these different techniques. Finally, illustrations of some particular cases of the application of these statistical methods are also presented.
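The variogram requirement mentioned above can be illustrated with a minimal empirical semivariogram computed from geo-referenced samples on a regular grid. The grid, the soil values (a pH-like variable with a spatial trend), and the lag bins are all hypothetical:

```python
# Minimal sketch (our illustration) of an empirical semivariogram from
# geo-referenced samples on a regular grid. Data are synthetic.
import numpy as np

def empirical_variogram(coords, values, bins):
    """Average 0.5*(z_i - z_j)^2 over point pairs grouped by distance."""
    d = np.linalg.norm(coords[:, None, :] - coords[None, :, :], axis=-1)
    g = 0.5 * (values[:, None] - values[None, :]) ** 2
    iu = np.triu_indices(len(values), k=1)      # each pair counted once
    d, g = d[iu], g[iu]
    idx = np.digitize(d, bins)                  # assign pairs to lag bins
    return np.array([g[idx == i].mean() for i in range(1, len(bins))])

# Hypothetical soil samples on a 10x10 grid: spatial trend plus noise.
rng = np.random.default_rng(3)
xx, yy = np.meshgrid(np.arange(10.0), np.arange(10.0))
coords = np.column_stack([xx.ravel(), yy.ravel()])
values = 0.3 * xx.ravel() + rng.normal(0, 0.2, 100)

# Lag bins centred on 1, 2, ..., 7 grid units.
gamma = empirical_variogram(coords, values, bins=np.arange(0.5, 8.5, 1.0))
print(np.round(gamma, 3))
```

With a spatial trend present, the semivariance grows with lag distance; a regular grid with enough samples per lag bin is what makes each bin's average representative, which is the point the text makes.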