953 results for statistical narrow band model
Abstract:
This paper presents a methodology for reconfiguring distribution networks in the presence of outages so as to select the reconfiguration with the lowest power losses. The methodology is based on statistical failure and repair data of the distribution power system components and uses fuzzy-probabilistic modelling of the component outage parameters. Fuzzy membership functions of the outage parameters are obtained from statistical records. A hybrid method combining fuzzy sets and Monte Carlo simulation, built on these fuzzy-probabilistic models, captures both the randomness and the fuzziness of the component outage parameters. Once the system states have been obtained by Monte Carlo simulation, a logic programming algorithm generates all possible reconfigurations for every system state. A distribution power flow is then run to evaluate line flows and bus voltages, to identify any overloading or voltage violation, and to select the feasible reconfiguration with the lowest power losses. To illustrate the application of the proposed methodology to a practical case, the paper includes a case study based on a real distribution network.
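A minimal sketch of the hybrid fuzzy/Monte Carlo idea described above, assuming triangular membership functions for outage probabilities built from hypothetical statistical records; the component names and numeric values are illustrative only, not taken from the paper.

```python
import random

def triangular_membership(records):
    """Build a triangular fuzzy number (low, mode, high) for a component's
    outage probability from its historical records (illustrative)."""
    return (min(records), sum(records) / len(records), max(records))

def sample_outage_probability(fuzzy_rate, rng):
    """Pick a crisp outage probability from the fuzzy number by sampling a
    random alpha-cut and a random point inside it (one simple choice)."""
    low, mode, high = fuzzy_rate
    alpha = rng.random()
    left = low + alpha * (mode - low)
    right = high - alpha * (high - mode)
    return rng.uniform(left, right)

def sample_system_state(fuzzy_rates, rng):
    """Monte Carlo draw of a system state: each component is up (True) or
    down (False) given its sampled outage probability."""
    state = {}
    for name, fuzzy_rate in fuzzy_rates.items():
        outage_prob = sample_outage_probability(fuzzy_rate, rng)
        state[name] = rng.random() >= outage_prob
    return state

if __name__ == "__main__":
    rng = random.Random(42)
    # Hypothetical per-component outage-probability records (per study period).
    records = {"line_1": [0.01, 0.02, 0.015], "line_2": [0.05, 0.04, 0.06]}
    fuzzy_rates = {k: triangular_membership(v) for k, v in records.items()}
    states = [sample_system_state(fuzzy_rates, rng) for _ in range(1000)]
    print(sum(not s["line_2"] for s in states), "sampled states with line_2 out")
```

Each sampled state would then be passed to the reconfiguration and power-flow stages described in the abstract.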
Fuzzy Monte Carlo mathematical model for load curtailment minimization in transmission power systems
Abstract:
This paper presents a methodology that is based on statistical failure and repair data of the transmission power system components and uses fuzzy-probabilistic modeling of the component outage parameters. The fuzzy membership functions of the outage parameters are developed from statistical records. The proposed hybrid method combining fuzzy sets and Monte Carlo simulation, built on these fuzzy-probabilistic models, captures both the randomness and the fuzziness of the component outage parameters. Once the system states have been obtained by Monte Carlo simulation, a network contingency analysis is performed to identify any overloading or voltage violation in the network. This is followed by a remedial action algorithm, based on optimal power flow, that reschedules generation to alleviate constraint violations and, at the same time, avoids any load curtailment if possible or otherwise minimizes the total load curtailment for the states identified by the contingency analysis. To illustrate the application of the proposed methodology to a practical case, the paper includes a case study on the IEEE 24-bus Reliability Test System (RTS-1996).
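The remedial action stage can be pictured as a small linear program. Below is a toy DC-flow sketch, assuming a hypothetical two-bus network with illustrative limits (not the paper's OPF formulation), in which generation is rescheduled first and load is curtailed only as a last resort.

```python
import numpy as np
from scipy.optimize import linprog

# Toy two-bus system: a generator at bus 1 feeds a load at bus 2 over one line.
# Decision variables x = [generation g, load curtailment c].
demand = 120.0    # MW at bus 2
g_max = 150.0     # MW generator capacity
f_max = 100.0     # MW line flow limit (DC approximation: flow == g)

cost = np.array([1.0, 1000.0])          # curtailment is heavily penalised
A_eq = np.array([[1.0, 1.0]])           # g + c = demand (power balance)
b_eq = np.array([demand])
A_ub = np.array([[1.0, 0.0]])           # g <= f_max (line limit)
b_ub = np.array([f_max])
bounds = [(0.0, g_max), (0.0, demand)]

res = linprog(cost, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq,
              bounds=bounds, method="highs")
g, c = res.x
print(f"dispatch {g:.1f} MW, curtail {c:.1f} MW")  # expect 100.0 and 20.0
```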
Abstract:
The analysis of the Higgs boson data by the ATLAS and CMS Collaborations appears to exhibit an excess of h -> gamma gamma events above the Standard Model (SM) expectations, whereas no significant excess is observed in h -> ZZ* -> four-lepton events, albeit with large statistical uncertainty due to the small data sample. These results (assuming they persist with further data) could be explained by a pair of nearly mass-degenerate scalars, one of which is an SM-like Higgs boson and the other a scalar with suppressed couplings to W+W- and ZZ. In the two-Higgs-doublet model, the observed gamma gamma and ZZ* -> four-lepton data can be reproduced by approximately degenerate CP-even (h) and CP-odd (A) Higgs bosons for values of sin(beta - alpha) near unity and 0.70 less than or similar to tan beta less than or similar to 1. An enhanced gamma gamma signal can also arise in cases where m(h) similar or equal to m(H), m(H) similar or equal to m(A), or m(h) similar or equal to m(H) similar or equal to m(A). Since the ZZ* -> four-lepton signal derives primarily from an SM-like Higgs boson whereas the gamma gamma signal receives contributions from two (or more) nearly mass-degenerate states, one would expect slightly different invariant mass peaks in the ZZ* -> four-lepton and gamma gamma channels. The phenomenological consequences of such models can be tested with additional Higgs data that will be collected at the LHC in the near future. DOI: 10.1103/PhysRevD.87.055009.
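To make the counting explicit, the diphoton signal strength from two unresolved, nearly degenerate states can be written as the sum of their individual contributions, whereas the four-lepton strength is dominated by the SM-like state. This is the generic expression for overlapping resonances, not a formula quoted from the paper:

```latex
\mu_{\gamma\gamma} \simeq
  \frac{\sigma(pp \to h)\,\mathrm{BR}(h \to \gamma\gamma)
      + \sigma(pp \to A)\,\mathrm{BR}(A \to \gamma\gamma)}
       {\sigma(pp \to h_{\mathrm{SM}})\,\mathrm{BR}(h_{\mathrm{SM}} \to \gamma\gamma)},
\qquad
\mu_{ZZ^{*}} \simeq
  \frac{\sigma(pp \to h)\,\mathrm{BR}(h \to ZZ^{*})}
       {\sigma(pp \to h_{\mathrm{SM}})\,\mathrm{BR}(h_{\mathrm{SM}} \to ZZ^{*})}
```

Because the CP-odd A has suppressed tree-level couplings to ZZ, it adds to the gamma gamma numerator but not to the ZZ* one, which is how a degenerate pair can enhance gamma gamma while leaving ZZ* close to the SM expectation.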
Abstract:
This thesis presents the Fuzzy Monte Carlo Model for Transmission Power Systems Reliability-based studies (FMC-TRel) methodology, which is based on statistical failure and repair data of the transmission power system components and uses fuzzy-probabilistic modeling of the component outage parameters. The fuzzy membership functions of the outage parameters are developed from statistical records. The proposed hybrid method combining fuzzy sets and Monte Carlo simulation, built on these fuzzy-probabilistic models, captures both the randomness and the fuzziness of the component outage parameters. Once the system states have been obtained, a network contingency analysis is performed to identify any overloading or voltage violation in the network. This is followed by a remedial action algorithm, based on Optimal Power Flow, that reschedules generation to alleviate constraint violations and, at the same time, avoids any load curtailment if possible or otherwise minimizes the total load curtailment for the states identified by the contingency analysis. For the system states that cause load curtailment, an optimization approach is applied to reduce the probability of occurrence of these states while minimizing the cost of achieving that reduction. This methodology is of great importance for supporting the transmission system operator's decision making, namely in identifying critical components and in planning future investments in the transmission power system. A case study based on the IEEE 24-bus Reliability Test System (RTS-1996) is presented to illustrate in detail the application of the proposed methodology.
Abstract:
The mechanisms of speech production are complex and have been raising attention from researchers of both medical and computer vision fields. In the speech production mechanism, the articulator’s study is a complex issue, since they have a high level of freedom along this process, namely the tongue, which instigates a problem in its control and observation. In this work it is automatically characterized the tongues shape during the articulation of the oral vowels of Portuguese European by using statistical modeling on MR-images. A point distribution model is built from a set of images collected during artificially sustained articulations of Portuguese European sounds, which can extract the main characteristics of the motion of the tongue. The model built in this work allows under standing more clearly the dynamic speech events involved during sustained articulations. The tongue shape model built can also be useful for speech rehabilitation purposes, specifically to recognize the compensatory movements of the articulators during speech production.
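A point distribution model is essentially PCA on aligned landmark sets. The sketch below assumes the tongue-contour landmarks have already been extracted and aligned; the array shapes and number of modes are illustrative, not taken from the thesis.

```python
import numpy as np

def build_point_distribution_model(landmarks, n_modes=3):
    """landmarks: array (n_shapes, n_points, 2) of aligned contour points.
    Returns the mean shape and the first n_modes principal variation modes."""
    n_shapes = landmarks.shape[0]
    X = landmarks.reshape(n_shapes, -1)           # flatten each shape to a vector
    mean_shape = X.mean(axis=0)
    U, s, Vt = np.linalg.svd(X - mean_shape, full_matrices=False)
    modes = Vt[:n_modes]                          # principal modes of variation
    variances = (s[:n_modes] ** 2) / (n_shapes - 1)
    return mean_shape, modes, variances

def synthesize_shape(mean_shape, modes, variances, b):
    """Generate a new shape: x = mean + sum_i b_i * sqrt(var_i) * mode_i."""
    x = mean_shape + (np.asarray(b) * np.sqrt(variances)) @ modes
    return x.reshape(-1, 2)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    fake_contours = rng.normal(size=(20, 30, 2))  # 20 shapes, 30 landmarks each
    mean_shape, modes, variances = build_point_distribution_model(fake_contours)
    shape = synthesize_shape(mean_shape, modes, variances, b=[1.0, 0.0, 0.0])
    print(shape.shape)  # (30, 2)
```

Varying the coefficients b within a few standard deviations of each mode is the usual way such a model is used to explore plausible tongue shapes.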
Abstract:
A number of characteristics are driving the eagerness to extend Ethernet to factory-floor distributed real-time applications: full-duplex links, non-blocking and priority-based switching, and bandwidth availability, to mention just a few. But will Ethernet technologies really manage to replace traditional fieldbus networks? Ethernet technology, by itself, does not include features above the lower layers of the OSI communication model. In the past few years, a considerable amount of work has been devoted to the timing analysis of Ethernet-based technologies. However, the majority of those works are restricted to the analysis of subsets of the overall computing and communication system, and thus do not address timeliness at a holistic level. To this end, we are addressing a few interlinked research topics with the purpose of establishing a framework for developing tools that can extract temporal properties of Commercial-Off-The-Shelf (COTS) Ethernet-based factory-floor distributed systems. This framework is being applied to a specific COTS technology, Ethernet/IP. In this paper, we discuss the modelling and simulation of Ethernet/IP-based systems and the use of statistical analysis techniques to provide usable results. Discrete event simulation models of a distributed system can be a powerful tool for evaluating the timeliness of the overall system, but particular care must be taken with the results provided by traditional statistical analysis techniques.
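One reason for that care is that simulated end-to-end delays are usually autocorrelated, so a naive confidence interval is too narrow. A minimal sketch of the batch-means technique commonly used with discrete event simulation output follows; the delay trace and batch size are illustrative, not from the paper.

```python
import math
import random

def batch_means_ci(samples, n_batches=20, z=1.96):
    """Approximate 95% confidence interval for the mean of a correlated
    simulation output series using non-overlapping batch means."""
    batch_size = len(samples) // n_batches
    means = [sum(samples[i * batch_size:(i + 1) * batch_size]) / batch_size
             for i in range(n_batches)]
    grand_mean = sum(means) / n_batches
    var = sum((m - grand_mean) ** 2 for m in means) / (n_batches - 1)
    half_width = z * math.sqrt(var / n_batches)
    return grand_mean - half_width, grand_mean + half_width

if __name__ == "__main__":
    rng = random.Random(1)
    # Illustrative autocorrelated "end-to-end delay" trace (AR(1)-like, in ms).
    delays, d = [], 2.0
    for _ in range(10_000):
        d = 0.9 * d + rng.expovariate(1 / 0.2)
        delays.append(d)
    print(batch_means_ci(delays))
```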
Abstract:
There is no single definition of a long-memory process. Such a process is generally defined as a series with a slowly decaying correlogram or an infinite spectrum at zero frequency. It is also said that a series with this property is characterized by long-range dependence and by non-periodic long cycles, or that this feature describes the correlation structure of a series at long lags, or that it is conventionally expressed in terms of the power-law decay of the autocovariance function. The growing interest of international research in deepening this topic is justified by the search for a better understanding of the dynamic nature of financial asset price time series. First, the lack of consistency among results calls for new studies and the use of several complementary methodologies. Second, the confirmation of long-memory processes has relevant implications for (1) theoretical and econometric modelling (i.e., martingale price models and technical trading rules), (2) statistical tests of equilibrium and pricing models, (3) optimal consumption/saving and portfolio decisions, and (4) the measurement of efficiency and rationality. Third, empirical scientific questions remain regarding the identification of the general theoretical market model best suited to modelling the diffusion of the series. Fourth, regulators and risk managers need to know whether there are persistent, and therefore inefficient, markets that may consequently produce abnormal returns. The research objective of this dissertation is twofold. On the one hand, it aims to provide additional knowledge for the long-memory debate by examining the behaviour of the daily return series of the main EURONEXT stock indices. On the other hand, it aims to contribute to the improvement of the capital asset pricing model (CAPM) by considering an alternative risk measure capable of overcoming the constraints of the efficient market hypothesis (EMH) in the presence of financial series whose processes do not have independent and identically distributed (i.i.d.) increments. The empirical study indicates the possibility of using long-maturity treasury bonds (OTs) as an alternative in computing market returns, since their behaviour in sovereign debt markets reflects investors' confidence in the financial condition of the States and measures how investors assess the respective economies based on the performance of their assets in general. Although the price diffusion model defined by geometric Brownian motion (gBm) is claimed to provide a good fit to financial time series, its assumptions of normality, stationarity and independence of the residual innovations are contradicted by the empirical data analysed. Therefore, in the search for evidence of the long-memory property in markets, rescaled-range analysis (R/S) and detrended fluctuation analysis (DFA) are used, under the fractional Brownian motion (fBm) approach, to estimate the Hurst exponent H for the full data series and to compute the "local" Hurst exponent H_t in rolling windows. In addition, statistical hypothesis tests are carried out using the rescaled-range test (R/S), the modified rescaled-range test (M-R/S) and the fractional differencing test (GPH).
In terms of a single conclusion from all methods about the nature of dependence for the stock market in general, the empirical results are inconclusive. This means that the degree of long memory, and hence any classification, depends on each particular market. Nevertheless, the mostly positive overall results support the presence of long memory, in the form of persistence, in the stock returns of Belgium, the Netherlands and Portugal. This suggests that these markets are subject to greater predictability (the "Joseph effect"), but also to trends that can be unexpectedly interrupted by discontinuities (the "Noah effect"), and therefore tend to be riskier to trade. Although the evidence of fractal dynamics has weak statistical support, in line with most international studies it refutes the random walk hypothesis with i.i.d. increments, which is the basis of the weak form of the EMH. In view of this, contributions to the improvement of the CAPM are proposed through a new fractal capital market line (FCML) and a new fractal security market line (FSML). The new proposal suggests that the risk element (for the market and for an asset) be given by the Hurst exponent H for long-run lags of stock returns. The exponent H measures the degree of long memory in stock indices, both when the return series follow an uncorrelated i.i.d. process described by gBm (where H = 0.5, confirming the EMH and making the CAPM adequate) and when they follow a process with statistical dependence described by fBm (where H differs from 0.5, rejecting the EMH and making the CAPM inadequate). The advantage of the FCML and the FSML is that the long-memory measure, defined by H, is the appropriate reference for expressing risk in models that can be applied to data series following i.i.d. processes as well as processes with nonlinear dependence. These formulations thus include the EMH as a possible particular case.
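A minimal sketch of rescaled-range (R/S) estimation of the Hurst exponent H, assuming a plain array of daily returns; the window sizes and the synthetic series are illustrative only, not the dissertation's data.

```python
import numpy as np

def rescaled_range(returns):
    """R/S statistic of one window: range of the cumulative mean-adjusted
    series divided by its standard deviation."""
    x = np.asarray(returns, dtype=float)
    z = np.cumsum(x - x.mean())
    return (z.max() - z.min()) / x.std(ddof=1)

def hurst_rs(returns, window_sizes=(16, 32, 64, 128, 256)):
    """Estimate H as the slope of log(mean R/S) versus log(window size)."""
    x = np.asarray(returns, dtype=float)
    log_n, log_rs = [], []
    for n in window_sizes:
        chunks = [x[i:i + n] for i in range(0, len(x) - n + 1, n)]
        rs_values = [rescaled_range(c) for c in chunks]
        log_n.append(np.log(n))
        log_rs.append(np.log(np.mean(rs_values)))
    slope, _ = np.polyfit(log_n, log_rs, 1)
    return slope

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    iid_returns = rng.normal(0.0, 0.01, 5000)   # uncorrelated series
    print(round(hurst_rs(iid_returns), 2))      # expected to be close to 0.5
```

Values of H persistently above 0.5 in rolling windows are what the dissertation reads as persistence (the "Joseph effect").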
Abstract:
Dissertation presented at the Faculdade de Ciências e Tecnologia of the Universidade Nova de Lisboa to obtain the degree of Master in Environmental Engineering
Abstract:
Characteristics of tunable wavelength pi'n/pin filters based on a-SiC:H multilayered stacked cells are studied both experimentally and theoretically. Results show that the device combines the demultiplexing operation with simultaneous photodetection and self-amplification of the signal. An algorithm to decode the multiplexed signal is established. A capacitive active band-pass filter model is presented and supported by an electrical simulation of the state variable filter circuit. Experimental and simulated results show that the device acts as a state variable filter. It combines the properties of active high-pass and low-pass filter sections into a capacitive active band-pass filter, using a changing capacitance to control the power delivered to the load.
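For reference, the textbook state variable filter realizes high-pass, band-pass and low-pass outputs simultaneously, and its band-pass response has the standard second-order form below; this is the generic circuit-theory expression the device's behaviour is compared against, not a device-specific model from the paper.

```latex
H_{\mathrm{BP}}(s) = \frac{(\omega_0/Q)\, s}{s^{2} + (\omega_0/Q)\, s + \omega_0^{2}},
\qquad \omega_0 = 2\pi f_0
```

Here f_0 is the centre frequency and Q sets the bandwidth of the band-pass section.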
Abstract:
P and S receiver functions (PRF and SRF) from 19 seismograph stations in the Gibraltar Arc and the Iberian Massif reveal new details of the regional deep structure. Within the high-velocity mantle body below southern Spain the 660-km discontinuity is depressed by at least 20 km. The Ps phase from the 410-km discontinuity is missing at most stations in the Gibraltar Arc. A thin (similar to 50 km) low-S-velocity layer atop the 410-km discontinuity is found under the Atlantic margin. At most stations the S410p phase in the SRFs arrives 1.0-2.5 s earlier than predicted by the IASP91 model but, for propagation paths through the upper mantle below southern Spain, the arrivals of S410p are delayed by up to +1.5 s. The early arrivals can be explained by an elevated Vp/Vs ratio in the upper mantle or by a depressed 410-km discontinuity. The positive residuals are indicative of a low (similar to 1.7 versus similar to 1.8 in IASP91) Vp/Vs ratio. Previously, such a low ratio was found in the depleted lithosphere of Precambrian cratons. From simultaneous inversion of the PRFs and SRFs we recognize two types of mantle: 'continental' and 'oceanic'. In the 'continental' upper mantle the S-wave velocity in the high-velocity lid is 4.4-4.5 km s(-1), the S-velocity contrast between the lid and the underlying mantle is often near the limit of resolution (0.1 km s(-1)), and the bottom of the lid is at a depth reaching 90-100 km. In the 'oceanic' domain, the S-wave velocities in the lid and the underlying mantle are typically 4.2-4.3 and similar to 4.0 km s(-1), respectively. The bottom of the lid is at a shallow depth (around 50 km), and at some locations the lid is replaced by a low S-wave velocity layer. The narrow S-N-oriented band of earthquakes at depths from 70 to 120 km in the Alboran Sea is in the 'continental' domain, near the boundary between the 'continental' and 'oceanic' domains, and the intermediate seismicity may be an effect of ongoing destruction of the continental lithosphere.
Abstract:
Hyperspectral remote sensing exploits the electromagnetic scattering patterns of the different materials at specific wavelengths [2, 3]. Hyperspectral sensors have been developed to sample the scattered portion of the electromagnetic spectrum extending from the visible region through the near-infrared and mid-infrared, in hundreds of narrow contiguous bands [4, 5]. The number and variety of potential civilian and military applications of hyperspectral remote sensing is enormous [6, 7]. Very often, the resolution cell corresponding to a single pixel in an image contains several substances (endmembers) [4]. In this situation, the scattered energy is a mixture of the endmember spectra. A challenging task underlying many hyperspectral imagery applications is then decomposing a mixed pixel into a collection of reflectance spectra, called endmember signatures, and the corresponding abundance fractions [8–10]. Depending on the mixing scales at each pixel, the observed mixture is either linear or nonlinear [11, 12]. The linear mixing model holds approximately when the mixing scale is macroscopic [13] and there is negligible interaction among distinct endmembers [3, 14]. If, however, the mixing scale is microscopic (or intimate mixtures) [15, 16] and the incident solar radiation is scattered by the scene through multiple bounces involving several endmembers [17], the linear model is no longer accurate. Linear spectral unmixing has been intensively researched in recent years [9, 10, 12, 18–21]. It considers that a mixed pixel is a linear combination of endmember signatures weighted by the corresponding abundance fractions. Under this model, and assuming that the number of substances and their reflectance spectra are known, hyperspectral unmixing is a linear problem for which many solutions have been proposed (e.g., maximum likelihood estimation [8], spectral signature matching [22], spectral angle mapper [23], subspace projection methods [24, 25], and constrained least squares [26]). In most cases, the number of substances and their reflectances are not known, and hyperspectral unmixing then falls into the class of blind source separation problems [27]. Independent component analysis (ICA) has recently been proposed as a tool to blindly unmix hyperspectral data [28–31]. ICA is based on the assumption of mutually independent sources (abundance fractions), which is not the case for hyperspectral data, since the sum of the abundance fractions is constant, implying statistical dependence among them. This dependence compromises ICA applicability to hyperspectral images, as shown in Refs. [21, 32]. In fact, ICA finds the endmember signatures by multiplying the spectral vectors with an unmixing matrix, which minimizes the mutual information among sources. If the sources are independent, ICA provides the correct unmixing, since the minimum of the mutual information is obtained only when the sources are independent. This is no longer true for dependent abundance fractions. Nevertheless, some endmembers may be approximately unmixed. These aspects are addressed in Ref. [33].
Under the linear mixing model, the observations from a scene lie in a simplex whose vertices correspond to the endmembers. Several approaches [34–36] have exploited this geometric feature of hyperspectral mixtures [35]. The minimum volume transform (MVT) algorithm [36] determines the simplex of minimum volume containing the data. The method presented in Ref. [37] is also of MVT type but, by introducing the notion of bundles, it takes into account the endmember variability usually present in hyperspectral mixtures. The MVT-type approaches are computationally complex. Usually, these algorithms first find the convex hull defined by the observed data and then fit a minimum volume simplex to it. For example, the gift wrapping algorithm [38] computes the convex hull of n data points in a d-dimensional space with a computational complexity of O(n^(⌊d/2⌋+1)), where ⌊x⌋ is the largest integer less than or equal to x and n is the number of samples. The complexity of the method presented in Ref. [37] is even higher, since the temperature of the simulated annealing algorithm used must follow a log(·) law [39] to assure convergence (in probability) to the desired solution. Aiming at a lower computational complexity, some algorithms such as the pixel purity index (PPI) [35] and N-FINDR [40] still find the minimum volume simplex containing the data cloud, but they assume the presence of at least one pure pixel of each endmember in the data. This is a strong requirement that may not hold in some data sets. In any case, these algorithms find the set of most pure pixels in the data. The PPI algorithm uses the minimum noise fraction (MNF) [41] as a preprocessing step to reduce dimensionality and to improve the signal-to-noise ratio (SNR). The algorithm then projects every spectral vector onto skewers (a large number of random vectors) [35, 42, 43]. The points corresponding to the extremes, for each skewer direction, are stored. A cumulative account records the number of times each pixel (i.e., a given spectral vector) is found to be an extreme. The pixels with the highest scores are the purest ones. The N-FINDR algorithm [40] is based on the fact that in p spectral dimensions, the p-volume defined by a simplex formed by the purest pixels is larger than any other volume defined by any other combination of pixels. This algorithm finds the set of pixels defining the largest volume by inflating a simplex inside the data. ORASIS [44, 45] is a hyperspectral framework developed by the U.S. Naval Research Laboratory consisting of several algorithms organized in six modules: exemplar selector, adaptive learner, demixer, knowledge base or spectral library, and spatial postprocessor. The first step consists in flat-fielding the spectra. Next, the exemplar selection module is used to select the spectral vectors that best represent the smaller convex cone containing the data. The other pixels are rejected when the spectral angle distance (SAD) is less than a given threshold. The procedure finds the basis for a subspace of lower dimension using a modified Gram–Schmidt orthogonalization. The selected vectors are then projected onto this subspace and a simplex is found by an MVT process. ORASIS is oriented to real-time target detection from uncrewed air vehicles using hyperspectral data [46].
In this chapter we develop a new algorithm to unmix linear mixtures of endmember spectra. First, the algorithm determines the number of endmembers and the signal subspace using a newly developed concept [47, 48]. Second, the algorithm extracts the most pure pixels present in the data. Unlike other methods, this algorithm is completely automatic and unsupervised. To estimate the number of endmembers and the signal subspace in hyperspectral linear mixtures, the proposed scheme begins by estimating the signal and noise correlation matrices. The latter is based on multiple regression theory. The signal subspace is then identified by selecting the set of signal eigenvalues that best represents the data, in the least-squares sense [48, 49]. We note, however, that VCA works with both projected and unprojected data. The extraction of the endmembers exploits two facts: (1) the endmembers are the vertices of a simplex and (2) the affine transformation of a simplex is also a simplex. Like the PPI and N-FINDR algorithms, VCA also assumes the presence of pure pixels in the data. The algorithm iteratively projects the data onto a direction orthogonal to the subspace spanned by the endmembers already determined. The new endmember signature corresponds to the extreme of the projection. The algorithm iterates until all endmembers are exhausted. VCA performs much better than PPI and better than or comparably to N-FINDR, yet it has a computational complexity between one and two orders of magnitude lower than N-FINDR. The chapter is structured as follows. Section 19.2 describes the fundamentals of the proposed method. Section 19.3 and Section 19.4 evaluate the proposed algorithm using simulated and real data, respectively. Section 19.5 presents some concluding remarks.
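A minimal sketch of the two ingredients the chapter builds on, assuming synthetic data in which pure pixels are present: the linear mixing model, and an iterative orthogonal-projection search for the most extreme (purest) pixels in the spirit of VCA/N-FINDR. The dimensions, noise level, and function names are illustrative, not taken from the chapter.

```python
import numpy as np

def linear_mixture(endmembers, abundances, noise_std=0.0, rng=None):
    """Linear mixing model: each observed spectrum is a non-negative,
    sum-to-one combination of endmember signatures plus noise.
    endmembers: (p, L) matrix, abundances: (n, p); returns (n, L)."""
    rng = rng or np.random.default_rng()
    data = abundances @ endmembers
    return data + noise_std * rng.standard_normal(data.shape)

def extract_extreme_pixels(data, n_endmembers):
    """Pure-pixel search by repeated orthogonal projection: pick the pixel
    farthest from the subspace spanned by the endmembers found so far."""
    indices = [int(np.argmax(np.linalg.norm(data, axis=1)))]
    for _ in range(n_endmembers - 1):
        E = data[indices].T                      # (L, k) current endmembers
        proj = E @ np.linalg.pinv(E)             # projector onto span(E)
        residual = data - data @ proj            # components orthogonal to span(E)
        indices.append(int(np.argmax(np.linalg.norm(residual, axis=1))))
    return indices

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    endmembers = rng.uniform(0.1, 1.0, size=(3, 50))   # 3 signatures, 50 bands
    abundances = rng.dirichlet(np.ones(3), size=500)   # sum-to-one fractions
    abundances[:3] = np.eye(3)                          # guarantee pure pixels
    data = linear_mixture(endmembers, abundances, noise_std=0.001, rng=rng)
    print(sorted(extract_extreme_pixels(data, 3)))      # typically recovers [0, 1, 2]
```

Because a convex combination can never have a larger norm (or orthogonal residual) than the most extreme pure pixel, the extremes of these projections fall on the simplex vertices, which is the geometric fact the chapter exploits.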
Abstract:
OBJECTIVE: The objective of the study was to develop a model for estimating patient 28-day in-hospital mortality using 2 different statistical approaches. DESIGN: The study was designed to develop an outcome prediction model for 28-day in-hospital mortality using (a) logistic regression with random effects and (b) a multilevel Cox proportional hazards model. SETTING: The study involved 305 intensive care units (ICUs) from the basic Simplified Acute Physiology Score (SAPS) 3 cohort. PATIENTS AND PARTICIPANTS: Patients (n = 17138) were from the SAPS 3 database with follow-up data pertaining to the first 28 days in hospital after ICU admission. INTERVENTIONS: None. MEASUREMENTS AND RESULTS: The database was divided randomly into 5 roughly equal-sized parts (at the ICU level). It was thus possible to run the model-building procedure 5 times, each time taking four fifths of the sample as a development set and the remaining fifth as the validation set. At 28 days after ICU admission, 19.98% of the patients were still in the hospital. Because of the different sampling space and outcome variables, both models presented a better fit in this sample than did the SAPS 3 admission score calibrated to vital status at hospital discharge, both in the general population and in major subgroups. CONCLUSIONS: Both statistical methods can be used to model the 28-day in-hospital mortality better than the SAPS 3 admission model. However, because the logistic regression approach is specifically designed to forecast 28-day mortality, and given the high uncertainty associated with the proportional hazards assumption in the Cox model, the logistic regression approach proved to be superior.
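A minimal sketch of the validation design described above: splitting the cohort at the ICU level into five parts and fitting a logistic regression for 28-day mortality on each development set. This is a simplification, since the study's model includes ICU random effects, which this illustration omits; the data and column names are hypothetical stand-ins.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import GroupKFold

rng = np.random.default_rng(0)

# Hypothetical stand-in data: one row per patient with an admission severity
# score, the admitting ICU id, and 28-day in-hospital mortality (0/1).
n_patients = 5000
icu_id = rng.integers(0, 305, size=n_patients)       # 305 ICUs, as in the study
severity = rng.normal(50, 15, size=n_patients)       # e.g. an admission score
p_death = 1 / (1 + np.exp(-(severity - 60) / 10))
died_28d = rng.binomial(1, p_death)

X = severity.reshape(-1, 1)
aucs = []
# Split at the ICU level so whole units fall into development or validation sets.
for train_idx, test_idx in GroupKFold(n_splits=5).split(X, died_28d, groups=icu_id):
    model = LogisticRegression().fit(X[train_idx], died_28d[train_idx])
    pred = model.predict_proba(X[test_idx])[:, 1]
    aucs.append(roc_auc_score(died_28d[test_idx], pred))
print(f"mean validation AUC: {np.mean(aucs):.3f}")
```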
Abstract:
Dissertation for obtaining the degree of Master in Environmental Engineering, profile in Management and Environmental Systems
Abstract:
Dissertation for obtaining the degree of Doctor in Industrial Engineering
Abstract:
Dissertation for obtaining the degree of Master in Biotechnology