926 results for Bayesian Mixture Model, Cavalieri Method, Trapezoidal Rule
Abstract:
Storm and tsunami deposits are generated by similar depositional mechanisms, making them hard to discriminate using classic sedimentological methods. Here we propose an original approach to identifying tsunami-induced deposits by combining numerical simulation and rock magnetism. To test our method, we investigate the tsunami deposit of the Boca do Rio estuary generated by the 1755 Lisbon earthquake, which is well described in the literature. We first test the 1755 tsunami scenario using a numerical inundation model to provide physical parameters for the tsunami wave. Then we use concentration-sensitive (MS, SIRM) and grain-size-sensitive (chi(ARM), ARM, B1/2, ARM/SIRM) magnetic proxies coupled with SEM microscopy to unravel the magnetic mineralogy of the tsunami-induced deposit and its associated depositional mechanisms. To study the connection between the tsunami deposit and the different sedimentological units present in the estuary, the magnetic data were processed by multivariate statistical analyses. Our numerical simulation shows a large inundation of the estuary, with flow depths varying from 0.5 to 6 m and a run-up of ~7 m. Magnetic data show a dominance of paramagnetic minerals (quartz) mixed with lesser amounts of ferromagnetic minerals, namely titanomagnetite and titanohematite, both of detrital origin and reworked from the underlying units. Multivariate statistical analyses indicate a closer connection between the tsunami-induced deposit and a mixture of Units C and D. All these results point to a scenario in which the energy released by the tsunami wave was strong enough to overtop the littoral dune, erode a significant amount of sand from it, and mix this sand with materials reworked from underlying layers at least 1 m deep. The method tested here represents an original and promising tool for identifying tsunami-induced deposits in similar embayed beach environments.
Abstract:
This paper presents a direct power control (DPC) scheme for three-phase matrix converters operating as unified power flow controllers (UPFCs). Matrix converters (MCs) allow direct ac/ac power conversion without dc energy storage links; therefore, the MC-based UPFC (MC-UPFC) has reduced volume and cost, reduced capacitor power losses, and higher reliability. Theoretical principles of DPC based on sliding-mode control techniques are established for an MC-UPFC dynamic model including the input filter. As a result, line active and reactive power, together with ac supply reactive power, can be directly controlled by selecting an appropriate matrix converter switching state, guaranteeing good steady-state and dynamic responses. Experimental results of DPC controllers for the MC-UPFC show decoupled active and reactive power control, zero steady-state tracking error, and fast response times. Compared to an MC-UPFC using active and reactive power linear controllers based on a modified Venturini high-frequency PWM modulator, the advanced DPC-MC guarantees faster responses without overshoot and no steady-state error, presenting no cross-coupling in dynamic or steady-state responses.
Abstract:
In this paper we present a Constraint Logic Programming (CLP) based model and a hybrid solving method for the scheduling of maintenance activities in the power transmission network. The model is distinguished from others not only by its completeness but also by the way it models and solves the electrical constraints, for which we present an efficient filtering algorithm. Furthermore, the solving method improves on the efficiency of pure CLP methods by integrating a form of local search with CLP. To test the approach, we compare our results with those of another method on a 24-bus network with 42 tasks and 24 maintenance periods.
Abstract:
This paper presents a methodology for reconfiguring distribution networks in the presence of outages, in order to choose the reconfiguration with the lowest power losses. The methodology is based on statistical failure and repair data of the distribution power system components and uses fuzzy-probabilistic modelling of system component outage parameters. Fuzzy membership functions of the system component outage parameters are obtained from statistical records. A hybrid method of fuzzy sets and Monte Carlo simulation based on the fuzzy-probabilistic models captures both the randomness and the fuzziness of component outage parameters. Once the system states are obtained by Monte Carlo simulation, a logical programming algorithm is applied to generate all possible reconfigurations for every system state. A distribution power flow is then applied to evaluate line flows and bus voltages, to identify any overloading and/or voltage violation, and to select the feasible reconfiguration with the lowest power losses. To illustrate the application of the proposed methodology to a practical case, the paper includes a case study on a real distribution network.
Fuzzy Monte Carlo mathematical model for load curtailment minimization in transmission power systems
Abstract:
This paper presents a methodology based on statistical failure and repair data of transmission power system components that uses fuzzy-probabilistic modelling of system component outage parameters. Statistical records are used to develop the fuzzy membership functions of the system component outage parameters. The proposed hybrid method of fuzzy sets and Monte Carlo simulation based on the fuzzy-probabilistic models captures both the randomness and the fuzziness of component outage parameters. Once the system states are obtained by Monte Carlo simulation, a network contingency analysis is performed to identify any overloading or voltage violation in the network. This is followed by a remedial action algorithm, based on optimal power flow, to reschedule generation and alleviate constraint violations for the states identified by the contingency analysis, while avoiding any load curtailment if possible or otherwise minimizing the total load curtailment. To illustrate the application of the proposed methodology to a practical case, the paper includes a case study on the IEEE 24-bus Reliability Test System (RTS-1996).
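The hybrid fuzzy/Monte Carlo sampling described above can be sketched as follows. This is a minimal illustration, not the paper's implementation: it assumes triangular fuzzy numbers for component outage probabilities, draws a crisp probability from each component's alpha-cut on every trial, and then samples component up/down states; the component data are hypothetical.

```python
import random

def triangular_alpha_cut(low, mode, high, alpha):
    """Interval (alpha-cut) of a triangular fuzzy number at membership level alpha."""
    return (low + alpha * (mode - low), high - alpha * (high - mode))

def sample_system_state(outage_probs, rng):
    """Draw one Monte Carlo system state: True = component in service."""
    return [rng.random() >= p for p in outage_probs]

def fuzzy_monte_carlo(components, n_trials=10000, alpha=0.5, seed=1):
    """Per trial, pick a crisp outage probability inside each component's
    alpha-cut, then sample up/down states; returns the average number of
    simultaneous outages as a simple severity proxy."""
    rng = random.Random(seed)
    total_outages = 0
    for _ in range(n_trials):
        probs = []
        for low, mode, high in components:
            lo, hi = triangular_alpha_cut(low, mode, high, alpha)
            probs.append(rng.uniform(lo, hi))
        state = sample_system_state(probs, rng)
        total_outages += state.count(False)
    return total_outages / n_trials

# three hypothetical lines with fuzzy outage probabilities (low, mode, high)
lines = [(0.01, 0.02, 0.04), (0.005, 0.01, 0.02), (0.02, 0.03, 0.05)]
print(fuzzy_monte_carlo(lines))
```

A full reliability study would replace the severity proxy with a contingency analysis and remedial action step for each sampled state, as the abstract describes.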
Abstract:
This paper starts with an analysis of the mechanism behind unusual incidents, considered from two aspects: accumulation and human error. We identify twelve factors that affect the decision on the emergency treatment plan in practice and summarize an evaluation index system by combining them with literature data. We then screened out eighteen representative indicators using an FDM expert questionnaire in the first phase. Thereafter, we calculated the weights of the evaluation indices and ranked them using an FAHP expert questionnaire, and, drawing on practical experience, developed the framework of the evaluation rule. Finally, the evaluation principles are presented.
Abstract:
This study was carried out with the aim of modelling in 2D, under plane strain, the movement of a soft cohesive soil around a pile, in order to determine the resulting stresses along the pile per unit length. The problem belongs to the class of large-deformation problems and can arise from landslides, proximity to deep excavations, proximity to zones where large loads are applied to the soil, etc. An elasto-plastic constitutive model with the Mohr-Coulomb failure criterion is used to model the soil behaviour, and the analysis considers the soil in undrained conditions. The modelling uses the finite element program PLAXIS, which employs the Updated Lagrangian Finite Element Method (UL-FEM). Special attention is given to the soil-pile interaction: the formulation of the interface elements is presented in some detail, together with studies to better understand their behaviour. A 2-D model that simulates the effect of depth is developed, allowing its influence on the stress distribution around the pile to be studied. The results obtained provide an important basis for understanding how the soil moves around a pile, how the finite element program PLAXIS works, and how the stress is distributed around the pile. The analysis demonstrates that the soil-structure interaction modelled with the UL-FEM and interface elements is more appropriate for small-deformation problems.
Abstract:
In the two-Higgs-doublet model, there is the possibility that the vacuum in which the universe resides is metastable. We present the tree-level bounds on the scalar potential parameters which have to be obeyed to prevent that situation. Analytical expressions for those bounds are given for the most commonly used potential, that with a softly broken Z2 symmetry. The impact of those bounds on the model's phenomenology is discussed in detail, as well as the importance of current LHC results in determining whether the vacuum we live in is or is not stable. We demonstrate how the vacuum stability bounds can be obtained for the most generic CP-conserving potential, and provide a simple method to implement them.
Abstract:
Environmental pollution continues to be an emerging study field, as there are thousands of anthropogenic compounds mixed in the environment whose possible mechanisms of toxicity and physiological outcomes are of great concern. Developing methods to assess and prioritize the screening of these compounds at trace levels in order to support regulatory efforts is, therefore, very important. A methodology based on solid phase extraction followed by derivatization and gas chromatography-mass spectrometry analysis was developed for the assessment of four endocrine disrupting compounds (EDCs) in water matrices: bisphenol A, estrone, 17β-estradiol and 17α-ethinylestradiol. The study was performed simultaneously by two different laboratories in order to evaluate the robustness of the method and to increase the quality control over its application in routine analysis. Validation was done according to the International Conference on Harmonisation recommendations and other international guidelines with specifications for the GC-MS methodology. Matrix-induced chromatographic response enhancement was avoided by using matrix-matched calibration solutions, and heteroscedasticity was overcome by applying a weighted least squares linear regression model. Consistent evaluation of key analytical parameters such as extraction efficiency, sensitivity, specificity, linearity, limits of detection and quantification, precision, accuracy and robustness was done in accordance with established acceptance standards. Finally, the application of the optimized method to the assessment of the selected analytes in environmental samples suggested that it is an expeditious methodology for routine analysis of EDC residues in water matrices.
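The weighted least squares calibration mentioned above can be illustrated with a short sketch. The calibration data below are hypothetical; the point is only that weighting each point by the inverse of its standard deviation (the convention NumPy's `polyfit` expects) prevents the noisy high-concentration end of the curve from swamping the low-concentration points under heteroscedastic noise.

```python
import numpy as np

# hypothetical calibration data: concentration (ng/mL) vs. instrument response,
# with replicate standard deviations that grow with concentration (heteroscedastic)
conc = np.array([5.0, 10.0, 25.0, 50.0, 100.0])
signal = np.array([0.052, 0.101, 0.248, 0.497, 1.002])
sd = np.array([0.002, 0.004, 0.010, 0.020, 0.040])

# weighted least squares: np.polyfit takes 1/sigma weights, which is
# equivalent to minimising sum((y - y_hat)^2 / sigma^2)
slope, intercept = np.polyfit(conc, signal, 1, w=1.0 / sd)

# back-calculate a hypothetical sample from its measured response
sample_conc = (0.330 - intercept) / slope
print(slope, intercept, sample_conc)
```

An ordinary (unweighted) fit of the same data would bias the intercept toward the high-concentration points, degrading accuracy near the limit of quantification.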
Abstract:
The species abundance distribution (SAD) has been a central focus of community ecology for over fifty years, and is currently the subject of widespread renewed interest. The gambin model has recently been proposed as a model that provides a superior fit to commonly preferred SAD models. It has also been argued that the model's single parameter (α) presents a potentially informative ecological diversity metric, because it summarises the shape of the SAD in a single number. Despite this potential, few empirical tests of the model have been undertaken, perhaps because the necessary methods and software for fitting the model have not existed. Here, we derive a maximum likelihood method to fit the model, and use it to undertake a comprehensive comparative analysis of the fit of the gambin model. The functions and computational code to fit the model are incorporated in a newly developed free-to-download R package (gambin). We test the gambin model using a variety of datasets and compare the fit of the gambin model to fits obtained using the Poisson lognormal, logseries and zero-sum multinomial distributions. We found that gambin almost universally provided a better fit to the data and that the fit was consistent for a variety of sample grain sizes. We demonstrate how α can be used to differentiate intelligibly between community structures of Azorean arthropods sampled in different land use types. We conclude that gambin presents a flexible model capable of fitting a wide variety of observed SAD data, while providing a useful index of SAD form in its single fitted parameter. As such, gambin has wide potential applicability in the study of SADs, and ecology more generally.
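As a minimal illustration of the maximum likelihood fitting used in such model comparisons, the sketch below fits Fisher's logseries (one of the distributions the gambin model is compared against; the gambin likelihood itself is more involved) to a hypothetical species abundance vector with SciPy. The abundance data are invented for the example.

```python
import numpy as np
from scipy import stats, optimize

# hypothetical species abundance vector (individuals per species)
abundances = np.array([1, 1, 1, 2, 2, 3, 4, 6, 9, 15, 27, 51])

def neg_log_lik(p):
    # logseries pmf: P(k) = -p**k / (k * log(1 - p)), k = 1, 2, ...
    return -np.sum(stats.logser.logpmf(abundances, p))

# maximise the likelihood over the single parameter p in (0, 1)
res = optimize.minimize_scalar(neg_log_lik, bounds=(1e-6, 1 - 1e-6),
                               method="bounded")
p_hat = res.x
aic = 2.0 * res.fun + 2.0  # one fitted parameter
print(p_hat, aic)
```

Competing SADs (Poisson lognormal, zero-sum multinomial, gambin) would be fitted the same way and compared via their AIC values, with the lowest AIC indicating the preferred model.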
Abstract:
The deep-sea environment is difficult to sample, and often only small quantities of material can be obtained when using methods less destructive than dredging. When working with marine animals that are difficult to sample and with limited quantities of tissue from which to extract lipids, it is essential to ensure that the method used extracts the maximum possible quantity of lipids. This study evaluates the efficiency of modifications to the method originally described by Bligh & Dyer (1959). This lipid extraction method is broadly used with modifications, although these usually lack a proper description and an evaluation of the increment in lipids. In this study we quantify the improvement in the amount of lipids extracted when the method is changed. Lipid content was determined by gravimetric measurements in eight deep-sea invertebrates, including animals from deep-sea hydrothermal vents, using three different approaches. The results show increases of 14% to 30% in the lipid contents obtained from hydrothermal vent invertebrate tissues and whole animals when the samples are placed in methanol for 24 hours before applying the Bligh & Dyer mixture. The efficiency of extractions using frozen and freeze-dried samples was also compared. For large sponges, the use of lyophilized material yielded 3 to 7 times more lipids than extractions using frozen samples.
Abstract:
This thesis presents the Fuzzy Monte Carlo Model for Transmission Power Systems Reliability studies (FMC-TRel) methodology, which is based on statistical failure and repair data of the transmission power system components and uses fuzzy-probabilistic modelling of system component outage parameters. Statistical records are used to develop the fuzzy membership functions of the system component outage parameters. The proposed hybrid method of fuzzy sets and Monte Carlo simulation based on the fuzzy-probabilistic models captures both the randomness and the fuzziness of component outage parameters. Once the system states are obtained, a network contingency analysis is performed to identify any overloading or voltage violation in the network. This is followed by a remedial action algorithm, based on optimal power flow, to reschedule generation and alleviate constraint violations for the states identified by the contingency analysis, while avoiding any load curtailment if possible or otherwise minimizing the total load curtailment. For the system states that cause load curtailment, an optimization approach is applied to reduce the probability of occurrence of these states while minimizing the costs of achieving that reduction. This methodology is of great importance for supporting the transmission system operator's decision making, namely in identifying critical components and in planning future investments in the transmission power system. A case study based on the IEEE 24-bus Reliability Test System (RTS-1996) is presented to illustrate in detail the application of the proposed methodology.
Abstract:
The Febros is a small watercourse, about 15 km long, in the municipality of Vila Nova de Gaia, whose drainage basin covers an area of approximately 35.4 km². It rises in Seixezelo and flows into the left bank of the Douro River at Cais do Esteiro, in Avintes. In May 2008, a road accident resulted in the spill of about four tonnes of hydrochloric acid, which quickly reached the river. Only a day later, the pH had dropped to three and many fish had died. The solution adopted to avert disaster was to introduce thousands of litres of water along the whole watercourse to dilute the acid. This did not prevent the destruction of part of an ecosystem that is still recovering today. To assess the impact of such disturbances, whether of anthropogenic or natural origin, it is necessary to understand chemical transport processes such as advection, mixing due to dispersion, and air/water mass transfer. These processes determine the movement and fate of substances that may be discharged into the river. To this end, a hydrogeometric study of the watercourse was carried out, together with a study of the behaviour of a tracer simulating a possible discharge. Rhodamine WT was chosen as the tracer because of its range of environmentally favourable characteristics. Field studies with this dye, carried out following a previously planned discharge, provide one of the best sources of information for the verification and validation of hydraulic models used in water quality and environmental protection studies. Two discharge points were chosen on the Febros, one at Casal Drijo and the other in the Parque Biológico de Gaia, each with two monitoring stations downstream.
Under the ADE model, the longitudinal dispersion coefficients obtained for the Pontão d'Alheira, Pinheiral, Menesas and Giestas stations were, respectively, 0.3622, 0.5468, 1.6832 and 1.7504 m²/s. For the same sequence of stations, the flow velocities obtained in this experimental work were 0.0633, 0.0684, 0.1548 and 0.1645 m/s. Under the TS model, the longitudinal dispersion coefficients for the Pontão d'Alheira, Pinheiral, Menesas and Giestas stations were, respectively, 0.2339, 0.1618, 0.5057 and 1.1320 m²/s; the corresponding flow velocities were 0.0652, 0.0775, 0.1891 and 0.1676 m/s. The results were fitted by a direct method, the method of moments, and by two indirect methods, the ADE and TS models. The best fit corresponds to the TS model, whose longitudinal dispersion coefficients and flow velocities are those closest to reality. With the method of moments, the estimated velocity is 0.162 m/s and the estimated longitudinal dispersion coefficient is 9.769 m²/s. Nonetheless, understanding the hydrodynamics of the river and its characteristics, together with the fitting of mathematical models to the results, forms an environmental protection strategy for future impacts that may occur.
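The method of moments mentioned above can be sketched numerically. The sketch below is an illustration with synthetic Gaussian tracer curves at two hypothetical stations 500 m apart, not the field data from this study: the flow velocity follows from the difference of the temporal centroids between stations, and the longitudinal dispersion coefficient from the growth of the temporal variance.

```python
import numpy as np

def temporal_moments(t, c):
    """Centroid (mean arrival time) and temporal variance of a
    concentration-time curve, computed by the method of moments."""
    m0 = c.sum()
    tbar = (c * t).sum() / m0
    var = (c * (t - tbar) ** 2).sum() / m0
    return tbar, var

# synthetic tracer curves at two hypothetical stations (Gaussian pulses);
# times in seconds, uniform sampling
t = np.linspace(0.0, 10000.0, 2001)
upstream = np.exp(-((t - 2000.0) ** 2) / (2.0 * 300.0 ** 2))
downstream = 0.6 * np.exp(-((t - 5000.0) ** 2) / (2.0 * 600.0 ** 2))

t1, v1 = temporal_moments(t, upstream)
t2, v2 = temporal_moments(t, downstream)

dx = 500.0                                  # reach length between stations, m
U = dx / (t2 - t1)                          # mean flow velocity, m/s
D = U ** 2 * (v2 - v1) / (2.0 * (t2 - t1))  # longitudinal dispersion, m^2/s
print(U, D)
```

With these synthetic curves the recovered velocity and dispersion coefficient fall in the same order of magnitude as the field values reported above, which is the sanity check one would apply before fitting the ADE or TS models.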
Abstract:
Purpose - To compare image quality and effective dose when applying the 10 kVp rule with manual-mode acquisition and with AEC mode in PA chest X-ray. Method - 68 images (with and without lesions) were acquired from an anthropomorphic chest phantom using a Wolverson Arcoma X-ray unit. These images were compared against a reference image using the two-alternative forced choice (2AFC) method. The effective dose (E) was calculated with PCXMC software using the exposure parameters and the DAP. The exposure index (lgM, provided by Agfa systems) was recorded. Results - Exposure time decreases more when applying the 10 kVp rule in manual mode (50%-28%) than in automatic mode (36%-23%). Statistically significant differences in E were found between several ionization chamber combinations in AEC mode (p = 0.002); E is lowest when only the right AEC ionization chamber is used. Regarding image quality, there are no statistically significant differences (p = 0.348) between the ionization chamber combinations in AEC mode for images with no lesions. lgM values were higher in AEC mode than in manual mode, and lgM values obtained in AEC mode increased as the kVp value went up. The image quality scores showed no statistically significant differences (p = 0.343) for the images with lesions when comparing manual with AEC mode. Conclusion - In general, E is lower when manual mode is used. Using only the right AEC ionization chamber, under the lung, yields the lowest E in comparison with the other ionization chamber combinations. The use of the 10 kVp rule did not affect the visibility of the lesions or the image quality.
Abstract:
Research on cluster analysis for categorical data continues to develop, with new clustering algorithms being proposed. However, in this context, the determination of the number of clusters is rarely addressed. We propose a new approach in which clustering and the estimation of the number of clusters are done simultaneously for categorical data. We assume that the data originate from a finite mixture of multinomial distributions and use a minimum message length (MML) criterion to select the number of clusters (Wallace and Bolton, 1986). For this purpose, we implement an EM-type algorithm (Silvestre et al., 2008) based on the approach of Figueiredo and Jain (2002). The novelty of the approach rests on the integration of model estimation and selection of the number of clusters in a single algorithm, rather than selecting this number from a set of pre-estimated candidate models. The performance of our approach is compared with the Bayesian Information Criterion (BIC) (Schwarz, 1978) and the Integrated Completed Likelihood (ICL) (Biernacki et al., 2000) on synthetic data. The results illustrate the capacity of the proposed algorithm to attain the true number of clusters while outperforming BIC and ICL in speed, which is especially relevant when dealing with large data sets.
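A plain EM algorithm for a finite mixture of multinomials, the model assumed above, can be sketched as follows. This is not the authors' MML-based algorithm, which integrates selection of the number of clusters into the estimation; it is a fixed-k illustration on synthetic category counts.

```python
import numpy as np

def em_multinomial_mixture(X, k, n_iter=200, seed=0):
    """EM for a finite mixture of multinomials.
    X: (n, d) array of category counts per observation; k: number of clusters."""
    rng = np.random.default_rng(seed)
    n, d = X.shape
    pi = np.full(k, 1.0 / k)                   # mixing weights
    theta = rng.dirichlet(np.ones(d), size=k)  # per-cluster category probabilities
    for _ in range(n_iter):
        # E-step: posterior responsibilities, computed in the log domain
        # (the multinomial coefficient is constant across clusters and cancels)
        log_r = np.log(pi) + X @ np.log(theta).T
        log_r -= log_r.max(axis=1, keepdims=True)
        r = np.exp(log_r)
        r /= r.sum(axis=1, keepdims=True)
        # M-step: update weights and category probabilities
        pi = r.mean(axis=0)
        theta = (r.T @ X) + 1e-9
        theta /= theta.sum(axis=1, keepdims=True)
    return pi, theta, r

# synthetic data: two clusters with distinct category profiles
rng = np.random.default_rng(42)
X1 = rng.multinomial(20, [0.7, 0.2, 0.1], size=50)
X2 = rng.multinomial(20, [0.1, 0.2, 0.7], size=50)
X = np.vstack([X1, X2])
pi, theta, r = em_multinomial_mixture(X, k=2)
print(np.round(pi, 2))
```

The MML approach described in the abstract would start from a deliberately large k and annihilate weakly supported components during the iterations, so that model estimation and the choice of k happen in a single run rather than across a grid of pre-estimated candidate models.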