42 results for Bayesian Mixture Model, Cavalieri Method, Trapezoidal Rule
Abstract:
This paper presents the results of an experimental study of the technical viability of two mixture designs for self-consolidating concrete (SCC) proposed by two Portuguese researchers in a previous work. The objective was to find the best method to provide the required characteristics of SCC in the fresh and hardened states without having to experiment with a large number of mixtures. Five SCC mixtures, each with a volume of 25 L (6.61 gal.), were prepared using a forced mixer with a vertical axis for each of three compressive strength targets: 40, 55, and 70 MPa (5.80, 7.98, and 10.15 ksi). The mixtures' fresh-state properties of fluidity, segregation resistance, and bleeding and blockage tendency, and their hardened-state property of compressive strength were compared. For this study, the following tests were performed: slump-flow, V-funnel, L-box, box, and compressive strength. The results of this study made it possible to identify the most influential factors in the design of the SCC mixtures.
Abstract:
Storm and tsunami deposits are generated by similar depositional mechanisms, making their discrimination hard to establish using classic sedimentologic methods. Here we propose an original approach to identify tsunami-induced deposits by combining numerical simulation and rock magnetism. To test our method, we investigate the tsunami deposit of the Boca do Rio estuary generated by the 1755 Lisbon earthquake, which is well described in the literature. We first test the 1755 tsunami scenario using a numerical inundation model to provide physical parameters for the tsunami wave. Then we use concentration-sensitive (MS, SIRM) and grain-size-sensitive (chi(ARM), ARM, B1/2, ARM/SIRM) magnetic proxies coupled with SEM microscopy to unravel the magnetic mineralogy of the tsunami-induced deposit and its associated depositional mechanisms. In order to study the connection between the tsunami deposit and the different sedimentologic units present in the estuary, magnetic data were processed by multivariate statistical analyses. Our numerical simulation shows a large inundation of the estuary, with flow depths varying from 0.5 to 6 m and a run-up of about 7 m. Magnetic data show a dominance of paramagnetic minerals (quartz) mixed with a lesser amount of ferromagnetic minerals, namely titanomagnetite and titanohematite, both of detrital origin and reworked from the underlying units. Multivariate statistical analyses indicate a better connection between the tsunami-induced deposit and a mixture of Units C and D. All these results point to a scenario in which the energy released by the tsunami wave was strong enough to overtop the littoral dune, erode an important amount of sand from it, and mix it with materials reworked from underlying layers at least 1 m in depth. The method tested here represents an original and promising tool to identify tsunami-induced deposits in similar embayed beach environments.
Abstract:
This paper presents a direct power control (DPC) method for three-phase matrix converters operating as unified power flow controllers (UPFCs). Matrix converters (MCs) allow direct ac/ac power conversion without dc energy storage links; therefore, the MC-based UPFC (MC-UPFC) has reduced volume and cost, reduced capacitor power losses, and higher reliability. Theoretical principles of DPC based on sliding mode control techniques are established for an MC-UPFC dynamic model including the input filter. As a result, line active and reactive power, together with ac supply reactive power, can be directly controlled by selecting an appropriate matrix converter switching state, guaranteeing good steady-state and dynamic responses. Experimental results of DPC controllers for the MC-UPFC show decoupled active and reactive power control, zero steady-state tracking error, and fast response times. Compared to an MC-UPFC using active and reactive power linear controllers based on a modified Venturini high-frequency PWM modulator, the experimental results of the advanced DPC-MC guarantee faster responses without overshoot and no steady-state error, presenting no cross-coupling in dynamic and steady-state responses.
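As a rough illustration of the quantities such a DPC scheme regulates, the standard dq-frame expressions for the instantaneous line powers and generic power-tracking sliding surfaces are given below; these are textbook forms, not necessarily the exact formulation used in the paper.

```latex
P = \tfrac{3}{2}\left(v_d i_d + v_q i_q\right), \qquad
Q = \tfrac{3}{2}\left(v_q i_d - v_d i_q\right),
\qquad
S_P = P^{*} - P = 0, \quad S_Q = Q^{*} - Q = 0
```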
Abstract:
This study was carried out with the aim of modeling in 2D, in plane strain, the movement of a soft cohesive soil around a pile, in order to enable the determination of the stresses resulting along the pile, per unit length. The problem under study falls within the class of large-deformation problems and can arise from landslides, from proximity to deep excavations, or from zones where large loads are applied to the soil. In this study, an elasto-plastic constitutive model with the Mohr-Coulomb failure criterion is used to model the soil behavior. The analysis is developed considering the soil in undrained conditions. The modeling is carried out with the finite element program PLAXIS, which uses the Updated Lagrangian Finite Element Method (UL-FEM). In this work, special attention is given to the soil-pile interaction: the formulation of the interface elements is presented in some detail, together with some studies for a better understanding of their behavior. A 2-D model is developed that simulates the effect of depth, allowing its influence on the stress distribution around the pile to be studied. The results obtained provide an important basis for understanding how the soil moves around a pile, how the finite element program PLAXIS works, and how stresses are distributed around the pile. The analyses demonstrate that soil-structure interaction modeled with the UL-FEM and interface elements is more appropriate for small-deformation problems.
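For reference, the Mohr-Coulomb failure criterion mentioned above has the standard form below; under the undrained total-stress analysis adopted in the study it reduces to the undrained shear strength. This is the textbook form, not a quotation from the dissertation.

```latex
\tau_f = c' + \sigma'_n \tan\varphi'
\qquad\xrightarrow{\ \varphi_u = 0\ \text{(undrained)}}\qquad
\tau_f = c_u
```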
Abstract:
In the two Higgs doublet model, there is the possibility that the vacuum in which the universe resides is metastable. We present the tree-level bounds on the scalar potential parameters which have to be obeyed to prevent that situation. Analytical expressions for those bounds are shown for the most used potential, the one with a softly broken Z2 symmetry. The impact of those bounds on the model's phenomenology is discussed in detail, as well as the importance of the current LHC results in determining whether the vacuum we live in is or is not stable. We demonstrate how the vacuum stability bounds can be obtained for the most generic CP-conserving potential, and provide a simple method to implement them.
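For reference, the softly broken Z2-symmetric potential referred to above is conventionally written as follows (standard notation; the abstract itself does not reproduce it):

```latex
V(\Phi_1,\Phi_2) = m_{11}^2\,\Phi_1^\dagger\Phi_1 + m_{22}^2\,\Phi_2^\dagger\Phi_2
  - \left(m_{12}^2\,\Phi_1^\dagger\Phi_2 + \mathrm{h.c.}\right)
  + \tfrac{\lambda_1}{2}\left(\Phi_1^\dagger\Phi_1\right)^2
  + \tfrac{\lambda_2}{2}\left(\Phi_2^\dagger\Phi_2\right)^2
  + \lambda_3\left(\Phi_1^\dagger\Phi_1\right)\left(\Phi_2^\dagger\Phi_2\right)
  + \lambda_4\left(\Phi_1^\dagger\Phi_2\right)\left(\Phi_2^\dagger\Phi_1\right)
  + \tfrac{\lambda_5}{2}\left[\left(\Phi_1^\dagger\Phi_2\right)^2 + \mathrm{h.c.}\right]
```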
Abstract:
Purpose - To compare image quality and effective dose when applying the 10 kVp rule with manual mode acquisition and with AEC mode in PA chest X-ray. Method - 68 images (with and without lesions) were acquired of an anthropomorphic chest phantom on a Wolverson Arcoma X-ray unit. These images were compared against a reference image using the 2 alternative forced choice (2AFC) method. The effective dose (E) was calculated with PCXMC software from the exposure parameters and the DAP. The exposure index (lgM, provided by Agfa systems) was recorded. Results - Exposure time decreases more when applying the 10 kVp rule in manual mode (50%–28%) than in automatic mode (36%–23%). Statistical differences in E between several ionization chamber combinations for AEC mode were found (p = 0.002). E is lower when using only the right AEC ionization chamber. Considering image quality, there are no statistical differences (p = 0.348) between the different ionization chamber combinations for AEC mode for images with no lesions. The lgM values were higher when the AEC mode was used compared to the manual mode. It was also observed that lgM values obtained with AEC mode increased as the kVp value went up. The image quality scores did not show statistically significant differences (p = 0.343) for the images with lesions when comparing manual with AEC mode. Conclusion - In general, E is lower when manual mode is used. By using the right AEC ionising chamber under the lung, E will be the lowest in comparison to other ionising chambers. The use of the 10 kVp rule did not affect the visibility of the lesions or image quality.
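The 10 kVp rule referred to above is commonly stated as: raise the tube voltage by 10 kVp and halve the mAs (or lower it by 10 kVp and double the mAs) to keep receptor exposure roughly constant. A small worked example with hypothetical values, not taken from the study:

```latex
\text{baseline: } 102\ \mathrm{kVp},\ 10\ \mathrm{mAs}
\;\longrightarrow\;
112\ \mathrm{kVp},\ \approx 5\ \mathrm{mAs}
\qquad (\text{kVp} + 10,\ \text{mAs}/2)
```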
Abstract:
Research on cluster analysis for categorical data continues to develop, with new clustering algorithms being proposed. However, in this context, the determination of the number of clusters is rarely addressed. We propose a new approach in which clustering and the estimation of the number of clusters are done simultaneously for categorical data. We assume that the data originate from a finite mixture of multinomial distributions and use a minimum message length (MML) criterion (Wallace and Boulton, 1968) to select the number of clusters. For this purpose, we implement an EM-type algorithm (Silvestre et al., 2008) based on the approach of Figueiredo and Jain (2002). The novelty of the approach rests on the integration of model estimation and selection of the number of clusters in a single algorithm, rather than selecting this number from a set of pre-estimated candidate models. The performance of our approach is compared with the use of the Bayesian Information Criterion (BIC) (Schwarz, 1978) and the Integrated Completed Likelihood (ICL) (Biernacki et al., 2000) using synthetic data. The results illustrate the capacity of the proposed algorithm to attain the true number of clusters while outperforming BIC and ICL in speed, which is especially relevant when dealing with large data sets.
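The minimum message length criterion adopted from Figueiredo and Jain (2002) can be written roughly as follows (recalled from that reference, not quoted from this abstract); N is the number of parameters per component, k_nz the number of components with non-zero weight, and n the sample size:

```latex
\mathcal{L}(\theta, Y) =
  \frac{N}{2} \sum_{m:\,\alpha_m > 0} \ln\!\left(\frac{n\,\alpha_m}{12}\right)
  + \frac{k_{nz}}{2} \ln\frac{n}{12}
  + \frac{k_{nz}(N+1)}{2}
  - \ln p(Y \mid \theta)
```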
Abstract:
Cluster analysis for categorical data has been an active area of research. A well-known problem in this area is the determination of the number of clusters, which is unknown and must be inferred from the data. In order to estimate the number of clusters, one often resorts to information criteria, such as BIC (Bayesian information criterion), MML (minimum message length, proposed by Wallace and Boulton, 1968), and ICL (integrated classification likelihood). In this work, we adopt the approach developed by Figueiredo and Jain (2002) for clustering continuous data. They use an MML criterion to select the number of clusters and a variant of the EM algorithm to estimate the model parameters. This EM variant seamlessly integrates model estimation and selection in a single algorithm. For clustering categorical data, we assume a finite mixture of multinomial distributions and implement a new EM algorithm, following a previous version (Silvestre et al., 2008). Results obtained with synthetic datasets are encouraging. The main advantage of the proposed approach, compared to the criteria referred to above, is the speed of execution, which is especially relevant when dealing with large data sets.
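A minimal Python sketch of the kind of EM-with-annihilation procedure described above, assuming a Figueiredo-Jain-style penalised M-step for a multinomial mixture; it is an illustration only, not the authors' implementation, and all names and settings are hypothetical.

```python
# Sketch: EM for a finite mixture of multinomials with MML-style component annihilation,
# so that estimation and selection of the number of clusters happen in a single run.
import numpy as np

def mml_em_multinomial(X, k_max=10, n_iter=200, seed=0, eps=1e-9):
    """X: (n, d) matrix of per-observation category counts. Returns weights, parameters, responsibilities."""
    rng = np.random.default_rng(seed)
    n, d = X.shape
    N = d - 1                                           # free parameters per multinomial component
    theta = rng.dirichlet(np.ones(d), size=k_max)       # (k_max, d) category probabilities
    alpha = np.full(k_max, 1.0 / k_max)                 # mixing weights

    for _ in range(n_iter):
        # E-step: responsibilities, computed in the log domain for stability
        log_p = X @ np.log(theta + eps).T + np.log(alpha + eps)
        log_p -= log_p.max(axis=1, keepdims=True)
        r = np.exp(log_p)
        r /= r.sum(axis=1, keepdims=True)

        # M-step with MML penalty: subtract N/2 from each component's effective count
        nk = r.sum(axis=0)
        alpha = np.maximum(nk - N / 2.0, 0.0)
        if alpha.sum() == 0:
            break
        alpha /= alpha.sum()

        # annihilate components whose weight dropped to zero
        keep = alpha > 0
        alpha, theta, r = alpha[keep], theta[keep], r[:, keep]

        # update multinomial parameters of the surviving components
        theta = (r.T @ X) + eps
        theta /= theta.sum(axis=1, keepdims=True)

    return alpha, theta, r

if __name__ == "__main__":
    # toy categorical data: two clusters of 5-category count vectors
    rng = np.random.default_rng(1)
    a = rng.multinomial(20, [0.6, 0.2, 0.1, 0.05, 0.05], size=150)
    b = rng.multinomial(20, [0.05, 0.05, 0.1, 0.2, 0.6], size=150)
    X = np.vstack([a, b]).astype(float)
    w, t, _ = mml_em_multinomial(X, k_max=8)
    print("estimated number of clusters:", len(w))
```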
Abstract:
Adsorption-based separation operations have been gaining importance in recent years, especially with the development of techniques that simulate moving beds in columns, such as Simulated Moving Bed (SMB) chromatography. This technology was developed in the early 1960s as an alternative to the True Moving Bed (TMB) process, in order to solve several of the problems associated with the movement of the solid phase that are common in these countercurrent chromatographic separation methods. SMB technology has been widely used at industrial scale, mainly in the petrochemical and sugar-processing industries and, more recently, in the pharmaceutical and fine chemicals industries. In recent decades, the growing interest in SMB technology, a result of its high throughput and efficient solvent consumption, has led to the formulation of different, so-called non-conventional, modes of operation that yield more flexible units, capable of increasing separation performance and further widening the range of application of the technology. One of the most studied and implemented examples is the Varicol process, in which the ports are moved asynchronously. In this context, the present work focuses on the simulation, analysis and evaluation of SMB technology for two distinct separation cases: the separation of a fructose-glucose mixture and the separation of a racemic mixture of pindolol. For both cases, two modes of operation of the SMB unit were considered and compared: the conventional mode and the Varicol mode. Both separation cases were implemented and simulated in the Aspen Chromatography process simulator, using two distinct SMB units (conventional SMB and Varicol SMB). For the separation of the fructose-glucose mixture, two approaches were used to model the conventional SMB unit: a true moving bed (TMB model) and a real simulated moving bed (SMB model). For the separation of the racemic pindolol mixture, only the SMB model was considered. In the case of the fructose-glucose separation, both the conventional SMB and the Varicol units were further optimised with the aim of increasing their productivities. The optimisation was performed by applying a design-of-experiments procedure, in which the experiments were planned, carried out and subsequently examined through analysis of variance (ANOVA). The statistical analysis made it possible to select the levels of the control factors that yield better results for both SMB units.
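The TMB and SMB descriptions mentioned above are related by the standard equivalence rules (common textbook form; the dissertation's exact notation is not given in the abstract): the port-switching period t* plays the role of the solid flow, and the liquid flow rate in each section is shifted accordingly, where V_c is the column volume, epsilon the bed porosity and j indexes the four sections.

```latex
Q_s = \frac{(1-\varepsilon)\,V_c}{t^{*}}, \qquad
Q_j^{\mathrm{SMB}} = Q_j^{\mathrm{TMB}} + \frac{\varepsilon\,V_c}{t^{*}}
```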
Abstract:
Dissertation for obtaining the Master's degree in Electrical Engineering, Energy branch
Abstract:
Purpose: To compare image quality and effective dose when the 10 kVp rule is applied with manual and AEC mode in PA chest X-ray. Methods and Materials: A total of 68 images (with and without lesions) were acquired of an anthropomorphic chest phantom on a Wolverson Arcoma X-ray unit. The images were evaluated against a reference image using image quality criteria and the 2 alternative forced choice (2AFC) method by five radiographers. The effective dose was calculated with PCXMC software from the exposure parameters and DAP. The exposure index (lgM) was recorded. Results: Exposure time decreases considerably more when applying the 10 kVp rule in manual mode (50%-28%) than in AEC mode (36%-23%). Statistical differences in effective dose between several AEC modes were found (p=0.002). The effective dose is lower when using only the right AEC ionization chamber. Considering image quality, there are no statistical differences (p=0.348) between the different AEC modes for images with no lesions. With a higher kVp value, the lgM values also increase; the lgM values showed statistically significant differences (p=0.000). The image quality scores did not present statistically significant differences (p=0.343) for the images with lesions when comparing manual with AEC modes. Conclusion: In general, the dose is lower in the manual mode. By using the right AEC ionising chamber, the effective dose will be the lowest in comparison to other ionising chambers. The use of the 10 kVp rule did not affect the detectability of the lesions.
Abstract:
This paper presents Genetic Algorithms (GA) as an efficient solution for tuning the Okumura-Hata prediction model for railway communications. A method for modelling the propagation model tuning parameters is presented. The algorithm tuning and validation were based on real network measurements carried out in four different propagation scenarios, and several performance indicators were used. It was shown that the proposed GA is able to produce significant improvements over the original model. The algorithm developed is currently being used in a real GSM-R network planning process for enhanced resource usage.
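A minimal sketch of what GA-based tuning of the Okumura-Hata model against drive-test measurements can look like. The actual coefficients tuned, GA operators and scenario data used in the paper are not specified in the abstract, so every parameter and data set below is an illustrative assumption.

```python
# Illustrative GA tuning of Okumura-Hata coefficients against path-loss measurements.
# Not the paper's implementation: coefficient choices, GA settings and data are assumptions.
import numpy as np

def hata_loss(coeffs, f_mhz, d_km, hb, hm):
    """Okumura-Hata urban path loss with the leading coefficients exposed for tuning."""
    c1, c2, c3, c4 = coeffs
    a_hm = (1.1 * np.log10(f_mhz) - 0.7) * hm - (1.56 * np.log10(f_mhz) - 0.8)
    return (c1 + c2 * np.log10(f_mhz) - c3 * np.log10(hb) - a_hm
            + (c4 - 6.55 * np.log10(hb)) * np.log10(d_km))

def ga_tune(measured_loss, f_mhz, d_km, hb, hm,
            pop_size=60, generations=200, mut_sigma=0.5, seed=0):
    rng = np.random.default_rng(seed)
    base = np.array([69.55, 26.16, 13.82, 44.9])         # standard Hata coefficients
    pop = base + rng.normal(0, 2.0, size=(pop_size, 4))  # initial population around them

    def rmse(c):
        return np.sqrt(np.mean((hata_loss(c, f_mhz, d_km, hb, hm) - measured_loss) ** 2))

    for _ in range(generations):
        fitness = np.array([rmse(c) for c in pop])
        order = np.argsort(fitness)
        elite = pop[order[: pop_size // 2]]               # truncation selection
        # uniform crossover between random elite parents, then Gaussian mutation
        parents = elite[rng.integers(0, len(elite), size=(pop_size, 2))]
        mask = rng.random((pop_size, 4)) < 0.5
        children = np.where(mask, parents[:, 0, :], parents[:, 1, :])
        children += rng.normal(0, mut_sigma, size=children.shape)
        children[0] = elite[0]                            # elitism: keep the best individual
        pop = children

    fitness = np.array([rmse(c) for c in pop])
    return pop[np.argmin(fitness)], fitness.min()

if __name__ == "__main__":
    # synthetic "measurements": a perturbed Hata model plus noise along a route
    rng = np.random.default_rng(1)
    d = np.linspace(0.5, 15.0, 200)                       # km
    truth = hata_loss([71.0, 25.0, 13.0, 46.0], 900.0, d, 40.0, 4.0)
    meas = truth + rng.normal(0, 3.0, size=d.shape)
    best, err = ga_tune(meas, 900.0, d, 40.0, 4.0)
    print("tuned coefficients:", np.round(best, 2), "RMSE:", round(err, 2))
```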
Abstract:
It is important to understand and forecast typical or particular household daily consumption in order to design and size suitable renewable energy systems and energy storage. In this research on Short Term Load Forecasting (STLF), Artificial Neural Networks (ANN) were used and, despite the unpredictability of consumption, it was shown that the electricity consumption of a household can be forecast with confidence. ANNs are recognized as a potential methodology for modeling hourly and daily energy consumption and load forecasting. Input variables such as apartment area, number of occupants, electrical appliance consumption, and Boolean inputs such as the hourly metering system were considered. Furthermore, the investigation carried out aims to define an ANN architecture and a training algorithm in order to achieve a robust model for forecasting energy consumption in a typical household. It was observed that a feed-forward ANN and the Levenberg-Marquardt algorithm provided good performance. This research used a database of consumption records logged in 93 real households in Lisbon, Portugal, between February 2000 and July 2001, including both weekdays and weekends. The results show that the ANN approach provides a reliable model for forecasting household electric energy consumption and load profile. © 2014 The Author.
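A minimal sketch of the kind of feed-forward model described above, using scikit-learn's MLPRegressor. Scikit-learn does not provide the Levenberg-Marquardt algorithm used in the study, so an L-BFGS solver stands in for it; the feature set, network size and synthetic data below are assumptions, not the authors' dataset.

```python
# Illustrative feed-forward ANN for short-term household load forecasting.
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 5000

# synthetic feature matrix: hour of day, weekday flag, apartment area,
# number of occupants, previous-hour consumption (kWh)
hour = rng.integers(0, 24, n)
weekday = rng.integers(0, 2, n)
area = rng.uniform(50, 150, n)
occupants = rng.integers(1, 6, n)
prev_kwh = rng.uniform(0.1, 3.0, n)

# synthetic target: a daily profile modulated by household characteristics plus noise
load = (0.3 + 0.5 * np.sin((hour - 7) / 24 * 2 * np.pi) ** 2
        + 0.002 * area + 0.1 * occupants + 0.4 * prev_kwh
        + rng.normal(0, 0.1, n))

X = np.column_stack([hour, weekday, area, occupants, prev_kwh])
X_train, X_test, y_train, y_test = train_test_split(X, load, random_state=0)

model = MLPRegressor(hidden_layer_sizes=(20,), solver="lbfgs",
                     max_iter=2000, random_state=0)
model.fit(X_train, y_train)
print("R^2 on held-out hours:", round(model.score(X_test, y_test), 3))
```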
Abstract:
The latest LHC data confirmed the existence of a Higgs-like particle and provided interesting measurements of its decays into gamma gamma, ZZ*, WW*, tau+ tau-, and b b-bar. It is expected that a decay into Z gamma might be measured in the next LHC run, for which an upper bound already exists. The Higgs-like particle could be a mixture of a scalar with a relatively large pseudoscalar component. We compute the decay of such a mixed state into Z gamma, and we study its properties in the context of the complex two Higgs doublet model, analysing the effect of the current measurements on the four versions of this model. We show that a measurement of the h -> Z gamma rate at a level consistent with the SM can be used to place interesting constraints on the pseudoscalar component. We also comment on the issue of a wrong-sign Yukawa coupling of the bottom quark in Type II models.
Abstract:
We consider a dynamical model of cancer growth including three interacting cell populations: tumor cells, healthy host cells and immune effector cells. For a certain parameter choice, the dynamical system displays chaotic motion, and by decreasing the response of the immune system to the tumor cells, a boundary crisis leading to transient chaotic dynamics is observed. This means that the system behaves chaotically for a finite amount of time until the unavoidable extinction of the healthy and immune cell populations occurs. Our main goal here is to apply a control method to avoid extinction. For that purpose, we apply the partial control method, which aims to control transient chaotic dynamics in the presence of external disturbances. As a result, we have succeeded in avoiding the uncontrolled growth of tumor cells and the extinction of healthy tissue. The possibility of using this method compared to frequently used therapies is discussed. (C) 2014 Elsevier Ltd. All rights reserved.
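A hedged sketch of the kind of three-population model described above, integrated with SciPy. The paper's exact equations and parameter values are not given in the abstract, so the competitive form and all coefficients below are purely illustrative.

```python
# Illustrative tumor / healthy / immune three-population model (not the paper's equations):
# logistic growth with competition and an immune response saturating in the tumor burden.
import numpy as np
from scipy.integrate import solve_ivp

def rhs(t, y, r_t=0.5, r_h=0.4, a_th=0.3, a_ht=0.4, kill=0.9,
        recruit=0.3, sat=0.5, decay=0.2):
    T, H, E = y  # tumor, healthy host, immune effector cells (normalised)
    dT = r_t * T * (1 - T) - a_th * T * H - kill * T * E
    dH = r_h * H * (1 - H) - a_ht * T * H
    dE = recruit * T * E / (sat + T) - decay * E - 0.1 * T * E
    return [dT, dH, dE]

if __name__ == "__main__":
    sol = solve_ivp(rhs, (0.0, 200.0), [0.1, 0.9, 0.2], max_step=0.1)
    T, H, E = sol.y
    print("final populations  tumor=%.3f  healthy=%.3f  immune=%.3f" % (T[-1], H[-1], E[-1]))
```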