1000 results for "Newtonian model"
Abstract:
This work addresses both experimental and numerical analyses of the tensile behaviour of CFRP single-strap repairs. Two fundamental geometrical parameters were studied: overlap length and patch thickness. The numerical model used ABAQUS® software together with a cohesive mixed-mode damage model, developed for ductile adhesives and implemented within interface finite elements. Stress analyses and strength predictions were carried out. Experimental and numerical comparisons were performed on failure modes, failure load and equivalent stiffness of the repair. Good correlation was found between experimental and numerical results, showing that the proposed model can be successfully applied to bonded joints or repairs.
Abstract:
Dissertation presented to the Faculdade de Ciências e Tecnologia da Universidade Nova de Lisboa in fulfilment of the requirements for the degree of Master in Computer Science (Engenharia Informática)
Abstract:
In Part I of the present work we describe the viscosity measurements performed on tris(2-ethylhexyl) trimellitate, or 1,2,4-benzenetricarboxylic acid, tris(2-ethylhexyl) ester (TOTM), up to 65 MPa and at six temperatures from (303 to 373) K, using a new vibrating-wire instrument. The main aim is to contribute to the proposal of that liquid as a potential reference fluid for high viscosity, high pressure and high temperature. The present Part II is dedicated to reporting the density measurements of TOTM, necessary not only to compute the viscosity data presented in Part I, but also as complementary data for the mentioned proposal. The density measurements were obtained using a vibrating U-tube densimeter, model DMA HP, with a model DMA5000 as a reading unit, both instruments from Anton Paar GmbH. The measurements were performed along five isotherms from (293 to 373) K and at eleven different pressures up to 68 MPa. As far as the authors are aware, the viscosity and density results are the first, above atmospheric pressure, to be published for TOTM. Due to TOTM's high viscosity, its density data were corrected for the viscosity effect on the U-tube density measurements; this effect was estimated using two Newtonian viscosity standard liquids, 20 AW and 200 GW. The density data were correlated with temperature and pressure using a modified Tait equation, with deviations within ±0.25%. The expanded uncertainty of the present density results is estimated as ±0.2% at a 95% confidence level. Furthermore, the isothermal compressibility, κ_T, and the isobaric thermal expansivity, α_p, were obtained by differentiation of the modified Tait equation used for correlating the density data. The corresponding uncertainties, at a 95% confidence level, are estimated to be less than ±1.5% and ±1.2%, respectively.
No isobaric thermal expansivity or isothermal compressibility data for TOTM were found in the literature.
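The modified Tait correlation named in the abstract has the standard form ρ(T, p) = ρ₀(T) / [1 − C ln((B(T) + p)/(B(T) + p₀))], and κ_T and α_p follow by differentiation. A minimal sketch of this structure, with purely illustrative coefficients (the real TOTM parameters are obtained from the fit reported in the paper, not shown here):

```python
import math

# Illustrative Tait coefficients only -- NOT the fitted TOTM values.
C = 0.088                    # dimensionless Tait parameter
p0 = 0.1                     # reference pressure, MPa

def rho0(T):
    """Hypothetical density at reference pressure p0, kg/m^3 (T in K)."""
    return 988.0 - 0.75 * (T - 293.15)

def B(T):
    """Hypothetical Tait B(T) parameter, MPa."""
    return 150.0 * math.exp(-0.004 * (T - 293.15))

def density(T, p):
    """Modified Tait: rho = rho0(T) / (1 - C*ln((B(T)+p)/(B(T)+p0)))."""
    return rho0(T) / (1.0 - C * math.log((B(T) + p) / (B(T) + p0)))

def kappa_T(T, p, dp=1e-3):
    """Isothermal compressibility (1/rho)(d rho/d p)_T, central difference."""
    return (density(T, p + dp) - density(T, p - dp)) / (2 * dp * density(T, p))

def alpha_p(T, p, dT=1e-3):
    """Isobaric thermal expansivity -(1/rho)(d rho/d T)_p, central difference."""
    return -(density(T + dT, p) - density(T - dT, p)) / (2 * dT * density(T, p))
```

Numerical differentiation is used here for brevity; the paper derives κ_T and α_p analytically from the correlation.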
Abstract:
The latest LHC data confirmed the existence of a Higgs-like particle and made interesting measurements on its decays into γγ, ZZ*, WW*, τ⁺τ⁻, and bb̄. It is expected that a decay into Zγ might be measured in the next LHC run, for which an upper bound already exists. The Higgs-like particle could be a mixture of a scalar with a relatively large pseudoscalar component. We compute the decay of such a mixed state into Zγ, and we study its properties in the context of the complex two-Higgs-doublet model, analysing the effect of the current measurements on the four versions of this model. We show that a measurement of the h → Zγ rate at a level consistent with the SM can be used to place interesting constraints on the pseudoscalar component. We also comment on the issue of a wrong-sign Yukawa coupling for the bottom quark in Type II models.
Abstract:
We investigate the structural and thermodynamic properties of a model of particles with 2 patches of type A and 10 patches of type B. Particles are placed on the sites of a face-centered cubic lattice with the patches oriented along the nearest-neighbor directions. The competition between the self-assembly of chains, rings, and networks on the phase diagram is investigated by carrying out a systematic study of this class of models, using an extension of Wertheim's theory for associating fluids and Monte Carlo numerical simulations. We varied the ratio r = ε_AB/ε_AA of the interaction between patches A and B, ε_AB, and between A patches, ε_AA (ε_BB is set to zero), as well as the relative position of the A patches, i.e., the angle θ between the (lattice) directions of the A patches. We found that both r and θ (60°, 90°, or 120°) have a profound effect on the phase diagram. In the empty-fluid regime (r < 1/2) the phase diagram is reentrant with a closed miscibility loop. The region around the lower critical point exhibits unusual structural and thermodynamic behavior determined by the presence of relatively short rings. The agreement between the results of theory and simulation is excellent for θ = 120° but deteriorates as θ decreases, revealing the need for new theoretical approaches to describe the structure and thermodynamics of systems dominated by small rings.
Abstract:
The paper proposes a Flexibility Requirements Model and a Factory Templates Framework to support dynamic Virtual Organization decision-makers in responding effectively to emergent business opportunities while ensuring profitability. Through the construction and analysis of the flexibility requirements model, network managers can conceive better strategies to model and breed new dynamic VOs. This paper also presents the leagility concept as a new paradigm fit to equip network management with a hybrid approach that better tackles the performance challenges imposed by new and competitive business environments.
Abstract:
The erosion depth profile of planar targets in balanced and unbalanced magnetron cathodes with cylindrical symmetry is measured along the target radius. The magnetic fields have rotational symmetry. The horizontal and vertical components of the magnetic field B are measured at points above the cathode target with z = 2 × 10⁻³ m. The experimental data reveal that the target erosion depth profile is a function of the angle θ made by B with a horizontal line defined by z = 2 × 10⁻³ m. To explain this dependence, a simplified model of the discharge is developed. Within the scope of the model, the pathway lengths of the secondary electrons in the pre-sheath region are calculated by analytical integration of the Lorentz differential equations. Weighting these lengths by the distribution law of the mean free path of the secondary electrons, we estimate the densities of the ionizing events over the cathode and the relative flux of the sputtered atoms. The expression so deduced correlates, for the first time, the erosion depth profile of the target with the angle θ. The model shows reasonably good fits to the experimental target erosion depth profiles, confirming that ionization occurs mainly in the pre-sheath zone.
Abstract:
We consider a dynamical model of cancer growth including three interacting cell populations: tumor cells, healthy host cells and immune effector cells. For a certain parameter choice, the dynamical system displays chaotic motion, and by decreasing the response of the immune system to the tumor cells, a boundary crisis leading to transient chaotic dynamics is observed. This means that the system behaves chaotically for a finite amount of time until the unavoidable extinction of the healthy and immune cell populations occurs. Our main goal here is to apply a control method to avoid extinction. For that purpose, we apply the partial control method, which aims to control transient chaotic dynamics in the presence of external disturbances. As a result, we have succeeded in avoiding the uncontrolled growth of tumor cells and the extinction of healthy tissue. The possibility of using this method, as compared to the frequently used therapies, is discussed.
Abstract:
In cluster analysis, it can be useful to interpret the partition built from the data in the light of external categorical variables which are not directly involved in clustering the data. An approach is proposed, in the model-based clustering context, to select a number of clusters which both fits the data well and takes advantage of the potential illustrative ability of the external variables. This approach makes use of the integrated joint likelihood of the data and the partitions at hand, namely the model-based partition and the partitions associated with the external variables. It is noteworthy that each mixture model is fitted by maximum likelihood to the data alone, the external variables being used only to select a relevant mixture model. Numerical experiments illustrate the promising behaviour of the derived criterion.
Abstract:
Dissertation submitted in partial fulfilment of the requirements for the Degree of Master of Science in Geospatial Technologies
Abstract:
This article is a short introduction on how to use Modellus (a computer package that is freely available on the Internet and used in the IOP Advancing Physics course) to build physics games using Newton’s laws, expressed as differential equations. Solving systems of differential equations is beyond most secondary-school or first-year college students. However, with Modellus, the solution is simply the output of the usual physical reasoning: define the force law, compute its magnitude and components, use it to obtain the acceleration components, then the velocity components and, finally, use the velocity components to find the coordinates.
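The reasoning chain described above (force law → acceleration components → velocity components → coordinates) is just an explicit time-stepping loop. A minimal Python sketch of the same idea for a projectile under gravity, using simple Euler steps and illustrative values (this is the general technique, not Modellus itself):

```python
# Illustrative projectile: all values are hypothetical examples.
m = 0.5          # mass, kg
g = 9.8          # gravitational acceleration, m/s^2
dt = 0.001       # time step, s

x, y = 0.0, 0.0      # initial coordinates, m
vx, vy = 10.0, 10.0  # initial velocity components, m/s

while y >= 0.0:
    # 1) define the force law: here only the weight, acting downwards
    fx, fy = 0.0, -m * g
    # 2) acceleration components from Newton's second law
    ax, ay = fx / m, fy / m
    # 3) update the velocity components
    vx += ax * dt
    vy += ay * dt
    # 4) use the velocity components to update the coordinates
    x += vx * dt
    y += vy * dt
# x now holds the horizontal range of the projectile
```

With these values the loop reproduces the textbook range 2·vx·vy/g ≈ 20.4 m, which is a useful sanity check for students.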
Abstract:
We consider a Bertrand duopoly model with unknown costs. Each firm's aim is to choose the price of its product according to the well-known concept of Bayesian Nash equilibrium. The choices are made simultaneously by both firms. In this paper, we suppose that each firm has two different technologies and uses one of them according to a certain probability distribution. The use of one technology or the other affects the unitary production cost. We show that this game has exactly one Bayesian Nash equilibrium. We analyse the advantages, for firms and for consumers, of using the technology with the highest production cost versus the one with the cheapest production cost. We prove that the expected profit of each firm increases with the variance of its production costs. We also show that the expected price of each good increases with both expected production costs, the effect of the rival's expected production costs being dominated by the effect of the firm's own expected production costs.
Abstract:
We consider a trade policy model where the costs of the home firm are private information but can be signalled, through the firm's output levels, to a foreign competitor and a home policymaker. We study the influence of the non-homogeneity of the goods, and of the uncertainty on the production costs of the home firm, on the signalling strategies of the home firm. We show that some results obtained for homogeneous goods are not robust under non-homogeneity.
Abstract:
Characteristics of tunable wavelength pi'n/pin filters based on a-SiC:H multilayered stacked cells are studied both experimentally and theoretically. Results show that the device combines the demultiplexing operation with simultaneous photodetection and self-amplification of the signal. An algorithm to decode the multiplexed signal is established. A capacitive active band-pass filter model is presented and supported by an electrical simulation of the state-variable filter circuit. Experimental and simulated results show that the device acts as a state-variable filter: it combines the properties of active high-pass and low-pass filter sections into a capacitive active band-pass filter, using a changing capacitance to control the power delivered to the load.
Abstract:
In this article, we calibrate the Vasicek interest rate model under the risk-neutral measure by learning the model parameters using Gaussian-process regression. The calibration is done by maximizing the likelihood of zero-coupon bond log prices, using mean and covariance functions computed analytically, as well as likelihood derivatives with respect to the parameters. The maximization method used is conjugate gradients. The only prices needed for calibration are zero-coupon bond prices, and the parameters are obtained directly in the arbitrage-free risk-neutral measure.
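For context, the zero-coupon bond prices whose log is fitted here have the standard closed affine form of the Vasicek model, P(τ) = A(τ)·e^(−B(τ)·r). A minimal sketch with illustrative parameter values a (mean-reversion speed), b (long-run level) and σ (volatility) — in the paper these are precisely the quantities recovered by the likelihood maximization:

```python
import math

# Illustrative risk-neutral Vasicek parameters, NOT calibrated values:
# dr = a*(b - r)*dt + sigma*dW
a, b, sigma = 0.5, 0.03, 0.01

def zcb_price(r, tau):
    """Vasicek zero-coupon bond price P(tau) = A(tau) * exp(-B(tau) * r)."""
    if tau == 0.0:
        return 1.0  # a bond at maturity pays its face value
    B = (1.0 - math.exp(-a * tau)) / a
    A = math.exp((b - sigma**2 / (2.0 * a**2)) * (B - tau)
                 - sigma**2 * B**2 / (4.0 * a))
    return A * math.exp(-B * r)
```

Calibration in the spirit of the abstract would treat observed log prices log P(τᵢ) as the regression targets and adjust (a, b, σ) to maximize their likelihood.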