260 results for Analytical Model
in the Biblioteca Digital da Produção Intelectual da Universidade de São Paulo (BDPI/USP)
Abstract:
A large percentage of pile caps support only one column, and the pile caps in turn are supported by only a few piles. These are typically short and deep members with overall span-to-depth ratios of less than 1.5. Codes of practice do not provide uniform treatment for the design of these types of pile caps. These members have traditionally been designed as beams spanning between piles, with the depth selected to avoid shear failures and the amount of longitudinal reinforcement selected to provide sufficient flexural capacity as calculated by engineering beam theory. More recently, the strut-and-tie method has been used for the design of pile caps (a disturbed, or D-, region), in which the load path is envisaged as a three-dimensional truss, with compressive forces carried by concrete struts between the column and the piles and tensile forces carried by reinforcing steel located between the piles. Neither of these models has provided uniform factors of safety against failure or been able to predict whether failure will occur by flexure (a ductile mode) or shear (a brittle mode). In this paper, an analytical model based on the strut-and-tie approach is presented. The proposed model has been calibrated using an extensive experimental database of pile caps subjected to compression and evaluated analytically for more complex loading conditions. It has proven applicable across a broad range of test data and can predict the failure modes, cracking, yielding, and failure loads of four-pile caps with reasonable accuracy.
Abstract:
Consider N sites randomly and uniformly distributed in a d-dimensional hypercube. A walker explores this disordered medium by going to the nearest site that has not been visited in the last μ (memory) steps. The walker trajectory is composed of a transient part and a periodic part (the cycle). For one-dimensional systems, the walker may or may not explore all the available space, giving rise to a crossover between localized and extended regimes at the critical memory μ₁ = log₂ N. The deterministic rule can be softened to consider more realistic situations by including a stochastic parameter T (the temperature). In this case, the walker movement is driven by a probability density function parameterized by T and a cost function. The cost function increases with the distance between two sites, favoring hops to closer sites. As the temperature increases, the walker can escape from cycles that are reminiscent of the deterministic dynamics and extend its exploration. Here, we report an analytical model and numerical studies of the influence of the temperature and the critical memory on the exploration of one-dimensional disordered systems.
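The deterministic rule described in the abstract above can be sketched in a few lines. This is a minimal 1-D illustration of ours, not the authors' code; all names and parameter values (64 sites, μ = 2, 100 steps) are assumptions for the example.

```python
import numpy as np

def tourist_walk(sites, mu, steps, start=0):
    """Deterministic partially self-avoiding walk on a line: at each step,
    hop to the nearest site not visited in the last mu steps (mu >= 1)."""
    current = start
    trajectory = [current]
    for _ in range(steps):
        forbidden = set(trajectory[-mu:])        # sites visited in the last mu steps
        dist = np.abs(sites - sites[current]).astype(float)
        for i in forbidden:
            dist[i] = np.inf                     # exclude tabu sites
        current = int(np.argmin(dist))
        trajectory.append(current)
    return trajectory

rng = np.random.default_rng(0)
sites = np.sort(rng.uniform(0.0, 1.0, size=64))  # N sites on the unit interval
walk = tourist_walk(sites, mu=2, steps=100)
```

After a transient, a trajectory produced this way settles into a cycle, which is the behavior the abstract's stochastic (finite-T) extension lets the walker escape from.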
Abstract:
This paper presents a study of a specific type of beam-to-column connection for precast concrete structures. Furthermore, an analytical model to determine the strength and the stiffness of the connection, based on test results of two prototypes, is proposed. To evaluate the influence of the strength and stiffness of the connection on the behaviour of the structure, the results of numerical simulations of a typical multi-storey building with semi-rigid connections are also presented and compared with the results obtained using pinned and rigid connections. The main conclusions are: (a) the proposed design model can reasonably evaluate the strength of the studied connection; (b) the evaluation of strength is more accurate than that of stiffness; (c) for a typical structure, replacing the pinned connections with semi-rigid ones makes it possible to increase the number of storeys from two to four with a lower horizontal displacement at the top and only a small increase in the column base bending moment; and (d) although there is significant uncertainty in the connection stiffness, the results show that the displacements at the top of the structure and the column base moments have low sensitivity to deviations in this parameter.
Abstract:
This paper presents a study on the compressive behavior of steel fiber-reinforced concrete. An analytical model for the stress-strain curve of steel fiber-reinforced concrete is derived for concretes with strengths of 40 MPa and 60 MPa at the age of 28 days. The concretes were reinforced with hooked-end steel fibers 35 mm long with an aspect ratio of 65. The analytical model was compared with experimental stress-strain curves and with models reported in the technical literature. The accuracy of the proposed stress-strain curve was also evaluated by comparing the areas under the stress-strain curves. The results showed good agreement between the analytical and experimental data, and the benefits of using fibers for the compressive behavior of concrete.
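The area-under-curve comparison mentioned in the abstract above can be sketched as follows. Both curves below are invented placeholders (a Popovics-style peak shape with a small perturbation standing in for experimental data), not the paper's model or measurements; only the comparison procedure is illustrated.

```python
import numpy as np

strain = np.linspace(0.0, 0.006, 200)            # mm/mm
peak_strain, peak_stress = 0.002, 40.0           # hypothetical 40 MPa mix
stress_model = peak_stress * (strain / peak_strain) * np.exp(1.0 - strain / peak_strain)
stress_exp = stress_model * (1.0 + 0.05 * np.sin(40.0 * strain))  # stand-in "experimental" curve

def area_under_curve(x, y):
    # Trapezoidal rule, written out to avoid NumPy version differences
    return float(np.sum(0.5 * (y[1:] + y[:-1]) * np.diff(x)))

rel_error = (abs(area_under_curve(strain, stress_model)
                 - area_under_curve(strain, stress_exp))
             / area_under_curve(strain, stress_exp))
```

The area under a stress-strain curve is the energy absorbed per unit volume (toughness), which is why it is a natural single-number check of a fitted curve.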
Abstract:
The application of the functionally graded material (FGM) concept to piezoelectric transducers allows the design of composite transducers without interfaces, owing to the continuous change of property values. Thus, large improvements can be achieved, such as reduced stress concentration, increased bonding strength, and wider bandwidth. This work designs and models FGM piezoelectric transducers and compares their performance with that of non-FGM ones. Analytical and finite element (FE) models of FGM piezoelectric transducers radiating a plane pressure wave into a fluid medium are developed and their results are compared. The ANSYS software is used for the FE modeling. The analytical model is based on an FGM-equivalent acoustic transmission-line model, implemented using MATLAB software. Two cases are considered: (i) the transducer emits a pressure wave into water and is composed of a graded piezoceramic disk with backing and matching layers made of homogeneous materials; (ii) the transducer has no backing or matching layer, and no external load is simulated. Time and frequency pressure responses are obtained through a transient analysis. The material properties are graded along the thickness direction. Linear and exponential gradation functions are implemented to illustrate the influence of the gradation on the transducer pressure response, electrical impedance, and resonance frequencies.
Abstract:
This letter presents the properties of nMOS junctionless nanowire transistors (JNTs) under cryogenic operation. Experimental results for the drain current, subthreshold slope, low-field maximum transconductance, and threshold voltage, as well as their variation with temperature, are presented. Unlike in classical devices, the drain current of JNTs decreases when the temperature is lowered, although the maximum transconductance increases as the temperature is lowered down to 125 K. An analytical model for the threshold voltage is proposed to explain the influence of nanowire width and doping concentration on its variation with temperature. It is shown that the wider the nanowire or the lower the doping concentration, the higher the threshold voltage variation with temperature.
Abstract:
With the aim of comparing the cost of rheumatoid arthritis therapy with disease-modifying antirheumatic drugs (DMARDs) over a 48-month period, five treatment stages based on clinical protocols recommended by the Brazilian Society of Rheumatology were studied, across five therapy cycles. The analytical model, based on Markov analysis, considered the chances that a patient remains in a given stage or moves between stages according to a positive effect on outcomes. Only direct costs were included in the analysis, such as the drugs, materials, and tests used to monitor these patients. The model results show that the stage in which methotrexate is used as monotherapy was the most cost-effective (R$ 113,900.00 per patient over 48 months), followed by refractory patients (R$ 1,554,483.43), those on triple therapy followed by infliximab (R$ 1,701,286.76), methotrexate-intolerant patients (R$ 2,629,919.14), and finally those using methotrexate and infliximab from the start (R$ 9,292,879.31). The sensitivity analysis confirmed these results when the efficacies of methotrexate and infliximab were varied.
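The cost-accumulation mechanics of a Markov cohort model like the one above can be sketched briefly. The states, transition probabilities, and per-cycle costs below are invented for illustration; the paper's values come from the Brazilian Society of Rheumatology protocols and are not reproduced here.

```python
import numpy as np

# Illustrative 3-state Markov cohort model: remain on therapy, switch, or discontinue.
states = ["on_therapy", "switched", "discontinued"]
P = np.array([[0.85, 0.10, 0.05],
              [0.00, 0.90, 0.10],
              [0.00, 0.00, 1.00]])            # transition matrix; rows sum to 1
cycle_cost = np.array([1000.0, 2500.0, 200.0])  # hypothetical R$ per cycle, per state

cohort = np.array([1.0, 0.0, 0.0])           # everyone starts on therapy
total_cost = 0.0
for _ in range(5):                           # five therapy cycles, as in the study
    total_cost += cohort @ cycle_cost        # expected cost this cycle
    cohort = cohort @ P                      # advance the cohort one cycle
```

Expected cost per patient is the sum over cycles of the state-occupancy vector dotted with the per-state cost vector; sensitivity analysis then re-runs this with perturbed probabilities.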
Abstract:
The eddy covariance method was used to measure the energy and water balance of a plantation of Eucalyptus (grandis × urophylla) hybrids over a 2-year period. The average daily evaporation rates were 5.4 (±2.0) mm day⁻¹ in summer, but fell to 1.2 (±0.3) mm day⁻¹ in winter. In contrast, the sensible heat flux was relatively low in summer but dominated the energy balance in winter. Evaporation accounted for 80% and 26% of the available energy in summer and winter, respectively. The annual evaporation was 82% (1124 mm) and 96% (1235 mm) of the annual rainfall recorded during the first and second year, respectively. Daily average canopy and aerodynamic conductances to water vapour were 51.9 (±38.4) mm s⁻¹ and 84.1 (±25.6) mm s⁻¹, respectively, in summer, and 6.0 (±10.5) mm s⁻¹ and 111.6 (±24.6) mm s⁻¹, respectively, in winter.
Abstract:
We studied superclusters of galaxies in a volume-limited sample extracted from the Sloan Digital Sky Survey Data Release 7 and from mock catalogues based on a semi-analytical model of galaxy evolution in the Millennium Simulation. A density-field method was applied to a sample of galaxies brighter than M_r = −21 + 5 log h_100 to identify superclusters, taking into account selection and boundary effects. To evaluate the influence of the threshold density, we chose two thresholds: the first maximizes the number of objects (D1), and the second constrains the maximum supercluster size to ~120 h⁻¹ Mpc (D2). We performed a morphological analysis, using Minkowski functionals, based on a parameter that increases monotonically from filaments to pancakes. An anticorrelation was found between supercluster richness (and total luminosity or size) and the morphological parameter, indicating that filamentary structures tend to be richer, larger, and more luminous than pancakes in both the observed and mock catalogues. We also used the mock samples to compare supercluster morphologies identified in position and velocity space, concluding that our morphological classification is not biased by peculiar velocities. Monte Carlo simulations designed to investigate the reliability of our results with respect to random fluctuations show that they are robust. Our analysis indicates that filaments and pancakes present different luminosity and size distributions.
Abstract:
We study the stability regions and families of periodic orbits of two planets locked in a co-orbital configuration. We consider different ratios of planetary masses and orbital eccentricities, and assume that both planets share the same orbital plane. Initially, we perform numerical simulations over a grid of osculating initial conditions to map the regions of stable/chaotic motion and identify equilibrium solutions. These results are then analysed in more detail using a semi-analytical model. Apart from the well-known quasi-satellite orbits and the classical equilibrium Lagrangian points L4 and L5, we also find a new regime of asymmetric periodic solutions. For low eccentricities these are located at (Δλ, Δϖ) = (±60°, ∓120°), where Δλ is the difference in mean longitudes and Δϖ is the difference in longitudes of pericentre. The position of these anti-Lagrangian solutions changes with the mass ratio and the orbital eccentricities, and they are found for eccentricities as high as ~0.7. Finally, we applied a slow mass variation to one of the planets and analysed its effect on an initially asymmetric periodic orbit. We found that the resonant solution is preserved as long as the mass variation is adiabatic, with practically no change in the equilibrium values of the angles.
Abstract:
Some observations of galaxies, and in particular dwarf galaxies, indicate the presence of cored density profiles, in apparent contradiction with the cusp profiles predicted by dark matter N-body simulations. We constructed an analytical model, using particle distribution functions (DFs), to show how a supernova (SN) explosion can transform a cusp density profile in a small-mass dark matter halo into a cored one. Given that an SN efficiently removes matter from the centre of the first haloes, we study the effect of mass removal through an SN perturbation in the DFs. We find that the transformation from a cusp into a cored profile occurs even for changes as small as 0.5 per cent of the total energy of the halo, which can be produced by the expulsion of matter caused by a single SN explosion.
Abstract:
Pervasive and ubiquitous computing has motivated research on multimedia adaptation, which aims to match the video quality to user needs and device restrictions. This technique has a high computational cost that needs to be studied and estimated when designing architectures and applications. This paper presents an analytical model to quantify these video-transcoding costs in a hardware-independent way. The model was used to analyze the impact of transcoding delays on end-to-end live-video transmissions over LANs, MANs, and WANs. Experiments confirm that the proposed model helps to define the best transcoding architecture for different scenarios.
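The kind of delay-budget reasoning described above can be sketched as follows. This is a hedged illustration of ours, not the paper's calibrated model: the decomposition (network latency plus transcoding work divided by host throughput) and every coefficient below are assumptions made for the example.

```python
def transcode_delay(frames, cost_per_frame, throughput):
    """Transcoding time: work (frames * per-frame cost in normalized
    operations) divided by the host's throughput (operations/second).
    Expressing cost in hardware-independent operations lets the same
    workload be evaluated on different hosts."""
    return frames * cost_per_frame / throughput

def end_to_end_delay(network_latency, frames, cost_per_frame, throughput):
    return network_latency + transcode_delay(frames, cost_per_frame, throughput)

# Compare a LAN and a WAN scenario carrying the same transcoding load
# (30 frames at an assumed 1e6 operations each, on a 1e9 op/s host).
lan = end_to_end_delay(network_latency=0.002, frames=30, cost_per_frame=1e6, throughput=1e9)
wan = end_to_end_delay(network_latency=0.120, frames=30, cost_per_frame=1e6, throughput=1e9)
```

On the LAN the transcoding term dominates the budget, while on the WAN the network latency does, which is the sort of trade-off that drives the choice of transcoding architecture per scenario.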
Abstract:
The evolution of the mass of a black hole embedded in a universe filled with dark energy and cold dark matter is calculated in closed form within a test-fluid model in a Schwarzschild metric, taking into account the cosmological evolution of both fluids. The result describes exactly how accretion asymptotically switches from the matter-dominated to the Lambda-dominated regime. At early epochs the black hole mass increases due to dark matter accretion, while at later epochs the increase in mass stops as dark energy accretion takes over. Thus, the unphysical behaviour of previous analyses is remedied in this simple exact model.
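The saturation behaviour described above can be illustrated numerically. This is a toy sketch, not the paper's closed-form solution: it assumes a Bondi-like accretion law dM/dt = k M² (ρ + p) summed over the two fluids, with arbitrary units and normalizations. For a cosmological constant p = −ρ, so its contribution vanishes, and growth is driven by matter alone, which dilutes away as the universe expands.

```python
import numpy as np

k = 1e-3                              # arbitrary accretion coefficient
t = np.linspace(1.0, 100.0, 20000)    # cosmic time, arbitrary units
dt = t[1] - t[0]
M = np.empty_like(t)
M[0] = 1.0                            # initial black hole mass

for i in range(1, len(t)):
    rho_m = 10.0 / t[i - 1] ** 2      # pressureless matter: rho + p = rho, diluting
    rho_plus_p_lambda = 0.0           # dark energy with p = -rho contributes nothing
    M[i] = M[i - 1] + k * M[i - 1] ** 2 * (rho_m + rho_plus_p_lambda) * dt
```

Because the matter term falls off fast enough for its time integral to converge, the mass grows early on and then plateaus, mimicking the switch from the matter-dominated to the Lambda-dominated regime.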
Abstract:
In chemical analyses performed by laboratories, one faces the problem of determining the concentration of a chemical element in a sample. In practice, the problem is dealt with using the so-called linear calibration model, which assumes that the errors associated with the independent variables are negligible compared with those of the response variable. In this work, a new linear calibration model is proposed, assuming that the independent variables are subject to heteroscedastic measurement errors. A simulation study is carried out to verify some properties of the estimators derived for the new model, and the usual calibration model is also considered for comparison with the new approach. Three applications are presented to verify the performance of the new approach.
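The role of heteroscedastic error variances can be sketched with a simplified weighted linear fit. This is our own illustration, not the paper's estimator: the paper's model also treats errors in the independent variable, whereas this sketch only weights the response by per-point variances; all data values below are invented.

```python
import numpy as np

x = np.array([1.0, 2.0, 3.0, 4.0, 5.0])     # nominal concentrations
y = np.array([2.1, 3.9, 6.2, 7.8, 10.1])    # instrument response
var = np.array([0.1, 0.1, 0.2, 0.3, 0.5])   # per-point error variances (heteroscedastic)

w = 1.0 / var                                # weight each point by inverse variance
X = np.column_stack([np.ones_like(x), x])    # design matrix: [1, x]
# Weighted normal equations: (X^T W X) beta = X^T W y
beta = np.linalg.solve(X.T @ (w[:, None] * X), X.T @ (w * y))
intercept, slope = beta
```

Inverse-variance weighting makes the noisiest calibration points count least, which is the basic motivation for moving beyond the homoscedastic classical model.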
Abstract:
Consider a random medium consisting of N points randomly distributed so that there is no correlation among the distances separating them. This is the random link model, which is the high-dimensionality limit (mean-field approximation) of the Euclidean random point structure. In the random link model, at discrete time steps, a walker moves to the nearest point that has not been visited in the last μ steps (the memory), producing a deterministic partially self-avoiding walk (the tourist walk). We have analytically obtained the distribution of the number n of points explored by a walker with memory μ = 2, as well as the joint distribution of the transient and the period. This result enables us to explain the abrupt change in exploratory behavior between the cases μ = 1 (memoryless walker, driven by extreme-value statistics) and μ = 2 (walker with memory, driven by combinatorial statistics). In the μ = 1 case, the mean number of newly visited points in the thermodynamic limit (N ≫ 1) is just ⟨n⟩ = e = 2.72…, while in the μ = 2 case the mean number ⟨n⟩ of visited points grows proportionally to N^(1/2). This result also allows us to establish an equivalence between the random link model with μ = 2 and the random map (uncorrelated back-and-forth distances) with μ = 0, and to explain the abrupt change between the probabilities for a null transient time and subsequent ones.
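The μ = 1 result quoted above lends itself to a quick Monte Carlo check. This simulation is our own sketch, not the authors' derivation: it builds a symmetric i.i.d. distance matrix (the random link model), lets the walker avoid only its current site (μ = 1), and counts distinct sites visited before the deterministic walk first revisits a site, at which point it has entered its cycle.

```python
import numpy as np

def distinct_sites(N, rng):
    d = rng.uniform(size=(N, N))
    d = np.triu(d, 1) + np.triu(d, 1).T      # symmetric i.i.d. distances
    np.fill_diagonal(d, np.inf)              # mu = 1: the walker cannot stay put
    current, visited = 0, {0}
    while True:
        nxt = int(np.argmin(d[current]))     # nearest site other than the current one
        if nxt in visited:                   # the walk is deterministic, so a
            return len(visited)              # revisit means the cycle has begun
        visited.add(nxt)
        current = nxt

rng = np.random.default_rng(1)
mean_n = float(np.mean([distinct_sites(200, rng) for _ in range(400)]))
```

For N of a few hundred, the sample mean should already sit near e ≈ 2.72, reflecting the extreme-value statistics that drive the memoryless walker.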