973 results for Geometric Approach


Relevance: 30.00%

Abstract:

Efficient and accurate geometric and material nonlinear analysis of structures under ultimate loads is the backbone of integrated analysis and design, the performance-based design approach, and progressive collapse analysis. This paper presents an advanced computational technique, a higher-order element formulation with the refined plastic hinge approach, which can evaluate concrete and steel-concrete structures subject to nonlinear material effects (i.e. gradual yielding, full plasticity, the strain-hardening effect under combined axial and bending actions, and load redistribution) as well as nonlinear geometric effects (i.e. the second-order P-δ and P-Δ effects and their associated strength and stiffness degradation). Further, this paper presents the cross-section analysis used to formulate the refined plastic hinge approach.
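To make the second-order effects named above concrete, here is a minimal numeric sketch of the classical P-Δ moment amplification for a single sway column; the standard amplified first-order formula stands in for the paper's higher-order element formulation, and all section and load values are hypothetical.

```python
import math

# Illustrative second-order P-Delta moment amplification for a single sway
# column (classical amplified first-order analysis, not the paper's
# higher-order element formulation).  All numbers are hypothetical.
E, I, L, K = 200e9, 8.0e-5, 4.0, 2.0    # Pa, m^4, m, effective-length factor
P, M1 = 6.0e5, 1.2e5                    # axial load (N), first-order moment (N*m)

P_cr = math.pi**2 * E * I / (K * L)**2  # Euler buckling load of the sway column
amp = 1.0 / (1.0 - P / P_cr)            # classical moment amplification factor
M2 = amp * M1                           # approximate second-order moment

print(f"P/P_cr = {P/P_cr:.3f}, amplification = {amp:.3f}, M2 = {M2/1e3:.1f} kN*m")
```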

Relevance: 30.00%

Abstract:

A computational technique covering the full range of second-order inelastic behaviour of steel-concrete composite structures is not readily available, and this hinders the development and application of the performance-based design approach for composite structures. To this end, this paper presents an advanced computational technique, a higher-order element with refined plastic hinges, to capture the full-range behaviour of an entire steel-concrete composite structure. Moreover, this paper presents an efficient and economical cross-section analysis to evaluate the section capacity of non-uniform and arbitrary composite sections under combined axial and bending actions. Based on a single algorithm, it can accurately and efficiently evaluate a nearly continuous interaction capacity curve from decompression to pure bending, an important but highly nonlinear capacity range. Hence, this cross-section analysis provides a simple but general algorithm for the design approach. In summary, the present nonlinear computational technique can simulate both material and geometric nonlinearities of composite structures in an accurate, efficient and reliable fashion, including partial shear connection and gradual yielding at the pre-yield stage, plasticity and the strain-hardening effect due to axial-bending interaction at the post-yield stage, load redistribution, the second-order P-δ and P-Δ effects, and stiffness and strength deterioration. Because of its reliable and accurate behavioural evaluation, the present technique can be extended to the design of high-strength composite structures and potentially to fibre-reinforced concrete structures.
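As a rough illustration of what such a cross-section analysis computes, the following sketch traces an axial force-moment (N-M) interaction curve for a rectangular section with two steel layers by sweeping the neutral-axis depth over a fiber (layer) discretization. It is a generic textbook-style construction under rigid-plastic stress-block assumptions, not the paper's algorithm, and every dimension and strength below is an assumption.

```python
import numpy as np

# Minimal fiber (layer) section sketch: N-M interaction curve for a
# rectangular concrete section with two steel layers, rigid-plastic stress
# blocks, compression positive.  All values are assumptions.
b, h = 0.3, 0.5                  # section width and depth (m)
fc, fy = 30e6, 400e6             # concrete and steel strengths (Pa)
As = 1.5e-3                      # steel area per layer (m^2)
ys = np.array([0.20, -0.20])     # steel layer positions from centroid (m)

fibers_y = np.linspace(-h/2, h/2, 200)   # concrete layer centroids
dA = b * h / fibers_y.size               # area of each concrete layer

for c in np.linspace(1e-4, 2*h, 12):     # sweep neutral-axis depth from top
    y_na = h/2 - c                       # neutral-axis position from centroid
    comp = fibers_y > y_na               # concrete fibers in compression
    Nc = np.sum(comp) * dA * 0.85 * fc   # concrete stress-block force
    Mc = np.sum(fibers_y[comp] * dA * 0.85 * fc)
    sig_s = np.where(ys > y_na, fy, -fy) # steel fully yielded on either side
    Ns, Ms = np.sum(As * sig_s), np.sum(As * sig_s * ys)
    print(f"c={c:5.3f} m  N={(Nc+Ns)/1e3:9.1f} kN  M={(Mc+Ms)/1e3:8.1f} kN*m")
```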

Relevance: 30.00%

Abstract:

Recent axiomatic derivations of the maximum entropy principle from consistency conditions are critically examined. We show that proper application of consistency conditions alone allows a wider class of functionals, essentially of the form ∫dx p(x)[p(x)/g(x)]^s for some real number s, to be used for inductive inference, and the commonly used form −∫dx p(x) ln[p(x)/g(x)] is only a particular case. The role of the prior density g(x) is clarified. It is possible to regard it as a geometric factor, describing the coordinate system used; it does not represent information of the same kind as obtained by measurements on the system in the form of expectation values.
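The relation between this wider class of functionals and the usual relative-entropy form can be checked numerically for discrete distributions: since F_s(p, g) = Σ p(p/g)^s equals 1 at s = 0, the quantity (F_s − 1)/s tends to Σ p ln(p/g) as s → 0. A small sketch, with arbitrarily chosen distributions:

```python
import numpy as np

# Discrete illustration of the wider class of functionals:
#   F_s(p, g) = sum_x p(x) * (p(x)/g(x))**s.
# The familiar relative-entropy form appears as the s -> 0 limit of
# (F_s - 1)/s, since the derivative of F_s at s = 0 is sum p*ln(p/g).
p = np.array([0.5, 0.3, 0.2])
g = np.array([1/3, 1/3, 1/3])   # prior density (here a uniform "coordinate" factor)

kl = np.sum(p * np.log(p / g))
for s in [1.0, 0.1, 0.01, 0.001]:
    F = np.sum(p * (p / g)**s)
    print(f"s={s:6.3f}  (F_s - 1)/s = {(F - 1)/s:.6f}")
print(f"sum p*ln(p/g)         = {kl:.6f}")
```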

Relevance: 30.00%

Abstract:

In this paper, a relative velocity approach is used to analyze the capturability of a geometric guidance law. Point-mass models are assumed for both the missile and the target, and the speeds of both are assumed to remain constant throughout the engagement. Lateral acceleration, obtained from the guidance law, is applied to change the path of the missile. The kinematic equations for engagements in the horizontal plane are derived in the relative velocity space. Analytical results for the capture region are obtained for non-maneuvering and maneuvering targets: for non-maneuvering targets a constant navigation gain suffices to intercept the target, while for maneuvering targets a time-varying navigation gain is needed. These results are then verified through numerical simulations.
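For readers who want to experiment with such engagements, here is a minimal planar constant-speed simulation sketch. The lateral-acceleration command is a generic proportional-navigation-style law standing in for the geometric guidance law analysed in the paper, and all initial conditions and the navigation gain are assumptions.

```python
import numpy as np

# Toy planar engagement: constant-speed point-mass missile and target.
# The command a = N * Vc * (LOS rate) is a generic PN-style law used only
# to illustrate the setup; it is not the paper's geometric guidance law.
dt, N = 0.01, 4.0
rm, vm_hdg, vm = np.array([0.0, 0.0]), np.radians(30.0), 300.0        # missile
rt, vt_hdg, vt = np.array([5000.0, 2000.0]), np.radians(180.0), 200.0 # target

for step in range(100000):
    los = rt - rm
    r = np.linalg.norm(los)
    if r < 10.0:
        print(f"intercept at t = {step*dt:.2f} s")
        break
    v_m = vm * np.array([np.cos(vm_hdg), np.sin(vm_hdg)])
    v_t = vt * np.array([np.cos(vt_hdg), np.sin(vt_hdg)])
    v_rel = v_t - v_m
    los_rate = (los[0]*v_rel[1] - los[1]*v_rel[0]) / r**2  # d(LOS angle)/dt
    vc = -np.dot(v_rel, los) / r                           # closing speed
    a_lat = N * vc * los_rate                              # lateral accel command
    vm_hdg += (a_lat / vm) * dt   # constant speed: acceleration only turns heading
    rm, rt = rm + v_m * dt, rt + v_t * dt
else:
    print("no intercept within simulated time")
```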

Relevance: 30.00%

Abstract:

Radiation therapy (RT) currently plays a significant role in the curative treatment of several cancers. External beam RT is carried out mostly using megavoltage beams from linear accelerators. Tumour eradication and normal tissue complications correlate with the dose absorbed in tissues. This dependence is normally steep, so it is crucial that the actual dose within the patient corresponds accurately to the planned dose. All factors in an RT procedure contain uncertainties, requiring strict quality assurance. From the hospital physicist's point of view, technical quality control (QC), dose calculations and methods for verifying the correct treatment location are the most important subjects.

The most important factor in technical QC is verifying that the radiation production of an accelerator, called output, is within narrow acceptable limits. Output measurements are carried out according to a locally chosen dosimetric QC program defining the measurement time interval and action levels. Dose calculation algorithms must be configured for the accelerators using measured beam data, and the uncertainty of such data sets the limit on the best achievable calculation accuracy. All these dosimetric measurements require considerable experience, are labour-intensive, take up resources needed for treatments, and are prone to several random and systematic sources of error. Appropriate verification of the treatment location is more important in intensity-modulated radiation therapy (IMRT) than in conventional RT, because of the steep dose gradients produced within, or close to, healthy tissues located only a few millimetres from the targeted volume.

This thesis investigated the quality of dosimetric measurements, the efficacy of dosimetric QC programs, the verification of measured beam data, and the effect of positional errors on the dose received by the major salivary glands in head and neck IMRT. A method was developed for estimating the effect of different dosimetric QC programs on the overall uncertainty of dose, and data were provided to facilitate the choice of a sufficient QC program. The method takes into account local output stability and the reproducibility of the dosimetric QC measurements; a method based on model fitting of the QC measurement results was proposed for estimating both of these factors. The reduction of random measurement errors and the optimization of the QC procedure were also investigated, and a method and suggestions were presented for these purposes. The accuracy of beam data was evaluated in Finnish RT centres, a sufficient accuracy level was estimated for the beam data, and a method based on the use of reference beam data was developed for the QC of beam data. Dosimetric and geometric accuracy requirements were evaluated for head and neck IMRT when the function of the major salivary glands is to be spared; these criteria are based on the dose response obtained for the glands.

Random measurement errors could be reduced, enabling action levels to be lowered and the measurement time interval to be prolonged from 1 month to as much as 6 months while maintaining dose accuracy. The combined effect of the proposed methods, suggestions and criteria was found to help avoid maximal dose errors of up to about 8%. In addition, their use may make the strictest recommended overall dose accuracy level of 3% (1 SD) achievable.
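The model-fitting idea for QC measurements can be pictured with a small sketch: fit a drift-plus-noise model to a series of output measurements and read off the systematic drift and the random measurement error separately. The linear drift model and the synthetic monthly data below are assumptions, not the thesis's actual estimator.

```python
import numpy as np

# Minimal sketch of fitting a model to dosimetric QC output measurements to
# separate systematic output drift from random measurement error.  The
# linear drift model and synthetic data are illustrative assumptions.
rng = np.random.default_rng(1)
t = np.arange(24)                                   # months
true_drift = 0.08                                   # % output change per month
y = 100.0 + true_drift * t + rng.normal(0.0, 0.3, t.size)  # measured output (%)

A = np.vstack([np.ones_like(t), t]).T
coef, res, *_ = np.linalg.lstsq(A, y, rcond=None)   # least-squares fit
resid = y - A @ coef
print(f"fitted drift      : {coef[1]:+.3f} %/month")
print(f"random error (SD) : {resid.std(ddof=2):.3f} %")  # ddof=2: two fitted params
```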

Relevance: 30.00%

Abstract:

A methodology is presented for reliability-based optimum design of reinforced soil structures subjected to horizontal and vertical sinusoidal excitation, based on the pseudo-dynamic approach. The tensile strength of reinforcement required to maintain stability is computed using a logarithmic spiral failure mechanism. The backfill soil properties and the geometric and strength properties of the reinforcement are treated as random variables. The effects of parameters such as the soil friction angle, horizontal and vertical seismic accelerations, shear and primary wave velocities, and amplification factors for seismic acceleration on the component and system probabilities of failure, in relation to the tension and pullout capacities of the reinforcement, are discussed. To evaluate the validity of the present formulation, static and seismic reinforcement force coefficients computed by the present method are compared with those given by other authors. The importance of the shear wave velocity in estimating the reliability of the structure is highlighted. Ditlevsen's bounds on the system probability of failure are also computed by taking into account the correlations between the three failure modes, which are evaluated using the direction cosines of the tangent planes at the most probable points of failure. (c) 2009 Elsevier Ltd. All rights reserved.
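For illustration, the following sketch evaluates Ditlevsen's bounds for three correlated failure modes under the usual first-order bivariate normal model; the reliability indices and mode correlations (which the paper derives from direction cosines at the most probable failure points) are hypothetical numbers.

```python
import numpy as np
from scipy.stats import norm, multivariate_normal

# Ditlevsen's bounds on the system probability of failure for correlated
# failure modes, first-order bivariate normal model.  Reliability indices
# and correlations are hypothetical.
beta = np.array([2.5, 2.8, 3.0])          # e.g. tension / pullout / third mode
rho = np.array([[1.0, 0.6, 0.4],
                [0.6, 1.0, 0.5],
                [0.4, 0.5, 1.0]])

P = norm.cdf(-beta)                        # individual mode failure probabilities

def p_joint(i, j):
    """P(F_i and F_j) under the bivariate normal model."""
    cov = [[1.0, rho[i, j]], [rho[i, j], 1.0]]
    return multivariate_normal(mean=[0, 0], cov=cov).cdf([-beta[i], -beta[j]])

n = beta.size
lower = P[0] + sum(max(0.0, P[i] - sum(p_joint(i, j) for j in range(i)))
                   for i in range(1, n))
upper = P.sum() - sum(max(p_joint(i, j) for j in range(i)) for i in range(1, n))
print(f"Ditlevsen bounds on P_f: [{lower:.2e}, {upper:.2e}]")
```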

Relevance: 30.00%

Abstract:

This paper presents an algorithm for solid model reconstruction from 2D sectional views based on a volume-based approach. None of the existing work on automatic reconstruction from 2D orthographic views has addressed sectional views in detail. The volume-based approach is believed to be better suited to handling different types of sectional views. It constructs the 3D solid by a boolean combination of elementary solids, which are formed by sweep operations on loops identified in the input views. The only adjustment to be made for the presence of sectional views is in the identification of the loops that form the elementary solids; the algorithm uses the engineering drawing conventions for sectional views to identify these loops correctly. The algorithm is simple and intuitive in nature. Results have been obtained for full sections, offset sections and half sections. Future work will address other types of sectional views, such as removed, revolved and broken-out sections. (C) 2004 Elsevier Ltd. All rights reserved.
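The sweep step at the heart of the volume-based approach can be sketched in a few lines: a closed loop identified in a view is extruded into an elementary solid, here as a plain vertex/face prism representation. Loop identification and the boolean combination of elementary solids are omitted, and the square loop is an arbitrary example.

```python
# Minimal sketch of the sweep step: extrude a closed planar loop (list of
# (x, y) points, counter-clockwise) along z into a prism, returned as
# vertex and face lists.  Not the paper's full reconstruction pipeline.
def sweep_loop(loop, depth):
    n = len(loop)
    bottom = [(x, y, 0.0) for x, y in loop]
    top = [(x, y, depth) for x, y in loop]
    vertices = bottom + top
    faces = [list(range(n))[::-1],        # bottom cap (reversed so normal faces -z)
             [n + i for i in range(n)]]   # top cap (normal faces +z)
    for i in range(n):                    # side quads
        j = (i + 1) % n
        faces.append([i, j, n + j, n + i])
    return vertices, faces

verts, faces = sweep_loop([(0, 0), (2, 0), (2, 1), (0, 1)], depth=0.5)
print(f"{len(verts)} vertices, {len(faces)} faces")
```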

Relevance: 30.00%

Abstract:

We propose a method to compute a probably approximately correct (PAC) normalized histogram of observations with a refresh rate of Θ(1) time units per histogram sample on a random geometric graph with noise-free links. The delay in computation is Θ(√n) time units. We further extend our approach to a network with noisy links. While the refresh rate remains Θ(1) time units per sample, the delay increases to Θ(√n log n). The number of transmissions in both cases is Θ(n) per histogram sample. The achieved Θ(1) refresh rate for PAC histogram computation is a significant improvement over the refresh rate of Θ(1/log n) for histogram computation in noiseless networks. We achieve this by operating in the supercritical thermodynamic regime, where large pathways for communication build up but the network may have more than one component. The largest component, however, will contain an arbitrarily large fraction of the nodes, enabling approximate computation of the histogram to the desired level of accuracy. Operation in the supercritical thermodynamic regime also reduces energy consumption. A key step in the proof of our achievability result is the construction of a connected component having bounded degree and any desired fraction of nodes. This construction may also prove useful in other communication settings on the random geometric graph.
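The supercritical behaviour exploited above is easy to observe empirically: with a radius of order 1/√n chosen above the percolation threshold but below the ~√(log n/n) connectivity radius, the random geometric graph is typically disconnected, yet its largest component already holds most of the nodes. A small sketch using networkx; n and the constant 1.5 are arbitrary choices.

```python
import math
import networkx as nx

# Supercritical thermodynamic regime: radius ~ c/sqrt(n) with mean degree
# above the percolation threshold, but below the connectivity radius
# ~ sqrt(log n / n).  Expect several components, one of them giant.
n = 2000
r = 1.5 * math.sqrt(1.0 / n)
G = nx.random_geometric_graph(n, r)      # uniform nodes on the unit square
giant = max(nx.connected_components(G), key=len)
print(f"components: {nx.number_connected_components(G)}, "
      f"giant-component fraction: {len(giant) / n:.3f}")
```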

Relevance: 30.00%

Abstract:

Wireless sensor networks can often be viewed in terms of a uniform deployment of a large number of nodes on a region in Euclidean space, e.g., the unit square. After deployment, the nodes self-organise into a mesh topology. In a dense, homogeneous deployment, a frequently used approximation is to take the hop distance between nodes to be proportional to the Euclidean distance between them. In this paper, we analyse the performance of this approximation. We show that nodes with a certain hop distance from a fixed anchor node lie within a certain annulus with probability approaching unity as the number of nodes n → ∞. We take a uniform, i.i.d. deployment of n nodes on a unit square, and consider the geometric graph on these nodes with radius r(n) = c√(ln n / n). We show that, for a given hop distance h of a node from a fixed anchor on the unit square, the Euclidean distance lies within [(1−ε)(h−1)r(n), h·r(n)], for ε > 0, with probability approaching unity as n → ∞. This result shows that a node with hop distance h from the anchor is more likely to lie within this annulus, centred at the anchor location and of width roughly r(n), than close to a circle whose radius is exactly proportional to h. We show that if the radius r of the geometric graph is fixed, the convergence of the probability is exponentially fast. Similar results hold for a randomised lattice deployment. We provide simulation results that illustrate the theory, and serve to show how large n needs to be for the asymptotics to be useful.
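A quick simulation makes the annulus statement tangible: pick an anchor, compute hop distances by breadth-first search, and compare the Euclidean distances of nodes at a fixed hop count against the predicted annulus. The upper end d ≤ h·r(n) holds deterministically by the triangle inequality; the interesting part is how close the lower end gets to (h−1)r(n). The parameters n, c and h below are arbitrary choices.

```python
import math
import networkx as nx

# Compare Euclidean distances of hop-h nodes from an anchor against the
# annulus [(1-eps)(h-1)r(n), h*r(n)] predicted above.
n, c, h = 4000, 1.0, 3
r = c * math.sqrt(math.log(n) / n)
G = nx.random_geometric_graph(n, r)      # uniform i.i.d. nodes on the unit square
pos = nx.get_node_attributes(G, "pos")
hops = nx.single_source_shortest_path_length(G, 0)   # BFS from anchor node 0
ax, ay = pos[0]
ds = [math.hypot(pos[v][0] - ax, pos[v][1] - ay)
      for v, k in hops.items() if k == h]
if ds:
    print(f"r(n) = {r:.4f}; hop-{h} distances lie in [{min(ds):.4f}, {max(ds):.4f}]")
    print(f"predicted annulus ~ [{(h - 1) * r:.4f}, {h * r:.4f}]"
          f"  (lower end up to a (1 - eps) factor)")
```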

Relevance: 30.00%

Abstract:

We propose a distribution-free approach to the study of random geometric graphs. The distribution of vertices follows a Poisson point process with intensity function n·f(·), where n ∈ ℕ and f is a probability density function on ℝ^d. A vertex located at x connects via directed edges to other vertices that are within a cut-off distance r_n(x). We prove strong law results for (i) the critical cut-off function such that, almost surely, the graph contains no node with out-degree zero for sufficiently large n, and (ii) the maximum and minimum vertex degrees. We also provide a characterization of the cut-off function for which the number of nodes with out-degree zero converges in distribution to a Poisson random variable. We illustrate this result for a class of densities with compact support that have at most polynomial rates of decay to zero. Finally, we state a sufficient condition for an enhanced version of the above graph to be almost surely connected eventually.
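The Poisson limit for the out-degree-zero count can be probed by Monte Carlo in the simplest setting of a uniform density and a constant cut-off: with the classical choice r_n = √((ln n + t)/(πn)), the expected number of isolated nodes is about e^(−t). The sketch below uses toroidal distances to sidestep boundary effects and does not model the paper's location-dependent cut-off r_n(x); all parameters are arbitrary.

```python
import numpy as np

# Monte Carlo: isolated-node count for a Poisson point process of intensity
# n on the unit torus with constant cut-off r_n = sqrt((ln n + t)/(pi*n)).
# Expected count is about e^(-t).  Uniform-density special case only.
rng = np.random.default_rng(0)
n, t, trials = 1000, 0.5, 200
r = np.sqrt((np.log(n) + t) / (np.pi * n))
counts = []
for _ in range(trials):
    m = rng.poisson(n)                           # Poisson number of points
    pts = rng.random((m, 2))
    diff = np.abs(pts[:, None, :] - pts[None, :, :])
    diff = np.minimum(diff, 1.0 - diff)          # periodic (torus) distances
    d2 = np.sum(diff**2, axis=-1)
    np.fill_diagonal(d2, np.inf)
    counts.append(int(np.sum(d2.min(axis=1) > r * r)))  # no neighbour within r
print(f"mean isolated-node count: {np.mean(counts):.2f}   (e^-t = {np.exp(-t):.2f})")
```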

Relevance: 30.00%

Abstract:

The curvature-related locking phenomena in the out-of-plane deformation of Timoshenko and Euler-Bernoulli curved beam elements are demonstrated, and a novel approach is proposed to circumvent them. Both flexure and torsion locking phenomena are observed in the Timoshenko beam, and torsion locking alone in the Euler-Bernoulli beam. Two locking-free curved beam finite element models are developed using coupled polynomial displacement field interpolations to eliminate these locking effects. The coupled polynomial interpolation fields are derived independently for the Timoshenko and Euler-Bernoulli beam elements using the governing equations. The presence of penalty terms in the coupled displacement fields incorporates the flexure-torsion and flexure-shear coupling effects in an approximate manner and produces no spurious constraints in the extreme geometric limits of flexure, torsion and shear stiffness. The proposed coupled polynomial finite element models reduce, as special cases, to the conventional Timoshenko and Euler-Bernoulli beam elements, respectively. These models are shown to perform consistently over a wide range of flexure-to-shear (EI/GA) and flexure-to-torsion (EI/GJ) stiffness ratios and are inherently devoid of flexure, torsion and shear locking phenomena. The efficacy, accuracy and reliability of the proposed models in straight and curved beam applications are demonstrated through numerical examples. (C) 2012 Elsevier B.V. All rights reserved.
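Locking itself is easy to reproduce on the simplest relative of these elements: a straight two-node linear Timoshenko cantilever, where full integration of the shear term over-stiffens slender meshes and one-point reduced integration recovers the thin-beam answer. This classic demonstration illustrates the phenomenon only; it is not the paper's coupled-polynomial curved-beam formulation, and all properties are arbitrary.

```python
import numpy as np

# Shear locking in the 2-node linear Timoshenko beam: compare full (2-point)
# vs reduced (1-point) Gauss integration of the shear stiffness on a slender
# cantilever with a transverse tip load.  DOFs per node: [w, theta].
E, G, ks = 200e9, 77e9, 5/6
b, h, L, P, Ne = 0.02, 0.01, 1.0, 1.0, 10
A, I = b * h, b * h**3 / 12
le = L / Ne

def k_elem(n_gauss):
    Kb = E * I / le * np.array([[0, 0, 0, 0], [0, 1, 0, -1],
                                [0, 0, 0, 0], [0, -1, 0, 1]])
    pts = {1: ([0.0], [2.0]), 2: ([-1/3**0.5, 1/3**0.5], [1.0, 1.0])}[n_gauss]
    Ks = np.zeros((4, 4))
    for xi, wgt in zip(*pts):
        N1, N2 = (1 - xi) / 2, (1 + xi) / 2
        Bs = np.array([-1/le, -N1, 1/le, -N2])          # shear strain w' - theta
        Ks += ks * G * A * wgt * (le / 2) * np.outer(Bs, Bs)
    return Kb + Ks

for ng, label in [(2, "full"), (1, "reduced")]:
    K = np.zeros((2 * (Ne + 1), 2 * (Ne + 1)))
    for e in range(Ne):
        dofs = [2*e, 2*e + 1, 2*e + 2, 2*e + 3]
        K[np.ix_(dofs, dofs)] += k_elem(ng)
    f = np.zeros(2 * (Ne + 1)); f[2 * Ne] = P   # tip transverse load
    w = np.linalg.solve(K[2:, 2:], f[2:])       # clamp w, theta at node 0
    print(f"{label:7s} integration: tip deflection = {w[-2]:.6e} m")
print(f"Euler-Bernoulli reference  : {P * L**3 / (3 * E * I):.6e} m")
```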

Relevance: 30.00%

Abstract:

Adhesives are widely used in the assembly of aerospace and automotive structures due to their ability to join dissimilar materials, reduce stress concentrations, and improve fatigue resistance. The mechanical behavior of adhesive joints can be studied either with analytical models or by conducting mechanical tests. However, the complexity arising from multiple interfaces, layers with different properties, material and geometric nonlinearity, and the joint's three-dimensional nature makes it difficult to obtain an overall system of governing equations that predicts the joint behavior. Experiments, on the other hand, are often time consuming and expensive because of the number of parameters involved. Finite element analysis (FEA) has been widely used in recent years to overcome these limitations. The work presented in this paper involves the finite element modeling and analysis of a composite single lap joint in which the adhesive-adherend interface region was modeled using connector elements. The computed stresses were compared with experimental stresses obtained using the digital image correlation technique, and the results were in agreement. Further, the failure load predicted by FEA was found to be close to the actual failure load obtained from mechanical tests.

Relevance: 30.00%

Abstract:

We quantize the space of 2-charge fuzzballs in IIB supergravity on K3. The resulting entropy precisely matches the D1-D5 black hole entropy, including the specific numerical coefficient. A partial match (i.e., a smaller coefficient) was found by Rychkov a decade ago using the Lunin-Mathur subclass of solutions; we use a simple observation to generalize his approach to the full moduli space of K3 fuzzballs, filling a small gap in the literature.

Relevance: 30.00%

Abstract:

The optimal power-delay tradeoff is studied for a time-slotted, independently and identically distributed fading point-to-point link, with perfect channel state information at both transmitter and receiver, and with random packet arrivals to the transmitter queue. It is assumed that the transmitter can control the number of packets served in a slot by controlling the transmit power. The optimal tradeoff between average power and average delay is analyzed for stationary and monotone transmitter policies. For such policies, an asymptotic lower bound on the minimum average delay of the packets is obtained as the average transmitter power approaches the minimum average power required for transmitter queue stability. The asymptotic lower bound is obtained from geometric upper bounds on the stationary distribution of the queue length; this approach also leads to an intuitive explanation of the asymptotic behavior of average delay. The asymptotic lower bounds, along with previously known asymptotic upper bounds, are used to identify three new cases where the order of the asymptotic behavior differs from that obtained from a previously considered approximate model, in which the transmit power is a strictly convex function of the real-valued service batch size for every fade state.
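The flavour of the tradeoff can be reproduced with a toy simulation of a fading link under a stationary monotone policy, estimating average power and, via Little's law, average delay. The fade distribution, power cost and threshold policy below are illustrative assumptions, not the paper's model details.

```python
import numpy as np

# Toy fading-link queue: a stationary monotone policy serves more packets
# on good fades, and we estimate average power and (via Little's law)
# average delay.  All distributions and the policy are assumptions.
rng = np.random.default_rng(7)
T, lam = 200000, 0.4                        # slots, mean arrivals per slot
q, q_sum, p_sum = 0, 0.0, 0.0
for _ in range(T):
    q += rng.poisson(lam)                   # random packet arrivals
    fade = rng.exponential(1.0)             # i.i.d. channel gain, known at Tx
    s = min(q, 1 if fade < 1.0 else 2)      # monotone policy: more on good fades
    p_sum += (2.0**s - 1.0) / fade if s else 0.0   # convex power cost per slot
    q -= s
    q_sum += q
print(f"avg power = {p_sum/T:.3f}, avg queue = {q_sum/T:.2f}, "
      f"avg delay ~ {q_sum/T/lam:.2f} slots (Little's law)")
```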

Relevance: 30.00%

Abstract:

DNA microarrays provide such a huge amount of data that unsupervised methods are required to reduce the dimension of the data set and to extract meaningful biological information. This work shows that Independent Component Analysis (ICA) is a promising approach for the analysis of genome-wide transcriptomic data. The paper first presents an overview of the most popular algorithms for performing ICA. These algorithms are then applied to a microarray breast-cancer data set. Issues concerning the application of ICA and the evaluation of the biological relevance of the results are discussed. This study indicates that ICA significantly outperforms Principal Component Analysis (PCA).
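As a pointer to how such a comparison can be set up, the sketch below mixes two independent non-Gaussian sources (standing in for expression programs; the actual breast-cancer data set is not used) and checks how well scikit-learn's FastICA and PCA recover them.

```python
import numpy as np
from sklearn.decomposition import FastICA, PCA

# Synthetic ICA-vs-PCA comparison: mix two independent non-Gaussian sources
# and measure how well each method's components correlate with the truth.
rng = np.random.default_rng(0)
n = 2000
S = np.c_[rng.laplace(size=n), np.sign(rng.standard_normal(n))]  # sources
A = np.array([[1.0, 0.6], [0.4, 1.0]])                           # mixing matrix
X = S @ A.T                                                      # observed data

S_ica = FastICA(n_components=2, random_state=0).fit_transform(X)
S_pca = PCA(n_components=2).fit_transform(X)

def best_corr(est, true):
    """Max |correlation| of each true source with any estimated component."""
    c = np.abs(np.corrcoef(est.T, true.T)[:2, 2:])
    return c.max(axis=0)

print("ICA recovery |corr|:", np.round(best_corr(S_ica, S), 3))
print("PCA recovery |corr|:", np.round(best_corr(S_pca, S), 3))
```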