944 results for Fatigue. Composites. Modular Network. S-N Curves Probability. Weibull Distribution


Relevance:

100.00%

Publisher:

Abstract:

Properties of cast aluminium matrix composites are greatly influenced by the distribution of the reinforcing phase in the matrix and by matrix microstructural length scales such as grain size, dendrite arm spacing, and the size and morphology of secondary matrix phases. Earlier workers have shown that SiC reinforcements can act as heterogeneous nucleation sites for Si during solidification of Al-Si-SiC composites. The present study aims at a quantitative understanding of the effect of SiC reinforcements on secondary matrix phases, namely eutectic Si, during solidification of A356 Al-SiC composites. The effect of the volume fraction of SiC particulate on the size and shape of eutectic Si has been studied at different cooling rates. Results indicate that an increase in SiC volume fraction leads to a reduction in the size of eutectic Si and also changes its morphology from needle-like to equiaxed. This is attributed to the heterogeneous nucleation of eutectic Si on SiC particles. However, SiC particles are found to have negligible influence on dendrite arm spacing (DAS). Under all the solidification conditions studied in the present investigation, SiC particles are found to be rejected by the growing dendrites. (C) 1999 Elsevier Science Ltd. All rights reserved.

Relevance:

100.00%

Publisher:

Abstract:

Because of the intrinsic difficulty of determining distributions for wave periods, previous studies on wave-period distribution models have not taken nonlinearity into account and have not performed well in describing and statistically analysing the probability density distribution of ocean waves. In this study, a statistical model of random waves is developed using the Stokes wave theory of water-wave dynamics. In addition, a new nonlinear probability distribution function for the wave period is presented, with spectral density width and nonlinear wave steepness as parameters, which is physically more reasonable. The magnitude of the wave steepness determines the intensity of the nonlinear effect, while the spectral width only changes the energy distribution. The wave steepness is found to be an important parameter not only dynamically but also statistically. The value of the wave steepness reflects the degree to which the wave-period distribution skews from the Cauchy distribution, and it also describes the variation in the distribution function, which resembles that of the wave surface elevation distribution and the wave height distribution. We found that the distribution curves skew leftward and upward as the wave steepness increases. Wave-period observations from the SZFII-1 buoy, moored off the coast of Weihai (37°27.6′ N, 122°15.1′ E), China, are used to verify the new distribution. The coefficient of correlation between the new distribution and the buoy data at different spectral widths (ν = 0.3-0.5) is within the range 0.9686 to 0.9917. In addition, the Longuet-Higgins (1975) and Sun (1988) distributions and the new distribution presented in this work are compared. The validations and comparisons indicate that the new nonlinear probability density distribution fits the buoy measurements better than the Longuet-Higgins and Sun distributions do. We believe that adoption of the new wave-period distribution would improve traditional statistical wave theory.
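The validation step described above, correlating a candidate period density against binned buoy observations, can be sketched as follows. The model density used here is a simple Rayleigh-type placeholder, not the paper's nonlinear distribution (which depends on spectral width and wave steepness), and the "observations" are synthetic:

```python
import numpy as np

# Sketch of the validation procedure: bin observed wave periods, evaluate a
# candidate period PDF at the bin centres, and report the correlation
# coefficient between model and data. The Rayleigh-type density below is a
# placeholder standing in for the paper's nonlinear distribution.

rng = np.random.default_rng(0)
periods = rng.rayleigh(scale=5.0, size=5000)   # synthetic "buoy" periods, s

# Empirical density
counts, edges = np.histogram(periods, bins=30, density=True)
centres = 0.5 * (edges[:-1] + edges[1:])

# Candidate model density (placeholder)
scale = 5.0
model = (centres / scale**2) * np.exp(-centres**2 / (2 * scale**2))

# Correlation between the model and the empirical density, as in the
# comparison against the Longuet-Higgins and Sun distributions
r = np.corrcoef(counts, model)[0, 1]
print(f"model-data correlation: {r:.4f}")
```

A poor candidate distribution would show up here as a visibly lower correlation at every spectral width.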

Relevance:

100.00%

Publisher:

Abstract:

We propose a probabilistic approach to determining the impact of changes in object-oriented programs. The approach predicts, for a given change in a class of the system, the set of other classes potentially affected by that change. The prediction takes the form of a probability that depends, on the one hand, on the interactions between classes expressed as numbers of invocations and, on the other, on relationships extracted from the source code. These relationships are extracted automatically by reverse engineering. To implement our approach, we propose a method based on Bayesian networks. After a learning phase, these networks predict the set of classes affected by a change. The proposed probabilistic approach is evaluated in two distinct scenarios involving several types of changes applied to different systems. For systems with historical data, learning was performed on earlier versions. For systems without enough data on changes in their previous versions, learning was performed using data extracted from other systems.
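The learning phase described above can be illustrated with a frequency-count stand-in for the Bayesian-network model: estimate, from historical change sets, the probability that one class is impacted when another changes. The class names and change histories below are invented for illustration only:

```python
from collections import defaultdict

# Toy sketch of probabilistic change-impact prediction: count how often two
# classes changed together in past versions, and turn that into an estimate
# of P(b impacted | a changed). This is a simplification of the Bayesian
# network used in the actual approach.

histories = [                     # each entry: classes changed together
    {"Order", "Invoice"},
    {"Order", "Invoice", "Customer"},
    {"Order", "Shipping"},
    {"Customer"},
]

changed_with = defaultdict(int)   # co-change counts per ordered pair
changed = defaultdict(int)        # change counts per class
for h in histories:
    for a in h:
        changed[a] += 1
        for b in h:
            if b != a:
                changed_with[(a, b)] += 1

def impact_probability(a, b):
    """P(b impacted | a changed), estimated from co-change frequency."""
    return changed_with[(a, b)] / changed[a] if changed[a] else 0.0

print(impact_probability("Order", "Invoice"))   # Invoice co-changed in 2 of 3
```

In the real approach these conditional probabilities are learned jointly by the Bayesian network, together with invocation counts and source-code relationships.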

Relevance:

100.00%

Publisher:

Abstract:

The present study concerns the characterization of probability distributions using the residual entropy function. The concept of entropy is extensively used in the literature as a quantitative measure of the uncertainty associated with a random phenomenon. The commonly used lifetime models in reliability theory are the exponential, Pareto, beta, Weibull and gamma distributions. Several characterization theorems are obtained for these models using reliability concepts such as the failure rate, the mean residual life function, the vitality function and the variance residual life function. Most work on the characterization of distributions in the reliability context centres on the failure rate or the residual life function. An important aspect of the study of entropy is that of locating distributions for which Shannon's entropy is maximal subject to certain restrictions on the underlying random variable. The geometric vitality function is introduced and its properties are examined; it is established that the geometric vitality function determines the distribution uniquely. The problem of averaging the residual entropy function is examined, and truncated versions of higher-order entropies are defined. It is established that the residual entropy function determines the distribution uniquely and that its constancy is characteristic of the geometric distribution.
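The final characterization can be checked numerically: for a geometric random variable, the entropy of the residual distribution (the distribution of X - t given X > t) does not depend on the truncation point t, a consequence of memorylessness. The parameter value and truncation length below are arbitrary:

```python
import math

# Numerical check that the residual entropy of a geometric distribution is
# constant in the truncation point t, as the characterization above states.

p = 0.3  # success probability; P(X = k) = (1 - p)**(k - 1) * p, k = 1, 2, ...

def residual_entropy(t, terms=500):
    """Shannon entropy of X given X > t, for integer t >= 0."""
    tail = (1 - p) ** t                     # P(X > t)
    h = 0.0
    for k in range(t + 1, t + terms):
        pk = (1 - p) ** (k - 1) * p / tail  # conditional mass at k
        h -= pk * math.log(pk)
    return h

values = [residual_entropy(t) for t in (0, 5, 20)]
print(values)  # identical up to truncation/rounding error
```

Because the conditional distribution of X - t given X > t is again geometric with the same p, every value in the list equals the entropy of the original distribution.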

Relevance:

100.00%

Publisher:

Abstract:

Energy storage is a potential alternative to conventional network reinforcement of the low voltage (LV) distribution network to ensure the grid's infrastructure remains within its operating constraints. This paper presents a study on the control of such storage devices, owned by distribution network operators. A deterministic model predictive control (MPC) controller and a stochastic receding horizon controller (SRHC) are presented, where the objective is to achieve the greatest peak reduction in demand, for a given storage device specification, taking into account the high level of uncertainty in the prediction of LV demand. The algorithms presented in this paper are compared to a standard set-point controller and benchmarked against a control algorithm with a perfect forecast. A specific case study, using storage on the LV network, is presented, and the results of each algorithm are compared. A comprehensive analysis is then carried out simulating a large number of LV networks of varying numbers of households. The results show that the performance of each algorithm is dependent on the number of aggregated households. However, on a typical aggregation, the novel SRHC algorithm presented in this paper is shown to outperform each of the comparable storage control techniques.
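The optimisation at the heart of such a controller can be sketched as a small linear program: given a (here assumed perfect) demand forecast and a storage device, choose charge/discharge to minimise the peak net demand. The horizon, device sizes and demand profile are illustrative only, and this deterministic single-shot solve stands in for the receding-horizon loop:

```python
import numpy as np
from scipy.optimize import linprog

# Minimal peak-shaving LP: decision vector x = [b_1..b_T, p], where b_t > 0
# means discharge (kW) and p is the peak net demand to be minimised.

demand = np.array([2.0, 3.0, 6.0, 8.0, 5.0, 2.0])  # kW forecast over horizon
T = len(demand)
rate, capacity, soc0 = 2.0, 6.0, 3.0               # kW, kWh, kWh (1 h steps)

c = np.zeros(T + 1)
c[-1] = 1.0                                         # objective: minimise p

A, b = [], []
for t in range(T):                                  # demand_t - b_t <= p
    row = np.zeros(T + 1); row[t] = -1.0; row[-1] = -1.0
    A.append(row); b.append(-demand[t])
for t in range(T):                                  # SOC stays in [0, capacity]
    row = np.zeros(T + 1); row[:t + 1] = 1.0
    A.append(row); b.append(soc0)                   # cannot discharge past empty
    A.append(-row); b.append(capacity - soc0)       # cannot charge past full

res = linprog(c, A_ub=np.array(A), b_ub=np.array(b),
              bounds=[(-rate, rate)] * T + [(0, None)])
print(f"peak reduced from {demand.max():.1f} to {res.x[-1]:.2f} kW")
```

In a receding-horizon controller this solve is repeated each step with an updated forecast, and the SRHC variant replaces the single forecast with demand scenarios.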

Relevance:

100.00%

Publisher:

Abstract:

Replacement and upgrading of assets in the electricity network require financial investment by the distribution and transmission utilities. The replacement and upgrading of network assets also represent an emissions impact, due to the carbon embodied in the materials used to manufacture network assets. This paper uses investment and asset data for the GB system for 2015-2023 to assess the suitability of using a proxy with peak demand data and network investment data to calculate the carbon impacts of network investments. The proxies are calculated on a regional basis and applied to calculate the embodied carbon associated with current network assets by DNO region. The proxies are also applied to peak demand data across the 2015-2023 period to estimate the expected levels of embodied carbon associated with network investment during this period. The suitability of these proxies in different contexts is then discussed, along with an initial scenario analysis to calculate the impact of avoiding or deferring network investments through distributed generation projects. The proxies were found to be effective in estimating the total embodied carbon of electricity system investment, allowing investment strategies in different regions of the GB network to be compared.
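The proxy idea above amounts to deriving a carbon intensity per unit of network investment from a reference asset base and applying it to regional figures. All numbers below are invented for illustration and are not the paper's GB data:

```python
# Illustrative proxy calculation: carbon intensity per unit of investment,
# derived from a known asset base, then applied regionally. Figures are
# hypothetical, not the paper's 2015-2023 GB data.

known_embodied_tco2 = 120_000    # embodied carbon of a reference asset base
known_investment_m = 400         # matching investment, GBP millions
proxy = known_embodied_tco2 / known_investment_m   # tCO2 per GBP million

regional_investment_m = {"Region A": 55, "Region B": 80}
estimates = {region: inv * proxy
             for region, inv in regional_investment_m.items()}
print(proxy, estimates)
```

A deferred investment scenario is then the same multiplication with the avoided investment, giving the embodied carbon that distributed generation would defer.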

Relevance:

100.00%

Publisher:

Abstract:

Replacement, expansion and upgrading of assets in the electricity network represent financial investment for the distribution utilities. Network Investment Deferral (NID) is a well-discussed benefit of wider adoption of Distributed Generation (DG), and there have been many attempts to quantify and evaluate the financial benefit for the distribution utilities. While the carbon benefits of NID are commonly mentioned, little attempt has been made to quantify these impacts. This paper explores the quantitative methods previously used to evaluate financial benefits in order to discuss the carbon impacts. These carbon impacts are important for companies owning DG equipment, both for internal reporting and for emissions-reduction ambitions. Here, a GB-wide approach is taken as a basis for discussing more regional and local methods to be used in future work. By investigating these principles, the paper offers a novel approach to quantifying carbon emissions from various DG technologies.

Relevance:

100.00%

Publisher:

Abstract:

An Al6061-20%Al2O3 powder metallurgy (PM) metal matrix composite (MMC) with a strongly clustered particle distribution is subjected to equal channel angular pressing (ECAP) at a temperature of 370 °C. The evolution of the homogeneity of the particle distribution in the material during ECAP is investigated by the quadrat method. The model proposed by Tan and Zhang [Mater Sci Eng 1998;244:80] for estimating the critical particle size required for a homogeneous particle distribution in PM MMCs is extended to the case of a combination of extrusion and ECAP. The applicability of the model for predicting the homogeneity of the particle distribution after extrusion and ECAP is discussed. It is shown that ECAP increases both the uniformity of the particle distribution and the fracture toughness.
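The quadrat method used above can be sketched as follows: divide the micrograph area into equal quadrats, count particles per quadrat, and compare the count variance with the Poisson (random) expectation. The particle coordinates here are synthetic, and the variance-to-mean ratio is one common homogeneity statistic among several:

```python
import numpy as np

# Quadrat-method sketch: grid the field of view, count particles per cell,
# and compute the variance-to-mean ratio of the counts. Coordinates are
# synthetic stand-ins for segmented particle positions in a micrograph.

rng = np.random.default_rng(1)
particles = rng.uniform(0.0, 1.0, size=(400, 2))   # normalised (x, y) positions

n = 5                                              # 5 x 5 grid of quadrats
ix = np.minimum((particles[:, 0] * n).astype(int), n - 1)
iy = np.minimum((particles[:, 1] * n).astype(int), n - 1)
counts = np.zeros((n, n))
np.add.at(counts, (ix, iy), 1)

# ~1 for a random (Poisson-like) pattern, > 1 for a clustered one,
# < 1 for a more uniform one
vmr = counts.var() / counts.mean()
print(f"variance-to-mean ratio: {vmr:.2f}")
```

Tracking this statistic across ECAP passes gives a quantitative trace of the homogenisation the abstract reports.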

Relevance:

100.00%

Publisher:

Abstract:

This work investigates the tensile behaviour of non-uniform fibres and fibrous composites. Wool fibres are used as an example of non-uniform fibres because their physical, morphological and geometrical properties vary greatly, not only between fibres but also within a fibre. The focus of this work is on the effect of both between-fibre and within-fibre diameter variations on fibre tensile behaviour. In addition, the fit of the Weibull distribution to non-brittle, non-uniform visco-elastic wool fibres is examined, and the Weibull model is developed further for non-uniform fibres with diameter variation along the fibre length. A novel model fibre composite is introduced to facilitate the investigation of the tensile behaviour of fibre-reinforced composites. This work first confirms that for processed wool, the coefficient of variation in break force can be predicted from that of minimum fibre diameters, and the prediction is better for longer fibres. This implies that even for processed wool, fibre breakage is closely associated with the occurrence of thin sections along a fibre, and damage to fibres during processing is not the main cause of fibre breakage. The effect of along-fibre diameter variation on the fibre tensile behaviour of scoured wool and mohair is examined next; only wet wool samples had been examined in the past. The extensions of individual segments of single non-uniform fibres are measured at different strain levels. An important finding is that the maximum extension (%), normally at the thinnest section, equals the average fibre extension (%) plus the diameter variation (CV%) among the fibre segments. This relationship has not been reported before; during a tensile test, it is only the average fibre extension that is measured. The third part of this work concerns the applicability of the Weibull distribution to the strength of non-uniform visco-elastic wool fibres. Little work has been done on wool fibres in this area, even though the Weibull model has been widely applied to many brittle fibres. An improved Weibull model incorporating within-fibre diameter variations has been developed for non-uniform fibres. This model predicts the gauge length effect more accurately than the conventional Weibull model. In studies of fibre-reinforced composites, ideal composite specimens are usually prepared and used in the experiments, and sample preparation has been a tedious process. A novel fibre-reinforced composite is developed and used in this work to investigate the tensile behaviour of fibre-reinforced composites. The results obtained from the novel composite specimens are consistent with those obtained from the normal specimens.
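The gauge-length effect that the improved model targets comes from the conventional weakest-link Weibull scaling, under which the characteristic strength falls with gauge length L as sigma(L) = sigma0 * (L0 / L)**(1/m) for modulus m. The parameter values below are illustrative, not fitted wool data:

```python
# Worked example of the conventional (uniform-fibre) Weibull gauge-length
# effect that the improved within-fibre-variation model refines.

m = 5.0          # Weibull modulus (assumed; low, as for variable fibres)
sigma0 = 200.0   # characteristic strength (MPa) at reference length L0
L0 = 20.0        # reference gauge length (mm)

def characteristic_strength(L):
    """Predicted characteristic strength (MPa) at gauge length L (mm)."""
    return sigma0 * (L0 / L) ** (1.0 / m)

# Doubling the gauge length lowers the characteristic strength, because a
# longer fibre is more likely to contain a critically thin section:
print(characteristic_strength(40.0))
```

The improved model modifies this prediction by letting the diameter (and hence the local stress) vary along the fibre instead of assuming a uniform cross-section.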

Relevance:

100.00%

Publisher:

Abstract:

Aim: We investigated how the probability of burning is influenced by the time since fire (TSF) and gradients of climate, soil and vegetation in the fire-prone mediterranean-climate mallee woodlands of south-eastern Australia. This provided insight into the processes controlling contemporary fuel dynamics and fire regimes across biogeographical boundaries, and the consequent effects of climate change on potential shifts in boundaries between fuel systems and fire regimes, at a subcontinental scale. Location: South-eastern Australia. Methods: A desktop-based GIS was used to generate random sampling points across the study region to collect data on intersecting fire interval, rainfall, vegetation and soil type. We used a Bayesian framework to examine the effects of combinations of rainfall, vegetation and soil type on the hazard-of-burning and survival parameters of the Weibull distribution. These analyses identify the nature of environmental controls on the length of fire intervals and the age-dependence of the hazard of burning. Results: Higher rainfall was consistently associated with shorter fire intervals. Within a single level of rainfall, however, the interaction between soil and vegetation type influenced the length of fire intervals. Higher-fertility sands were associated with shorter fire intervals in grass-dominated communities, whereas lower-fertility sands were associated with shorter fire intervals in shrub-dominated communities. The hazard of burning remained largely independent of TSF across the region, only markedly increasing with TSF in shrub-dominated communities at high rainfall. Main conclusions: Rainfall had a dominant influence on fire frequency in the mediterranean-climate mallee woodlands of south-eastern Australia. 
Predicted changes in the spatial distribution and amount of rainfall therefore have the potential to drive changes in fire regimes, although the effects of soil fertility and rainfall on fire regimes do not align on a simple productivity gradient. Reduced soil fertility may favour plant traits that increase the rate of woody litter fuel accumulation and flammability, which may alter the overriding influence of rainfall gradients on fire regimes.
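The age-dependence result can be read directly off the standard Weibull hazard function h(t) = (c/b) * (t/b)**(c-1) with scale b and shape c: a shape of 1 gives a hazard of burning independent of time since fire, while a shape above 1 gives a hazard that rises with TSF (as found for shrub-dominated communities at high rainfall). The parameter values below are illustrative, not the fitted values:

```python
# Weibull hazard-of-burning sketch: shape c = 1 means the hazard is flat in
# time since fire; c > 1 means it increases with fuel age. Scale and shape
# values here are made up for illustration.

def weibull_hazard(t, b, c):
    """Hazard of burning at time-since-fire t (years), scale b, shape c."""
    return (c / b) * (t / b) ** (c - 1)

flat = [weibull_hazard(t, b=20.0, c=1.0) for t in (5, 15, 40)]
rising = [weibull_hazard(t, b=20.0, c=2.0) for t in (5, 15, 40)]
print(flat, rising)
```

In the Bayesian analysis above, it is the posterior for the shape parameter within each rainfall-soil-vegetation combination that distinguishes these two regimes.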

Relevance:

100.00%

Publisher:

Abstract:

This study presents the implementation and embedding of an Artificial Neural Network (ANN) in hardware, on a programmable device such as a field-programmable gate array (FPGA). The work explores different implementations, described in VHDL, of multilayer perceptron ANNs. Because of the parallelism inherent to ANNs, software implementations suffer from the sequential nature of von Neumann architectures; a hardware implementation, by contrast, can exploit all the parallelism implicit in the model. There is currently increasing use of FPGAs as a platform for implementing neural networks in hardware, exploiting their high processing power, low cost, ease of programming and circuit reconfigurability, which allows the network to adapt to different applications. In this context, the aim is to develop arrays of neural networks in hardware with a flexible architecture, in which it is possible to add or remove neurons and, mainly, to modify the network topology, in order to enable a modular fixed-point-arithmetic network on an FPGA. Five VHDL descriptions were synthesized: two for neurons with one or two inputs, and three for different ANN architectures. The architecture descriptions are highly modular, easily allowing the number of neurons to be increased or decreased. As a result, several complete neural networks were implemented on an FPGA in fixed-point arithmetic, with high-capacity parallel processing.
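The fixed-point arithmetic such neurons rely on can be sketched in software: quantise inputs and weights to a fixed-point format, multiply-accumulate in integers, and scale back. The Q4.12 word length and hard-threshold activation below are assumptions for illustration; the paper's VHDL formats and activation functions may differ:

```python
# Software model of a fixed-point hardware neuron: Q4.12 operands, integer
# multiply-accumulate (products are Q8.24), then a threshold activation.
# Word lengths are illustrative, not the paper's VHDL formats.

FRAC_BITS = 12
SCALE = 1 << FRAC_BITS

def to_fixed(x):
    """Quantise a real value to a Q4.12 integer."""
    return int(round(x * SCALE))

def neuron_fixed(inputs, weights, bias):
    """Integer MAC of a single perceptron with a hard-threshold activation."""
    acc = to_fixed(bias) << FRAC_BITS            # align bias with Q8.24 products
    for x, w in zip(inputs, weights):
        acc += to_fixed(x) * to_fixed(w)         # Q4.12 * Q4.12 -> Q8.24
    s = acc / (SCALE * SCALE)                    # back to a real number
    return 1.0 if s >= 0.0 else 0.0

# Two-input neuron implementing AND with a step activation:
outputs = [neuron_fixed([a, b], [1.0, 1.0], -1.5)
           for a in (0, 1) for b in (0, 1)]
print(outputs)
```

On the FPGA the same MAC maps to a DSP multiplier and an adder per synapse, which is what makes the fully parallel layer evaluation possible.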

Relevance:

100.00%

Publisher:

Abstract:

Large-scale systems with a high degree of complexity have attracted much recent interest across the scientific community in various areas of knowledge; examples include the Internet, protein-interaction networks and film-actor collaboration networks. To better understand the behaviour of interconnected systems, several models in the area of complex networks have been proposed. Barabási and Albert proposed a model in which connections between the constituents of the system form dynamically in a way that favours older sites, reproducing a behaviour characteristic of some real systems: a scale-invariant connectivity distribution. However, this model neglects, among other factors observed in real systems, homophily and metric distance. Given the importance of these two terms in the global behaviour of networks, we propose in this dissertation a dynamic model of preferential attachment governed by three essential factors competing for links: (i) connectivity (more connected sites are privileged in the choice of links); (ii) homophily (connections between similar sites are more attractive); (iii) the metric (a link is favoured by the proximity of the sites). Within this proposal, we analyse how the connectivity distribution and the dynamic evolution of the network are affected by the metric (through a parameter A that controls the importance of distance in the preferential attachment) and by homophily (an intrinsic characteristic of each site). We find that as distance becomes more important in the preferential attachment, connections between sites become local and the connectivity distribution is characterized by a typical scale. In parallel, we fit the connectivity distribution curves, for different values of A, to the equation P(k) = P0e
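An attachment rule combining the three factors above can be sketched as a weight proportional to connectivity, times a homophily term in an intrinsic characteristic eta, times an inverse power of distance with exponent A. The exact functional forms below (in particular the homophily kernel) are assumptions for illustration, not the dissertation's definitions:

```python
import math
import random

# Sketch of a preferential-attachment weight combining connectivity k,
# homophily (similarity of an intrinsic characteristic eta) and the metric
# term d**(-A). Functional forms are illustrative assumptions.

random.seed(42)
A = 2.0                                  # importance of distance

sites = [{"pos": (random.random(), random.random()),
          "eta": random.random(), "k": 1} for _ in range(10)]

def attach_weight(new, site):
    d = math.dist(new["pos"], site["pos"])
    homophily = 1.0 / (1.0 + abs(new["eta"] - site["eta"]))
    return site["k"] * homophily * (d + 1e-9) ** (-A)   # guard d = 0

new = {"pos": (0.5, 0.5), "eta": 0.5}
weights = [attach_weight(new, s) for s in sites]
probs = [w / sum(weights) for w in weights]             # normalised choice
print(max(probs))
```

Growing a network by repeatedly sampling targets from these probabilities, and sweeping A, is how the transition from scale-invariant to typical-scale connectivity distributions can be explored numerically.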

Relevance:

100.00%

Publisher:

Abstract:

Low flexibility and reliability in the operation of radial distribution networks mean that those systems are constructed with extra equipment, such as sectionalising switches, to allow the network to be reconfigured and its operating quality improved. Sectionalising switches are thus used for fault isolation and for configuration management (reconfiguration). Moreover, distribution systems are being affected by the increasing insertion of distributed generators, and distributed generation has become one of the relevant parameters in the evaluation of system reconfiguration. Distributed generation may affect distribution network operation in various ways, causing noticeable impacts depending on its location. The loss allocation problem therefore becomes more important considering the possibility of open access to distribution networks. In this work, a graphic simulator for distribution networks with reconfiguration and loss allocation functions is presented. The reconfiguration problem is solved through a heuristic methodology, using a robust power flow algorithm based on the current-summation backward-forward technique and considering distributed generation. Four different loss allocation methods (Zbus, Direct Loss Coefficient, Substitution and Marginal Loss Coefficient) are implemented and compared. Results for a 32-bus medium-voltage distribution network are presented and discussed.
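The backward-forward sweep underlying the power flow can be sketched on a 3-bus radial feeder: a backward sweep accumulates branch currents from the feeder end, and a forward sweep updates voltages from the substation outwards, iterating to convergence. The purely resistive lines and constant-power loads below are simplifying assumptions (real feeders need complex impedances and phasor currents):

```python
# Minimal backward-forward sweep on a 3-bus radial feeder, all per-unit.
# Resistive lines and real-valued currents are simplifications of the
# current-summation method used by the simulator.

V0 = 1.0                       # slack (substation) voltage, p.u.
r = [0.02, 0.03]               # line resistances: bus0-bus1, bus1-bus2
p_load = [0.3, 0.2]            # active power drawn at bus 1 and bus 2

v = [V0, V0, V0]
for _ in range(20):            # iterate sweeps to convergence
    # Backward sweep: accumulate branch currents from the feeder end
    i2 = p_load[1] / v[2]                  # current into bus 2
    i1 = p_load[0] / v[1] + i2             # current on branch 0-1
    # Forward sweep: update voltages from the substation outwards
    v[1] = V0 - r[0] * i1
    v[2] = v[1] - r[1] * i2

losses = r[0] * i1**2 + r[1] * i2**2       # total series losses, p.u.
print([round(x, 4) for x in v], round(losses, 5))
```

It is this loss figure, computed per branch, that the four allocation methods (Zbus, Direct Loss Coefficient, Substitution, Marginal Loss Coefficient) then divide among the network users in different ways.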

Relevance:

100.00%

Publisher:

Abstract:

Objectives: This study evaluated the reliability and failure modes of implants with a microthreaded or smooth design at the crestal region, restored with screwed or cemented crowns. The postulated null hypothesis was that the presence of microthreads in the implant cervical region would not result in different reliability and strength to failure than the smooth design, regardless of fixation method, when subjected to step-stress accelerated life testing (SSALT) in water. Materials and methods: Eighty-four dental implants (3.3 × 10 mm) were divided into four groups (n = 21) according to implant macrogeometric design at the crestal region and crown fixation method: Microthreads Screwed (MS); Smooth Screwed (SS); Microthreads Cemented (MC); and Smooth Cemented (SC). The abutments were torqued to the implants, and standardized maxillary central incisor metallic crowns were cemented (MC, SC) or screwed (MS, SS) and subjected to SSALT in water. The probability of failure versus cycles (90% two-sided confidence intervals) was calculated and plotted using a power law relationship for damage accumulation, and the reliability for a mission of 50,000 cycles at 150 N (90% two-sided confidence intervals) was calculated. Differences between final failure loads during fatigue for each group were assessed by Kruskal-Wallis and Bonferroni post hoc tests. Polarized-light and scanning electron microscopes were used for failure analyses. Results: The beta (β) values (with confidence-interval ranges) derived from the use-level probability Weibull calculation were 1.30 (0.76-2.22), 1.17 (0.70-1.96), 1.12 (0.71-1.76) and 0.52 (0.30-0.89) for groups MC, SC, MS and SS, respectively, indicating that fatigue was an accelerating factor for all groups except SS. The calculated reliability was higher for SC (99%) than for MC (87%); no difference was observed between the screwed restorations (MS, 29%; SS, 43%). Failure involved abutment screw fracture in all groups. The cemented groups (MC, SC) presented more abutment and implant fractures. Significantly higher load-to-fracture values were observed for SC and MC relative to MS and SS (P < 0.001). Conclusion: Since reliability and strength to failure were higher for SC than for MC, the postulated null hypothesis was rejected. © 2012 John Wiley & Sons A/S.
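The mission reliabilities quoted above follow from the two-parameter Weibull life model, R(t) = exp(-(t/eta)**beta), evaluated at the 50,000-cycle mission. Only the beta values come from the text; the characteristic life eta below is an invented value to show the calculation:

```python
import math

# Mission reliability under a Weibull life model, as used in the SSALT
# analysis. Beta = 1.30 is group MC's value from the text; the scale
# (characteristic life) eta is assumed for illustration.

def weibull_reliability(t, eta, beta):
    """Probability of surviving t cycles under a Weibull(eta, beta) model."""
    return math.exp(-((t / eta) ** beta))

print(round(weibull_reliability(50_000, eta=400_000, beta=1.30), 3))
```

A beta above 1, as for MC, SC and MS, means the failure rate grows with cycling, i.e. fatigue damage accumulates; SS's beta of 0.52 indicates early, load-driven failures instead.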

Relevance:

100.00%

Publisher:

Abstract:

In the network reconfiguration context, the challenge nowadays is to improve the system towards intelligent systems that are able to monitor the network and produce refined information to support operator decisions in real time, because the network is extensive, branched and in some places difficult to access. The objective of this paper is to present the first results of the network reconfiguration algorithm that has been developed for CEMIG-D. The algorithm's main idea is to provide a new network configuration after an event (fault or study case), based on an initial condition and aiming to minimize the affected load, subject to the load flow equations, the maximum capacity of the lines, equipment and substations, voltage limits and radial system operation. Initial tests were performed with real system data provided by CEMIG-D and show very promising results. © 2013 IEEE.