900 results for Numerical Schemes
                                
Abstract:
Centralized and distributed methods are two connection management schemes in wavelength-convertible optical networks. Earlier work has reported that the centralized scheme achieves a lower network blocking probability than the distributed one. Hence, much of the previous work on connection management has focused on comparing different algorithms within only the distributed scheme or only the centralized scheme. However, we believe that the network blocking probability of these two connection management schemes depends, to a great extent, on the network traffic patterns and reservation times. Our simulation results reveal that the performance improvement (in terms of blocking probability) of the centralized method over the distributed method is inversely proportional to the ratio of average connection interarrival time to reservation time. Once that ratio increases beyond a threshold, the two connection management schemes yield almost the same blocking probability under the same network load. In this paper, we review the working procedures of the distributed and centralized schemes, discuss the tradeoff between them, compare the two methods under different network traffic patterns via simulation, and draw conclusions based on the simulation data.
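
For illustration, the role of the interarrival-to-reservation ratio can be seen even in a toy single-link model. The following is a minimal sketch; the wavelength count, traffic values, and function names are illustrative assumptions, not the paper's simulation setup, and no centralized/distributed signalling is modelled.

import heapq
import random

def blocking_prob(wavelengths, mean_interarrival, reservation_time,
                  n_requests=200_000, seed=1):
    # Toy single-link model: Poisson arrivals, fixed reservation
    # (holding) time, request blocked if all wavelengths are reserved.
    rng = random.Random(seed)
    releases = []          # min-heap of times at which wavelengths free up
    t, blocked = 0.0, 0
    for _ in range(n_requests):
        t += rng.expovariate(1.0 / mean_interarrival)
        while releases and releases[0] <= t:
            heapq.heappop(releases)            # reclaim expired reservations
        if len(releases) < wavelengths:
            heapq.heappush(releases, t + reservation_time)
        else:
            blocked += 1
    return blocked / n_requests

# Sweep the interarrival/reservation ratio (note the load varies with it here).
for ratio in (0.1, 0.2, 0.4, 0.8):
    print(ratio, blocking_prob(8, ratio, 1.0))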
                                
Abstract:
Wireless sensor networks are promising solutions for many applications. However, wireless sensor nodes suffer from many constraints, such as low computation capability, small memory, and limited energy resources. Grouping is an important technique for localizing computation and reducing communication overhead in wireless sensor networks. In this paper, we use grouping to refer to the process of combining a set of sensor nodes with similar properties. We propose two centralized group rekeying (CGK) schemes for secure group communication in sensor networks. The lifetime of a group is divided into three phases, i.e., group formation, group maintenance, and group dissolution. We demonstrate how to set up the group and establish the group key in each phase. Our analysis shows that the two proposed schemes are computationally efficient and secure.
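
As a rough illustration of the centralized pattern, a key server can rekey on every membership change and unicast the new group key wrapped under each member's pairwise key. This is a sketch only; the class, method names, and toy key wrap below are hypothetical and are not the paper's CGK protocols, which must additionally respect sensor-node memory and energy budgets.

import hashlib
import hmac
import os

class KeyServer:
    def __init__(self):
        self.members = {}        # node_id -> pairwise key shared with server
        self.group_key = os.urandom(16)

    def join(self, node_id, pairwise_key):
        self.members[node_id] = pairwise_key
        return self.rekey()      # backward secrecy: newcomer cannot read old traffic

    def leave(self, node_id):
        self.members.pop(node_id, None)
        return self.rekey()      # forward secrecy: departed node cannot read new traffic

    def rekey(self):
        self.group_key = os.urandom(16)
        # Unicast the new group key, wrapped under each member's pairwise key.
        return {nid: self._wrap(k, self.group_key)
                for nid, k in self.members.items()}

    @staticmethod
    def _wrap(key, payload):
        # Toy key wrap: XOR with an HMAC keystream; a real deployment
        # would use a proper AEAD primitive instead.
        stream = hmac.new(key, b"rekey", hashlib.sha256).digest()
        return bytes(a ^ b for a, b in zip(payload, stream))

server = KeyServer()
server.join("node-a", os.urandom(16))
print(server.join("node-b", os.urandom(16)))   # wrapped keys for both members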
                                
Abstract:
Survivable traffic grooming (STG) is a promising approach to provide reliable and resource-efficient multigranularity connection services in wavelength division multiplexing (WDM) optical networks. In this paper, we study the STG problem in WDM mesh optical networks employing path protection at the connection level. Both dedicated protection and shared protection schemes are considered. Given the network resources, the objective of the STG problem is to maximize network throughput. To enable survivability under various kinds of single failures, such as fiber cut and duct cut, we consider general shared risk link group (SRLG) diverse routing constraints. We first resort to the integer linear programming (ILP) approach to obtain optimal solutions. To address its high computational complexity, we then propose three efficient heuristics, namely the separated survivable grooming algorithm (SSGA), the integrated survivable grooming algorithm (ISGA), and the tabu search survivable grooming algorithm (TSGA). While SSGA and ISGA correspond to an overlay network model and a peer network model, respectively, TSGA further improves the grooming results of SSGA and ISGA by incorporating the effective tabu search method. Numerical results show that the heuristics achieve solutions comparable to the ILP approach, which requires significantly longer running times than the heuristics.
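
For readers unfamiliar with the tabu method that TSGA builds on, a generic skeleton looks roughly as follows. This is a sketch under obvious simplifications; the move encoding for grooming (how connections are re-assigned among lightpaths and backup paths) is problem-specific and omitted here.

import random
from collections import deque

def tabu_search(initial, neighbors, objective, tenure=10, iters=500, seed=0):
    # Generic tabu search: always move to the best non-tabu neighbor,
    # even if worse than the current solution, and track the incumbent best.
    rng = random.Random(seed)
    current = best = initial
    tabu = deque(maxlen=tenure)          # short-term memory of visited solutions
    for _ in range(iters):
        candidates = [s for s in neighbors(current, rng) if s not in tabu]
        if not candidates:
            break
        current = max(candidates, key=objective)
        tabu.append(current)
        if objective(current) > objective(best):
            best = current
    return best

# Toy usage: maximize the number of ones in a bit tuple via single-bit flips.
flip = lambda s, i: s[:i] + (1 - s[i],) + s[i + 1:]
nbrs = lambda s, rng: [flip(s, i) for i in range(len(s))]
print(tabu_search((0,) * 8, nbrs, sum))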
                                
Abstract:
Traffic grooming in optical WDM mesh networks is a two-layer routing problem: effectively packing low-rate connections onto high-rate lightpaths, which, in turn, are established over wavelength links. In this work, we employ a rerouting approach to improve network throughput under the dynamic traffic model. We propose two rerouting schemes, rerouting at the lightpath level (RRAL) and rerouting at the connection level (RRAC), and make a qualitative comparison between them. We also propose the critical-wavelength-avoiding one-lightpath-limited (CWA-1L) and critical-lightpath-avoiding one-connection-limited (CLA-1C) rerouting heuristics, which are based on the two rerouting schemes, respectively. Simulation results show that rerouting significantly reduces the connection blocking probability.
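
To make the connection-level idea concrete, here is a schematic sketch in the spirit of RRAC/CLA-1C. The data structures and first-fit placement are hypothetical and this is not the authors' heuristic, which operates on wavelength-routed paths rather than a flat list of lightpaths; the "at most one reroute" limit is the part being illustrated.

class Lightpath:
    def __init__(self, capacity):
        self.capacity = capacity
        self.connections = []            # low-rate demands groomed onto it
    @property
    def free(self):
        return self.capacity - sum(self.connections)

def place(demand, lightpaths, exclude=None):
    # First-fit placement of a low-rate demand onto an existing lightpath.
    for lp in lightpaths:
        if lp is not exclude and lp.free >= demand:
            return lp
    return None

def admit_with_rerouting(demand, lightpaths):
    # If the new demand is blocked, move at most ONE existing connection
    # off the fullest ("critical") lightpath and retry.
    lp = place(demand, lightpaths)
    if lp is None:
        critical = max(lightpaths, key=lambda l: l.capacity - l.free)
        for c in list(critical.connections):
            alt = place(c, lightpaths, exclude=critical)
            if alt is not None:
                critical.connections.remove(c)
                alt.connections.append(c)
                lp = place(demand, lightpaths)
                break
    if lp is not None:
        lp.connections.append(demand)
    return lp is not None

# Toy usage: two lightpaths of capacity 4; the last demand is admitted
# only because one small connection is rerouted first.
lps = [Lightpath(4), Lightpath(4)]
for d in (3, 1, 3, 2):
    print(d, admit_with_rerouting(d, lps))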
                                
Abstract:
Composites are engineered materials that take advantage of the particular properties of each of their two or more constituents. They are designed to be stronger and lighter and to last longer, which can lead to safer protective gear, more fuel-efficient transportation, and more affordable materials, among other benefits. This thesis proposes a numerical and analytical verification of an in-house multiscale model for predicting the mechanical behavior of composite materials with various configurations subjected to impact loading. The verification is performed by comparing analytical and numerical solutions with the results produced by the model. The model accounts for material heterogeneity that is apparent only at smaller length scales, and it relies strictly on the fundamental structural properties of each of the composite's constituents; it can therefore potentially reduce or eliminate the need for the costly and time-consuming experiments otherwise required for material characterization. Results from simulations using the multiscale model were compared against direct simulations using overkill meshes, which resolve all heterogeneities explicitly at the global scale, indicating that the model is an accurate and fast tool for modeling composites under impact loads. Advisor: David H. Allen
                                
Abstract:
Due to the lack of optical random access memory, optical fiber delay lines (FDLs) are currently the only way to implement optical buffering. Feed-forward and feedback are the two kinds of FDL structures used in optical buffering, each with advantages and disadvantages. In this paper, we propose a more effective hybrid FDL architecture that combines the merits of both schemes. The core of this switch is the arrayed waveguide grating (AWG) and the tunable wavelength converter (TWC). The architecture requires smaller optical device sizes and fewer wavelengths and introduces less noise than the feedback architecture, while also supporting preemptive priority routing, which the feed-forward architecture cannot. Our numerical results show that the new switch architecture significantly reduces the packet loss probability.
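
The central constraint that FDL buffers impose, namely that a blocked packet can only be delayed by one of a few fixed delays rather than held indefinitely as in RAM, can be sketched as follows. This is illustrative only; the delay values, load, and single-output model are assumptions and do not represent the proposed AWG/TWC switch.

import random

def schedule(arrival, duration, busy_until, delays=(1, 2, 4, 8)):
    # Pick the smallest fiber delay that clears the output contention;
    # if none of the fixed delays suffices, the packet is dropped.
    if arrival >= busy_until:
        return arrival + duration            # output free: send immediately
    for d in delays:                         # delays assumed sorted ascending
        if arrival + d >= busy_until:
            return arrival + d + duration    # smallest delay that works
    return None                              # no suitable FDL: drop

rng = random.Random(0)
t, busy, lost, n = 0.0, 0.0, 0, 100_000
for _ in range(n):
    t += rng.expovariate(0.8)                # Poisson arrivals, load ~ 0.8
    new_busy = schedule(t, 1.0, busy)
    if new_busy is None:
        lost += 1
    else:
        busy = new_busy
print("packet loss probability:", lost / n)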
                                
Abstract:
In this work, different methods for estimating thin-film residual stresses from instrumented indentation data were analyzed. The study considered procedures proposed in the literature, as well as a modification of one of these methods and a new approach based on the effect of residual stress on the hardness value calculated via the Oliver and Pharr method. The analysis was centered on an axisymmetric two-dimensional finite element model developed to simulate instrumented indentation testing of thin ceramic films deposited onto hard steel substrates. Simulations were conducted varying the level of film residual stress, the film strain-hardening exponent, the film yield strength, and the film Poisson's ratio. Different ratios of maximum penetration depth h_max to film thickness t were also considered, including h_max/t = 0.04, for which the contribution of the substrate to the mechanical response of the system is not significant. Residual stresses were then calculated following the procedures mentioned above and compared with the values used as input in the numerical simulations. In general, the results indicate that the deviation of each method from the input values depends on the conditions studied. The method of Suresh and Giannakopoulos consistently overestimated the values when stresses were compressive, while the method of Wang et al. showed less dependence on h_max/t than the others.
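
For reference, the Oliver and Pharr hardness invoked by the new approach is extracted from the load-displacement curve through the standard relations

H = \frac{P_{\max}}{A(h_c)}, \qquad h_c = h_{\max} - \varepsilon \,\frac{P_{\max}}{S},

where P_max is the peak load, S = dP/dh is the unloading stiffness, A(.) is the indenter area function, and \varepsilon \approx 0.75 for a Berkovich tip. A residual stress in the film perturbs pile-up/sink-in and hence the contact area, shifting the apparent H; that shift is the effect the new approach exploits.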
                                
Abstract:
In the past few decades, detailed observations of radio and X-ray emission from massive binary systems have revealed a whole new physics present in such systems. Both the thermal and non-thermal components of this emission indicate that most of the radiation at these bands originates in shocks. O- and B-type stars and Wolf-Rayet (WR) stars present supersonic and massive winds that, when colliding, emit largely through free-free radiation. The non-thermal radio and X-ray emissions are due to synchrotron and inverse Compton processes, respectively, so magnetic fields are expected to play an important role in the emission distribution. In the past few years, modelling of the free-free and synchrotron emission from massive binary systems has been based on purely hydrodynamical simulations together with ad hoc assumptions regarding the distribution of magnetic energy and the field geometry. In this work we provide the first full magnetohydrodynamic numerical simulations of wind-wind collision in massive binary systems. We study the free-free emission, characterizing its dependence on the stellar and orbital parameters, and we self-consistently follow the evolution of the magnetic field at the shock region, obtaining the synchrotron energy distribution integrated along different lines of sight. We show that the magnetic field in the shocks is larger than that obtained when proportionality between B and the plasma density is assumed, and that the contribution of synchrotron emission to the total radio emission has been underestimated.
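
To see why the field amplitude matters, recall the standard synchrotron emissivity scaling for a power-law electron population N(E) \propto E^{-p}:

j_\nu \propto B^{(p+1)/2}\, \nu^{-(p-1)/2},

so any error in B in the shocked gas (for example, from assuming B traces the local density instead of solving the induction equation) propagates steeply into the predicted non-thermal flux. This is the sense in which purely hydrodynamical models can misestimate the synchrotron contribution relative to the thermal free-free emission.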
                                
Abstract:
The use of antiretroviral therapy has proven remarkably effective in controlling the progression of human immunodeficiency virus (HIV) infection and prolonging patients' survival. Therapy may fail, however, and these benefits can then be compromised by the emergence of HIV strains that are resistant to the therapy. In view of these facts, understanding why drug-resistant strains emerge during therapy has become a problem of great worldwide interest. This paper presents a deterministic HIV-1 model to examine the mechanisms underlying the emergence of drug resistance during therapy. The aim of this study is to determine whether, and how fast, antiretroviral therapy may drive the emergence of drug resistance by calculating the basic reproductive numbers. The existence, feasibility and local stability of the equilibria are also analyzed. By performing numerical simulations we show that a Hopf bifurcation may occur. The model suggests that individuals with drug-resistant infection may play an important role in the epidemic of HIV. (C) 2011 Elsevier Ireland Ltd. All rights reserved.
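
For orientation, the classical within-host model that systems of this kind extend (target cells T, infected cells I, free virus V; a textbook baseline, not necessarily the authors' exact equations) reads

\dot T = \lambda - dT - \beta T V, \qquad \dot I = \beta T V - \delta I, \qquad \dot V = pI - cV,

with basic reproductive number R_0 = \lambda \beta p / (d \delta c). Therapy lowers the effective \beta and/or p of the drug-sensitive strain, and resistance can take over once the resistant strain's R_0 exceeds that of the suppressed wild type; comparing such strain-specific reproductive numbers is the kind of calculation the abstract refers to.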
                                
Abstract:
In this paper, a new family of survival distributions is presented. It is derived by assuming that the latent number of failure causes follows a Poisson distribution and that the time for these causes to be activated follows an exponential distribution. Three different activation schemes are also considered. Moreover, we propose the inclusion of covariates in the model formulation in order to study their effect on the expected number of causes and on the failure rate function. An inferential procedure based on the maximum likelihood method is discussed and evaluated via simulation. The developed methodology is illustrated on a real data set on ovarian cancer.
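
As a worked example of this construction, take the scheme in which failure occurs at the first activation (with N = 0 causes giving a cured fraction). If N ~ Poisson(\theta) and the activation times are i.i.d. Exp(\lambda), the population survival function follows in one line from the Poisson generating function E[s^N] = e^{-\theta(1-s)}:

S(t) = E\big[(e^{-\lambda t})^{N}\big] = \exp\{-\theta(1 - e^{-\lambda t})\},

so S(t) \to e^{-\theta} as t \to \infty. Covariates can then act on \theta (expected number of causes) or on \lambda (failure rate); the other activation schemes are handled analogously through other order statistics of the activation times.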
                                
Abstract:
Paulo CA, Roschel H, Ugrinowitsch C, Kobal R and Tricoli V. Influence of different resistance exercise loading schemes on mechanical power output in work to rest ratio-equated and -nonequated conditions. J Strength Cond Res 26(5): 1308-1312, 2012. It is well known that most sports are characterized by intermittent high-intensity actions, requiring high muscle power production within different intervals. The manipulation of the exercise-to-rest ratio in muscle power training programs may therefore constitute an interesting strategy given the specific performance demands of a sport modality. Thus, the aim of this study was to evaluate the influence of different rest-interval schemes and numbers of repetitions per set on muscle power production in the squat exercise between exercise-to-rest ratio-equated and -nonequated conditions. Nineteen young males (age: 25.7 +/- 4.4 years; weight: 81.3 +/- 13.7 kg; height: 178.1 +/- 5.5 cm) were randomly submitted to 3 different resistance exercise loading schemes: a short-set short-interval condition (SSSI; 12 sets of 3 repetitions with a 27.3-second interval between sets); a short-set long-interval condition (SSLI; 12 sets of 3 repetitions with a 60-second interval between sets); and a long-set long-interval condition (LSLI; 6 sets of 6 repetitions with a 60-second rest interval between sets). The main finding is that the lower exercise-to-rest ratio protocol (SSLI) resulted in greater average power production (601.88 +/- 142.48 W) than both SSSI and LSLI (581.86 +/- 113.18 W and 578 +/- 138.78 W, respectively). Additionally, both exercise-to-rest ratio-equated conditions presented similar performance and metabolic results. In summary, these findings suggest that shorter rest intervals may fully restore the individual's ability to produce muscle power if a smaller exercise volume per set is performed, and that lower exercise-to-rest ratio protocols result in greater average power production than higher-ratio ones.
                                
Abstract:
This paper presents in detail the modelling of anisotropic polymeric foam under compression and tension loadings, including discussions of isotropic material models and the entire procedure for calibrating the parameters involved. First, specimens of poly(vinyl chloride) (PVC) foam were investigated experimentally in order to understand the mechanical behavior of this anisotropic material. Then, isotropic material models available in the commercial software Abaqus (TM) were investigated to verify their ability to model anisotropic foams and how the parameters involved influence the results. Due to anisotropy, different values can be obtained for the same parameter during calibration, so the parameter set is chosen according to the intended application of the structure. The models investigated showed minor and major limitations in simulating the mechanical behavior of anisotropic PVC foams under compression, tension and multi-axial loadings. The results show that the calibration process and the choice of material model applied to the polymeric foam can provide good quantitative results and save project time, and they also indicate what kind and magnitude of error to expect from particular choices made throughout the modelling process. Finally, even though the developed calibration procedure is applied to a specific PVC foam, it outlines a broadly applicable procedure for analyzing other anisotropic cellular materials.
                                
Abstract:
This paper deals with the numerical solution of complex fluid dynamics problems using a new bounded high-resolution upwind scheme (henceforth called SDPUS-C1) for the discretization of convection terms. The scheme is based on the TVD and CBC stability criteria and is implemented in the context of finite volume/difference methodologies, in the CLAWPACK software package for compressible flows and in the Freeflow simulation system for incompressible viscous flows. The performance of the proposed non-oscillatory upwind scheme is demonstrated by solving two-dimensional compressible flow problems, such as shock-wave propagation, and two-dimensional/axisymmetric incompressible moving free-surface flows. The numerical results demonstrate that this new cell-interface reconstruction technique works very well in several practical applications. (C) 2012 Elsevier Inc. All rights reserved.
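
To illustrate the class of schemes involved, here is a minimal sketch for 1D linear advection using the classical van Leer limiter, which also satisfies the TVD/CBC criteria. SDPUS-C1 itself is a different, C1-continuous limiter function, and the paper applies it to far richer flow problems; this sketch only shows the limited-upwind mechanics.

import numpy as np

def van_leer(r):
    # A classical TVD limiter: psi = 0 recovers first-order upwind,
    # psi = 1 recovers Lax-Wendroff.
    return (r + np.abs(r)) / (1.0 + np.abs(r))

def advect(u, nu, steps):
    # Flux-limited upwind scheme for u_t + a u_x = 0 (a > 0, periodic
    # domain), with Courant number nu = a*dt/dx <= 1.
    for _ in range(steps):
        d = np.roll(u, -1) - u                        # u_{i+1} - u_i
        safe = np.where(np.abs(d) < 1e-12, 1e-12, d)  # guard zero slopes
        psi = van_leer(np.roll(d, 1) / safe)          # smoothness ratio r
        corr = psi * d                                # limited anti-diffusion
        u = (u - nu * (u - np.roll(u, 1))
               - 0.5 * nu * (1.0 - nu) * (corr - np.roll(corr, 1)))
    return u

# Advect a square wave once around the periodic domain; the limited
# scheme stays bounded in [0, 1], i.e., no spurious oscillations.
N = 200
x = np.linspace(0.0, 1.0, N, endpoint=False)
u0 = np.where((x > 0.2) & (x < 0.4), 1.0, 0.0)
u1 = advect(u0.copy(), nu=0.8, steps=250)             # 250 * 0.004 = 1 period
print(u1.min(), u1.max())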
                                
Abstract:
This paper addresses the numerical solution of random crack propagation problems by coupling the boundary element method (BEM) with reliability algorithms. The crack propagation phenomenon is efficiently modelled using the BEM, owing to its mesh-reduction features. The BEM model is based on the dual BEM formulation, in which singular and hyper-singular integral equations are adopted to construct the system of algebraic equations. Two reliability algorithms are coupled with the BEM model. The first is the well-known response surface method, in which local, adaptive polynomial approximations of the mechanical response are constructed in the search for the design point; different experiment designs and adaptive schemes are considered. The alternative approach, direct coupling, in which the limit state function remains implicit and its gradients are calculated directly from the numerical mechanical response, is also considered. The performance of both coupling methods is compared on several crack propagation problems. The investigation shows that the direct coupling scheme converged for all problems studied, irrespective of the problem's nonlinearity, and its computational cost was a fraction of that of the response surface solutions, regardless of the experiment design or adaptive scheme considered. (C) 2012 Elsevier Ltd. All rights reserved.
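
At its core, direct coupling of this kind is a first-order reliability (FORM) iteration whose limit state g is a black box evaluated by the mechanical solver. The sketch below uses the standard HL-RF update with finite-difference gradients; a throwaway analytic g stands in for the BEM crack model, and the function names and step size are illustrative assumptions.

import numpy as np

def form_direct(g, n_dims, tol=1e-6, h=1e-4, max_iter=50):
    # HL-RF iteration in standard normal space; g may be any callable,
    # e.g. a wrapper around a BEM run, so gradients are taken numerically.
    u = np.zeros(n_dims)                       # start at the mean point
    for _ in range(max_iter):
        gu = g(u)
        grad = np.array([(g(u + h * e) - gu) / h
                         for e in np.eye(n_dims)])
        u_new = ((grad @ u - gu) / (grad @ grad)) * grad   # HL-RF update
        if np.linalg.norm(u_new - u) < tol:
            u = u_new
            break
        u = u_new
    return np.linalg.norm(u), u                # reliability index, design point

# Toy usage with an explicit limit state (a real case would call the solver):
beta, u_star = form_direct(lambda u: 3.0 - u[0] - 0.5 * u[1], 2)
print(beta)   # exact value for this linear g: 3 / sqrt(1.25) ~ 2.683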