183 results for modelling and simulation
Abstract:
Reinforced concrete (RC) beams may be strengthened for shear using externally bonded fiber reinforced polymer (FRP) composites in the form of side bonding, U-jacketing or complete wrapping. The shear failure of almost all RC beams shear-strengthened with side-bonded FRP, and of the majority of those strengthened with FRP U-jackets, is due to debonding of the FRP. The bond behavior between the externally bonded FRP reinforcement (referred to as FRP strips for simplicity) and the concrete substrate therefore plays a crucial role in the failure process of these beams. Despite extensive research over the past decade, there is still a lack of understanding of how debonding of FRP strips in such a beam propagates and how the debonding process affects the beam's shear behavior. This paper presents an analytical study of the progressive debonding of FRP strips in such strengthened beams. The complete debonding process is modeled and the contribution of the FRP strips to the shear capacity of the beam is quantified. The validity of the analytical solution is verified by comparing its predictions with numerical results from a finite element analysis. This analytical treatment represents a significant step forward in understanding how the interaction between FRP strips, steel stirrups and concrete affects the shear resistance of RC beams shear-strengthened with FRP strips.
Abstract:
3C–SiC (the only polytype of SiC that adopts the diamond cubic lattice structure) is a relatively new material that exhibits most of the desirable engineering properties required for advanced electronic applications. The anisotropy exhibited by 3C–SiC during nanometric cutting is significant, and its potential for exploitation has yet to be fully investigated. This paper aims to understand the influence of the crystal anisotropy of 3C–SiC on its cutting behaviour. A molecular dynamics simulation model was developed to simulate the nanometric cutting of single-crystal 3C–SiC in nine distinct combinations of crystal orientation and cutting direction, i.e. (1 1 1) <-1 1 0>, (1 1 1) <-2 1 1>, (1 1 0) <-1 1 0>, (1 1 0) <0 0 1>, (1 1 0) <1 1 -2>, (0 0 1) <-1 1 0>, (0 0 1) <1 0 0>, (1 1 -2) <1 -1 0> and (1 -2 0) <2 1 0>.
To ensure the reliability of the simulation results, two separate simulation trials were carried out with different machining parameters. In the first trial, a cutting tool rake angle of -25°, a d/r ratio (uncut chip thickness/cutting edge radius) of 0.57 and a cutting velocity of 10 m s⁻¹ were used, whereas the second trial used a rake angle of -30°, a d/r ratio of 1 and a cutting velocity of 4 m s⁻¹. Both trials showed similar anisotropic variation.
The simulated orthogonal components of thrust force in 3C–SiC showed a variation of up to 45%, while the resultant cutting forces showed a variation of 37%, suggesting that 3C–SiC is highly anisotropic in its ease of deformation. These results corroborate the experimentally observed anisotropic variation of 43.6% in the Young's modulus of 3C–SiC. The recently developed dislocation extraction algorithm (DXA) [1, 2] was employed to detect the nucleation of dislocations in the MD simulations across the various crystal orientations and cutting directions. Based on the overall analysis, it was found that 3C–SiC offers ease of deformation in the (1 1 1) <-1 1 0>, (1 1 0) <0 0 1> and (1 0 0) <1 0 0> setups.
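A simple way to reproduce such anisotropy percentages is to take the per-setup resultant forces and evaluate their relative spread. The sketch below does this with placeholder values (illustrative only, not data from the paper):

```python
# Quantifying anisotropic variation as the spread of resultant cutting
# forces across orientation/direction setups. The force values below are
# hypothetical placeholders, not the paper's results.

forces = {
    "(1 1 1) <-1 1 0>": 410.0,   # resultant force, arbitrary units
    "(1 1 0) <0 0 1>":  385.0,
    "(0 0 1) <1 0 0>":  520.0,
    # ... the remaining six setups would follow the same pattern
}

def anisotropic_variation(values):
    """Percentage spread, (max - min) / max * 100, the form of metric
    behind the reported 37% and 45% variation figures."""
    hi, lo = max(values), min(values)
    return (hi - lo) / hi * 100.0

print(f"variation: {anisotropic_variation(forces.values()):.1f}%")
```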
Abstract:
The hybrid test method is a relatively recently developed dynamic testing technique that combines numerical modelling with simultaneous physical testing. The concept of substructuring allows the critical or highly nonlinear part of the structure, which is difficult to model numerically with accuracy, to be physically tested, whilst the remainder of the structure, which has a more predictable response, is numerically modelled. In this paper, a substructured soft real-time hybrid test is evaluated as an accurate means of performing seismic tests of complex structures. The structure analysed is a three-storey, two-by-one bay concentrically braced frame (CBF) steel structure subjected to seismic excitation. A ground-storey braced-frame substructure, whose response is critical to the overall response of the structure, is tested physically, whilst the remainder of the structure is numerically modelled. OpenSees is used for the numerical modelling and OpenFresco for the communication between the test equipment and the numerical model. A novel approach using OpenFresco to define the complex numerical substructure of an X-braced frame within a hybrid test is also presented. The results of the hybrid tests are compared to purely numerical models using OpenSees and to a simulated test using a combination of OpenSees and OpenFresco. The comparative results indicate that the test method provides an accurate and cost-effective procedure for performing full-scale seismic tests of complex structural systems.
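To make the substructuring concept concrete, the sketch below runs a single-degree-of-freedom "simulated test" in which both the physical specimen and the numerical remainder are stood in for by linear springs; the loop structure (displacement command out, restoring force back, once per step) is the essence of the method, but all values are illustrative and the interfaces are not the OpenSees/OpenFresco APIs:

```python
# Minimal runnable sketch of a substructured hybrid-test loop. In a real
# soft real-time test, f_phys would be measured from the specimen by the
# test equipment rather than computed from a spring model.

import numpy as np

m, dt, n = 1000.0, 0.01, 500             # mass [kg], time step [s], steps
k_num, k_phys = 8.0e5, 4.0e5             # numerical / "physical" stiffness [N/m]
ag = 2.0 * np.sin(2 * np.pi * 2.0 * np.arange(n) * dt)   # ground acceleration

u = np.zeros(n)
v = 0.0
for i in range(n - 1):
    f_phys = k_phys * u[i]               # force fed back from the specimen
    f_num = k_num * u[i]                 # force from the numerical substructure
    a = -ag[i] - (f_phys + f_num) / m    # equation of motion
    v += a * dt                          # explicit time stepping
    u[i + 1] = u[i] + v * dt

print(f"peak displacement: {np.abs(u).max() * 1000:.2f} mm")
```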
Abstract:
This paper examines the ability of the doubly fed induction generator (DFIG) to deliver multiple reactive power objectives during variable wind conditions. The reactive power requirement is decomposed based on various control objectives (e.g. power factor control, voltage control, loss minimisation, and flicker mitigation) defined over different time frames (i.e. seconds, minutes, and hours), and the control reference is generated by aggregating the individual reactive power requirements of each control strategy. A novel coordinated controller is implemented for the rotor-side and grid-side converters, taking account of their capability curves and illustrating that the aggregated DFIG reactive power capability can be utilised effectively for system performance enhancement. The performance of the multi-objective strategy is examined for a range of wind and network conditions, and it is shown that, for the majority of scenarios, more than 92% of the main control objective can be achieved while the integrated flicker control scheme operates alongside the main reactive power control scheme. Therefore, optimal control coordination across the different control strategies can maximise the availability of ancillary services from DFIG-based wind farms without additional dynamic reactive power devices being installed in power networks.
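The aggregation step can be pictured as follows: per-objective reactive power references, each produced on its own time frame, are summed and then clipped to the capability window at the current operating point. All names, values and the symmetric limits below are assumptions for illustration, not the paper's controller:

```python
# Hedged sketch: aggregate per-objective reactive power references and
# saturate against the converter capability. All values are illustrative.

def aggregate_q_reference(q_objectives, q_capability):
    """q_objectives: dict mapping objective -> Q reference [p.u.].
    q_capability: (q_min, q_max) from the RSC/GSC capability curves."""
    q_total = sum(q_objectives.values())
    q_min, q_max = q_capability
    return min(max(q_total, q_min), q_max)

q_ref = aggregate_q_reference(
    {"power_factor": 0.05, "voltage": 0.10, "loss_min": -0.02, "flicker": 0.03},
    q_capability=(-0.4, 0.4),
)
print(q_ref)  # 0.16 p.u., within the capability window
```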
Abstract:
In this paper, we investigate adaptive linear combinations of graph colouring heuristics with a heuristic modifier to address the examination timetabling problem. We invoke a normalisation strategy for each parameter in order to generalise the specific problem data. Two graph colouring heuristics were used in this study: largest degree and saturation degree. A score for the difficulty of assigning each examination was obtained from an adaptive linear combination of these two heuristics, and the examinations were ordered by this value; those whose scores indicated the greatest difficulty were scheduled first, using two strategies. We tested single and multiple heuristics, with and without a heuristic modifier, using different combinations of weight values for each parameter on the Toronto and ITC2007 benchmark data sets. We observed that the combination of multiple heuristics with a heuristic modifier offers an effective way to obtain good solution quality. Experimental results demonstrate that our approach delivers promising results, and we conclude that this adaptive linear combination of heuristics is highly effective and simple to implement.
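A minimal sketch of the scoring step, with hypothetical heuristic values and equal weights (none of the identifiers below come from the paper): each examination's difficulty is the weighted sum of its normalised largest-degree and saturation-degree values, and examinations are ordered hardest-first:

```python
# Adaptive linear combination of two graph colouring heuristics.
# Heuristic values and weights are illustrative placeholders.

def order_by_difficulty(largest_deg, saturation_deg, w1=0.5, w2=0.5):
    """largest_deg / saturation_deg: dict exam -> raw heuristic value."""
    def normalise(d):                      # scale each heuristic to [0, 1]
        hi = max(d.values()) or 1
        return {k: v / hi for k, v in d.items()}
    ld, sd = normalise(largest_deg), normalise(saturation_deg)
    return sorted(ld, key=lambda e: -(w1 * ld[e] + w2 * sd[e]))

order = order_by_difficulty(
    largest_deg={"ex1": 12, "ex2": 7, "ex3": 15},   # conflicting exams
    saturation_deg={"ex1": 3, "ex2": 5, "ex3": 2},  # occupied adjacent slots
)
print(order)  # exams in decreasing order of assignment difficulty
```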
Abstract:
We develop a theory for the food intake of a predator that can switch between multiple prey species. The theory addresses empirical observations of prey switching and is based on the behavioural assumption that a predator tends to continue feeding on prey that are similar to the prey it has consumed last, in terms of, e.g., their morphology, defences, location, habitat choice, or behaviour. From a predator's dietary history and the assumed similarity relationship among prey species, we derive a general closed-form multi-species functional response for describing predators switching between multiple prey species. Our theory includes the Holling type II functional response as a special case and makes consistent predictions when populations of equivalent prey are aggregated or split. An analysis of the derived functional response enables us to highlight the following five main findings. (1) Prey switching leads to an approximate power-law relationship between ratios of prey abundance and prey intake, consistent with experimental data. (2) In agreement with empirical observations, the theory predicts an upper limit of 2 for the exponent of such power laws. (3) Our theory predicts deviations from power-law switching at very low and very high prey-abundance ratios. (4) The theory can predict the diet composition of a predator feeding on multiple prey species from diet observations for predators feeding only on pairs of prey species. (5) Predators foraging on more prey species will show less pronounced prey switching than predators foraging on fewer prey species, thus providing a natural explanation for the known difficulties of observing prey switching in the field.
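For reference, the Holling type II response recovered as a special case has the standard multi-species disc-equation form (textbook background, not the paper's derived switching response):

```latex
f_i(N_1,\dots,N_n) = \frac{a_i N_i}{1 + \sum_{j=1}^{n} a_j h_j N_j}
```

where \(a_i\) is the attack rate on prey species \(i\), \(h_j\) the handling time for prey \(j\), and \(N_i\) the prey abundance; in the switching theory the effective attack rates additionally depend on the predator's dietary history and on the assumed similarity relationship among prey.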
Abstract:
We present a study of the behavior of two different figures of merit for quantum correlations, entanglement of formation and quantum discord, under quantum channels, showing how the former can, counterintuitively, be more resilient to the spoiling effects of such environments. By exploiting strict conservation relations between the two measures and imposing necessary constraints on the initial conditions, we are able to show explicitly that this predominance is related to the build-up of system-environment correlations.
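For context, the two figures of merit have the following standard definitions (textbook background; the specific conservation relations exploited in the paper are not reproduced here). With \(S\) the von Neumann entropy:

```latex
\mathcal{D}(A|B) = \mathcal{I}(A{:}B) - \mathcal{J}(A|B), \quad
\mathcal{I}(A{:}B) = S(\rho_A) + S(\rho_B) - S(\rho_{AB}), \quad
\mathcal{J}(A|B) = S(\rho_A) - \min_{\{\Pi_k^B\}} \sum_k p_k\, S(\rho_{A|k})
```

and the entanglement of formation is \(E_F(\rho_{AB}) = \min \sum_i p_i\, S(\mathrm{Tr}_B\,|\psi_i\rangle\langle\psi_i|)\), minimised over pure-state decompositions \(\rho_{AB} = \sum_i p_i\, |\psi_i\rangle\langle\psi_i|\).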
Abstract:
This paper describes an implementation of the popular Class-Shape Transformation method for aerofoil design within the SU2 software framework. To exploit the adjoint-based methods for aerodynamic optimisation within SU2, a formulation for obtaining geometric sensitivities from the new parameterisation is introduced, enabling the calculation of gradients with respect to the new design variables. To assess the accuracy and efficiency of the alternative approach, two transonic optimisation problems are investigated: an inviscid problem with multiple constraints and a viscous problem without constraints. Results show that the new parameterisation obtains reliable optima, with performance similar to that of the software's native parameterisations.
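For background, the sketch below implements the standard (Kulfan) Class-Shape Transformation surface with illustrative coefficients; the Bernstein coefficients A are the kind of design variables with respect to which the new geometric sensitivities and adjoint gradients would be taken:

```python
# Standard Class-Shape Transformation (Kulfan) aerofoil surface.
# Coefficient values are illustrative, not from the paper.

import numpy as np
from math import comb

def cst_surface(psi, A, N1=0.5, N2=1.0, zeta_te=0.0):
    """psi: chordwise coordinate x/c in [0, 1]; A: Bernstein coefficients.
    N1 = 0.5, N2 = 1.0 gives a round-nosed, sharp-trailing-edge class."""
    n = len(A) - 1
    C = psi**N1 * (1.0 - psi)**N2                       # class function
    S = sum(A[i] * comb(n, i) * psi**i * (1.0 - psi)**(n - i)
            for i in range(n + 1))                      # shape function
    return C * S + psi * zeta_te                        # surface ordinate z/c

psi = np.linspace(0.0, 1.0, 101)
z_upper = cst_surface(psi, A=[0.17, 0.16, 0.15, 0.14])  # illustrative values
```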
Abstract:
We propose a methodology for optimizing the execution of data parallel (sub-)tasks on CPU and GPU cores of the same heterogeneous architecture. The methodology is based on two main components: i) an analytical performance model for scheduling tasks among CPU and GPU cores, such that the global execution time of the overall data parallel pattern is optimized; and ii) an autonomic module which uses the analytical performance model to implement the data parallel computations in a completely autonomic way, requiring no programmer intervention to optimize the computation across CPU and GPU cores. The analytical performance model uses a small set of simple parameters to devise a partitioning, between CPU and GPU cores, of the tasks derived from structured data parallel patterns/algorithmic skeletons. The model takes into account both hardware-related and application-dependent parameters, and computes the percentage of tasks to be executed on CPU and GPU cores such that both kinds of cores are exploited and performance figures are optimized. The autonomic module, implemented in FastFlow, executes a generic map (reduce) data parallel pattern, scheduling part of the tasks to the GPU and part to the CPU cores so as to achieve optimal execution time. Experimental results on state-of-the-art CPU/GPU architectures are presented, assessing both the properties of the performance model and the effectiveness of the autonomic module.
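A deliberately simplified instance of such a model, assuming constant per-task service times and ignoring data-transfer costs (a sketch of the idea, not the paper's model): choose the GPU share so that the CPU cores and the GPU drain their portions in the same time:

```python
# Balance completion times: (1 - g) * n / cpu_rate == g * n / gpu_rate.
# Service times below are illustrative.

def gpu_fraction(t_cpu, t_gpu, n_cpu_cores):
    """t_cpu: mean per-task time on one CPU core; t_gpu: on the GPU."""
    cpu_rate = n_cpu_cores / t_cpu       # aggregate CPU throughput [tasks/s]
    gpu_rate = 1.0 / t_gpu               # GPU throughput [tasks/s]
    return gpu_rate / (cpu_rate + gpu_rate)

g = gpu_fraction(t_cpu=2.0e-3, t_gpu=0.5e-3, n_cpu_cores=8)
print(f"send {g:.1%} of tasks to the GPU")  # both sides finish together
```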
Abstract:
Many modeling problems require estimating a scalar output from one or more time series. Such problems are usually tackled by extracting a fixed number of features from the time series (such as their statistical moments), with a consequent loss of information that leads to suboptimal predictive models. Moreover, feature extraction techniques usually make assumptions that are not met in real-world settings (e.g. uniformly sampled time series of constant length), and fail to provide a thorough methodology for dealing with noisy data. In this paper a methodology based on functional learning is proposed to overcome these problems; the proposed Supervised Aggregative Feature Extraction (SAFE) approach makes it possible to derive continuous, smooth estimates of time series data (yielding aggregate local information), while simultaneously estimating a continuous shape function yielding optimal predictions. The SAFE paradigm enjoys several properties, such as a closed-form solution, the incorporation of first- and second-order derivative information into the regressor matrix, the interpretability of the generated functional predictor, and the possibility of exploiting the Reproducing Kernel Hilbert Space setting to yield nonlinear predictive models. Simulation studies are provided to highlight the strengths of the new methodology with respect to standard unsupervised feature selection approaches.
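A hedged illustration of the functional-learning ingredients (not the SAFE algorithm itself): each irregularly sampled series, of whatever length, is smoothed onto a fixed basis by closed-form penalised least squares, and the scalar output is then regressed on the resulting coefficients, also in closed form:

```python
# Functional features from irregularly sampled series + closed-form ridge.
# The basis choice and all data are illustrative stand-ins.

import numpy as np

def smooth_coeffs(t, y, lam=1e-3, degree=4):
    """Closed-form penalised least squares: (B'B + lam I)^{-1} B'y,
    with a polynomial basis standing in for a smoother functional basis."""
    B = np.vander(t, degree + 1, increasing=True)
    return np.linalg.solve(B.T @ B + lam * np.eye(B.shape[1]), B.T @ y)

rng = np.random.default_rng(0)
# Series with different, non-uniform sampling grids and lengths.
series = [(np.sort(rng.random(m)), rng.standard_normal(m)) for m in (30, 55, 42)]
X = np.array([smooth_coeffs(t, y) for t, y in series])   # functional features
targets = rng.standard_normal(len(series))
w = np.linalg.solve(X.T @ X + 1e-3 * np.eye(X.shape[1]), X.T @ targets)
```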
Abstract:
Economic and environmental benefits are the central issues in remanufacturing. Whereas extant remanufacturing research focuses primarily on such issues in remanufacturing technologies, production planning, inventory control and competitive strategies, we provide an alternative yet complementary approach that considers both issues in relation to different channel structures for marketing remanufactured products. Specifically, based on observations from current practice, we consider a manufacturer that sells new units through an independent retailer but has two options for marketing remanufactured products: (1) marketing them through its own e-channel (Model M) or (2) subcontracting the marketing activity to a third party (Model 3P). A central result is that although Model M is always greener than Model 3P, firms have less incentive to adopt it, because both the manufacturer and the retailer may be worse off when the manufacturer sells remanufactured products through its own e-channel rather than subcontracting to a third party. Extending both models to cases in which the manufacturer interacts with multiple retailers further reveals that the more retailers there are in the market, the greener Model M is relative to Model 3P.
Abstract:
As a post-CMOS technology, the incipient Quantum-dot Cellular Automata (QCA) technology has various advantages; a key aspect that makes it highly desirable is its low power dissipation. One method used to analyse power dissipation in QCA circuits is bit erasure analysis. This method has previously been applied to earlier QCA binary adder proposals; however, a number of improved QCA adders have been proposed more recently and have only been evaluated in terms of area and speed. As the three key performance metrics for QCA circuits are speed, area and power, this paper presents a bit erasure analysis of these adders to determine their power dissipation. The adders analysed are the Carry Flow Adder (CFA), the Brent-Kung Adder (B-K), the Ladner-Fischer Adder (L-F) and a more recently developed area-delay-efficient adder. This research allows for a more comprehensive comparison between the different QCA adder proposals. To the best of the authors' knowledge, this is the first time a power dissipation analysis has been carried out on these adders.
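As a sketch of what bit-erasure accounting yields (the per-addition counts below are hypothetical, not the paper's results): each irreversibly erased bit dissipates at least kT ln 2, so summing erasures bounds the dissipation per operation from below:

```python
# Landauer-limit accounting for bit erasure analysis.
from math import log

K_B, T = 1.380649e-23, 300.0        # Boltzmann constant [J/K], temperature [K]
LANDAUER = K_B * T * log(2)         # minimum energy per erased bit [J]

erased_bits_per_add = {             # hypothetical counts, for illustration
    "Carry Flow Adder (CFA)": 24,
    "Brent-Kung (B-K)": 18,
    "Ladner-Fischer (L-F)": 20,
}
for adder, bits in erased_bits_per_add.items():
    print(f"{adder}: >= {bits * LANDAUER:.2e} J per addition")
```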
Abstract:
Power has become a key constraint in current nanoscale integrated circuit design due to the increasing demands of mobile computing and a low-carbon economy. As an emerging paradigm, inexact circuit design offers a promising approach to significantly reducing both dynamic and static power dissipation for error-tolerant applications. Although fixed-point arithmetic circuits have been studied in terms of inexact computing, floating-point arithmetic circuits have not been fully considered, even though they require more power. In this paper, the first inexact floating-point adder is designed and applied to high dynamic range (HDR) image processing. Inexact floating-point adders are proposed by approximately designing the exponent subtractor and the mantissa adder; related logic, including the normalization and rounding modules, is also considered in terms of inexact computing. Two HDR images are processed using the proposed inexact floating-point adders to show the validity of the inexact design, with HDR-VDP used as a metric to measure the subjective quality of the image addition. Significant improvements are achieved in terms of area, delay and power consumption: comparison results show that the proposed inexact floating-point adders can improve power consumption and the power-delay product by 29.98% and 39.60%, respectively.
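A toy model of the mantissa-approximation idea, under stated assumptions (non-negative operands, single-precision-style mantissa; this is not the paper's adder design): after operand alignment, the k least significant aligned mantissa bits of the smaller operand are simply discarded before the addition:

```python
# Inexact floating-point addition by truncating aligned low-order mantissa
# bits of the smaller operand. A sketch, assuming non-negative inputs.

import math

def inexact_add(a, b, k=8, mant_bits=23):
    if abs(a) < abs(b):
        a, b = b, a                               # a is the larger operand
    _, exp_a = math.frexp(a)                      # exponent of a
    ulp = math.ldexp(1.0, exp_a - mant_bits)      # unit in the last place of a
    quantum = ulp * (1 << k)                      # granularity after dropping k bits
    b_trunc = math.floor(b / quantum) * quantum   # discard k aligned low bits
    return a + b_trunc

print(inexact_add(1.0, 2.0**-16))   # small addend largely discarded -> 1.0
print(inexact_add(1.0, 1.0))        # large aligned operands add exactly -> 2.0
```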