948 results for Conventional techniques
Abstract:
Carbon Fiber Reinforced Plastic (CFRP) composites were fabricated by vacuum resin infusion under two different processing conditions, namely vacuum only in the first case and vacuum plus external pressure in the second, in order to generate two levels of void content. The samples were graded relative to each other as higher and lower void-bearing, respectively. Microscopy and C-scan techniques were used to characterize the voids arising from the two processing conditions. Further, to determine the influence of voids on impact behavior, the fabricated +45°/90°/−45° composite samples were subjected to low-velocity impacts. The tests show that impact properties such as peak load and energy to peak load register higher values for the lower void-bearing samples, whereas the total energy, propagation energy and ductility index are higher for the higher void-bearing ones. Fractographic analysis showed that the higher void-bearing samples exhibit fewer interlaminar separations. These and other results are described and discussed in this report.
Abstract:
The problem of modelling the transient response of an elastic-perfectly-plastic cantilever beam carrying an impulsively loaded tip mass is often referred to as the Parkes cantilever problem [25] (The permanent deformation of a cantilever struck transversely at its tip, Proc. R. Soc. A, 288, p. 462). This paradigm for classical modelling of projectile impact on structures is revisited and updated using the mesh-free method smoothed particle hydrodynamics (SPH). The purpose of this study is to investigate further the behaviour of cantilever beams subjected to projectile impact at the tip, especially by considering physically real effects such as plastic shearing close to the projectile, shear deformation, and the variation of the shear strain along the length and across the thickness of the beam. Finally, going beyond macroscopic structural plasticity, a strategy to incorporate physical discontinuity (due to crack formation) in the SPH discretization is discussed and explored in the context of tip severance of the cantilever beam. The proposed scheme consequently illustrates the potential for a more refined treatment of penetration mechanics, which is paramount in the exploration of structural response under ballistic loading. The objective is to contribute to formulating a computational modelling framework within which transient dynamic plasticity and even penetration/failure phenomena for a range of materials, structures and impact conditions can be explored ab initio, this being essential for arriving at suitable tools for the design of armour systems. (C) 2014 Elsevier Ltd. All rights reserved.
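For readers unfamiliar with the method, SPH approximates field variables as kernel-weighted sums over neighbouring particles. The expressions below are the standard textbook particle approximation and density summation, quoted as background only; they are not formulas taken from this paper.

```latex
% Standard SPH particle approximation of a field f at particle i
% (m_j, \rho_j: mass and density of neighbour j; W: smoothing kernel of radius h)
f(\mathbf{r}_i) \approx \sum_j \frac{m_j}{\rho_j}\, f(\mathbf{r}_j)\, W\!\left(\lvert \mathbf{r}_i - \mathbf{r}_j \rvert, h\right),
\qquad
\rho_i = \sum_j m_j\, W\!\left(\lvert \mathbf{r}_i - \mathbf{r}_j \rvert, h\right)
```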
Abstract:
Lead-carbon hybrid ultracapacitors comprise positive lead dioxide plates of the lead-acid battery and negative plates of carbon-based electrical double-layer capacitors (EDLCs). Accordingly, a lead-carbon hybrid ultracapacitor combines the features of a battery with those of an EDLC. In this study, the development and performance comparison of two types of lead-carbon hybrid ultracapacitors, namely those with substrate-integrated and those with conventional pasted positive plates, is presented, as such a study is lacking in the literature. The study suggests that the faradaic efficiencies of the two types are comparable. However, their capacitance values as well as energy and power densities differ significantly. For the substrate-integrated positive-plate hybrid ultracapacitor, capacitance and energy density values are lower, but power density values are higher than for the pasted positive-plate devices owing to the shorter response time. Accordingly, internal resistance values are also lower for substrate-integrated lead-carbon hybrid ultracapacitors. Both types exhibit a good cycle life of 100,000 pulse charge-discharge cycles with only a nominal loss in capacitance.
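For context, the trade-off described above (lower capacitance and energy density but higher power density when the internal resistance and response time drop) follows from the usual single-cell estimates. These are standard textbook relations, not expressions from the paper:

```latex
% Standard single-cell estimates (C: capacitance, V: rated voltage, R_{ESR}: internal resistance)
E_{\max} \approx \tfrac{1}{2} C V^{2}, \qquad
P_{\max} \approx \frac{V^{2}}{4 R_{ESR}}, \qquad
\tau \approx R_{ESR}\, C
```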
Abstract:
Models of river flow time series are essential for efficient management of a river basin. They help policy makers develop efficient water utilization strategies that maximize the utility of a scarce water resource. Time series analysis has been used extensively for modeling river flow data, and the use of machine learning techniques such as support vector regression (SVR) and neural network models is gaining popularity. In this paper we compare the performance of these techniques by applying them to long-term time-series data of the inflows into the Krishnaraja Sagar reservoir (KRS) from three tributaries of the river Cauvery. Flow data over a period of 30 years from three different observation points established in the upper Cauvery river sub-basin are analyzed to estimate their contribution to KRS. Specifically, the ANN model uses a multi-layer feed-forward network trained with a back-propagation algorithm, and support vector regression with the epsilon-insensitive loss function is used. Auto-regressive moving average models are also applied to the same data. The performance of the different techniques is compared using metrics such as root mean squared error (RMSE), correlation, normalized root mean squared error (NRMSE) and Nash-Sutcliffe Efficiency (NSE).
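A minimal sketch of this kind of comparison, using scikit-learn on synthetic data, is given below. The lag structure, hyperparameters and data handling are illustrative assumptions rather than the paper's actual setup.

```python
# Compare SVR (epsilon-insensitive loss) and a feed-forward ANN on a lagged
# synthetic "flow" series, scoring with RMSE and Nash-Sutcliffe Efficiency.
import numpy as np
from sklearn.svm import SVR
from sklearn.neural_network import MLPRegressor
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
flow = 100 + 50 * np.sin(np.arange(1200) * 2 * np.pi / 365) + rng.normal(0, 10, 1200)

def make_lagged(series, n_lags=3):
    """Build (X, y) pairs where y[t] is predicted from the previous n_lags values."""
    X = np.column_stack([series[i:len(series) - n_lags + i] for i in range(n_lags)])
    return X, series[n_lags:]

X, y = make_lagged(flow)
split = int(0.8 * len(y))
scaler = StandardScaler().fit(X[:split])
X_tr, X_te = scaler.transform(X[:split]), scaler.transform(X[split:])
y_tr, y_te = y[:split], y[split:]

models = {
    "SVR (epsilon-insensitive loss)": SVR(kernel="rbf", C=10.0, epsilon=0.1),
    "Feed-forward ANN": MLPRegressor(hidden_layer_sizes=(10,), max_iter=2000, random_state=0),
}

def rmse(obs, sim):
    return np.sqrt(np.mean((obs - sim) ** 2))

def nse(obs, sim):
    """Nash-Sutcliffe Efficiency: 1 is a perfect fit, 0 merely matches the observed mean."""
    return 1.0 - np.sum((obs - sim) ** 2) / np.sum((obs - obs.mean()) ** 2)

for name, model in models.items():
    pred = model.fit(X_tr, y_tr).predict(X_te)
    print(f"{name}: RMSE={rmse(y_te, pred):.2f}, NSE={nse(y_te, pred):.3f}")
```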
Abstract:
The problem addressed in this paper is sound, scalable, demand-driven null-dereference verification for Java programs. Our approach consists conceptually of a base analysis, plus two major extensions for enhanced precision. The base analysis is a dataflow analysis wherein we propagate formulas in the backward direction from a given dereference, and compute a necessary condition at the entry of the program for the dereference to be potentially unsafe. The extensions are motivated by the presence of certain "difficult" constructs in real programs, e.g., virtual calls with too many candidate targets and library method calls, which would require excessive analysis time to analyze fully. The base analysis is hence configured to skip such a difficult construct when it is encountered, by dropping all tracked information that could potentially be affected by the construct. Our extensions are essentially more precise ways to account for the effect of these constructs on the information being tracked, without requiring their full analysis. The first extension is a novel scheme to transmit formulas along certain kinds of def-use edges, while the second extension is based on manually constructed backward-direction summary functions of library methods. We have implemented our approach and applied it to a set of real-life benchmarks. The base analysis is on average able to declare about 84% of dereferences in each benchmark as safe, while the two extensions push this number up to 91%. (C) 2014 Elsevier B.V. All rights reserved.
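A toy illustration of the backward "skip difficult constructs" idea is sketched below. The program representation, the formula language (reduced here to a set of variables that must be null for the dereference to be unsafe) and the dropping rule are simplified assumptions for exposition, not the authors' analysis.

```python
# Backward propagation of a necessary condition from a dereference, with
# conservative dropping of facts at a "difficult" construct.
from dataclasses import dataclass

@dataclass
class Stmt:
    kind: str            # "assign", "difficult_call", or "deref"
    target: str = ""     # variable written by this statement
    source: str = ""     # variable read (for "assign")

def backward_necessary_condition(stmts, deref_index):
    """Walk backwards from stmts[deref_index]; return the set of variables whose
    nullness at program entry is necessary for the dereference to be unsafe,
    or None if a skipped construct forces the condition to 'unknown'."""
    condition = {stmts[deref_index].target}          # dereferenced variable must be null
    for stmt in reversed(stmts[:deref_index]):
        if stmt.kind == "assign" and stmt.target in condition:
            # x = y: x is null at the dereference iff y was null here.
            condition.discard(stmt.target)
            condition.add(stmt.source)
        elif stmt.kind == "difficult_call" and stmt.target in condition:
            # Base analysis: drop everything the construct might affect.
            return None                               # condition degrades to "unknown"
    return condition

program = [
    Stmt("assign", target="p", source="q"),
    Stmt("difficult_call", target="r"),               # e.g. virtual call, many targets
    Stmt("assign", target="x", source="p"),
    Stmt("deref", target="x"),
]
print(backward_necessary_condition(program, 3))       # {'q'}: unsafe only if q was null at entry
```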
Abstract:
In this letter, we propose a scheme to improve the secrecy rate of cooperative networks using Analog Network Coding (ANC). ANC mixes the signals in the air; the desired signal is then separated from the mixed signals at the legitimate receiver using techniques such as self-interference subtraction and signal nulling, thereby achieving better secrecy rates. Assuming global channel state information, memoryless adversaries and the decode-and-forward strategy, we seek to maximize the average secrecy rate between the source and the destination, subject to an overall power budget. Exploiting the structure of the optimization problem, we then compute its optimal solution. Finally, we use numerical evaluations to compare our scheme with conventional approaches.
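As background, the achievable secrecy rate of a wiretap link is conventionally written as the positive difference between the destination and eavesdropper rates; the expression below is that standard definition, not the specific optimization objective solved in the letter.

```latex
% Standard achievable secrecy rate of a wiretap link (background definition only)
R_s = \left[\, \log_2\!\left(1 + \mathrm{SNR}_D\right) - \log_2\!\left(1 + \mathrm{SNR}_E\right) \right]^{+},
\qquad [x]^{+} = \max(x, 0)
```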
Abstract:
In this paper, using Gauge/gravity duality techniques, we explore the hydrodynamic regime of a very special class of strongly coupled QFTs that exhibit an emergent UV length scale in the presence of a negative hyperscaling violating exponent. The dual gravitational counterpart of these QFTs consists of scalar-dressed black brane solutions of an exactly integrable Einstein-scalar gravity model with Domain Wall (DW) asymptotics. In the first part of our analysis we compute the R-charge diffusion for the boundary theory and find that (unlike the case of pure AdS(4) black branes) it scales quite non-trivially with the temperature. In the second part of our analysis, we compute the eta/s ratio in both the non-extremal and the extremal limit of this special class of gauge theories, and it turns out to be equal to 1/4 pi in both cases. These results therefore suggest that quantum critical systems with (negative) hyperscaling violation in the UV might fall under a separate universality class compared to conventional quantum critical systems with the usual AdS(4) duals.
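For reference, the quoted value is the well-known Kovtun-Son-Starinets ratio; the expression below is the standard statement of that result and is included as background rather than as a formula from this paper.

```latex
% Kovtun-Son-Starinets shear-viscosity-to-entropy-density ratio
\frac{\eta}{s} = \frac{1}{4\pi}\,\frac{\hbar}{k_B}
\;\;\xrightarrow{\ \hbar \,=\, k_B \,=\, 1\ }\;\;
\frac{\eta}{s} = \frac{1}{4\pi}
```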
Abstract:
Streamflow forecasts at the daily time scale are necessary for effective management of water resources systems. Typical applications include flood control, water quality management, water supply to multiple stakeholders, hydropower and irrigation systems. Conventionally, physically based conceptual models and data-driven models are used for forecasting streamflows. Conceptual models require a detailed understanding of the physical processes governing the system being modeled; major constraints in developing effective conceptual models are a sparse hydrometric gauge network and short historical records that limit our understanding of those processes. Data-driven models, on the other hand, rely solely on previous hydrological and meteorological data without directly taking the underlying physical processes into account. Among various data-driven models, Auto-Regressive Integrated Moving Average (ARIMA) models and Artificial Neural Networks (ANNs) are the most widely used techniques. The present study assesses the performance of ARIMA and ANN methods in arriving at one- to seven-day-ahead forecasts of daily streamflows at the Basantpur stream gauge site, situated upstream of the Hirakud Dam in the Mahanadi river basin, India. The ANNs considered include a Feed-Forward back-propagation Neural Network (FFNN) and a Radial Basis Neural Network (RBNN). Daily streamflow forecasts at the Basantpur site find use in the management of water from the Hirakud reservoir. (C) 2015 The Authors. Published by Elsevier B.V.
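A minimal sketch of producing one- to seven-day-ahead forecasts with an ARIMA model, using statsmodels on synthetic data, is given below; the model order and the synthetic series are illustrative assumptions, not values from the study.

```python
# One- to seven-day-ahead daily streamflow forecasts with ARIMA on synthetic data.
import numpy as np
from statsmodels.tsa.arima.model import ARIMA

rng = np.random.default_rng(1)
# Synthetic "daily streamflow" with seasonality and noise (stand-in for gauge data)
t = np.arange(3 * 365)
flow = 200 + 120 * np.sin(2 * np.pi * t / 365) + rng.normal(0, 15, t.size)

train, test = flow[:-7], flow[-7:]
model = ARIMA(train, order=(2, 1, 1)).fit()       # order chosen for illustration only
forecast = model.forecast(steps=7)                # one- to seven-day-ahead forecasts

for lead, (f, obs) in enumerate(zip(forecast, test), start=1):
    print(f"{lead}-day-ahead: forecast={f:7.1f}  observed={obs:7.1f}")
```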
Abstract:
A novel design for the geometric configuration of honeycombs using a seamless combination of auxetic and conventional cores, i.e., elements with negative and positive Poisson's ratios respectively, is presented. The proposed design is shown to generate a superior band gap property while retaining all major advantages of a purely conventional or purely auxetic honeycomb structure. The seamless combination ensures that joint cardinality is also retained. Several configurations involving different degrees of auxeticity and different proportions of auxetic and conventional elements have been analyzed. It is shown that the preferred configurations open up wide and clean band gaps at significantly lower frequency ranges compared to their pure counterparts. Since the existence of band gaps is a desired feature for phononic applications, the reported results might be appealing, and the use of such a design may enable superior vibration control as well. The proposed configurations can be made iso-volume and iso-weight, giving designers greater freedom to apply them without significantly changing size and weight criteria.
Abstract:
High-sensitivity gas sensors are typically realized using metal catalysts and nanostructured materials, employing non-conventional synthesis and processing techniques that are incompatible with on-chip integration of sensor arrays. In this work, we report a new device architecture, a suspended core-shell Pt-PtOx nanostructure, that is fully CMOS-compatible. The device consists of a metal gate core embedded within a partially suspended semiconductor shell, with source and drain contacts in the anchored region. The reduced work function in the suspended region, coupled with the built-in electric field of the metal-semiconductor junction, enables modulation of the drain current due to room-temperature redox reactions on exposure to gas. The device architecture is validated using a Pt-PtO2 suspended nanostructure for sensing H2 down to 200 ppb at room temperature. By exploiting the catalytic activity of PtO2, in conjunction with its p-type semiconducting behavior, we demonstrate about two orders of magnitude improvement in sensitivity and limit of detection compared to sensors reported in the recent literature. A Pt thin film, deposited on SiO2, is lithographically patterned and converted into the suspended Pt-PtO2 sensor in a single isotropic SiO2 etching step. An optimum design space for the sensor is elucidated, with the initial Pt film thickness ranging between 10 nm and 30 nm, for low-power (< 5 µW), room-temperature operation. (C) 2015 AIP Publishing LLC.
Abstract:
Semiconductor device junction temperatures are maintained within datasheet-specified limits to avoid failure in power converters, and burn-in tests are used to ensure this. In inverters, thermal time constants can be large and burn-in tests must be performed over long durations. At higher power levels, besides increasing production cost, such testing requires sources and loads that can handle high power. In this study, a novel method to test a high-power three-phase grid-connected inverter is proposed. The method eliminates the need for high-power sources and loads; only the energy corresponding to the losses is consumed. The test is performed by circulating rated current within the three legs of the inverter. Since all the phase legs are loaded, the method can be used to test the inverter with either a common or an independent cooling arrangement for the phase legs. Further, the method can be used with different inverter configurations, three- or four-wire, and with different pulse width modulation (PWM) techniques. The method has been experimentally validated on a 24 kVA inverter for a four-wire configuration that uses sine-triangle PWM and a three-wire configuration that uses conventional space vector PWM.
Abstract:
Purpose: The composition of coronary artery plaque is known to play a critical role in heart attack. While calcified plaque can easily be diagnosed by conventional CT, CT fails to distinguish between fibrous and lipid-rich plaques. In the present paper, the authors discuss the experimental techniques and obtain a numerical algorithm by which the electron density (rho_e) and the effective atomic number (Z_eff) can be obtained from dual energy computed tomography (DECT) data. The idea is to use this inversion method to characterize and distinguish between lipid and fibrous coronary artery plaques. Methods: For the purpose of calibrating the CT machine, the authors prepare aqueous samples whose calculated values of (rho_e, Z_eff) lie in the ranges 2.65 x 10^23 <= rho_e <= 3.64 x 10^23 cm^-3 and 6.80 <= Z_eff <= 8.90. The authors fill the phantom with these known samples and experimentally determine HU(V_1) and HU(V_2), with V_1, V_2 = 100 and 140 kVp, for the same pixels, and thus determine the coefficients of inversion that allow (rho_e, Z_eff) to be determined from the DECT data. The HU(100) and HU(140) for the coronary artery plaque are obtained by filling the channel of the coronary artery with a viscous solution of methyl cellulose in water containing 2% contrast. These (rho_e, Z_eff) values of the coronary artery plaque are used for characterization on the basis of theoretical models of the atomic composition of the plaque materials. The results are compared with the histopathological report. Results: The authors find that the calibration gives rho_e with an accuracy of 3.5%, while Z_eff is found within 1% of the actual value, the confidence being 95%. The HU(100) and HU(140) are found to be considerably different for the same plaque at the same position, and there is a linear trend between these two HU values. It is noted that pure lipid-type plaques are practically nonexistent, and microcalcification, as observed in histopathology, has to be taken into account to explain the nature of the observed (rho_e, Z_eff) data. This also enables the composition of the plaque to be judged in terms of a basic model which considers the plaque to be composed of fibres, lipids and microcalcification. Conclusions: This simple and reliable method has the potential to be an effective modality for investigating the composition of noncalcified coronary artery plaques and thus help in their characterization. In this inversion method, (rho_e, Z_eff) of the scanned sample can be found by eliminating the effects of the CT machine and by ensuring that the determination of the two unknowns (rho_e, Z_eff) does not interfere with each other, and the nature of the plaque can be identified in terms of a three-component model. (C) 2015 American Association of Physicists in Medicine.
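A hedged sketch of this kind of dual-energy inversion is given below. A common parametrization treats HU at each tube voltage as approximately linear in rho_e and in rho_e x Z_eff^n; this functional form, the coefficients and the sample points are illustrative assumptions, not the calibration reported in the paper.

```python
# Calibrate a linear dual-energy model from known samples, then invert a
# (HU100, HU140) pair to recover (rho_e, Z_eff). All numbers are synthetic.
import numpy as np

n = 3.5  # assumed effective photoelectric exponent

def features(rho_e, z_eff):
    return np.array([rho_e, rho_e * z_eff**n, 1.0])

# Assumed forward coefficients, used here only to synthesize calibration data.
true_c100 = np.array([55.0, 0.030, -250.0])   # HU at 100 kVp
true_c140 = np.array([48.0, 0.018, -210.0])   # HU at 140 kVp

# Hypothetical calibration samples spanning the (rho_e, Z_eff) range of interest.
calib = [(2.65, 6.80), (3.00, 7.40), (3.34, 8.10), (3.64, 8.90)]
A = np.array([features(r, z) for r, z in calib])
hu100, hu140 = A @ true_c100, A @ true_c140

# Step 1: recover the inversion coefficients from the phantom scans (least squares).
c100, *_ = np.linalg.lstsq(A, hu100, rcond=None)
c140, *_ = np.linalg.lstsq(A, hu140, rcond=None)

def invert(hu100_px, hu140_px):
    """Recover (rho_e, Z_eff) for one pixel by solving the 2x2 linear system in
    the unknowns u1 = rho_e and u2 = rho_e * Z_eff**n."""
    M = np.array([c100[:2], c140[:2]])
    b = np.array([hu100_px - c100[2], hu140_px - c140[2]])
    u1, u2 = np.linalg.solve(M, b)
    return u1, (u2 / u1) ** (1.0 / n)

# Step 2: apply the inversion to a "plaque" pixel synthesized from rho_e=3.10, Z_eff=7.80.
hu_pair = (features(3.10, 7.80) @ true_c100, features(3.10, 7.80) @ true_c140)
print(invert(*hu_pair))   # recovers approximately (3.10, 7.80)
```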
Abstract:
Buffer leakage is an important parasitic loss mechanism in AlGaN/GaN high electron mobility transistors (HEMTs), and hence various methods are employed to grow semi-insulating buffer layers. Quantification of carrier concentration in such buffers using conventional capacitance-based profiling techniques is challenging due to their fully depleted nature even at zero bias. We provide a simple and effective model to extract carrier concentrations in fully depleted GaN films using capacitance-voltage (C-V) measurements. Extensive mercury probe C-V profiling has been performed on GaN films of differing thicknesses and doping levels in order to validate this model. Carrier concentrations extracted from the conventional C-V technique for partially depleted films having the same doping concentration, and from Hall measurements, show excellent agreement with those predicted by the proposed model, thus establishing the utility of this technique. The model can be readily extended to estimate background carrier concentrations from the depletion region capacitances of HEMT structures and of fully depleted films of any class of semiconductor materials.
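As background, the conventional profiling relations that the abstract contrasts with follow from the depletion approximation; the expressions below are those standard relations, while the paper's model extends the approach to fully depleted films.

```latex
% Conventional depletion-approximation C-V profiling relations (background only;
% W: depletion depth, A: contact area, \varepsilon_s: semiconductor permittivity)
W = \frac{\varepsilon_s A}{C},
\qquad
N(W) = \frac{2}{q\,\varepsilon_s A^{2}} \left| \frac{d\!\left(1/C^{2}\right)}{dV} \right|^{-1}
```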
Abstract:
In this article, the design and development of a Fiber Bragg Grating (FBG) based displacement sensor package for submicron-level displacement measurements are presented. A linear shift of 12.12 nm in the Bragg wavelength of the FBG sensor is obtained for a displacement of 6 mm, with a calibration factor of 0.495 µm/pm. Field trials have also been conducted by comparing the FBG displacement sensor package against a conventional dial gauge on a five-block masonry prism specimen loaded using the three-point bending technique. The responses from both sensors are in good agreement up to the failure of the masonry prism. Furthermore, from the real-time displacement data recorded using the FBG, it is possible to detect the time at which early cracks are generated inside the body of the specimen, which then propagate to the surface to form visible surface cracks; the corresponding load from the load cell can be obtained from the inflection (stress release point) in the displacement curve. Thus the developed FBG displacement sensor package can be used to detect failures in structures much earlier and to provide adequate time to take necessary action, thereby avoiding possible disaster.
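As a quick consistency check of the quoted calibration, converting the reported Bragg wavelength shift with the stated calibration factor recovers roughly the 6 mm applied displacement; the snippet below illustrates only this unit conversion, not the sensor's actual signal processing.

```python
# Convert the reported Bragg wavelength shift to displacement using the
# stated calibration factor (illustrative unit conversion only).
CALIBRATION_UM_PER_PM = 0.495      # reported calibration factor (micrometres per picometre)

def displacement_mm(bragg_shift_nm: float) -> float:
    """Convert a Bragg wavelength shift in nm to displacement in mm."""
    shift_pm = bragg_shift_nm * 1000.0                  # 1 nm = 1000 pm
    return shift_pm * CALIBRATION_UM_PER_PM / 1000.0    # um -> mm

print(displacement_mm(12.12))   # ~5.999 mm, consistent with the reported 6 mm
```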