Abstract:
Seismic site characterization is the basic requirement for seismic microzonation and site response studies of an area. Site characterization helps to gauge the average dynamic properties of soil deposits and thus to evaluate the surface-level response. This paper presents a seismic site characterization of Agartala city, the capital of Tripura state in northeast India. Seismically, Agartala city is situated in the Bengal Basin zone, which is classified as a highly active seismic zone by the Indian seismic code BIS-1893 (Indian Standard Criteria for Earthquake Resistant Design of Structures, Part 1: General Provisions and Buildings). According to the Bureau of Indian Standards, New Delhi (2002), it lies in the highest seismic zone (zone V) in the country. The city is very close to the Sylhet fault (Bangladesh), where two major earthquakes (Mw > 7) have occurred in the past and severely affected this city and the whole of northeast India. In order to perform site response evaluation, geophysical tests were conducted at 27 locations using the multichannel analysis of surface waves (MASW) technique, an advanced method for obtaining shear wave velocity (Vs) profiles from in situ measurements. In addition, standard penetration test (SPT-N) bore log data sets were obtained from the Urban Development Department, Govt. of Tripura. Of the 50 collected bore logs, 27 that are close to the MASW test locations were selected and used for further study. Both data sets (Vs profiles with depth and SPT-N bore log profiles) have been used to calculate the average shear wave velocity (Vs30) and average SPT-N values for the upper 30 m of the subsurface soil profiles. These were used to classify the study area as recommended by the National Earthquake Hazards Reduction Program (NEHRP) provisions. The average Vs30 and SPT-N values place the study area in seismic site classes D and E, indicating that the city is susceptible to site effects and liquefaction. Further, different combinations of Vs and SPT-N (corrected and uncorrected) values have been used to develop site-specific correlation equations by statistical regression, treating Vs as a function of the SPT-N value (corrected and uncorrected), with or without depth. In addition, a probabilistic approach based on a quantile-quantile (Q-Q) plot has been used to develop a correlation from the paired data. A comparison has also been made with well-known published correlations (for all soils) available in the literature. The present correlations agree closely with the other equations, but the correlation of shear wave velocity with depth and uncorrected SPT-N values provides the more suitable predictive model. The Q-Q plot correlation also agrees with all the other equations. In the absence of in situ measurements, the present correlations can be used to estimate Vs profiles of the study area for site response studies.
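As an illustration of the Vs30 averaging behind the NEHRP classification (a minimal sketch; the layer profile below is hypothetical example data, not results from the study):

```python
# Time-averaged shear wave velocity over the top 30 m, as defined in the
# NEHRP provisions: Vs30 = 30 / sum(d_i / Vs_i). The profile is hypothetical.

def vs30(layers):
    """layers: list of (thickness_m, vs_m_per_s) from the surface down."""
    depth, travel_time = 0.0, 0.0
    for thickness, vs in layers:
        if depth >= 30.0:
            break
        used = min(thickness, 30.0 - depth)   # clip the profile at 30 m
        travel_time += used / vs
        depth += used
    if depth < 30.0:
        raise ValueError("profile shallower than 30 m")
    return 30.0 / travel_time

profile = [(4.0, 180.0), (8.0, 240.0), (10.0, 320.0), (12.0, 420.0)]
print(f"Vs30 = {vs30(profile):.0f} m/s")   # ~283 m/s -> NEHRP site class D
```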
Abstract:
Today's programming languages are supported by powerful third-party APIs. For a given application domain, it is common to have many competing APIs that provide similar functionality. Programmer productivity therefore depends heavily on the programmer's ability to discover suitable APIs, both during an initial coding phase and during software maintenance. The aim of this work is to support the discovery and migration of math APIs. Math APIs are at the heart of many application domains, ranging from machine learning to scientific computation. Our approach, called MATHFINDER, combines executable specifications of mathematical computations with unit tests (operational specifications) of API methods. Given a math expression, MATHFINDER synthesizes pseudo-code composed of API methods to compute the expression by mining unit tests of the API methods. We present a sequential version of our unit test mining algorithm and also design a more scalable data-parallel version. We perform an extensive evaluation of MATHFINDER (1) for API discovery, where math algorithms are to be implemented from scratch, and (2) for API migration, where client programs utilizing one math API are to be migrated to another API. We evaluated the precision and recall of MATHFINDER on a diverse collection of math expressions, culled from algorithms used in a wide range of application areas such as control systems and structural dynamics. In a user study to evaluate the productivity gains obtained by using MATHFINDER for API discovery, the programmers who used MATHFINDER finished their programming tasks twice as fast as their counterparts who used the usual techniques such as web and code search, IDE code completion, and manual inspection of library documentation. For the problem of API migration, as a case study, we used MATHFINDER to migrate Weka, a popular machine learning library. Overall, our evaluation shows that MATHFINDER is easy to use, provides highly precise results across several math APIs and application domains even with a small number of unit tests per method, and scales to large collections of unit tests.
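To illustrate the idea of operational specifications (a simplified conceptual sketch, not MATHFINDER's actual mining algorithm; the method names and recorded test data below are hypothetical):

```python
# Conceptual sketch: match a target math expression against API methods by
# treating their unit tests as operational specifications, i.e. recorded
# (inputs -> output) examples. All names and data here are hypothetical.
import math

# Hypothetical mined unit-test data: method name -> list of (args, result).
unit_tests = {
    "LinAlg.norm2": [((3.0, 4.0), 5.0), ((6.0, 8.0), 10.0)],
    "LinAlg.dot":   [((3.0, 4.0), 25.0), ((6.0, 8.0), 100.0)],
}

def target(x, y):
    """Expression the programmer wants to compute: sqrt(x^2 + y^2)."""
    return math.sqrt(x * x + y * y)

def candidate_methods(tests, expr, tol=1e-9):
    """Return methods whose recorded test outputs agree with expr on all inputs."""
    matches = []
    for name, examples in tests.items():
        if all(abs(expr(*args) - out) <= tol for args, out in examples):
            matches.append(name)
    return matches

print(candidate_methods(unit_tests, target))   # ['LinAlg.norm2']
```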
Abstract:
A simple ball-drop impact tester is developed for studying the dynamic response of hierarchical, complex, small-sized systems and materials. The developed algorithm and set-up have provisions for applying a programmable potential difference along the height of a test specimen during impact loading; this enables experiments on various materials and smart structures whose mechanical behavior is sensitive to an electric field. The software-hardware system allows not only acquisition of dynamic force-time data at a very fast sampling rate (up to 2 × 10^6 samples/s), but also application of a pre-set potential difference (up to ±10 V) across a test specimen for a duration determined by feedback from the force-time data. We illustrate the functioning of the set-up by studying the effect of electric field on the energy absorption capability of carbon nanotube foams of 5 × 5 × 1.2 mm^3 size under impact conditions. (C) 2014 AIP Publishing LLC.
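A minimal sketch of the kind of force-triggered bias control described above, assuming hypothetical read_force() and set_bias() stand-ins for the actual data-acquisition interface (not the authors' implementation):

```python
# Illustrative only: apply a pre-set bias across the specimen for a fixed
# window once the measured impact force crosses a threshold, mimicking the
# force-time feedback described above. read_force() and set_bias() are
# hypothetical placeholders for the real DAQ calls.
import time

def run_impact_test(read_force, set_bias, bias_volts=10.0,
                    trigger_newtons=5.0, hold_seconds=0.002):
    set_bias(0.0)
    while True:
        if read_force() >= trigger_newtons:   # impact detected
            set_bias(bias_volts)              # apply field during loading
            time.sleep(hold_seconds)          # hold for the pre-set window
            set_bias(0.0)
            return
```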
Abstract:
Heat transfer rate and pressure measurements were made upstream of surface protuberances on a flat plate and a sharp cone subjected to hypersonic flow in a conventional shock tunnel. Heat flux was measured using platinum thin-film sensors deposited on a Macor substrate, and the pressure measurements were made using fast-acting piezoelectric sensors. A distinctive hot spot with the highest heat flux was obtained near the foot of the protuberance due to heavy vortex activity in the recirculating region. Schlieren flow visualization was used to capture the shock structures, and the separation distance ahead of the protrusions was quantitatively measured for varying protuberance heights. A computational analysis was conducted on the flat plate model using commercial computational fluid dynamics software, and the obtained trends of heat flux and pressure were compared with the experimental observations. Experiments were also conducted by physically disturbing the laminar boundary layer to check its effect on the magnitude of the hot spot heat flux. In addition to air, argon was also used as the test gas so that the Reynolds number could be varied. (C) 2014 AIP Publishing LLC.
Abstract:
Waveguides have been fabricated on melt-quenched, bulk chalcogenide glasses using the femtosecond laser inscription technique at low repetition rates in the single-scan regime. The inscribed waveguides have been characterized by the butt-coupling method, and the diameter of the waveguide was calculated using the mode-field image of the waveguide. The waveguide cross-section symmetry is analyzed using the heat diffusion model by relating the energy and translation speed of the laser. The net fluence and symmetry of the waveguides are correlated based on the theoretical values and experimental results of the guiding cross-section.
Abstract:
Different types of Large Carbon Cluster (LCC) layers are synthesized by a single-step pyrolysis technique at various ratios of the precursor mixture. The aim is to develop a fast-responding and stable thermal gauge based on an LCC layer, which has relatively good electrical conduction, for use in hypersonic flow fields. The thermoelectric property of the LCC layer has been studied, and it is found that these carbon clusters are sensitive to temperature changes. Suitable thermal gauges were therefore developed for blunt cone bodies and were tested in hypersonic shock tunnels at a flow Mach number of 6.8 to measure aerodynamic heating. The LCC layer of this thermal gauge encounters high shear forces and a hostile environment for test durations in the range of a millisecond. The results favor large carbon clusters over a conventional platinum thin-film gauge as a sensor, in view of their fast response and stability.
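For background, surface heat flux is commonly recovered from a thin-film gauge's surface temperature history using the Cook-Felderman discretization of one-dimensional semi-infinite conduction; a minimal sketch of that standard reduction (not necessarily the exact procedure used here; substrate properties would be gauge-specific):

```python
# Standard Cook-Felderman reduction of a thin-film gauge temperature history
# to surface heat flux, assuming 1-D semi-infinite conduction into the
# substrate. Background only; rho_c_k is the substrate's rho*c*k product.
import math

def cook_felderman(times, temps, rho_c_k):
    """times [s] and temps [K] sampled together; returns heat flux history [W/m^2]."""
    coeff = 2.0 * math.sqrt(rho_c_k / math.pi)
    q = [0.0]
    for n in range(1, len(times)):
        s = 0.0
        for i in range(1, n + 1):
            dT = temps[i] - temps[i - 1]
            s += dT / (math.sqrt(times[n] - times[i]) +
                       math.sqrt(times[n] - times[i - 1]))
        q.append(coeff * s)
    return q
```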
Abstract:
Designing and implementing thread-safe multithreaded libraries can be a daunting task, as developers of these libraries need to ensure that their implementations are free from concurrency bugs, including deadlocks. The usual practice involves employing software testing and/or dynamic analysis to detect deadlocks, and their effectiveness depends on well-designed multithreaded test cases. Unsurprisingly, developing multithreaded tests is significantly harder than developing sequential tests, for obvious reasons. In this paper, we address the problem of automatically synthesizing multithreaded tests that can induce deadlocks. The key insight of our approach is that a subset of the properties observed when a deadlock manifests in a concurrent execution can also be observed in a single-threaded execution. We design a novel, automatic, scalable and directed approach that identifies these properties and synthesizes a deadlock-revealing multithreaded test. The input to our approach is the library implementation under consideration and the output is a set of deadlock-revealing multithreaded tests. We have implemented our approach as part of a tool named OMEN. OMEN is able to synthesize multithreaded tests on many multithreaded Java libraries. Applying a dynamic deadlock detector to the execution of the synthesized tests results in the detection of a number of deadlocks, including 35 real deadlocks in classes documented as thread-safe. Moreover, our experimental results show that dynamic analysis on multithreaded tests that are either synthesized randomly or developed by third-party programmers is ineffective in detecting these deadlocks.
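For illustration, the classic lock-order inversion that such synthesized multithreaded tests aim to drive a library into (a generic example, not an OMEN-generated test):

```python
# Generic illustration of a lock-order-inversion deadlock: two threads
# acquire the same pair of locks in opposite orders and block forever.
import threading, time

lock_a, lock_b = threading.Lock(), threading.Lock()

def worker_1():
    with lock_a:
        time.sleep(0.1)        # widen the race window
        with lock_b:           # blocks: thread 2 already holds lock_b
            pass

def worker_2():
    with lock_b:
        time.sleep(0.1)
        with lock_a:           # blocks: thread 1 already holds lock_a
            pass

t1 = threading.Thread(target=worker_1, daemon=True)
t2 = threading.Thread(target=worker_2, daemon=True)
t1.start(); t2.start()
t1.join(timeout=2); t2.join(timeout=2)
print("deadlocked" if t1.is_alive() and t2.is_alive() else "finished")
```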
Abstract:
The local fast-spiking interneurons (FSINs) are considered crucial for the generation, maintenance, and modulation of neuronal network oscillations, especially in the gamma frequency band. Gamma frequency oscillations have been associated with different aspects of behavior, but the prolonged effects of gamma frequency synaptic activity on the FSINs remain elusive. Using whole-cell current clamp patch recordings, we observed a sustained decrease of intrinsic excitability in the FSINs of the dentate gyrus (DG) following repetitive stimulation of the mossy fibers at 30 Hz (gamma bursts). Surprisingly, the granule cells (GCs) did not express intrinsic plastic changes upon similar synaptic excitation of their apical dendritic inputs. Interestingly, pairing the gamma bursts with membrane hyperpolarization accentuated the plasticity in FSINs following the induction protocol, while the plasticity was attenuated following gamma bursts paired with membrane depolarization. Paired-pulse ratio measurements of the synaptic responses did not show significant changes during the experiments. However, the induction protocols were accompanied by a postsynaptic calcium rise in FSINs; interestingly, the maximum and minimum increases occurred during gamma bursts with membrane hyperpolarization and depolarization, respectively. Including a selective blocker of calcium-permeable AMPA receptors (CP-AMPARs) in the bath significantly attenuated the calcium rise and blocked its membrane potential dependence in the FSINs, suggesting their involvement in the observed phenomenon. Chelation of intracellular calcium, blocking HCN channel conductance, or blocking CP-AMPARs during the experiment prevented the long-lasting expression of the plasticity. Simultaneous dual patch recordings from FSINs and synaptically connected putative GCs confirmed the decreased inhibition in the GCs accompanying the decreased intrinsic excitability in the FSINs. Experimentally constrained network simulations using NEURON predicted increased spiking in the GC owing to the decreased input resistance in the FSIN. We hypothesize that the selective plasticity in the FSINs induced by local network activity may serve to increase information throughput into the downstream hippocampal subfields, besides providing neuroprotection to the FSINs. (C) 2014 Wiley Periodicals, Inc.
Abstract:
3-D full-wave method of moments (MoM) based electromagnetic analysis is a popular means toward accurate solution of Maxwell's equations. The time and memory bottlenecks associated with such a solution have been addressed over the last two decades by linear-complexity fast solver algorithms. However, solving the 3-D full-wave MoM system on an arbitrary mesh of a package-board structure does not guarantee accuracy, since the discretization may not be fine enough to capture spatial changes in the solution variable. At the same time, uniform over-meshing of the entire structure generates a large number of solution variables and therefore requires an unnecessarily large matrix solution. In this paper, different refinement criteria are studied in an adaptive mesh refinement platform. Consequently, the most suitable conductor mesh refinement criterion for MoM-based electromagnetic package-board extraction is identified, and the advantages of this adaptive strategy are demonstrated from both accuracy and speed perspectives. The results are also compared with those of the recently reported integral equation-based h-refinement strategy. Finally, a new methodology to expedite each adaptive refinement pass is proposed.
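A generic adaptive mesh refinement loop of the kind studied here (a simplified sketch; the solver, per-element error indicator, and mesher are hypothetical placeholders for the paper's MoM-specific machinery and refinement criteria):

```python
# Simplified, generic adaptive refinement loop: solve on the current mesh,
# score each element with a refinement criterion, refine the worst fraction,
# and repeat until the extracted quantity settles between passes.
# `solve`, `score_elements`, and `refine` are hypothetical placeholders.

def adaptive_refine(mesh, solve, score_elements, refine,
                    rel_tol=0.01, frac=0.1, max_passes=10):
    prev = None
    for _ in range(max_passes):
        solution = solve(mesh)            # scalar quantity of interest from the MoM solve
        if prev is not None and abs(solution - prev) <= rel_tol * abs(prev):
            break                         # converged between successive passes
        scores = score_elements(mesh, solution)       # per-element error indicator
        worst = sorted(scores, key=scores.get, reverse=True)
        mesh = refine(mesh, worst[:max(1, int(frac * len(worst)))])
        prev = solution
    return mesh, solution
```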
Abstract:
This article presents frequentist inference for accelerated life test data of series systems with independent log-normal component lifetimes. The means of the component log-lifetimes are assumed to depend on the stress variables through a linear stress translation function that can accommodate the standard stress translation functions in the literature. An expectation-maximization algorithm is developed to obtain the maximum likelihood estimates of the model parameters. The maximum likelihood estimates are then further refined by the bootstrap, which is also used to infer the component and system reliability metrics at usage stresses. The developed methodology is illustrated by analyzing a real as well as a simulated dataset. A simulation study is also carried out to judge the effectiveness of the bootstrap. It is found that, in this model, application of the bootstrap results in significant improvement over the simple maximum likelihood estimates.
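A minimal sketch of the parametric bootstrap refinement step, shown for a generic lognormal MLE (illustrative only; not the paper's series-system accelerated life test likelihood):

```python
# Illustrative parametric bootstrap around an MLE: refit the model to data
# simulated from the fitted parameters, then use the replicate estimates for
# bias correction and confidence intervals. Generic lognormal example.
import numpy as np

rng = np.random.default_rng(0)

def fit_lognormal(x):
    """MLE of (mu, sigma) for lognormal data."""
    logs = np.log(x)
    return logs.mean(), logs.std()

data = rng.lognormal(mean=2.0, sigma=0.5, size=50)     # stand-in lifetimes
mu_hat, sig_hat = fit_lognormal(data)

B = 1000
reps = np.array([fit_lognormal(rng.lognormal(mu_hat, sig_hat, size=len(data)))
                 for _ in range(B)])
bias = reps.mean(axis=0) - np.array([mu_hat, sig_hat])
ci = np.percentile(reps, [2.5, 97.5], axis=0)
print("bias-corrected mu:", mu_hat - bias[0], " 95% CI:", ci[:, 0])
```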
Abstract:
The room-temperature synthesis of mono-dispersed gold nanoparticles, by the reduction of chlorauric acid (HAuCl4) with tannic acid as the reducing and stabilizing agent, is carried out in a microchannel. The microchannel is fabricated with one soft wall, so that there is a spontaneous transition to turbulence, and thereby enhanced mixing, when the flow Reynolds number increases beyond a critical value. The objective of the study is to examine whether the nanoparticle size and polydispersity can be modified by enhancing the mixing in the microchannel device. The flow rates are varied in order to study nanoparticle formation both in laminar flow and in the chaotic flow after transition, and the molar ratio of chlorauric acid to tannic acid is also varied to study its effect on nanoparticle size. The formation of gold nanoparticles is examined by UV-visible spectroscopy and the size distribution is determined using scanning electron microscopy. The synthesized nanoparticle size decreases from ≥6 nm to ≤4 nm when the molar ratio of chlorauric acid to tannic acid is increased from 1 to 20. It is found that there is no systematic variation of nanoparticle size with flow velocity, and the nanoparticle size is not altered when the flow changes from laminar to turbulent. However, the standard deviation of the size distribution decreases by about 30% after transition, indicating that the enhanced mixing results in greater uniformity of particle size.
Abstract:
An abundance of spectrum access and sensing algorithms are available in the dynamic spectrum access (DSA) and cognitive radio (CR) literature. Often, however, the functionality and performance of such algorithms are validated against theoretical calculations using only simulations. Both the theoretical calculations and the simulations come with their attendant sets of assumptions. For instance, designers of dynamic spectrum access algorithms often take spectrum sensing and rendezvous mechanisms between transmitter-receiver pairs for granted. Test bed designers, on the other hand, either customize so much of their design that it becomes difficult to replicate using commercial off-the-shelf (COTS) components, or restrict themselves to simulation, emulation/hardware-in-the-loop (HIL), or pure hardware, but not all three. Implementation studies on test beds that are sophisticated enough to combine the three aforementioned aspects, but can at the same time be put together using COTS hardware and software packages, are rare. In this paper we describe i) the implementation of a hybrid test bed using a previously proposed hardware-agnostic system architecture, ii) the implementation of DSA on this test bed, and iii) the realistic hardware- and software-constrained performance of DSA. A snapshot energy detector (ED) and Cumulative Summation (CUSUM), a sequential change detection algorithm, are available for spectrum sensing, and a two-way handshake mechanism in a dedicated control channel facilitates transmitter-receiver rendezvous.
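For reference, textbook versions of the two sensing primitives mentioned above (snapshot energy detection and one-sided CUSUM change detection); the thresholds, drift, and simulated stream are illustrative values, not the test bed's tuned parameters:

```python
# Textbook snapshot energy detector and one-sided CUSUM change detector on
# instantaneous power. Thresholds and the simulated signal are illustrative.
import numpy as np

def energy_detect(samples, threshold):
    """Snapshot ED: declare the channel busy if average power exceeds threshold."""
    return np.mean(np.abs(samples) ** 2) > threshold

def cusum(samples, drift, threshold):
    """One-sided CUSUM on instantaneous power; returns index of detected change."""
    g = 0.0
    for n, x in enumerate(samples):
        g = max(0.0, g + abs(x) ** 2 - drift)   # accumulate evidence of a power rise
        if g > threshold:
            return n
    return None

rng = np.random.default_rng(1)
noise = rng.normal(0, 1, 500)
signal = rng.normal(0, 1, 500) + 2.0            # primary user appears mid-stream
stream = np.concatenate([noise, signal])
print(energy_detect(stream[:500], threshold=2.0))    # False: noise only
print(cusum(stream, drift=2.0, threshold=50.0))      # change flagged shortly after n=500
```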
Investigation of schemes for incorporating generator Q limits in the fast decoupled load flow method
Abstract:
Fast Decoupled Load Flow (FDLF) is a very popular and widely used power flow analysis method because of its simplicity and efficiency. Even though the basic FDLF algorithm is well investigated, the same is not true of the additional schemes/modifications required to obtain adjusted load flow solutions using the FDLF method. Handling generator Q limits is one such important feature needed in any practical load flow method. This paper presents a comprehensive investigation of two classes of schemes intended to handle this aspect, i.e. the bus-type switching scheme and the sensitivity scheme. We propose two new sensitivity-based schemes and assess their performance in comparison with the existing schemes. In addition, a new scheme to avoid the possibility of anomalous solutions encountered while using the conventional schemes is also proposed and evaluated. Results from extensive simulation studies are provided to highlight the strengths and weaknesses of the existing and proposed schemes, especially from the point of view of reliability.
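A simplified sketch of the conventional bus-type switching scheme that serves as the baseline here (not the proposed sensitivity-based schemes); the per-bus reactive power computation is a hypothetical solver hook:

```python
# Conventional bus-type switching for generator Q limits, sketched: after an
# FDLF iteration, any PV bus whose required reactive injection violates a
# limit is switched to PQ with Q pinned at that limit. `compute_q` is a
# hypothetical placeholder for the solver's reactive power calculation.

def enforce_q_limits(buses, compute_q):
    """buses: dicts with 'type', 'q_min', 'q_max', 'q_set'. Returns True if any bus switched."""
    switched = False
    for bus in buses:
        if bus["type"] != "PV":
            continue
        q = compute_q(bus)                       # reactive power needed to hold the voltage set-point
        if q > bus["q_max"] or q < bus["q_min"]:
            bus["type"] = "PQ"                   # release the voltage set-point
            bus["q_set"] = min(max(q, bus["q_min"]), bus["q_max"])
            switched = True
    return switched
```

In a full FDLF loop this check would run between the half-iterations, and a switched bus may later revert to PV if its voltage recovers, which is where the anomalous solutions and oscillations addressed in the paper can arise.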
Abstract:
Semiconductor device junction temperatures are maintained within datasheet-specified limits to avoid failure in power converters, and burn-in tests are used to ensure this. In inverters, thermal time constants can be large, and burn-in tests need to be performed over long durations. At higher power levels, besides increased production cost, the testing requires sources and loads that can handle high power. In this study, a novel method to test a high-power three-phase grid-connected inverter is proposed. The method eliminates the need for high-power sources and loads; only the energy corresponding to the losses is consumed. The test is done by circulating rated current within the three legs of the inverter. Since all the phase legs are loaded, the method can be used to test the inverter with either a common or an independent cooling arrangement for the inverter phase legs. Further, the method can be used with different inverter configurations, three- or four-wire, and with different pulse width modulation (PWM) techniques. The method has been experimentally validated on a 24 kVA inverter for a four-wire configuration that uses sine-triangle PWM and a three-wire configuration that uses conventional space vector PWM.
Abstract:
Time division multiple access (TDMA) based channel access mechanisms perform better than contention-based channel access mechanisms in terms of channel utilization, reliability and power consumption, especially for high-data-rate applications in wireless sensor networks (WSNs). Most of the existing distributed TDMA scheduling techniques can be classified as either static or dynamic. The primary purpose of static TDMA scheduling algorithms is to improve channel utilization by generating a schedule of smaller length, but they usually take a long time to compute the schedule and hence are not suitable for WSNs in which the network topology changes dynamically. On the other hand, dynamic TDMA scheduling algorithms generate a schedule quickly, but they are not efficient in terms of the generated schedule length. In this paper, we propose a novel scheme for TDMA scheduling in WSNs which can generate a compact schedule similar to static scheduling algorithms, while its runtime performance matches that of dynamic scheduling algorithms. Furthermore, the proposed distributed TDMA scheduling algorithm has the capability to trade off schedule length against the time required to generate the schedule. This allows WSN developers to tune the performance according to the requirements of the prevailing WSN applications and the need to perform re-scheduling. Finally, the proposed TDMA scheduling is fault-tolerant to packet loss due to an erroneous wireless channel. The algorithm has been simulated using the Castalia simulator to compare its performance with that of others in terms of generated schedule length and the time required to generate the TDMA schedule. Simulation results show that the proposed algorithm generates a compact schedule in much less time.
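For context, a textbook centralized greedy baseline for TDMA slot assignment under the usual two-hop interference constraint (a sketch for intuition only, not the proposed distributed algorithm):

```python
# Greedy TDMA slot assignment under the two-hop interference constraint:
# each node takes the smallest slot not used by any node within two hops.
# Topology below is a small hypothetical example.

def greedy_tdma(neighbors):
    """neighbors: dict node -> set of one-hop neighbor nodes."""
    slot = {}
    for node in sorted(neighbors, key=lambda n: -len(neighbors[n])):  # densest first
        two_hop = set(neighbors[node])
        for nb in neighbors[node]:
            two_hop |= neighbors[nb]
        two_hop.discard(node)
        used = {slot[n] for n in two_hop if n in slot}
        slot[node] = next(s for s in range(len(neighbors)) if s not in used)
    return slot  # schedule length = max slot + 1

topology = {0: {1, 2}, 1: {0, 2, 3}, 2: {0, 1, 3}, 3: {1, 2, 4}, 4: {3}}
print(greedy_tdma(topology))
```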