970 results for Balance test


Relevance:

20.00%

Publisher:

Abstract:

Detection of explosives, especially trinitrotoluene (TNT), is of utmost importance due to its highly explosive nature and environmental hazard; consequently, the detection of TNT has been a matter of great concern to the scientific community worldwide. Herein, a new aggregation-induced phosphorescent emission (AIPE)-active iridium(III) bis(2-(2,4-difluorophenyl)pyridinato-N,C2') (2-(2-pyridyl)benzimidazolato-N,N') complex [FIrPyBiz] has been developed that serves as a molecular probe for the detection of TNT in the vapor phase, solid phase, and aqueous media. In addition, phosphorescent test strips have been constructed by impregnating Whatman filter paper with aggregates of FIrPyBiz for trace detection of TNT in contact mode, with detection limits in nanograms, taking advantage of the excited-state interaction of the AIPE-active phosphorescent iridium(III) complex with TNT and the associated photophysical properties.

Relevance:

20.00%

Publisher:

Abstract:

A scheme for built-in self-test of analog signals with minimal area overhead, for measuring on-chip voltages in an all-digital manner, is presented. The method is well suited for a distributed architecture in which the routing of analog signals over long paths is minimized. A clock is routed serially to sampling heads placed at the nodes of the analog test voltages. The sampling head at each test node, consisting of a pair of delay cells and a pair of flip-flops, locally converts the test voltage to a skew between a pair of subsampled signals, giving rise to as many subsampled signal pairs as there are nodes. To measure a particular analog voltage, the corresponding subsampled signal pair is fed to a delay measurement unit that measures the skew between the pair. The concept is validated by designing a test chip in a UMC 130-nm CMOS process. Sub-millivolt accuracy for static signals is demonstrated for a measurement time of a few seconds, and an effective number of bits of 5.29 is achieved for low-bandwidth signals in the absence of sample-and-hold circuitry.
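As a rough illustration of the voltage-to-skew idea described above, the sketch below assumes a linear delay-cell characteristic; the sensitivity, offset, and reference values are invented for the example and are not taken from the paper.

```python
# Hedged sketch: a linear model of the sampling head's voltage-to-skew
# conversion and its inversion by the delay measurement unit. All constants
# are illustrative assumptions, not values from the test chip.
K_DELAY = 2.0e-9   # assumed delay sensitivity, seconds per volt
T_OFFSET = 1.0e-9  # assumed skew at the reference voltage, seconds
V_REF = 0.6        # assumed reference voltage, volts

def voltage_to_skew(v):
    """Skew between the subsampled signal pair for node voltage v."""
    return T_OFFSET + K_DELAY * (v - V_REF)

def skew_to_voltage(skew):
    """Back-conversion performed conceptually by the delay measurement unit."""
    return V_REF + (skew - T_OFFSET) / K_DELAY

measured = voltage_to_skew(0.6337)                   # stand-in for an on-chip measurement
print(f"{skew_to_voltage(measured) * 1e3:.1f} mV")   # sub-mV readout needs ps-level skew resolution
```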

Relevance:

20.00%

Publisher:

Abstract:

The impact of future climate change on the glaciers in the Karakoram and Himalaya (KH) is investigated using CMIP5 multi-model temperature and precipitation projections, and a relationship between glacial accumulation-area ratio and mass balance developed for the region based on the last 30 to 40 years of observational data. We estimate that the current glacial mass balance (year 2000) for the entire KH region is -6.6 +/- 1 Gt a^-1, which decreases about sixfold to -35 +/- 2 Gt a^-1 by the 2080s under the high emission scenario of RCP8.5. However, under the low emission scenario of RCP2.6 the glacial mass loss only doubles to -12 +/- 2 Gt a^-1 by the 2080s. We also find that 10.6% and 27% of the glaciers could face 'eventual disappearance' by the end of the century under RCP2.6 and RCP8.5 respectively, underscoring the threat to water resources under high emission scenarios.
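A purely illustrative sketch of the kind of accumulation-area-ratio (AAR) to mass-balance relationship described above is given below; the observations and the projected AAR are hypothetical placeholders, not the study's data.

```python
import numpy as np

# Hedged, illustrative sketch: fit a linear relation between AAR and specific
# mass balance from past (hypothetical) observations, then evaluate it at a
# projected AAR. None of these numbers come from the paper.
aar_obs = np.array([0.60, 0.52, 0.45, 0.40])   # hypothetical observed AAR values
b_obs = np.array([0.10, -0.20, -0.45, -0.65])  # hypothetical mass balance, m w.e. per year

slope, intercept = np.polyfit(aar_obs, b_obs, 1)   # b = slope * AAR + intercept
aar_projected = 0.30                               # hypothetical future AAR
print(slope * aar_projected + intercept)           # projected specific mass balance
```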

Relevance:

20.00%

Publisher:

Abstract:

In this paper, the governing equations for free vibration of a non-homogeneous rotating Timoshenko beam of uniform cross-section are studied using an inverse problem approach, for both cantilever and pinned-free boundary conditions. The bending displacement and the rotation due to bending are assumed to be simple polynomials that satisfy all four boundary conditions. It is found that, for certain polynomial variations of the material mass density, elastic modulus and shear modulus along the length of the beam, the assumed polynomials serve as simple closed-form solutions to the coupled second-order governing differential equations with variable coefficients. Furthermore, an infinite number of analytical polynomial functions are possible for the material mass density, shear modulus and elastic modulus distributions, which share the same frequency and mode shape for a particular mode. The derived results are intended to serve as benchmark solutions for testing approximate or numerical methods used for the vibration analysis of rotating non-homogeneous Timoshenko beams.
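The sketch below checks, symbolically, that one assumed pair of polynomials satisfies the four cantilever boundary conditions (clamped at the root, moment-free and shear-free at the tip); the particular polynomials are illustrative choices, not the ones used in the paper.

```python
import sympy as sp

# Hedged sketch: verify that assumed polynomials for the bending displacement
# W(x) and bending rotation phi(x) satisfy the cantilever boundary conditions.
x, L, a = sp.symbols("x L a", positive=True)
W = a * x**2 / 4                   # assumed bending displacement
phi = a * x - a * x**2 / (2 * L)   # assumed rotation due to bending

boundary_conditions = [
    W.subs(x, 0),                          # W(0) = 0            (clamped root)
    phi.subs(x, 0),                        # phi(0) = 0          (clamped root)
    sp.diff(phi, x).subs(x, L),            # phi'(L) = 0         (zero bending moment at tip)
    (sp.diff(W, x) - phi).subs(x, L),      # W'(L) - phi(L) = 0  (zero shear at tip)
]
print([sp.simplify(bc) for bc in boundary_conditions])   # all four vanish identically
```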

Relevance:

20.00%

Publisher:

Abstract:

Seismic site characterization is the basic requirement for seismic microzonation and site response studies of an area. Site characterization helps to gauge the average dynamic properties of soil deposits and thus to evaluate the surface-level response. This paper presents a seismic site characterization of Agartala city, the capital of Tripura state in northeast India. Seismically, Agartala is situated in the Bengal Basin zone, which is classified as a highly active seismic zone by the Indian seismic code BIS-1893 (Indian Standard Criteria for Earthquake Resistant Design of Structures, Part 1: General Provisions and Buildings). According to the Bureau of Indian Standards, New Delhi (2002), it lies in the highest seismic zone (zone V) in the country. The city is very close to the Sylhet fault (Bangladesh), where two major earthquakes (M_w > 7) have occurred in the past and severely affected this city and the whole of northeast India. In order to perform site response evaluation, a series of geophysical tests at 27 locations were conducted using the multichannel analysis of surface waves (MASW) technique, an advanced method for obtaining shear wave velocity (V_s) profiles from in situ measurements. In addition, standard penetration test (SPT-N) bore log data sets were obtained from the Urban Development Department, Govt. of Tripura. Out of the 50 collected bore logs, the 27 closest to the MASW test locations were selected and used for further study. Both data sets (V_s profiles with depth and SPT-N bore log profiles) were used to calculate the average shear wave velocity (V_s30) and average SPT-N values for the upper 30 m of the subsurface soil profiles. These were used for site classification of the study area as recommended by the National Earthquake Hazards Reduction Program (NEHRP) manual. The average V_s30 and SPT-N values place the study area in seismic site classes D and E, indicating that the city is susceptible to site effects and liquefaction. Further, different data set combinations between V_s and SPT-N (corrected and uncorrected) values were used to develop site-specific correlation equations by statistical regression, treating V_s as a function of the SPT-N value (corrected and uncorrected), with or without depth. A probabilistic approach, based on a quantile-quantile (Q-Q) plot of the data set pairs, has also been presented to develop a correlation. A comparison has also been made with well-known published correlations (for all soils) available in the literature. The present correlations closely agree with the other equations, but the correlation of shear wave velocity with depth and uncorrected SPT-N values provides the more suitable predictive model; the Q-Q plot also agrees with all the other equations. In the absence of in situ measurements, the present correlations could be used to estimate V_s profiles of the study area for site response studies.
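The V_s30 computation behind the NEHRP classification mentioned above is a straightforward travel-time average over the top 30 m; a minimal sketch follows, using a hypothetical layered profile and the usual NEHRP velocity boundaries.

```python
# Hedged sketch: time-averaged shear wave velocity over the upper 30 m (Vs30)
# and a NEHRP-style site class lookup. The example profile is hypothetical.
def vs30(thicknesses_m, vs_mps):
    depth, travel_time = 0.0, 0.0
    for d, vs in zip(thicknesses_m, vs_mps):
        d = min(d, 30.0 - depth)          # truncate the profile at 30 m depth
        if d <= 0.0:
            break
        travel_time += d / vs
        depth += d
    return depth / travel_time            # equals 30 / sum(d_i / Vs_i) for a full profile

def nehrp_site_class(v):
    # Approximate NEHRP boundaries in m/s.
    if v > 1500.0: return "A"
    if v > 760.0:  return "B"
    if v > 360.0:  return "C"
    if v > 180.0:  return "D"
    return "E"

v = vs30([5.0, 10.0, 20.0], [150.0, 220.0, 400.0])   # hypothetical layered profile
print(round(v, 1), nehrp_site_class(v))
```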

Relevance:

20.00%

Publisher:

Abstract:

In this study, we applied the integration methodology developed in the companion paper by Aires (2014) to real satellite observations over the Mississippi Basin. The methodology provides basin-scale estimates of the four water budget components (precipitation P, evapotranspiration E, water storage change ΔS, and runoff R) in a two-step process: a Simple Weighting (SW) integration and a Postprocessing Filtering (PF) that imposes the water budget closure. A comparison with in situ observations of P and E demonstrated that PF improved the estimation of both components. A Closure Correction Model (CCM) was then derived from the integrated product (SW+PF); it allows each observation data set to be corrected independently, unlike the SW+PF method, which requires simultaneous estimates of the four components. The CCM standardizes the various data sets for each component and greatly reduces the budget residual (P - E - ΔS - R). As a direct application, the CCM was combined with the water budget equation to reconstruct missing values in any component. Results of a Monte Carlo experiment with synthetic gaps demonstrated the good performance of the method, except for the runoff data, whose variability is of the same order of magnitude as the budget residual. Similarly, we propose a reconstruction of ΔS between 1990 and 2002, when no Gravity Recovery and Climate Experiment data are available. Unlike most studies dealing with the water budget closure at the basin scale, only satellite observations and in situ runoff measurements are used. Consequently, the integrated data sets are model independent and can be used for model calibration or validation.
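A minimal sketch of the two integration steps is given below, under assumed error variances: an inverse-variance "simple weighting" merge of several estimates of one component, followed by a minimum-variance adjustment that forces the budget residual P - E - ΔS - R to zero. The numbers and variances are illustrative, not the paper's.

```python
import numpy as np

# Hedged sketch of SW integration and a closure-enforcing postprocessing step.
def simple_weighting(estimates, variances):
    """Inverse-variance weighted merge of several estimates of one component."""
    w = 1.0 / np.asarray(variances, dtype=float)
    return float(np.sum(w * np.asarray(estimates, dtype=float)) / np.sum(w))

def enforce_closure(P, E, dS, R, variances):
    """Minimum-variance correction so that P - E - dS - R = 0 exactly."""
    x = np.array([P, E, dS, R], dtype=float)
    a = np.array([1.0, -1.0, -1.0, -1.0])          # closure constraint a @ x = 0
    var = np.asarray(variances, dtype=float)
    residual = a @ x
    return x - var * a * residual / (a @ (var * a))

P = simple_weighting([3.1, 2.9, 3.3], [0.2, 0.1, 0.3])   # hypothetical precipitation products
print(enforce_closure(P, 1.8, 0.3, 0.7, variances=[0.2, 0.2, 0.1, 0.05]))
```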

Relevance:

20.00%

Publisher:

Abstract:

Solar photovoltaic power plants are ideally located in regions with high insolation levels. Photovoltaic performance is affected by high cell temperatures, soiling, mismatch and other balance-of-system losses, and it is crucial to understand the significance of each of these losses on system performance. Soiling, being highly dependent on installation conditions, is a complex performance issue to quantify accurately. The settlement of dust on panel surfaces may or may not be uniform, depending on local terrain and environmental factors such as ambient temperature, wind and rainfall. It is essential to investigate the influence of dust settlement on the operating characteristics of photovoltaic systems to better understand losses in performance attributable to soiling. The current-voltage (I-V) characteristics of photovoltaic panels reveal extensive information to support degradation analysis of the panels. This paper attempts to understand performance losses due to dust through a dynamic study of the I-V characteristics of panels under varying soiling conditions in an outdoor experimental test bed. Further, the results of an indoor study simulating the performance of photovoltaic panels under different dust deposition regimes are discussed. (C) 2014 Monto Mani. Published by Elsevier Ltd. This is an open access article under the CC BY-NC-ND license (http://creativecommons.org/licenses/by-nc-nd/3.0/).
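As a rough illustration of how soiling shows up in an I-V curve, the sketch below uses a simple single-diode model (ignoring series and shunt resistance) with a transmittance factor scaling the photocurrent; this is a generic textbook model with invented parameters, not the paper's experimental setup.

```python
import numpy as np

# Hedged sketch: single-diode I-V curve with a soiling transmittance factor.
# Uniform soiling mainly depresses the current scale; all parameters are
# illustrative assumptions.
def iv_curve(v, i_ph=8.0, i_0=2.5e-7, n=1.3, t_cell=318.0, n_cells=60, soiling=1.0):
    k, q = 1.380649e-23, 1.602176634e-19
    v_t = n * n_cells * k * t_cell / q               # module-level thermal voltage
    return soiling * i_ph - i_0 * (np.exp(v / v_t) - 1.0)

v = np.linspace(0.0, 38.0, 400)
p_clean = np.max(v * np.clip(iv_curve(v), 0.0, None))
p_soiled = np.max(v * np.clip(iv_curve(v, soiling=0.85), 0.0, None))  # 15% transmittance loss
print(round(p_clean, 1), round(p_soiled, 1), "W (approximate maximum power points)")
```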

Relevance:

20.00%

Publisher:

Abstract:

An increase in the hyperpolarization-activated cyclic nucleotide-gated (HCN) channel conductance reduces input resistance, whereas the consequent increase in the inward h current depolarizes the membrane. This results in a delicate and unique conductance-current balance triggered by the expression of HCN channels. In this study, we employ experimentally constrained, morphologically realistic, conductance-based models of hippocampal neurons to explore certain aspects of this conductance-current balance. First, we found that the inclusion of an experimentally determined gradient in A-type K+ conductance, but not in M-type K+ conductance, tilts the HCN conductance-current balance heavily in favor of conductance, thereby exerting an overall restorative influence on neural excitability. Next, motivated by the well-established modulation of neuronal excitability by synaptically driven high-conductance states observed under in vivo conditions, we inserted thousands of excitatory and inhibitory synapses with different somatodendritic distributions. We measured the efficacy of HCN channels, independently and in conjunction with other channels, in altering resting membrane potential (RMP) and input resistance (R_in) when the neuron received randomized or rhythmic synaptic bombardments through variable numbers of synaptic inputs. We found that the impact of HCN channels on average RMP, R_in, firing frequency, and peak-to-peak voltage response was severely weakened under high-conductance states, with the impinging synaptic drive playing a dominant role in regulating these measurements. Our results suggest that the debate on the role of HCN channels in altering excitability should encompass physiological and pathophysiological neuronal states under in vivo conditions and the spatiotemporal interactions of HCN channels with other channels.
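The conductance-current balance in the first sentence can be seen even in a passive single-compartment caricature: adding an h conductance with a relatively depolarized reversal potential raises the resting potential while lowering input resistance. The sketch below uses illustrative parameter values and is far simpler than the paper's morphologically realistic models.

```python
# Hedged single-compartment sketch of the HCN conductance-current balance.
# Parameters are illustrative assumptions, not the paper's model values.
def steady_state(g_leak=10e-9, e_leak=-70e-3, g_h=0.0, e_h=-30e-3):
    g_total = g_leak + g_h
    v_rest = (g_leak * e_leak + g_h * e_h) / g_total  # conductance-weighted mean reversal
    r_in = 1.0 / g_total                              # input resistance of the compartment
    return v_rest, r_in

for g_h in (0.0, 2e-9, 5e-9):
    v, r = steady_state(g_h=g_h)
    print(f"g_h = {g_h:.0e} S: V_rest = {v * 1e3:.1f} mV, R_in = {r / 1e6:.0f} MOhm")
```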

Relevance:

20.00%

Publisher:

Abstract:

The two-step particle synthesis mechanism, also known as the Finke-Watzky (1997) mechanism, has emerged as a significant development in the field of nanoparticle synthesis. It explains a characteristic feature of the synthesis of transition metal nanoparticles: an induction period in precursor concentration followed by its rapid sigmoidal decrease. The classical LaMer theory (1950) of particle formation fails to capture this behavior. The two-step mechanism considers slow continuous nucleation and autocatalytic growth of particles directly from precursor as its two kinetic steps. In the present work, we test the two-step mechanism rigorously using population balance models. We find that it explains precursor consumption very well but fails to explain particle synthesis. The effect of continued nucleation on particle synthesis is not suppressed sufficiently by the rapid autocatalytic growth of particles; the continued nucleation increases the breadth of the size distributions to unexpectedly large values compared with those observed experimentally. A number of variations of the original mechanism with additional reaction steps are investigated next. The simulations show that continued nucleation from the beginning of the synthesis leads to the formation of highly polydisperse particles in all of the tested cases. A short nucleation window, realized in one of the variations by a delayed onset of nucleation and its suppression soon after, appears to be one way to explain all of the known experimental observations. The present investigations clearly establish the need to revisit the two-step particle synthesis mechanism.
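A minimal sketch of the two-step kinetics themselves (nucleation A -> B at rate k1, autocatalytic growth A + B -> 2B at rate k2) is shown below; the rate constants are illustrative, chosen only to reproduce the induction period followed by the sigmoidal drop in precursor concentration.

```python
from scipy.integrate import solve_ivp

# Hedged sketch of Finke-Watzky two-step kinetics with illustrative rate constants.
def finke_watzky(t, y, k1=1e-3, k2=1.0):
    a, b = y                       # a: precursor, b: growing-particle species
    r_nucleation = k1 * a          # A -> B
    r_autocatalytic = k2 * a * b   # A + B -> 2B
    return [-r_nucleation - r_autocatalytic, r_nucleation + r_autocatalytic]

sol = solve_ivp(finke_watzky, (0.0, 50.0), [1.0, 0.0], max_step=0.1)
print(f"precursor remaining at t = 50: {sol.y[0, -1]:.3e}")
```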

Relevance:

20.00%

Publisher:

Abstract:

Designing and implementing thread-safe multithreaded libraries can be a daunting task, as developers of these libraries need to ensure that their implementations are free from concurrency bugs, including deadlocks. The usual practice involves employing software testing and/or dynamic analysis to detect deadlocks, and their effectiveness depends on well-designed multithreaded test cases. Unsurprisingly, developing multithreaded tests is significantly harder than developing sequential tests, for obvious reasons. In this paper, we address the problem of automatically synthesizing multithreaded tests that can induce deadlocks. The key insight behind our approach is that a subset of the properties observed when a deadlock manifests in a concurrent execution can also be observed in a single-threaded execution. We design a novel, automatic, scalable and directed approach that identifies these properties and synthesizes a deadlock-revealing multithreaded test. The input to our approach is the library implementation under consideration and the output is a set of deadlock-revealing multithreaded tests. We have implemented our approach as part of a tool named OMEN. OMEN is able to synthesize multithreaded tests on many multithreaded Java libraries. Applying a dynamic deadlock detector to the execution of the synthesized tests results in the detection of a number of deadlocks, including 35 real deadlocks in classes documented as thread-safe. Moreover, our experimental results show that dynamic analysis of multithreaded tests that are either synthesized randomly or developed by third-party programmers is ineffective in detecting these deadlocks.
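For readers unfamiliar with the target bug class, the sketch below shows the classic lock-order cycle that a deadlock-revealing multithreaded test tries to drive; it is a generic illustration (with a timeout so it terminates), not output produced by OMEN, and it uses Python rather than the Java libraries studied in the paper.

```python
import threading
import time

# Hedged illustration of a lock-order cycle: t1 holds lock_a and wants lock_b,
# while t2 holds lock_b and wants lock_a. Without the timeout both threads
# would block forever, which is exactly what a dynamic deadlock detector flags.
lock_a, lock_b = threading.Lock(), threading.Lock()

def worker(first, second, name):
    with first:
        time.sleep(0.1)                           # widen the window for the cycle to form
        got_second = second.acquire(timeout=1.0)
        print(f"{name}: acquired second lock = {got_second}")
        if got_second:
            second.release()

t1 = threading.Thread(target=worker, args=(lock_a, lock_b, "t1"))
t2 = threading.Thread(target=worker, args=(lock_b, lock_a, "t2"))
t1.start(); t2.start(); t1.join(); t2.join()
```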

Relevance:

20.00%

Publisher:

Abstract:

A direct discretization approach and an operator-splitting scheme are applied to the numerical simulation of a population balance system which models the synthesis of urea with a uni-variate population. The problem is formulated in axisymmetric form and the setup is chosen such that a steady state is reached. Both solvers are assessed with respect to the accuracy of the results, where experimental data are used for comparison, and the efficiency of the simulations. Depending on the goal of the simulations, whether to track the evolution of the process accurately or to reach the steady state quickly, recommendations for the choice of solver are given. (C) 2015 Elsevier Ltd. All rights reserved.
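A minimal sketch of direct discretization of the internal (size) coordinate in a uni-variate population balance is given below, with a constant growth rate and a first-order upwind scheme; it is a generic illustration with invented parameters, whereas the paper's urea system also couples the population balance to flow, temperature and concentration fields.

```python
import numpy as np

# Hedged sketch: first-order upwind discretization of dn/dt + d(G n)/dL = 0
# for a number density n(L, t) with constant growth rate G (illustrative values).
def step_upwind(n, growth, dL, dt, n_inflow=0.0):
    flux = growth * n
    updated = n.copy()
    updated[0]  -= dt / dL * (flux[0] - growth * n_inflow)   # nucleation (left) boundary
    updated[1:] -= dt / dL * (flux[1:] - flux[:-1])
    return updated

L = np.linspace(0.0, 1e-3, 101)                  # size grid, metres
dL = L[1] - L[0]
G = 1e-6                                         # growth rate, m/s
dt = 0.5 * dL / G                                # CFL-limited time step
n = np.exp(-((L - 2e-4) / 5e-5) ** 2)            # initial number density pulse
for _ in range(50):
    n = step_upwind(n, G, dL, dt)
print(f"pulse peak after 50 steps: {n.max():.3f} at L = {L[n.argmax()] * 1e6:.0f} um")
```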

Relevance:

20.00%

Publisher:

Abstract:

This article presents frequentist inference for accelerated life test data of series systems with independent log-normal component lifetimes. The means of the component log-lifetimes are assumed to depend on the stress variables through a linear stress translation function that can accommodate the standard stress translation functions in the literature. An expectation-maximization algorithm is developed to obtain the maximum likelihood estimates of the model parameters. The maximum likelihood estimates are then further refined by bootstrap, which is also used to draw inferences about the component and system reliability metrics at usage stresses. The developed methodology is illustrated by analyzing a real as well as a simulated dataset. A simulation study is also carried out to judge the effectiveness of the bootstrap. It is found that, in this model, application of the bootstrap results in a significant improvement over the simple maximum likelihood estimates.
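The sketch below simulates the data-generating model described above for a two-component series system: each component's log-lifetime is normal with a mean that depends linearly on a transformed stress covariate, and the system fails at the minimum of the component lifetimes. All coefficients are illustrative assumptions.

```python
import numpy as np

# Hedged sketch of series-system lifetimes with log-normal components whose
# log-means follow a linear stress translation (illustrative parameters).
rng = np.random.default_rng(0)

def simulate_system_lifetimes(stress, n, beta0, beta1, sigma):
    """log T_j = beta0_j + beta1_j * stress + sigma_j * Z; system life = min_j T_j."""
    component_logs = [b0 + b1 * stress + s * rng.standard_normal(n)
                      for b0, b1, s in zip(beta0, beta1, sigma)]
    return np.exp(np.minimum.reduce(component_logs))

lifetimes = simulate_system_lifetimes(stress=1.5, n=1000,
                                      beta0=[8.0, 7.5], beta1=[-1.2, -0.8],
                                      sigma=[0.4, 0.5])
print(f"mean simulated system lifetime: {lifetimes.mean():.1f}")
```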

Relevance:

20.00%

Publisher:

Abstract:

An abundance of spectrum access and sensing algorithms are available in the dynamic spectrum access (DSA) and cognitive radio (CR) literature. Often, however, the functionality and performance of such algorithms are validated against theoretical calculations using only simulations. Both the theoretical calculations and the simulations come with their attendant sets of assumptions. For instance, designers of dynamic spectrum access algorithms often take spectrum sensing and rendezvous mechanisms between transmitter-receiver pairs for granted. Test bed designers, on the other hand, either customize so much of their design that it becomes difficult to replicate using commercial off-the-shelf (COTS) components, or restrict themselves to simulation, emulation/hardware-in-the-loop (HIL), or pure hardware, but not all three. Implementation studies on test beds sophisticated enough to combine the three aforementioned aspects, yet that can also be put together using COTS hardware and software packages, are rare. In this paper we describe i) the implementation of a hybrid test bed using a previously proposed hardware-agnostic system architecture, ii) the implementation of DSA on this test bed, and iii) the realistic hardware- and software-constrained performance of DSA. A snapshot energy detector (ED) and Cumulative Summation (CUSUM), a sequential change detection algorithm, are available for spectrum sensing, and a two-way handshake mechanism in a dedicated control channel facilitates transmitter-receiver rendezvous.
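The two sensing schemes named above can be sketched in a few lines on synthetic samples: a snapshot energy test against a threshold, and a one-sided CUSUM statistic on instantaneous power that flags the change point when a primary signal appears. The thresholds, drift and signal model below are illustrative assumptions.

```python
import numpy as np

# Hedged sketch of snapshot energy detection and CUSUM change detection.
rng = np.random.default_rng(1)

def energy_detector(samples, threshold):
    """Snapshot test: average power over the window compared with a threshold."""
    return float(np.mean(np.abs(samples) ** 2)) > threshold

def cusum(samples, drift, threshold):
    """One-sided CUSUM on instantaneous power; returns the detection index."""
    g = 0.0
    for k, s in enumerate(samples):
        g = max(0.0, g + abs(s) ** 2 - drift)
        if g > threshold:
            return k
    return None

noise = rng.normal(0.0, 1.0, 500)
occupied = rng.normal(0.0, 1.0, 500) + 1.5          # primary user appears at sample 500
samples = np.concatenate([noise, occupied])
print(energy_detector(samples, threshold=1.5), cusum(samples, drift=1.5, threshold=30.0))
```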

Relevance:

20.00%

Publisher:

Abstract:

Semiconductor device junction temperatures are maintained within datasheet-specified limits to avoid failure in power converters, and burn-in tests are used to ensure this. In inverters, thermal time constants can be large, so burn-in tests must be performed over long durations. At higher power levels, besides increased production cost, the testing requires sources and loads that can handle high power. In this study, a novel method to test a high power three-phase grid-connected inverter is proposed. The method eliminates the need for high power sources and loads; only the energy corresponding to the losses is consumed. The test is done by circulating rated current within the three legs of the inverter. Since all the phase legs are loaded, the method can be used to test the inverter with either a common or an independent cooling arrangement for the phase legs. Further, the method can be used with different inverter configurations (three- or four-wire) and with different pulse width modulation (PWM) techniques. The method has been experimentally validated on a 24 kVA inverter for a four-wire configuration that uses sine-triangle PWM and a three-wire configuration that uses conventional space vector PWM.
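As background for the two modulation schemes mentioned in the last sentence, the sketch below generates three-phase sine-triangle references and the min-max common-mode injection that makes them equivalent to conventional space vector PWM references; this is generic textbook material, not the paper's circulating-current test method.

```python
import numpy as np

# Hedged sketch: sinusoidal modulating signals and the common-mode (min-max)
# injection that reproduces conventional space vector PWM references.
def modulating_signals(m, theta):
    phases = theta + np.array([0.0, -2.0 * np.pi / 3.0, 2.0 * np.pi / 3.0])
    sine_triangle = m * np.cos(phases)                                   # plain sine references
    common_mode = -0.5 * (sine_triangle.max() + sine_triangle.min())     # min-max injection
    return sine_triangle, sine_triangle + common_mode                    # SVPWM-equivalent refs

for theta in np.linspace(0.0, 2.0 * np.pi, 6, endpoint=False):
    st, sv = modulating_signals(m=1.0, theta=theta)
    print(np.round(st, 3), np.round(sv, 3))
```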

Relevance:

20.00%

Publisher:

Abstract:

A supercritical CO2 test facility is currently being developed at the Indian Institute of Science, Bangalore, India, to analyze the performance of a closed loop Brayton cycle for concentrated solar power (CSP) generation. The loop has been designed for an external heat input of 20 kW, a pressure range of 75-135 bar, a flow rate of 11 kg/min, and a maximum cycle temperature of 525 degrees C. The operation of the loop and the various parametric tests planned to be performed are discussed in this paper. The paper addresses various aspects of the loop design, with emphasis on the design of components such as the regenerator and the expansion device. The regenerator design is critical due to the sharp property variations in CO2 occurring during the heat exchange process between the hot and cold streams. Two heat exchanger configurations, 1) tube-in-tube heat exchanger (TITHE) and 2) printed circuit heat exchanger (PCHE), are analyzed and compared. A PCHE is found to be ~5 times more compact than a TITHE for identical heat transfer and pressure drops. The expansion device is being custom designed to achieve the desired pressure drop for a range of operating temperatures. It is found that a capillary of 5.5 mm inner diameter and ~2 m length is sufficient to achieve a pressure drop from 130 to 75 bar at a maximum cycle temperature of 525 degrees C.
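A back-of-the-envelope check on the capillary sizing can be sketched with a single-phase Darcy-Weisbach estimate evaluated at mean conditions; it ignores the strong property variation of CO2 along the tube, so it only gives an order of magnitude, and it assumes the CoolProp package for CO2 properties.

```python
import math
from CoolProp.CoolProp import PropsSI   # assumed dependency for CO2 properties

# Hedged order-of-magnitude sketch of the frictional pressure drop in a capillary,
# with properties evaluated at an assumed mean pressure and the stated temperature.
def capillary_pressure_drop(m_dot, diameter, length, p_mean, t_mean):
    rho = PropsSI("D", "P", p_mean, "T", t_mean, "CO2")   # density, kg/m^3
    mu = PropsSI("V", "P", p_mean, "T", t_mean, "CO2")    # viscosity, Pa.s
    area = math.pi * diameter ** 2 / 4.0
    velocity = m_dot / (rho * area)
    reynolds = rho * velocity * diameter / mu
    friction = 0.3164 * reynolds ** -0.25                 # Blasius correlation (rough)
    return friction * (length / diameter) * 0.5 * rho * velocity ** 2

dp = capillary_pressure_drop(m_dot=11.0 / 60.0, diameter=5.5e-3, length=2.0,
                             p_mean=100e5, t_mean=525.0 + 273.15)
print(f"estimated frictional pressure drop: {dp / 1e5:.0f} bar")
```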