Abstract:
This study investigates the application of support vector clustering (SVC) for the direct identification of coherent synchronous generators in large interconnected multi-machine power systems. The clustering is based on a coherency measure, which indicates the degree of coherency between any pair of generators. The proposed SVC algorithm processes the coherency measure matrix, formulated from the generator rotor measurements, to cluster the coherent generators. The proposed approach is demonstrated on the IEEE 10-generator, 39-bus system and on an equivalent 35-generator, 246-bus system of the practical Indian southern grid. The effects of the number of data samples and of the fault location on the accuracy of the proposed approach are also examined. An extended comparison with other clustering techniques is included to show the effectiveness of the proposed approach in grouping the data into coherent groups of generators. The effectiveness of the coherent clusters obtained with the proposed approach is compared in terms of a set of clustering validity indicators and in terms of a statistical assessment based on the coherency degree of each generator pair.
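A minimal sketch of the kind of coherency-measure computation described above, assuming rotor-angle time series from a transient simulation are available as a NumPy array; the threshold-based grouping is only an illustrative stand-in for the SVC step of the paper, and all function names are hypothetical.

```python
import numpy as np

def coherency_matrix(delta, tol_deg=5.0):
    """delta: (n_samples, n_gen) rotor angles in degrees from a transient simulation.
    Returns a boolean matrix C where C[i, j] is True when the angle difference
    between generators i and j never deviates from its initial value by more
    than tol_deg over the observation window (a common coherency measure)."""
    n_gen = delta.shape[1]
    C = np.zeros((n_gen, n_gen), dtype=bool)
    for i in range(n_gen):
        for j in range(n_gen):
            diff = delta[:, i] - delta[:, j]
            C[i, j] = np.max(np.abs(diff - diff[0])) <= tol_deg
    return C

def coherent_groups(C):
    """Group generators by the transitive closure of the pairwise coherency
    relation (an illustrative stand-in for the SVC clustering step)."""
    n = C.shape[0]
    unassigned, groups = set(range(n)), []
    while unassigned:
        seed = unassigned.pop()
        group = {seed}
        changed = True
        while changed:
            new = {j for j in unassigned if any(C[i, j] for i in group)}
            changed = bool(new)
            group |= new
            unassigned -= new
        groups.append(sorted(group))
    return groups
```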
Abstract:
Space-vector-based pulse width modulation (PWM) for a voltage source inverter (VSI) offers flexibility in terms of different switching sequences. Numerical simulation is helpful to assess the performance of a PWM method before actual implementation. A quick-simulation tool to simulate a variety of space-vector-based PWM strategies for a two-level VSI-fed squirrel cage induction motor drive is presented. The simulator is developed using the C and Python programming languages and has a graphical user interface (GUI). With PWM strategies as its prime focus, the simulator is 40 times faster than MATLAB in terms of the actual time taken for a simulation. Simulation and experimental results are presented for a 5-hp ac motor drive.
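For context, the conventional dwell-time calculation that underlies two-level space vector PWM is sketched below; this is the textbook computation, not the particular switching sequences studied in the paper, and the function name and parameter choices are assumptions.

```python
import numpy as np

def svpwm_dwell_times(v_ref, theta, v_dc, t_s):
    """Conventional dwell-time calculation for a two-level VSI (valid within the
    linear modulation range).
    v_ref : magnitude of the reference voltage space vector (peak phase value)
    theta : angle of the reference vector within the current 60-degree sector (rad)
    v_dc  : dc-bus voltage
    t_s   : sub-cycle (sampling) duration
    Returns (t1, t2, t0): durations of the two active vectors and the null vector."""
    m = np.sqrt(3) * v_ref / v_dc          # reference magnitude relative to dc bus
    t1 = t_s * m * np.sin(np.pi / 3 - theta)
    t2 = t_s * m * np.sin(theta)
    t0 = t_s - t1 - t2                     # remaining time shared by the null states
    return t1, t2, t0
```

Different space-vector-based PWM strategies then differ mainly in how t0 is apportioned between the two null states and in the order in which the states are applied within each sub-cycle.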
Abstract:
Although many sparse recovery algorithms have been proposed recently in compressed sensing (CS), it is well known that the performance of any sparse recovery algorithm depends on many parameters, such as the dimension of the sparse signal, the level of sparsity, and the measurement noise power. It has been observed that satisfactory performance of sparse recovery algorithms requires a minimum number of measurements, and this minimum number differs from algorithm to algorithm. In many applications, the number of measurements is unlikely to meet this requirement, and any scheme that improves performance with fewer measurements is of significant interest in CS. Empirically, it has also been observed that the performance of sparse recovery algorithms depends on the underlying statistical distribution of the nonzero elements of the signal, which may not be known a priori in practice. Interestingly, the performance degradation of sparse recovery algorithms in these cases does not always imply a complete failure. In this paper, we study this scenario and show that by fusing the estimates of multiple sparse recovery algorithms, which work on different principles, we can improve sparse signal recovery. We present a theoretical analysis to derive sufficient conditions for performance improvement of the proposed schemes. We demonstrate the advantage of the proposed methods through numerical simulations for both synthetic and real signals.
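A minimal sketch of one way such a fusion could look, assuming the measurement matrix A and measurements y are available; OMP and LASSO from scikit-learn are used purely as stand-ins for the constituent algorithms, and the union-of-supports least-squares step is an illustrative choice, not necessarily the scheme proposed in the paper.

```python
import numpy as np
from sklearn.linear_model import OrthogonalMatchingPursuit, Lasso

def fused_estimate(A, y, k):
    """Fuse the outputs of two sparse recovery algorithms by taking the union of
    their estimated supports and solving a least-squares problem restricted to
    that union. The algorithm choices and the LASSO penalty are illustrative."""
    omp = OrthogonalMatchingPursuit(n_nonzero_coefs=k, fit_intercept=False).fit(A, y)
    lasso = Lasso(alpha=0.01, fit_intercept=False, max_iter=10000).fit(A, y)
    support = np.union1d(np.flatnonzero(omp.coef_), np.flatnonzero(lasso.coef_))
    x_hat = np.zeros(A.shape[1])
    if support.size:
        # Least-squares fit on the fused support only
        x_hat[support], *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
    return x_hat
```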
Abstract:
Climate change impact assessment studies involve downscaling large-scale atmospheric predictor variables (LSAPVs) simulated by general circulation models (GCMs) to site-scale meteorological variables. This article presents a least-square support vector machine (LS-SVM)-based methodology for multi-site downscaling of maximum and minimum daily temperature series. The methodology involves (1) delineation of sites in the study area into clusters based on the correlation structure of the predictands, (2) downscaling LSAPVs to monthly time series of predictands at a representative site identified in each of the clusters, (3) translation of the downscaled information in each cluster from the representative site to the other sites using LS-SVM inter-site regression relationships, and (4) disaggregation of the information at each site from the monthly to the daily time scale using a k-nearest neighbour disaggregation methodology. Effectiveness of the methodology is demonstrated by applying it to data pertaining to four sites in the catchment of the Beas river basin, India. Simulations of the Canadian coupled global climate model (CGCM3.1/T63) for four IPCC SRES scenarios, namely A1B, A2, B1 and COMMIT, were downscaled to future projections of the predictands in the study area. Comparison of the results with those based on the recently proposed multivariate multiple linear regression (MMLR) based downscaling method and the multi-site multivariate statistical downscaling (MMSD) method indicates that the proposed method is promising and can be considered a feasible choice in statistical downscaling studies. The performance of the method in downscaling daily minimum temperature was found to be better than that in downscaling daily maximum temperature. Results indicate an increase in annual average maximum and minimum temperatures at all the sites for the A1B, A2 and B1 scenarios. The projected increment is highest for the A2 scenario, followed by the A1B, B1 and COMMIT scenarios. Projections, in general, indicated an increase in mean monthly maximum and minimum temperatures during January to February and October to December.
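A minimal sketch of step (3), translating the downscaled monthly series from a representative site to another site in the same cluster; scikit-learn's SVR is used here as a stand-in for the LS-SVM of the study, and all variable names are hypothetical.

```python
import numpy as np
from sklearn.svm import SVR

def intersite_model(rep_series, target_series):
    """Fit an inter-site regression from the representative site's monthly
    predictand to another site's predictand (SVR used as a stand-in for LS-SVM;
    hyperparameters are illustrative)."""
    X = np.asarray(rep_series, dtype=float).reshape(-1, 1)
    y = np.asarray(target_series, dtype=float)
    return SVR(kernel="rbf", C=10.0, epsilon=0.1).fit(X, y)

# Usage (hypothetical variable names): translate the downscaled future series
# at the representative site to the target site.
# model = intersite_model(obs_rep_monthly, obs_site_monthly)
# site_future_monthly = model.predict(np.asarray(downscaled_rep_future).reshape(-1, 1))
```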
Abstract:
Multilevel inverters with a dodecagonal (12-sided polygon) voltage space vector structure have advantages such as complete elimination of the fifth and seventh harmonics, reduced electromagnetic interference, reduced device voltage ratings, reduced switching frequency, and an extended linear modulation range, making them a viable option for high-power medium-voltage drives. This paper proposes two power circuit topologies capable of generating a multilevel dodecagonal voltage space vector structure with symmetric triangles (for the first time) with a minimum number of dc-link power supplies and floating-capacitor H-bridges. The first topology is composed of two hybrid cascaded five-level inverters connected to either side of an open-end winding induction machine. Each inverter consists of a three-level neutral-point-clamped inverter cascaded with an isolated H-bridge, making it a five-level inverter. The second topology is for a normal induction motor. Both circuit topologies have inherent capacitor balancing for the floating H-bridges at all modulation indices, including transient operation. The proposed topologies do not require any precharging circuitry for startup. A simple pulsewidth modulation timing calculation method for space vector modulation is also presented. Owing to the symmetric arrangement of congruent triangles within the voltage space vector structure, the timing computation requires only the sampled reference values and does not require any offline computation, lookup tables, or angle computation. Experimental results for steady-state and transient operation are presented to validate the proposed concept.
Abstract:
Models of river flow time series are essential for efficient management of a river basin. They help policy makers develop efficient water utilization strategies to maximize the utility of a scarce water resource. Time series analysis has been used extensively for modeling river flow data, and the use of machine learning techniques such as support vector regression and neural network models is gaining increasing popularity. In this paper we compare the performance of these techniques by applying them to long-term time-series data of the inflows into the Krishnaraja Sagar reservoir (KRS) from three tributaries of the river Cauvery. Flow data over a period of 30 years from three observation points established in the upper Cauvery river sub-basin are analyzed to estimate their contribution to KRS. Specifically, the ANN model is a multi-layer feed-forward network trained with a back-propagation algorithm, and support vector regression with an epsilon-insensitive loss function is used. Auto-regressive moving average models are also applied to the same data. The performance of the different techniques is compared using metrics such as root mean squared error (RMSE), correlation, normalized root mean squared error (NRMSE) and Nash-Sutcliffe Efficiency (NSE).
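The evaluation metrics named above are standard; a short sketch of their computation, assuming observed and simulated flow series as NumPy arrays (the range-based normalisation of NRMSE is one common convention, not necessarily the one used in the paper).

```python
import numpy as np

def flow_metrics(obs, sim):
    """RMSE, NRMSE, correlation and Nash-Sutcliffe Efficiency of a simulated
    flow series against observations."""
    obs, sim = np.asarray(obs, float), np.asarray(sim, float)
    rmse = np.sqrt(np.mean((obs - sim) ** 2))
    nrmse = rmse / (obs.max() - obs.min())   # normalised by the observed range
    corr = np.corrcoef(obs, sim)[0, 1]
    nse = 1.0 - np.sum((obs - sim) ** 2) / np.sum((obs - obs.mean()) ** 2)
    return {"RMSE": rmse, "NRMSE": nrmse, "correlation": corr, "NSE": nse}
```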
Abstract:
Simultaneous measurements of the thickness and temperature profile of the lubricant film at the chip-tool interface during machining have been studied in this experimental programme. Conventional techniques such as thermography can only provide temperature measurements under controlled laboratory conditions and without the addition of lubricant. The present study builds on the capabilities of luminescent sensors in addition to direct image-based observations of the chip-tool interface. A suite of experiments conducted using different types of sensors is reported in this paper; especially noteworthy are the concomitant measurements of the thickness and temperature of the lubricant.
Abstract:
DNA sequence and structure play a key role in imparting fragility to different regions of the genome. Recent studies have shown that non-B DNA structures play a key role in causing genomic instability, apart from their physiological roles at telomeres and promoters. Structures such as G-quadruplexes, cruciforms, and triplexes have been implicated in making DNA susceptible to breakage, resulting in genomic rearrangements. Hence, techniques that aid in the easy identification of such non-B DNA motifs will prove to be very useful in determining factors responsible for genomic instability. In this study, we provide evidence for the use of primer extension as a sensitive and specific tool to detect such altered DNA structures. We have used the G-quadruplex motif, recently characterized at the BCL2 major breakpoint region, as a proof of principle to demonstrate the advantages of the technique. Our results show that pause sites corresponding to the non-B DNA are specific, since they are absent when the G-quadruplex motif is mutated and their positions change in tandem with that of the primers. The efficiency of primer extension pause sites varied according to the concentration of monovalent cations tested, which support G-quadruplex formation. Overall, our results demonstrate that primer extension is a strong in vitro tool to detect non-B DNA structures such as G-quadruplexes on plasmid DNA, which can be further adapted to identify non-B DNA structures even at the genomic level.
Abstract:
Background: Understanding channel structures that lead to active sites or traverse the molecule is important in the study of molecular functions such as ion, ligand, and small molecule transport. Efficient methods for extracting, storing, and analyzing protein channels are required to support such studies. Further, there is a need for an integrated framework that supports computation of the channels, interactive exploration of their structure, and detailed visual analysis of their properties. Results: We describe a method for molecular channel extraction based on the alpha complex representation. The method computes geometrically feasible channels, stores both the volume occupied by the channel and its centerline in a unified representation, and reports significant channels. The representation also supports efficient computation of channel profiles that help understand channel properties. We describe methods for effective visualization of the channels and their profiles. These methods and the visual analysis framework are implemented in a software tool, CHEXVIS. We apply the method to a number of known channel-containing proteins to extract pore features. Results from these experiments on several proteins show that the performance of CHEXVIS is comparable to, and in some cases better than, that of existing channel extraction techniques. Using several case studies, we demonstrate how CHEXVIS can be used to study channels, extract their properties and gain insights into molecular function. Conclusion: CHEXVIS supports the visual exploration of multiple channels together with their geometric and physico-chemical properties, thereby enabling an understanding of the basic biology of transport through protein channels. The CHEXVIS web-server is freely available at http://vgl.serc.iisc.ernet.in/chexvis/. The web-server is supported on all modern browsers with the latest Java plug-in.
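A minimal sketch of the channel-profile idea mentioned above, assuming a channel centerline and atom coordinates with van der Waals radii are already available; the alpha-complex-based channel extraction itself is not reproduced here, and all names are hypothetical.

```python
import numpy as np

def channel_profile(centerline, atom_centers, atom_radii):
    """Radius profile along a channel centerline: at each centerline point the
    channel radius is taken as the distance to the nearest atom surface."""
    centerline = np.asarray(centerline, float)       # (n_points, 3)
    atom_centers = np.asarray(atom_centers, float)   # (n_atoms, 3)
    atom_radii = np.asarray(atom_radii, float)       # (n_atoms,)
    profile = []
    for p in centerline:
        # distance to each atom surface = center distance minus vdW radius
        d = np.linalg.norm(atom_centers - p, axis=1) - atom_radii
        profile.append(d.min())
    return np.array(profile)
```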
Abstract:
This paper describes a university-based system relevant to doctoral students who have problems with themselves, their peers and their research supervisors. Doctoral students face various challenges, and these challenges contribute to delays in their thesis submission. This tool aims at helping them think through their problem in a pre-counseling stage. The tool uses narratives and hypothetical stories to walk a doctoral student through the options of responses he or she can make given the situation in the narrative. The narratives were developed after a preliminary survey (n=57) of doctoral students, which indicated that the problems they experienced were busy supervisors, negative competition from peers and laziness with self. The narrative scenarios in the tool prompt self-reflection and provide options to choose from, each leading to the next scenario that will ensue. The different stages of the stimulus-response cycles are designed based on Thomas-Kilmann conflict resolution techniques (collaboration and avoidance). Each stimulus-response cycle has a score attached that reflects the student's ability to judge a collaborative approach. At the end of all the stages a scorecard is generated indicating either a progressive or regressive outcome for thesis submission.
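One way the stimulus-response cycles and per-option scores described above might be represented in code; this is purely an illustrative sketch, and every name in it is hypothetical.

```python
from dataclasses import dataclass, field

@dataclass
class Scenario:
    """One stimulus-response cycle: a narrative prompt plus the response options,
    each mapped to (score, id of the next scenario). The score reflects how
    collaborative the chosen response is."""
    prompt: str
    options: dict = field(default_factory=dict)   # option text -> (score, next id or None)

def run(scenarios, start_id, choices):
    """Walk through the scenarios for a given list of chosen option texts and
    return the accumulated score used for the final scorecard."""
    score, current = 0, start_id
    for choice in choices:
        points, current = scenarios[current].options[choice]
        score += points
        if current is None:   # end of the narrative
            break
    return score
```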
Abstract:
This letter presents an accurate steady-state phasor model for a doubly fed induction machine. The drawback of the existing steady-state phasor model is discussed. In particular, the inconsistency of the existing equivalent model with respect to reactive power flows when operated at supersynchronous speeds is highlighted. The relevant mathematical basis for the proposed model is presented, and its validity is illustrated on a 2-MW doubly fed induction machine.
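For reference, a sketch of the conventional per-phase steady-state equivalent-circuit calculation for a doubly fed induction machine, i.e. the textbook model whose reactive-power behaviour the letter questions, not the corrected model it proposes; parameter names are generic and the usage values would be machine-specific.

```python
import numpy as np

def dfim_steady_state(Vs, Vr, s, Rs, Xls, Rr, Xlr, Xm):
    """Conventional per-phase steady-state equivalent circuit of a doubly fed
    induction machine. Vs and Vr are per-phase stator and stator-referred rotor
    voltage phasors, s is the slip (must be nonzero). Returns the stator and
    rotor current phasors and the three-phase stator complex power."""
    # [Vs   ]   [Rs + j(Xls+Xm)      jXm             ] [Is]
    # [Vr/s ] = [jXm                 Rr/s + j(Xlr+Xm)] [Ir]
    A = np.array([[Rs + 1j * (Xls + Xm), 1j * Xm],
                  [1j * Xm, Rr / s + 1j * (Xlr + Xm)]])
    b = np.array([Vs, Vr / s])
    Is, Ir = np.linalg.solve(A, b)
    S_stator = 3 * Vs * np.conj(Is)   # active power = S.real, reactive = S.imag
    return Is, Ir, S_stator
```

Evaluating this model at a negative slip (supersynchronous operation) is where the letter points out the reactive-power inconsistency of the existing formulation.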
Abstract:
Biomechanical assays offer a good alternative to biochemical assays in diagnosing disease states and assessing the efficacy of drugs. In view of this, we have developed a miniature compliant tool to estimate the bulk stiffness of cells, particularly MCF-7 (Michigan Cancer Foundation) cells, whose diameter is 12-15 µm in suspension. The compliant tool comprises a gripper and a displacement-amplifying compliant mechanism (DaCM), where the former helps in grasping the cell and the latter enables vision-based force sensing. A DaCM is necessary because the microscope's field of view at the required magnification is not sufficient to simultaneously observe the cell and the movement of a point on the gripper in order to estimate the force. Therefore, a DaCM is strategically embedded within an existing gripper design, leading to a composite compliant mechanism. The DaCM is designed using the kinetoelastostatic map technique to achieve a 42 nN force resolution. The gripper, microfabricated in SU-8 using photolithography, has a footprint of about 10 mm by 10 mm with a smallest feature size of about 5 µm. The experiments with MCF-7 cells suggest that their bulk stiffness is in the range of 80-90 mN/m. The details of design, prototyping and testing comprise the paper.
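A minimal sketch of the vision-based force estimation enabled by a DaCM: the camera tracks the amplified output displacement, which is divided by the amplification factor to recover the jaw displacement and multiplied by the jaw stiffness to give the force. The stiffness and amplification values used in the example are purely illustrative, not taken from the paper.

```python
def grip_force(output_disp_m, amplification, stiffness_n_per_m):
    """Estimate the grasping force from the observed (amplified) displacement.
    output_disp_m     : displacement of the DaCM output port seen by the camera (m)
    amplification     : geometric amplification factor of the DaCM
    stiffness_n_per_m : stiffness of the gripper jaw (N/m); illustrative only."""
    input_disp_m = output_disp_m / amplification
    return stiffness_n_per_m * input_disp_m

# Example with hypothetical numbers: a 10 um observed output motion, 20x
# amplification and a 0.17 N/m jaw stiffness give a force of about 85 nN.
print(grip_force(10e-6, 20.0, 0.17))
```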
Abstract:
In a practical situation, it is difficult to model exact contact conditions due to the challenges involved in estimating the contact forces and relative displacements between the contacting bodies. Sliding and seizure conditions were simulated on a first-of-its-kind displacement-controlled system. Self-mated stainless steels have been investigated in detail. Categorization of the contact conditions prevailing at the contact interface has been carried out based on the variation of the coefficient of friction with the number of cycles and on three-dimensional fretting loops. Surface and subsurface micro-cracks have been observed, and their characteristics show a strong dependence on the loading conditions. The existence of shear bands in the subsurface region has been observed for high-strain, low-strain-rate loading conditions. The studies also include the influence of the initial surface roughness on the damage under two extreme contact conditions.
Abstract:
River water composition (major ions and the Sr-87/Sr-86 ratio) was monitored on a monthly basis over a period of three years in a mountainous river (Nethravati River) of southwestern India. The total dissolved solid (TDS) concentration is relatively low (46 mg L-1), with silica being the dominant contributor. The basin is characterised by a low dissolved Sr concentration (avg. 150 nmol L-1) with radiogenic Sr-87/Sr-86 isotopic ratios (avg. 0.72041 at the outlet). The composition of Sr and Sr-87/Sr-86 and their correlation with silicate-derived cations in the river basin reveal that their dominant source is radiogenic silicate rock minerals. Their composition in the stream is controlled by a combination of physical and chemical weathering occurring in the basin. The molar SiO2/Ca ratio and the Sr-87/Sr-86 isotopic ratio show strong seasonal variation in the river water, i.e., a low SiO2/Ca ratio with more radiogenic isotopes during the non-monsoon season and a higher SiO2/Ca ratio with less radiogenic isotopes during the monsoon season. In contrast, the seasonal variation of the Rb/Sr ratio in the stream water is not significant, suggesting that a change in the mineral phase involved in the weathering reaction is unlikely to explain the observed variation in the molar SiO2/Ca and Sr-87/Sr-86 ratios in the river water. Therefore, the shift in the stream water chemical composition can be attributed to the contribution of groundwater in contact with the bedrock (weathering front) during the non-monsoon season and to the weathering of secondary soil minerals in the regolith layer during the monsoon. The weathering of secondary soil minerals leads to limited silicate cation fluxes and enhanced silica fluxes in the Nethravati river basin.
Abstract:
Imaging flow cytometry is an emerging technology that combines the statistical power of flow cytometry with the spatial and quantitative morphology of digital microscopy. It allows high-throughput imaging of cells with good spatial resolution while they are in flow. This paper proposes a general framework for the processing and classification of cells imaged using an imaging flow cytometer. Each cell is localized by finding an accurate cell contour. Then, features reflecting cell size, circularity and complexity are extracted for classification using an SVM. Unlike conventional iterative, semi-automatic segmentation algorithms such as active contours, we propose a non-iterative, fully automatic, graph-based cell localization. To evaluate the performance of the proposed framework, we have successfully classified unstained, label-free leukaemia cell lines MOLT, K562 and HL60 from video streams captured using a custom-fabricated, cost-effective microfluidics-based imaging flow cytometer. The proposed system is a significant step towards building a cost-effective cell analysis platform that would facilitate affordable mass screening camps that examine cellular morphology for disease diagnosis.

Lay description: In this article, we propose a novel framework for processing the raw data generated using microfluidics-based imaging flow cytometers. Microfluidics microscopy, or microfluidics-based imaging flow cytometry (mIFC), is a recent microscopy paradigm that combines the statistical power of flow cytometry with the spatial and quantitative morphology of digital microscopy, allowing cells to be imaged while they are in flow. In comparison with conventional slide-based imaging systems, mIFC is a nascent technology enabling high-throughput imaging of cells and is yet to take the form of a clinical diagnostic tool. The proposed framework processes the raw data generated by mIFC systems. It incorporates several steps: pre-processing of the raw video frames to enhance the contents of the cell, localising the cell by a novel, fully automatic, non-iterative graph-based algorithm, extraction of different quantitative morphological parameters, and subsequent classification of the cells. To evaluate the performance of the proposed framework, we have successfully classified unstained, label-free leukaemia cell lines MOLT, K562 and HL60 from video streams captured using a cost-effective microfluidics-based imaging flow cytometer. The HL60, K562 and MOLT cell lines were obtained from ATCC (American Type Culture Collection) and were cultured separately in the lab; each culture therefore contains cells from its own category alone and thereby provides the ground truth. Each cell is localised by finding a closed cell contour: a directed, weighted graph is defined from the Canny edge image of the cell such that the closed contour lies along the shortest weighted path surrounding the centroid of the cell, from a starting point on a good curve segment to its immediate endpoint. Once the cell is localised, morphological features reflecting the size, shape and complexity of the cell are extracted and used to develop a support vector machine based classification system. We could classify the cell lines with good accuracy, and the results were consistent across different cross-validation experiments.
We hope that imaging flow cytometers equipped with the proposed image-processing framework would enable cost-effective, automated and reliable disease screening in over-loaded facilities that cannot afford to hire skilled personnel in large numbers. Such platforms would potentially facilitate screening camps in low-income countries, thereby transforming current health care paradigms by enabling rapid, automated diagnosis for diseases like cancer.
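A minimal sketch of the feature-extraction and SVM-classification step described in the abstract above, assuming a binary mask per cell is already available from some segmentation; OpenCV and scikit-learn are used here as convenient stand-ins, and the paper's graph-based contour extraction is not reproduced.

```python
import cv2
import numpy as np
from sklearn.svm import SVC

def cell_features(mask):
    """Size and circularity features from a binary cell mask."""
    contours, _ = cv2.findContours(mask.astype(np.uint8),
                                   cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    c = max(contours, key=cv2.contourArea)          # largest contour = the cell
    area = cv2.contourArea(c)
    perimeter = cv2.arcLength(c, True)
    circularity = 4 * np.pi * area / (perimeter ** 2 + 1e-9)
    return [area, perimeter, circularity]

# Feature vectors and labels would be assembled from the training video frames
# (hypothetical variable names), then classified with an SVM:
# clf = SVC(kernel="rbf").fit(np.array(train_features), np.array(train_labels))
# predictions = clf.predict(np.array(test_features))
```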