978 results for "Rotating electrical machine"
Abstract:
This chapter is a tutorial that teaches you how to design extended finite state machine (EFSM) test models for a system that you want to test. EFSM models are more powerful and expressive than simple finite state machine (FSM) models, and are among the most commonly used modelling styles for model-based testing, especially of embedded systems. Many languages and notations are in use for writing EFSM models; in this tutorial we write our EFSM models in the familiar Java programming language. To generate tests from these EFSM models we use ModelJUnit, an open-source tool that supports several stochastic test generation algorithms, and we also show how to write your own model-based testing tool. We show how EFSM models can be used for unit testing and system testing of embedded systems, and for offline as well as online testing.
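The chapter writes its models in Java for ModelJUnit; as a language-neutral illustration of the same idea, an EFSM can be sketched as a class whose state combines a control state with extended variables, each action gated by a guard. The model below (a hypothetical bounded counter, not from the chapter) also shows the core of a stochastic test generator: repeatedly pick an enabled action at random.

```python
import random

class CounterModel:
    """Hypothetical EFSM: a bounded counter with 'idle'/'active' control states."""
    def __init__(self):
        self.reset()

    def reset(self):
        self.state = "idle"   # control state
        self.count = 0        # extended state variable

    # Each action has a guard; the action may only fire when its guard holds.
    def start_guard(self):
        return self.state == "idle"

    def start(self):
        self.state = "active"

    def increment_guard(self):
        return self.state == "active" and self.count < 3

    def increment(self):
        self.count += 1

    def stop_guard(self):
        return self.state == "active"

    def stop(self):
        self.state = "idle"

def random_walk(model, steps, seed=0):
    """A minimal stochastic test generator: choose enabled actions at random."""
    rng = random.Random(seed)
    actions = [("start", model.start_guard, model.start),
               ("increment", model.increment_guard, model.increment),
               ("stop", model.stop_guard, model.stop)]
    trace = []
    for _ in range(steps):
        enabled = [(name, act) for name, guard, act in actions if guard()]
        name, act = rng.choice(enabled)
        act()
        trace.append(name)
    return trace
```

In an online setting each generated action would also be executed against the system under test and the responses compared against the model; offline, the trace itself is the test script.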
Abstract:
The reliable operation of the electrical system at Callide Power Station is critical to the everyday running of the station. This study applied reliability principles to analyse the electrical system at Callide Power Station. It was found that the expected outage cost increased exponentially as the level of maintenance declined, leading to the conclusion that even in a harsh electricity market, in which CS Energy pushes its plants to the limit, maintenance must not be neglected. A number of system configurations were found to increase the reliability of the system and reduce the expected outage costs, and several further advantages of applying reliability principles to the Callide electrical system configuration were identified.
Abstract:
For interactive systems, recognition, reproduction, and generalization of observed motion data are crucial for successful interaction. In this paper, we present a novel method for the analysis of motion data that we refer to as K-OMM-trees. K-OMM-trees combine Ordered Means Models (OMMs), a model-based machine learning approach for time series, with a hierarchical analysis technique for very large data sets, the K-tree algorithm. The proposed K-OMM-trees enable unsupervised prototype extraction from motion time series data with a hierarchical data representation. After introducing the algorithmic details, we apply the proposed method to a gesture data set that includes substantial inter-class variations. Results from our studies show that K-OMM-trees substantially increase recognition performance and learn an inherent data hierarchy with meaningful gesture abstractions.
Abstract:
In the present work, a numerical study is performed of the confined flow of non-Newtonian power-law fluids over a rotating cylinder. The main purpose is to evaluate the drag and thermal coefficients as functions of the governing dimensionless parameters, namely the power-law index (0.5 ≤ n ≤ 1.4), the dimensionless rotational velocity (0 ≤ α ≤ 6), and the Reynolds number (100 ≤ Re ≤ 500). Over this range of Reynolds number, the flow is known to be steady. The results indicate that increasing the power-law index and the rotational velocity increases the drag coefficient, owing to enhanced momentum diffusivity; the same effect lowers the rate of heat transfer, because the thicker the boundary layer, the lower the heat transfer.
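For reference, the standard power-law (Ostwald–de Waele) constitutive model underlying such simulations relates shear stress to shear rate as

$$\tau = m\,\dot{\gamma}^{\,n}, \qquad \eta_{\mathrm{app}} = m\,\dot{\gamma}^{\,n-1},$$

where $m$ is the consistency index and $n$ the power-law index, so $n < 1$ gives shear-thinning behaviour, $n = 1$ recovers a Newtonian fluid, and $n > 1$ (as at the upper end of the range studied here) gives shear-thickening behaviour.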
Abstract:
Superconducting thick films of Bi2Sr2CaCu2Oy (Bi-2212) on single-crystalline (100) MgO substrates have been prepared using a doctor-blade technique and a partial-melt process. It is found that the phase composition and the amount of Ag addition to the paste affect the structure and superconducting properties of the partially melted thick films. The optimum heat treatment schedule for obtaining high Jc has been determined for each paste. The heat treatment ensures attainment of high purity for the crystalline Bi-2212 phase and high orientation of Bi-2212 crystals, in which the c-axis is perpendicular to the substrate. The highest Tc, obtained by resistivity measurement, is 92.2 K. The best value for Jct (transport) of these thick films, measured at 77 K in self-field, is 8 × 10³ A cm⁻².
Abstract:
This item provides supplementary materials for the paper mentioned in the title, specifically a range of organisms used in the study. The full abstract for the main paper is as follows: Next Generation Sequencing (NGS) technologies have revolutionised molecular biology, allowing clinical sequencing to become a matter of routine. NGS data sets consist of short sequence reads obtained from the machine, given context and meaning through downstream assembly and annotation. For these techniques to operate successfully, the collected reads must be consistent with the assumed species or species group, and not corrupted in some way. The common bacterium Staphylococcus aureus may cause severe and life-threatening infections in humans, with some strains exhibiting antibiotic resistance. In this paper, we apply an SVM classifier to the important problem of distinguishing S. aureus sequencing projects from alternative pathogens, including closely related Staphylococci. Using a sequence k-mer representation, we achieve precision and recall above 95%, implicating features with important functional associations.
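The classifier here is an SVM over a k-mer representation of the reads; the feature-extraction step can be sketched in plain Python (the SVM itself would come from a library and is omitted; parameter choices below are illustrative, not the paper's):

```python
from collections import Counter
from itertools import product

def kmer_features(seq, k=3, alphabet="ACGT"):
    """Map a DNA read to a fixed-length vector of k-mer counts.

    The vector has one entry per possible k-mer over the alphabet,
    in lexicographic order, so reads of any length share one feature space.
    """
    kmers = ["".join(p) for p in product(alphabet, repeat=k)]
    counts = Counter(seq[i:i + k] for i in range(len(seq) - k + 1))
    return [counts.get(km, 0) for km in kmers]
```

Each read (or assembled contig) maps to a 4^k-dimensional count vector, and those vectors are what a linear or kernel SVM would separate into S. aureus versus alternative pathogens.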
Abstract:
In this paper we propose a framework for both gradient descent image and object alignment in the Fourier domain. Our method centers upon the classical Lucas & Kanade (LK) algorithm where we represent the source and template/model in the complex 2D Fourier domain rather than in the spatial 2D domain. We refer to our approach as the Fourier LK (FLK) algorithm. The FLK formulation is advantageous when one pre-processes the source image and template/model with a bank of filters (e.g. oriented edges, Gabor, etc.) as: (i) it can handle substantial illumination variations, (ii) the inefficient pre-processing filter bank step can be subsumed within the FLK algorithm as a sparse diagonal weighting matrix, (iii) unlike traditional LK the computational cost is invariant to the number of filters and as a result far more efficient, and (iv) this approach can be extended to the inverse compositional form of the LK algorithm where nearly all steps (including Fourier transform and filter bank pre-processing) can be pre-computed leading to an extremely efficient and robust approach to gradient descent image matching. Further, these computational savings translate to non-rigid object alignment tasks that are considered extensions of the LK algorithm such as those found in Active Appearance Models (AAMs).
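The identity that makes point (ii) work is Parseval's relation: the spatial sum-of-squared-differences objective of Lucas–Kanade is preserved (up to a constant) in the Fourier domain, where convolution with each filter becomes multiplication by its transfer function. A hedged sketch of the resulting weighted objective (notation assumed here, not taken from the paper):

$$\min_{\Delta p}\ \big(\hat{T} - \hat{I}(p + \Delta p)\big)^{*}\, S\, \big(\hat{T} - \hat{I}(p + \Delta p)\big), \qquad S = \sum_{j} \operatorname{diag}(\hat{g}_j)^{*} \operatorname{diag}(\hat{g}_j),$$

where $\hat{T}$ and $\hat{I}$ are the Fourier transforms of the template and warped source, and $\hat{g}_j$ is the transfer function of the $j$-th filter. Since $S$ is a single precomputed diagonal matrix, the cost per iteration no longer grows with the number of filters.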
Abstract:
The growth of online services such as e-banking and webmail, in which users are verified by a username and password, is increasingly exploited by identity theft procedures. Identity theft is a fraud in which someone pretends to be someone else in order to steal money or obtain other benefits. To overcome the problem of identity theft, an additional security layer is required. Over the last decades, verifying users by their keystroke dynamics during login has been proposed: an impostor must then be able to type in a manner similar to the real user, in addition to having the username and password. However, verifying users at login is not enough, since a logged-in workstation or mobile device is vulnerable to impostors once the user leaves the machine. Users must therefore also be verified continuously, based on their activities. Over the last decade there has been growing interest in and use of biometric tools; however, these are often costly and require additional hardware. Behavioural biometrics, in which users are verified by their keyboard and mouse activities, potentially present a good solution. In this paper we discuss the problem of identity theft and propose behavioural biometrics as a solution. We survey existing studies, list the open challenges, and propose solutions.
Abstract:
Modern mobile computing devices are versatile, but bring the burden of constantly adjusting settings to the current conditions of the environment. While today this task must be accomplished by the human user, the variety of sensors typically deployed in such a handset provides enough data for autonomous self-configuration by a learning, adaptive system. However, this data is not always fully available, and it can contain false values. Handling potentially incomplete sensor data to detect context changes without a semantic layer is the scientific challenge we address with our approach. We present a novel machine learning technique, the Missing-Values-SOM, which solves this problem by predicting setting adjustments based on context information. Our method is centred around a self-organizing map, extended to provide a means of handling missing values. We demonstrate the performance of our approach on mobile context snapshots, as well as on classical machine learning datasets.
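One common way to make a self-organizing map tolerate incomplete inputs, and a plausible reading of the Missing-Values-SOM idea (assumed here, not taken from the paper), is to find the best-matching unit over the observed dimensions only, then read the missing components off the winning prototype:

```python
def masked_bmu(weights, x):
    """Best-matching unit using only the observed (non-None) components of x."""
    best_idx, best_d = None, float("inf")
    for idx, w in enumerate(weights):
        # Squared distance restricted to dimensions actually present in x.
        d = sum((wi - xi) ** 2 for wi, xi in zip(w, x) if xi is not None)
        if d < best_d:
            best_idx, best_d = idx, d
    return best_idx

def impute_from_bmu(weights, x):
    """Predict missing components of x from the winning prototype."""
    w = weights[masked_bmu(weights, x)]
    return [wi if xi is None else xi for wi, xi in zip(w, x)]
```

In a context-prediction setting, the setting-adjustment dimensions of a new context snapshot are simply left as missing, and the values the winning prototype carries in those dimensions become the predicted adjustments.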
Abstract:
The measurement of ventilation distribution is currently performed using inhaled tracer gases in multiple-breath inhalation studies, or using imaging techniques to quantify the spatial gas distribution. Most tracer gases used in these studies have properties different from those of air, yet the effect of gas density on regional ventilation distribution has not been studied. This study aimed to measure that effect. Methods: Ventilation distribution was measured in seven rats using electrical impedance tomography (EIT) in the supine, prone, left-lateral and right-lateral positions while the animals were mechanically ventilated with air, heliox (30% oxygen, 70% helium) or sulfur hexafluoride (20% SF6, 20% oxygen, 60% air), and the effect of gas density on regional ventilation distribution was assessed. Results: Gas density did not affect regional ventilation distribution. The non-dependent lung was better ventilated in all four body positions, and gas density had no further effect on regional filling characteristics. The filling characteristics followed an anatomical pattern, with the anterior and left lung showing a greater impedance change during the initial phase of inspiration. Conclusion: Gas density did not affect convection-dependent ventilation distribution in rats as measured with EIT.
Abstract:
Background: Hyperpolarised helium MRI (He3 MRI) is a new technique that enables imaging of the air distribution within the lungs, allowing accurate determination of the ventilation distribution in vivo. The technique has the disadvantages of requiring an expensive helium isotope, complex apparatus, and moving the patient to a compatible MRI scanner. Electrical impedance tomography (EIT) is a non-invasive bedside technique that allows constant monitoring of lung impedance, which depends on changes in the air-space capacity of the lung. We have used He3 MRI measurements of ventilation distribution as the gold standard for assessing EIT. Methods: Seven rats were ventilated in the supine, prone, left-lateral and right-lateral positions with 70% helium/30% oxygen for the EIT measurements and with pure helium for He3 MRI. The same ventilator and settings were used for both measurements. Image dimensions, the geometric centre, and the global inhomogeneity index were calculated. Results: EIT images were smaller, of lower resolution, and contained less anatomical detail than those from He3 MRI. However, both methods could measure position-induced changes in lung ventilation, as assessed by the geometric centre, and the global inhomogeneity indices were comparable between the techniques. Conclusion: EIT is a suitable technique for monitoring ventilation distribution and inhomogeneity, as assessed by comparison with He3 MRI.
Abstract:
Finding and labelling semantic feature patterns in the documents of a large corpus is a challenging problem. Text documents have characteristics that make semantic labelling difficult, and the rapidly increasing volume of online documents creates a bottleneck in finding meaningful textual patterns. To deal with these issues, we propose an unsupervised document labelling approach based on semantic content and feature patterns. A world ontology with extensive topic coverage is exploited to supply controlled, structured subjects for labelling, and an algorithm is introduced to reduce dimensionality based on a study of the ontological structure. The proposed approach showed promise when evaluated against typical machine learning methods, including SVM, Rocchio, and kNN.
Abstract:
Data structures such as k-D trees and hierarchical k-means trees perform very well in approximate k-nearest-neighbour matching, but are only marginally more effective than linear search when performing exact matching in high-dimensional image descriptor data. This paper presents several improvements to linear search that allow it to outperform existing methods, and recommends two approaches to exact matching. The first method reduces the number of operations by evaluating the distance measure in order of the significance of the query dimensions and terminating when the partial distance exceeds the search threshold; it requires no preprocessing and significantly outperforms existing methods. The second method improves query speed further by presorting the data using a data structure called the d-D sort, whose order information is used as a priority queue to reduce the time taken to find the exact match and to restrict the range of data searched. Construction of the d-D sort structure is very simple to implement, requires no parameter tuning, takes significantly less time than the best-performing tree structure, and allows data to be added relatively efficiently.
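The first improvement, partial-distance evaluation with early termination, can be sketched as follows (ordering dimensions by query-component magnitude is one plausible significance criterion, assumed here; the paper's exact criterion may differ):

```python
def partial_distance_nn(data, query):
    """Exact 1-NN by linear search with early termination on partial distance."""
    # Visit dimensions in decreasing significance: large query components tend
    # to contribute most, so the running partial sum crosses the current best
    # threshold sooner and more candidates are rejected early.
    order = sorted(range(len(query)), key=lambda i: -abs(query[i]))
    best_idx, best_d = -1, float("inf")
    for idx, point in enumerate(data):
        d = 0.0
        for i in order:
            d += (point[i] - query[i]) ** 2
            if d >= best_d:          # partial distance already too large: reject
                break
        else:                        # loop ran to completion: new best match
            best_idx, best_d = idx, d
    return best_idx, best_d
```

Because each squared term is non-negative, the partial sum only grows, so rejecting a candidate as soon as it exceeds the best distance found so far cannot change the exact result; it only skips work.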
Abstract:
Background: Predicting protein subnuclear localization is a challenging problem, and previous approaches based on non-sequence information, including Gene Ontology annotations and kernel fusion, have their respective limitations. The aim of this work is twofold: first, to propose a novel individual feature extraction method; second, to develop an ensemble method that improves prediction performance using the comprehensive information represented in a high-dimensional feature vector obtained from 11 feature extraction methods. Methodology/Principal Findings: A novel two-stage multiclass support vector machine is proposed to predict protein subnuclear localizations. It considers only those feature extraction methods based on amino acid classifications and physicochemical properties. To speed up the system, an automatic search method for the kernel parameter is used. The prediction performance of our method is evaluated on four datasets: the Lei dataset, a multi-localization dataset, the SNL9 dataset, and a new independent dataset. Under leave-one-out cross-validation, the overall accuracy is 75.2% for 6 localizations on the Lei dataset, 72.1% for 9 localizations on the SNL9 dataset, 71.7% on the multi-localization dataset, and 69.8% on the new independent dataset. Comparisons with existing methods show that our method performs better for both single-localization and multi-localization proteins, and achieves more balanced sensitivities and specificities on large-size and small-size subcellular localizations. The overall accuracy improvements are 4.0% and 4.7% for single-localization proteins and 6.5% for multi-localization proteins. The reliability and stability of our classification model are further confirmed by permutation analysis. Conclusions: Our method is effective and valuable for predicting protein subnuclear localizations. A web server has been designed to implement the proposed method.
It is freely available at http://bioinformatics.awowshop.com/snlpred_page.php.
Abstract:
Nano-tin oxide was deposited on the surface of wollastonite by a hydrolysis–precipitation process, using a mixed solution containing a stannic chloride pentahydrate precursor and wollastonite. The antistatic properties of the wollastonite materials under different calcination conditions and of the composite materials (nano-SnO2/wollastonite, SW) were measured by rubber sheeter and four-point-probe (FPP) sheet resistance measurements. The effects of hydrolysis temperature and time, calcination temperature and time, pH value, and nano-SnO2 coating amount on the resistivity of the SW powders were studied, and the optimum experimental conditions were obtained. The microstructure and surface properties of the wollastonite, the precipitate, and the SW were characterized by transmission electron microscopy (TEM), scanning electron microscopy (SEM), energy-dispersive X-ray spectrometry (EDS), specific surface area analysis (BET), thermogravimetry (TG), X-ray photoelectron spectroscopy (XPS), X-ray diffraction (XRD), and Fourier transform infrared spectroscopy (FTIR). The results showed that the nano-SnO2/wollastonite composite materials prepared under the optimum conditions exhibited better antistatic properties, their resistivity being reduced from 1.068 × 10⁴ Ω·cm to 2.533 × 10³ Ω·cm. From the TG and XRD analyses, a possible mechanism for the coating of SnO2 nanoparticles on the wollastonite surface was proposed. The infrared spectrum indicated a large number of hydroxyl groups on the wollastonite surface, which is beneficial to the heterogeneous nucleation reaction. Morphology, EDS, and XPS analyses showed that the wollastonite fibre surface was coated with a uniformly distributed layer of tin oxide grains 10–15 nm thick.