Abstract:
Wireless networked control systems (WNCSs) have been widely used in manufacturing and industrial processing over the last few years. They provide real-time control with a unique characteristic: periodic traffic. These systems have time-critical requirements. Due to current wireless mechanisms, WNCS performance suffers from long time-varying delays, packet dropout, and inefficient channel utilization. Current wirelessly networked applications such as WNCSs are designed on the basis of a layered architecture, whose features constrain the performance of these demanding applications. Numerous efforts have attempted to use cross-layer design (CLD) approaches to improve the performance of various networked applications. However, existing research rarely considers large-scale networks and congested network conditions in WNCSs, and there is a lack of discussion on how to apply CLD approaches in WNCSs. This thesis proposes a cross-layer design methodology to address the timeliness of periodic traffic and to improve the efficiency of channel utilization in WNCSs. The proposed CLD is characterized by the measurement of the underlying network condition, the classification of the network state, and the adjustment of the sampling period between sensors and controllers. This period adjustment is able to maintain the minimum allowable sampling period while also maximizing control performance. Extensive simulations are conducted using the network simulator NS-2 to evaluate the performance of the proposed CLD. The comparative studies cover communications both with and without the proposed CLD. The results show that the proposed CLD is capable of fulfilling the timeliness requirement under congested network conditions, and is also able to improve channel utilization efficiency and the proportion of effective data in WNCSs.
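To make the final adjustment step concrete, the sketch below illustrates one way such a loop could look. The thresholds, state names, and multiplicative adaptation rule are illustrative assumptions, not the thesis's actual metrics or parameters.

```python
# Hypothetical sketch of the CLD's sampling-period adjustment loop.
T_MIN, T_MAX = 0.01, 0.10  # allowable sampling-period bounds (seconds)

def classify_network_state(mean_delay, loss_rate):
    """Map measured channel conditions to a coarse congestion state."""
    if loss_rate > 0.10 or mean_delay > 0.05:
        return "congested"
    if loss_rate > 0.02 or mean_delay > 0.02:
        return "loaded"
    return "idle"

def adjust_sampling_period(period, state):
    """Lengthen the period under congestion, shorten it when idle,
    never leaving the [T_MIN, T_MAX] window."""
    if state == "congested":
        period *= 1.5   # back off to relieve the channel
    elif state == "idle":
        period *= 0.8   # sample faster for better control performance
    return min(max(period, T_MIN), T_MAX)

period = 0.02
for delay, loss in [(0.01, 0.00), (0.06, 0.12), (0.03, 0.04)]:
    state = classify_network_state(delay, loss)
    period = adjust_sampling_period(period, state)
    print(f"state={state:9s} new period={period:.3f}s")
```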
Abstract:
Next Generation Sequencing (NGS) has revolutionised molecular biology, allowing routine clinical sequencing. NGS data consists of short sequence reads, given context through downstream assembly and annotation, a process requiring reads consistent with the assumed species or species group. The common bacterium Staphylococcus aureus may cause severe and life-threatening infections in humans, with some strains exhibiting antibiotic resistance. Here we apply an SVM classifier to the important problem of distinguishing S. aureus sequencing projects from those of other pathogens, including closely related Staphylococci. Using a sequence k-mer representation, we achieve precision and recall above 95%, implicating features with important functional associations.
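A minimal sketch of the k-mer representation and SVM step, assuming scikit-learn; the toy reads, labels, and choice of k = 4 are invented for illustration and do not reflect the paper's training corpus or chosen k.

```python
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.svm import LinearSVC

reads = ["ATGGCGTACGTTAGC", "ATGGCGTACGTAAGC",   # S. aureus-like (toy)
         "GCCTTAGGCATCGGA", "GCCTTAGGCTTCGGA"]   # other pathogen (toy)
labels = [1, 1, 0, 0]

# Represent each read by its counts of overlapping k-mers,
# i.e. character n-grams of fixed length k.
vectorizer = CountVectorizer(analyzer="char", ngram_range=(4, 4))
X = vectorizer.fit_transform(reads)

# A linear SVM also exposes per-k-mer weights, which is what allows
# the most discriminative features to be inspected afterwards.
clf = LinearSVC().fit(X, labels)
print(clf.predict(vectorizer.transform(["ATGGCGTACGTTAGC"])))
```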
Abstract:
Cardiomyopathies represent a group of diseases of the myocardium of the heart and include both diseases primarily of the cardiac muscle and systemic diseases leading to adverse effects on heart muscle size, shape, and function. Traditionally, cardiomyopathies were defined according to phenotypical appearance. Now, as our understanding of the pathophysiology of the different entities classified under each phenotype improves, and as our knowledge of the molecular and genetic basis for these entities progresses, the traditional classifications seem overly simplistic and do not reflect current understanding of this myriad of diseases and disease processes. Although our knowledge of the exact basis of many of the disease processes of cardiomyopathies is still in its infancy, it is important to have a classification system that can incorporate the coming tide of molecular and genetic information. This paper discusses how the traditional, morphology-based classification of cardiomyopathies has evolved owing to rapid advances in our understanding of the genetic and molecular basis for many of these clinical entities.
Abstract:
China is experiencing rapid progress in industrialization, with its own rationale for industrial land development based on a deliberate change from an extensive to an intensive form of urban land use. One result has been concerted attempts by local governments to attract foreign investment through a low industrial land price strategy, which has resulted in a disproportionately large amount of industrial land within the total urban land use structure, at the cost of urban sprawl in many cities. This paper first examines “Comparable Benchmark Price as Residential land use” (CBPR) as the theoretical basis of the low industrial land price phenomenon. Empirical findings are presented from a case study based on data from Jinyun County, China, which are analyzed to reveal the rationale of industrial land prices from 2000 to 2010 with respect to the CBPR model. We then explore the causes of low industrial land prices in the form of a “Centipede Game Model”, involving two neighboring regions as “major players” making a set of moves (or strategies). When one player unilaterally reduces the land price to attract investment, aiming to maximize profits from the revenues generated by foreign investment and land premiums, a two-player price war begins in the form of a dynamic game, the effect of which is a downward spiral of prices. In this context, the goal of maximizing profits is accomplished by neither player, as the inter-regional competition for investment leads to a lose-lose situation for both sides in competing for land premium revenues. A short-term solution involving the establishment of inter-regional cooperative partnerships is offered. For the longer term, however, a comprehensive reform of the local financial system, more adroit regional planning, and an improved means of evaluating government performance are needed to ensure that the government's role in securing public goods is not abandoned in favor of one solely concerned with revenue generation.
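The downward spiral the game predicts can be illustrated with a toy simulation; the starting price, price floor, and undercutting factor below are invented for exposition.

```python
# Toy simulation of the two-region price war: each region undercuts
# the other to attract investment, and prices spiral downward.
floor_price = 10.0   # assumed minimum viable industrial land price
undercut = 0.9       # each move shaves 10% off the rival's price

price_a = price_b = 100.0
rounds = 0
while min(price_a, price_b) > floor_price:
    rounds += 1
    price_a = max(price_b * undercut, floor_price)  # region A undercuts B
    price_b = max(price_a * undercut, floor_price)  # region B responds
print(f"after {rounds} rounds prices collapse to A={price_a:.1f}, "
      f"B={price_b:.1f}: a lose-lose outcome for both regions")
```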
Abstract:
The main contribution of this project was to investigate power electronics technology for designing and developing high-frequency, high-power converters for industrial applications. The research was therefore conducted at two levels: first at the system level, which mainly encompassed the circuit topology and control scheme, and second at the application level, which involves real-world applications. Pursuing these objectives, various topologies were developed and proposed within this research. The main aim was to overcome the limited power rating and operating speed of solid-state switches while increasing system flexibility with respect to the application characteristics. The newly developed power converter configurations were applied to pulsed power and high-power ultrasound applications for experimental validation.
Abstract:
Highly sensitive infrared cameras can produce high-resolution diagnostic images of the temperature and vascular changes of breasts. Wavelet-transform-based features are suitable for extracting the texture difference information of these images owing to their scale-space decomposition. The objective of this study is to investigate the potential of the extracted features for differentiating between breast lesions by comparing the two corresponding pectoral regions of two breast thermograms. The pectoral regions of breasts are important because nearly 50% of all breast cancers are located in this region. In this study, the pectoral region of the left breast is selected, and the corresponding pectoral region of the right breast is then identified. Texture features based on first- and second-order statistics are extracted from wavelet-decomposed images of the pectoral regions of the two breast thermograms. Principal component analysis is used to reduce dimensionality, and an AdaBoost classifier to evaluate classification performance. A number of different wavelet features are compared, and it is shown that complex non-separable 2D discrete wavelet transform features perform better than their real separable counterparts.
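A simplified sketch of the feature extraction and classification pipeline, assuming PyWavelets and scikit-learn. It uses a real separable DWT on random data; the paper's complex non-separable transform and actual thermogram images are not reproduced here.

```python
import numpy as np
import pywt
from scipy.stats import skew
from sklearn.decomposition import PCA
from sklearn.ensemble import AdaBoostClassifier

def wavelet_texture_features(region):
    """First/second-order statistics over each DWT subband."""
    cA, (cH, cV, cD) = pywt.dwt2(region, "db2")
    feats = []
    for band in (cA, cH, cV, cD):
        feats += [band.mean(), band.std(), skew(band.ravel())]
    return feats

rng = np.random.default_rng(0)
X = np.array([wavelet_texture_features(rng.random((64, 64)))
              for _ in range(40)])
y = rng.integers(0, 2, size=40)          # toy normal/abnormal labels

# Reduce dimensionality, then fuse weak learners with AdaBoost.
X_red = PCA(n_components=5).fit_transform(X)
clf = AdaBoostClassifier(n_estimators=50).fit(X_red, y)
print("training accuracy:", clf.score(X_red, y))
```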
Abstract:
Particles of two isolates of subterranean clover red leaf virus were purified by a method in which infected plant tissue was digested with an industrial-grade cellulase, Celluclast® 2.0 L type X. The yields of virus particles using this enzyme were comparable with those obtained using either of two laboratory-grade cellulases, Cellulase type 1 (Sigma) and Driselase®. However, the specific infectivity or aphid transmissibility of the particles purified using Celluclast® was 10-100 times greater than that of preparations obtained using laboratory-grade cellulases or no enzyme. The main advantage of using Celluclast® is that, at present in Australia, its cost is only ca. 1% of that of laboratory-grade cellulases.
Abstract:
Textual document sets have become an important and rapidly growing information source on the web. Text classification is one of the crucial technologies for information organisation and management; it has become increasingly important and has attracted wide attention from researchers in different fields. This paper first introduces feature selection methods, implementation algorithms, and applications of text classification. However, because the knowledge extracted by current data-mining techniques for text classification contains much noise, considerable uncertainty arises in the text classification process, stemming from both knowledge extraction and knowledge usage; more innovative techniques and methods are therefore needed to improve text classification performance. Further improving the process of knowledge extraction and the effective utilization of the extracted knowledge remains a critical and challenging step. A Rough Set decision-making approach is proposed, using Rough Set decision techniques to classify more precisely those textual documents that are difficult to separate with classic text classification methods. The purpose of this paper is to give an overview of existing text classification technologies; to demonstrate Rough Set concepts and the Rough Set-based decision-making approach for building a more reliable and effective text classification framework with higher precision; to set up an innovative evaluation metric, named CEI, which is effective for performance assessment in similar research; and to propose a promising research direction for addressing the challenging problems in text classification, text mining, and related fields.
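The Rough Set idea invoked here can be illustrated briefly: documents indiscernible under the chosen features form equivalence classes, and each category is described by a lower approximation (documents certainly in the category) and an upper approximation (documents possibly in it). The toy features and labels below are invented for exposition.

```python
docs = {                     # doc id -> (feature tuple, label)
    "d1": (("sport", "win"), "sports"),
    "d2": (("sport", "win"), "sports"),
    "d3": (("sport", "win"), "finance"),   # conflicts with d1/d2
    "d4": (("stock", "win"), "finance"),
}

# Partition documents into equivalence classes by their feature tuple.
classes = {}
for doc, (feats, _) in docs.items():
    classes.setdefault(feats, set()).add(doc)

target = {d for d, (_, lab) in docs.items() if lab == "finance"}
lower = set().union(*(c for c in classes.values() if c <= target))
upper = set().union(*(c for c in classes.values() if c & target))

print("certainly finance:", lower)   # {'d4'}
print("possibly finance:", upper)    # {'d1', 'd2', 'd3', 'd4'}
```

Documents in the upper but not the lower approximation are exactly the hard-to-separate cases the abstract refers to.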
Abstract:
The detection and correction of defects remains among the most time consuming and expensive aspects of software development. Extensive automated testing and code inspections may mitigate their effect, but some code fragments are necessarily more likely to be faulty than others, and automated identification of fault prone modules helps to focus testing and inspections, thus limiting wasted effort and potentially improving detection rates. However, software metrics data is often extremely noisy, with enormous imbalances in the size of the positive and negative classes. In this work, we present a new approach to predictive modelling of fault proneness in software modules, introducing a new feature representation to overcome some of these issues. This rank sum representation offers improved or at worst comparable performance to earlier approaches for standard data sets, and readily allows the user to choose an appropriate trade-off between precision and recall to optimise inspection effort to suit different testing environments. The method is evaluated using the NASA Metrics Data Program (MDP) data sets, and performance is compared with existing studies based on the Support Vector Machine (SVM) and Naïve Bayes (NB) Classifiers, and with our own comprehensive evaluation of these methods.
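One plausible reading of a rank-based feature representation is sketched below: each raw metric is replaced by its rank across modules, the ranks are summed, and sweeping the threshold gives the precision-recall trade-off the abstract mentions. This is an illustrative assumption, not the paper's exact formulation; the data and threshold are invented.

```python
import numpy as np
from scipy.stats import rankdata

# rows = modules, columns = software metrics (toy values)
metrics = np.array([[120,  8, 3],
                    [ 40,  2, 1],
                    [300, 15, 7],
                    [ 60,  3, 2]], dtype=float)

ranks = np.apply_along_axis(rankdata, 0, metrics)  # rank each metric column
scores = ranks.sum(axis=1)                         # rank sum per module

# A low threshold flags more modules (high recall, lower precision);
# a high threshold flags fewer (higher precision, lower recall).
threshold = 9.0
print("flagged as fault-prone:", np.where(scores >= threshold)[0])
```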
Abstract:
Object classification is plagued by the issue of session variation. Session variation describes any variation that makes one instance of an object look different from another, for instance due to pose or illumination changes. Recent work in the challenging task of face verification has shown that session variability modelling provides a mechanism to overcome some of these limitations; however, for computer vision purposes it has only been applied in the limited setting of face verification. In this paper we propose a local region-based inter-session variability (ISV) modelling approach, termed Local ISV, so that local session variations can be modelled, and apply it to challenging real-world data. We demonstrate the efficacy of this technique on a challenging real-world fish image database, which includes images taken underwater and thus exhibits significant real-world session variations. The Local ISV approach provides a relative performance improvement of, on average, 23% on the challenging MOBIO, Multi-PIE and SCface face databases. It also provides a relative performance improvement of 35% on our challenging fish image dataset.
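As a deliberately simplified illustration of session compensation: real ISV modelling learns a low-dimensional session subspace within a GMM framework, whereas the sketch below merely subtracts per-session means from local-region features to convey the intuition, on random data.

```python
import numpy as np

rng = np.random.default_rng(1)
# features[session, region, dim]: local descriptors from 3 sessions
features = rng.normal(size=(3, 4, 8))
features += rng.normal(size=(3, 1, 8))   # add a session-wide offset

# Estimate each session's offset from its own regions and remove it,
# suppressing variation shared within a session (e.g. murky water,
# lighting) while differences between objects survive.
session_offset = features.mean(axis=1, keepdims=True)
compensated = features - session_offset
print("residual session offset:", np.abs(compensated.mean(axis=1)).max())
```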
Abstract:
There is increasing evidence of a weakened platform of consumer trust in mass-produced food products. The resistance shown by consumers to the agro-industrial paradigm is evident in an emergent phase of reflexive consumerism, public reactions to an overly concentrated retail sector, and the rise of alternative food networks such as farmers' markets and organic box schemes. Supermarkets are responding strategically by aiming to manufacture new trust relations with consumers. This paper identifies three key strategies of trust manufacturing: (i) reputational enhancement through the institution of “behind the scenes,” business-to-business private standards; (ii) direct quality claims via private standard certification badges on food products; and (iii) discursive claims-making through symbolic representations of “authenticity” and “tradition.” Drawing upon the food governance literature and a “visual sociology” of supermarkets and supermarket produce, we highlight how trust is both commoditized and increasingly embedded in the marketing of mass-produced foods.
Abstract:
A cell classification algorithm that uses first-, second- and third-order statistics of pixel intensity distributions over pre-defined regions is implemented and evaluated. A cell image is segmented into 6 regions extending from a boundary layer to an inner circle. First-, second- and third-order statistical features are extracted from histograms of pixel intensities in these regions; the third-order statistical features used are one-dimensional bispectral invariants. 108 features were considered as candidates for AdaBoost-based fusion. The best 10-stage fused classifier was selected for each class, and a decision tree was constructed for the 6-class problem. The classifier is robust, accurate and fast by design.
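A sketch of the region-wise statistics step, assuming NumPy and SciPy. The concentric-ring layout and the use of skewness as the third-order statistic are illustrative simplifications; the paper's one-dimensional bispectral invariants are not reproduced here.

```python
import numpy as np
from scipy.stats import skew

def ring_features(cell_image, n_rings=6):
    """Split the image into concentric rings (boundary to inner circle)
    and return first/second/third-order intensity statistics per ring."""
    h, w = cell_image.shape
    yy, xx = np.mgrid[0:h, 0:w]
    r = np.hypot(yy - h / 2, xx - w / 2)
    edges = np.linspace(0, r.max() + 1e-9, n_rings + 1)
    feats = []
    for lo, hi in zip(edges[:-1], edges[1:]):
        pix = cell_image[(r >= lo) & (r < hi)]
        feats += [pix.mean(), pix.var(), skew(pix)]
    return feats

cell = np.random.default_rng(2).random((32, 32))   # toy cell image
print(len(ring_features(cell)), "features per cell")  # 18 here; 108 candidates in the paper
```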
Abstract:
Real-time image analysis and classification onboard robotic marine vehicles, such as AUVs, is a key step in the realisation of adaptive mission planning for large-scale habitat mapping in previously unexplored environments. This paper describes a novel technique to train, process, and classify images collected onboard an AUV operated in relatively shallow waters with poor visibility and non-uniform lighting. The approach utilises Förstner feature detectors and Laws texture energy masks for image characterisation, and a bag-of-words approach for feature recognition. To improve classification performance we propose a usefulness gain that learns the importance of each histogram component for each class. Experimental results illustrate the performance of the system in characterising a variety of marine habitats and its ability to run on an AUV's main processor, making it suitable for real-time mission planning.
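A hedged sketch of weighting bag-of-words histograms by a per-class usefulness of each bin: the gain here is simply a bin's mean frequency within a class relative to its overall mean, an invented stand-in for the paper's learned gain, applied to random data.

```python
import numpy as np

rng = np.random.default_rng(3)
n_words, n_classes = 50, 3
# train_hists[c][i] = normalised visual-word histogram of image i
train_hists = {c: rng.dirichlet(np.ones(n_words), size=20)
               for c in range(n_classes)}

# Gain: how over-represented each visual word is in a class, relative
# to its frequency across all classes.
overall = np.mean([h.mean(axis=0) for h in train_hists.values()], axis=0)
gain = {c: h.mean(axis=0) / (overall + 1e-12)
        for c, h in train_hists.items()}

def classify(hist):
    """Score each habitat class by its usefulness-weighted match."""
    scores = {c: float(np.dot(gain[c] * hist, train_hists[c].mean(axis=0)))
              for c in range(n_classes)}
    return max(scores, key=scores.get)

print("predicted class:", classify(rng.dirichlet(np.ones(n_words))))
```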