864 results for System Identification


Relevance:

40.00%

Publisher:

Abstract:

Due to vigorous globalisation and product proliferation in recent years, soaring manufacturing activity has produced more waste. This has created a significant need for an efficient waste management system to ensure that waste is properly treated for recycling or disposal. This paper presents a Decision Support System (DSS) framework, based on Constraint Logic Programming (CLP), for the collection management of industrial waste of all kinds, and discusses the potential employment of Radio-Frequency Identification (RFID) technology to improve several critical procedures involved in managing waste collection. The paper also demonstrates how a widely distributed, semi-structured network of waste-producing enterprises (e.g. manufacturers) and waste-processing enterprises (i.e. waste recycling/treatment stations) can improve their operations planning by using the proposed DSS. Potential RFID applications that continuously update and validate information to bring value-added benefits to the waste collection business are also presented. © 2012 Inderscience Enterprises Ltd.
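The assignment step such a DSS performs can be sketched in a much simplified form (the station names, waste types and capacity figures below are hypothetical, and a greedy loop stands in for a real CLP solver, which would also backtrack over alternative assignments):

```python
def plan_collection(producers, stations):
    """Greedily assign each producer's waste load to a compatible station.

    producers: list of (name, waste_type, load_tonnes)
    stations:  dict station_name -> (accepted_types, capacity_tonnes)
    """
    remaining = {s: cap for s, (_, cap) in stations.items()}
    plan = {}
    for name, wtype, load in producers:
        for s, (accepted, _) in stations.items():
            if wtype in accepted and remaining[s] >= load:
                plan[name] = s            # feasible: type accepted, capacity left
                remaining[s] -= load
                break
        else:
            plan[name] = None             # infeasible here; a CLP solver would backtrack
    return plan
```

A CLP formulation would state the same type and capacity constraints declaratively and let the solver search for a globally feasible plan, rather than committing to the first feasible station as this sketch does.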

Relevance:

40.00%

Publisher:

Abstract:

International audience

Relevance:

40.00%

Publisher:

Abstract:

Dyslipidaemia is one of the major cardiovascular risk factors. It can have primary causes (i.e. monogenic, characterized by a single gene mutation, or polygenic/environmental) or be secondary to specific disorders such as obesity, diabetes mellitus or hypothyroidism. Monogenic patients present the most severe phenotype, so they need to be identified at an early age so that pharmacologic treatment can be implemented to decrease cardiovascular risk. However, the majority of hyperlipidaemic patients most likely have a polygenic disease that can largely be controlled by adopting a healthy lifestyle. Thus, the distinction between monogenic and polygenic dyslipidaemia is important for prompt diagnosis, cardiovascular risk assessment, counselling and treatment. Besides established biomarkers such as LDL, apoB and the apoB/apoA-I ratio, other promising biomarkers for clinical differentiation between dyslipidaemias, though needing further research, are apoE, sdLDL, apoC-2 and apoC-3. However, none of these biomarkers can explain the complex lipid profile of the majority of these patients.

Relevance:

30.00%

Publisher:

Abstract:

Although the benefits of service orientation are prevalent in the literature, the review, analysis, and evaluation of 30 existing service analysis approaches presented in this paper show that a comprehensive approach to the identification and analysis of both business and supporting software services is missing. Based on this evaluation of existing approaches and additional sources, we close this gap by proposing an integrated, consolidated approach to business and software service analysis that combines and extends the strengths of the examined methodologies.

Relevance:

30.00%

Publisher:

Abstract:

The manufacture, construction and use of buildings and building materials make a significant environmental impact internally (inside the building), locally (in the neighbourhood) and globally. Life cycle assessment (LCA) methodology is applied to evaluate the environmental impact of buildings and building materials. One of the major applications of LCA is to identify the key issues of a product system from cradle to grave. The key issues identified in an LCA point in the right direction for assessing the environmental aspects of a product system and also help to identify areas for improving its environmental performance. The purpose of this paper is to suggest two methods for identifying key issues using an integrated tool (LCADesign), developed to determine the best alternative for reducing the environmental impacts of a building or building materials, and to compare both methods in a case study. This paper assists designers and marketers of buildings and building materials in their decision making by providing information on the activities or alternatives identified as key issues for environmental impact.
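The key-issue identification an LCA performs can be sketched as a simple contribution analysis (the life-cycle stages, impact figures and 20% cut-off below are hypothetical, not taken from LCADesign):

```python
def key_issues(contributions, threshold=0.2):
    """Flag life-cycle stages whose share of the total impact exceeds threshold.

    contributions: dict stage_name -> impact score (any consistent unit)
    Returns: dict stage_name -> fractional share, for stages at or above threshold.
    """
    total = sum(contributions.values())
    return {stage: impact / total
            for stage, impact in contributions.items()
            if impact / total >= threshold}
```

For example, if manufacture contributes 50 units, use 40 and disposal 10, only manufacture and use would be flagged as key issues at a 20% cut-off.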

Relevance:

30.00%

Publisher:

Abstract:

The engagement behaviour of 1,524 student-enrolments (“students”) in five first-year units was monitored, and 608 (39.9%) were classified as “at risk” using the criterion of not submitting, or failing, their first assignment. Of these, 327 (53.8%) were successfully contacted (i.e., spoken to by phone) and provided with advice and/or referral to learning and personal support services, while the remaining 281 (46.2%) could not be contacted. Nine hundred and sixteen students (60.1%) were classified as “not at risk.” Overall, the at-risk students who were contacted achieved significantly higher end-of-semester final grades than, and persisted (completed the unit) at more than twice the rate of, those who were not contacted. There were variations among the units, which were explained by the timing of the first assignment, specific teaching-learning processes and the structure of the curriculum. Implications for curriculum design and for supporting first-year students within a personal, social and academic framework are discussed.
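The comparison of persistence rates between the contacted and non-contacted groups can be sketched with a two-proportion z-test (the persistence counts in the usage example below are hypothetical; the abstract reports only the group sizes of 327 and 281 and that the contacted group persisted at more than twice the rate):

```python
import math

def two_proportion_z(x1, n1, x2, n2):
    """z statistic for comparing two proportions (e.g. persistence rates).

    x1/n1: successes and size of group 1; x2/n2: same for group 2.
    """
    p1, p2 = x1 / n1, x2 / n2
    p = (x1 + x2) / (n1 + n2)                 # pooled proportion under H0
    se = math.sqrt(p * (1 - p) * (1 / n1 + 1 / n2))
    return (p1 - p2) / se
```

With hypothetical counts such as 230 of 327 contacted students persisting versus 98 of 281 non-contacted students, the z statistic is far above conventional significance thresholds.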

Relevance:

30.00%

Publisher:

Abstract:

Purpose: Computer vision has been widely used in the inspection of electronic components. This paper proposes a computer vision system for the automatic detection, localisation, and segmentation of solder joints on Printed Circuit Boards (PCBs) under different illumination conditions.

Design/methodology/approach: An illumination normalisation approach is applied to each image, which effectively and efficiently eliminates the effect of uneven illumination while keeping the properties of the processed image the same as in the corresponding image under normal lighting conditions. Consequently, special lighting and instrumental setups can be reduced for detecting solder joints. These normalised images are insensitive to illumination variations and are used in the subsequent solder joint detection stages. In the segmentation approach, the PCB image is transformed from the RGB color space to the YIQ color space for effective detection of solder joints against the background.

Findings: The segmentation results show that the proposed approach significantly improves performance for images under varying illumination conditions.

Research limitations/implications: This paper proposes a front-end system for the automatic detection, localisation, and segmentation of solder joint defects. Further research is required to complete the full system, including the classification of solder joint defects.

Practical implications: The methodology presented in this paper can be an effective way to reduce cost and improve quality in PCB production in the manufacturing industry.

Originality/value: This research proposes the automatic location, identification and segmentation of solder joints under different illumination conditions.
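The RGB-to-YIQ transformation used in the segmentation stage can be sketched per pixel with the standard NTSC transform matrix (a minimal sketch; the paper's full pipeline also includes the illumination normalisation and detection stages):

```python
def rgb_to_yiq(r, g, b):
    """Convert one RGB pixel (components in [0, 1]) to the YIQ color space
    using the standard NTSC transform matrix. Y carries luminance; I and Q
    carry chrominance, which helps separate solder from the board background."""
    y = 0.299 * r + 0.587 * g + 0.114 * b
    i = 0.596 * r - 0.274 * g - 0.322 * b
    q = 0.211 * r - 0.523 * g + 0.312 * b
    return y, i, q
```

Because the I and Q rows each sum to zero, any grey pixel (r = g = b) maps to zero chrominance, so thresholding in the I/Q plane isolates coloured regions independently of brightness.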

Relevance:

30.00%

Publisher:

Abstract:

This thesis investigates aspects of encoding the speech spectrum at low bit rates, with extensions to the effect of such coding on automatic speaker identification. Vector quantization (VQ) is a technique for jointly quantizing a block of samples at once in order to reduce the bit rate of a coding system. The major drawback in using VQ is the complexity of the encoder. Recent research has indicated the potential applicability of the VQ method to speech when product code vector quantization (PCVQ) techniques are utilized. The focus of this research is the efficient representation, calculation and utilization of the speech model as stored in the PCVQ codebook. In this thesis, several VQ approaches are evaluated, and the efficacy of two training algorithms is compared experimentally. It is then shown that these product-code vector quantization algorithms may be augmented with lossless compression algorithms, thus yielding an improved overall compression rate. An approach using a statistical model of the vector codebook indices for subsequent lossless compression is introduced. This coupling of lossy and lossless compression enables further compression gain. It is demonstrated that this approach is able to reduce the bit rate requirement from the current 24 bits per 20-millisecond frame to below 20, using a standard spectral distortion metric for comparison. Several fast-search VQ methods for use in speech spectrum coding have been evaluated. The usefulness of fast-search algorithms is highly dependent upon the source characteristics and, although previous research has been undertaken for coding of images using VQ codebooks trained with the source samples directly, the product-code structured codebooks for speech spectrum quantization place new constraints on the search methodology. The second major focus of the research is an investigation of the effect of low-rate spectral compression methods on the task of automatic speaker identification.
The motivation for this aspect of the research arose from a need to simultaneously preserve speech quality and intelligibility and to provide for machine-based automatic speaker recognition using the compressed speech. This is important because several emerging applications of speaker identification involve compressed speech. Examples include mobile communications, where the speech has been highly compressed, or databases of speech material assembled and stored in compressed form. Although these two application areas share the same objective, that of maximizing the identification rate, the starting points are quite different. On the one hand, the speech material used for training the identification algorithm may or may not be available in compressed form. On the other hand, the new test material on which identification is to be based may only be available in compressed form. Using the spectral parameters stored in compressed form, two main classes of speaker identification algorithm are examined. Some studies have been conducted in the past on bandwidth-limited speaker identification, but the use of short-term spectral compression deserves separate investigation. Combining the major aspects of the research, some important design guidelines for the construction of an identification model based on the use of compressed speech are put forward.
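The codebook training at the heart of VQ can be sketched with the generalized Lloyd (k-means) algorithm. This is a plain full-search VQ sketch, not the product-code or fast-search variants the thesis develops:

```python
import random

def train_vq_codebook(vectors, k, iters=10, seed=0):
    """Train a k-entry VQ codebook via the generalized Lloyd (k-means) algorithm."""
    rng = random.Random(seed)
    codebook = list(rng.sample(vectors, k))   # initialise from training vectors
    for _ in range(iters):
        # Nearest-neighbour assignment (squared Euclidean distance).
        clusters = [[] for _ in range(k)]
        for v in vectors:
            j = min(range(k),
                    key=lambda c: sum((a - b) ** 2 for a, b in zip(v, codebook[c])))
            clusters[j].append(v)
        # Centroid update: each codeword becomes the mean of its cluster.
        for j, cl in enumerate(clusters):
            if cl:
                codebook[j] = tuple(sum(xs) / len(cl) for xs in zip(*cl))
    return codebook

def quantize(v, codebook):
    """Return the index of the nearest codeword; the index is what gets coded."""
    return min(range(len(codebook)),
               key=lambda c: sum((a - b) ** 2 for a, b in zip(v, codebook[c])))
```

Only the codeword index is transmitted, which is where the bit-rate saving comes from; the lossless index-compression stage described above would then model the statistics of these indices.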

Relevance:

30.00%

Publisher:

Abstract:

Automatic spoken Language Identification (LID) is the process of identifying the language spoken within an utterance. The challenge that this task presents is that no prior information is available indicating the content of the utterance or the identity of the speaker. The trend of globalization and the pervasive popularity of the Internet will amplify the need for the capabilities spoken language identification systems provide. A prominent application arises in call centers dealing with speakers of different languages. Another important application is to index or search huge speech data archives and corpora that contain multiple languages. The aim of this research is to develop techniques targeted at producing a faster and more accurate automatic spoken LID system than those of the previous National Institute of Standards and Technology (NIST) Language Recognition Evaluation. Acoustic and phonetic speech information are targeted as the most suitable features for representing the characteristics of a language. To model the acoustic speech features, a Gaussian Mixture Model (GMM) based approach is employed. Phonetic speech information is extracted using existing speech recognition technology. Various techniques to improve LID accuracy are also studied. One approach examined is the employment of Vocal Tract Length Normalization to reduce the speech variation caused by different speakers. A linear data fusion technique is adopted to combine the various aspects of information extracted from speech. As a result of this research, a LID system was implemented and presented for evaluation in the 2003 Language Recognition Evaluation conducted by NIST.
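The GMM-based acoustic scoring can be sketched for one-dimensional features (the component parameters and language labels below are hypothetical; a real system would use trained multivariate mixtures over cepstral features):

```python
import math

def gmm_logpdf(x, components):
    """Log-likelihood of a scalar feature x under a 1-D Gaussian mixture.
    components: list of (weight, mean, variance) with weights summing to 1."""
    return math.log(sum(
        w * math.exp(-(x - m) ** 2 / (2 * v)) / math.sqrt(2 * math.pi * v)
        for w, m, v in components))

def identify(features, language_models):
    """Return the language whose GMM gives the highest average log-likelihood."""
    def avg_score(components):
        return sum(gmm_logpdf(x, components) for x in features) / len(features)
    return max(language_models, key=lambda lang: avg_score(language_models[lang]))
```

Each candidate language has its own mixture model; identification reduces to scoring the utterance's feature stream against every model and taking the maximum.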

Relevance:

30.00%

Publisher:

Abstract:

The use of artificial neural networks (ANNs) to identify and control induction machines is proposed. Two systems are presented: a system to adaptively control the stator currents via identification of the electrical dynamics, and a system to adaptively control the rotor speed via identification of the mechanical and current-fed system dynamics. Both systems are inherently adaptive as well as self-commissioning. The current controller is a completely general nonlinear controller which can be used together with any drive algorithm. Various advantages of these control schemes over conventional schemes are cited, and the combined speed and current control scheme is compared with the standard vector control scheme.
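The sample-by-sample adaptive identification idea can be sketched with a linear LMS estimator standing in for the neural identifier (the first-order model and learning rate below are illustrative assumptions, not the paper's ANN structure):

```python
def lms_identify(u, y, mu=0.05):
    """Online LMS estimation of a first-order model y[k] ≈ a*y[k-1] + b*u[k-1].

    u: input sequence (e.g. applied voltage), y: measured output (e.g. current).
    Returns the estimated parameters (a_hat, b_hat).
    """
    a_hat, b_hat = 0.0, 0.0
    for k in range(1, len(y)):
        pred = a_hat * y[k - 1] + b_hat * u[k - 1]
        err = y[k] - pred                 # prediction error drives the update
        a_hat += mu * err * y[k - 1]      # gradient-descent (LMS) weight updates
        b_hat += mu * err * u[k - 1]
    return a_hat, b_hat
```

An ANN identifier generalises this to nonlinear dynamics, but the adaptive principle is the same: predict, measure the error, and update the model online, which is what makes such schemes self-commissioning.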

Relevance:

30.00%

Publisher:

Abstract:

This paper proposes the use of artificial neural networks (ANNs) to identify and control an induction machine. Two systems are presented: a system to adaptively control the stator currents via identification of the electrical dynamics, and a system to adaptively control the rotor speed via identification of the mechanical and current-fed system dynamics. Various advantages of these control schemes over other conventional schemes are cited, and the performance of the combined speed and current control scheme is compared with that of the standard vector control scheme.

Relevance:

30.00%

Publisher:

Abstract:

The effects of climate change, which is largely caused by human activities, are becoming increasingly apparent; among these activities are asset management processes, from planning to disposal, for property and infrastructure. One essential component of the asset management process is asset identification. The aims of this study are to identify the information needed for asset identification and inventory, as one step of the public asset management process, in addressing climate change, and to examine its deliverability in the local governments of developing countries. To achieve these aims, the study employs a case study in Indonesia, examining a single medium-sized provincial government. Information was gathered through interviews with local government representatives in South Sulawesi Province, Indonesia, and through analysis of documents provided by the interview participants. The study found that, for local governments, improving the system for managing their assets is one of the biggest emerging challenges. Having the right information in the right place at the right time is critical to meeting this challenge. Therefore, asset identification, as the frontline step in a public asset management system, plays an important and critical role. Furthermore, an asset identification system should be developed to support mainstream adaptation to climate change vulnerability and to help local government officers be environmentally sensitive. Finally, the findings provide useful input for policy makers, scholars and asset management practitioners developing an asset inventory system as part of a public asset management process that addresses climate change.

Relevance:

30.00%

Publisher:

Abstract:

We describe research into the identification of anomalous events and event patterns as manifested in computer system logs. Prototype software has been developed that identifies anomalous events based on usage patterns or user profiles and alerts administrators when such events are identified. To reduce the number of false-positive alerts, we investigated different user-profile training techniques and introduce the use of abstractions to group together related applications. Our results suggest that the number of false alerts generated is significantly reduced when a growing time window is used for user-profile training and when abstraction into groups of applications is used.
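The profile-plus-abstraction idea can be sketched as follows (the application-to-group mapping below is hypothetical); note how grouping related applications prevents a false alert for an unseen but related application:

```python
from collections import Counter

class UserProfile:
    """Growing-window profile of application usage; unseen groups raise alerts."""

    def __init__(self, groups):
        self.groups = groups      # app name -> abstract group, e.g. {"vim": "editors"}
        self.counts = Counter()   # observed usage per group (grows, never resets)

    def train(self, app):
        """Record one observed use of app into the growing training window."""
        self.counts[self.groups.get(app, app)] += 1

    def is_anomalous(self, app):
        """True if the app's group has never been seen for this user."""
        return self.counts[self.groups.get(app, app)] == 0
```

Profiling at the group level rather than the individual-application level is what reduces false positives: a user who has used one editor is not flagged for launching a different one.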