12 results for Speeded up robust features (SURF)
in Aston University Research Archive
Abstract:
Synchronous reluctance motors (SynRMs) are gaining popularity in industrial drives owing to their permanent-magnet-free construction, competitive performance, and robustness. This paper studies the power losses in a 90-kW converter-fed SynRM drive using a calorimetric method, in comparison with the traditional input-output method. After the converter and the motor were measured simultaneously in separate chambers, the converter was installed inside the larger chamber next to the motor, and the total drive-system losses were obtained using a single chamber. The uncertainty of both measurement methods is analysed and discussed.
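The contrast between the two measurement principles is easy to see in arithmetic. The sketch below is purely illustrative, with hypothetical readings rather than the paper's data: the input-output method subtracts two large electrical powers, whereas calorimetry infers the same loss directly from the heat carried away by the coolant.

```python
# Illustrative comparison of the two loss-measurement principles
# (hypothetical numbers, not the paper's measurements).

def loss_input_output(p_in_w: float, p_out_w: float) -> float:
    """Input-output method: loss is the small difference of two large powers."""
    return p_in_w - p_out_w

def loss_calorimetric(m_dot_kg_s: float, c_p_j_kg_k: float, delta_t_k: float) -> float:
    """Coolant-balance calorimetry: loss equals heat removed, m_dot * c_p * dT."""
    return m_dot_kg_s * c_p_j_kg_k * delta_t_k

# A ~95%-efficient 90-kW drive dissipates a few kilowatts:
print(loss_input_output(94_700.0, 90_000.0))    # 4700.0 W
# The same loss inferred from water coolant (0.1 kg/s) heated by 11.2 K:
print(loss_calorimetric(0.1, 4186.0, 11.2))     # ~4688 W
```

Because the input-output figure is a small difference between two large readings, small sensor errors dominate its uncertainty, which is one motivation for the uncertainty analysis the paper reports.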
Abstract:
This research is concerned with the development of distributed real-time systems, in which software is used to control concurrent physical processes. These distributed control systems are required to periodically coordinate the operation of several autonomous physical processes with the property of an atomic action. The implementation of this coordination must be fault-tolerant if the integrity of the system is to be maintained in the presence of processor or communication failures. Commit protocols have been widely used to provide this type of atomicity and to ensure consistency in distributed computer systems. The objective of this research is the development of a class of robust commit protocols applicable to the coordination of distributed real-time control systems. Extended forms of the standard two-phase commit protocol, which provide fault-tolerant and real-time behaviour, were developed. Petri nets are used for the design of the distributed controllers and to embed the commit protocol models within these controller designs. This composition of controller and protocol models allows the complete system to be analysed in a unified manner. A common problem for Petri-net-based techniques is state-space explosion; a modular approach to both design and analysis helps to cope with this problem. Although extensions to Petri nets that allow module construction exist, the modularisation is generally restricted to the specification, and analysis must still be performed on the (flat) detailed net. The Petri net designs for the type of distributed systems considered in this research are both large and complex. The top-down, bottom-up, and hybrid synthesis techniques used to model large systems in Petri nets are considered, and a hybrid approach to Petri net design for a restricted class of communicating processes is developed. Designs produced using this hybrid approach are modular and allow the re-use of verified modules. In order to use this form of modular analysis, it is necessary to project an equivalent but reduced behaviour onto the modules used. These projections conceal events local to modules that are not essential for the purpose of analysis. To generate the external behaviour, each firing sequence of the subnet is replaced by an atomic transition internal to the module, and the firing of these transitions transforms the input and output markings of the module; local events are thus concealed through the projection of the external behaviour of modules. This hybrid design approach preserves properties of interest, such as boundedness and liveness, while the systematic concealment of local events keeps the state space manageable. The approach presented in this research is particularly suited to distributed systems, as the underlying communication model is used as the basis for the interconnection of modules in the design procedure. The hybrid approach is applied to the Petri-net-based design and analysis of distributed controllers for two industrial applications that incorporate the robust, real-time commit protocols developed. Temporal Petri nets, which combine Petri nets and temporal logic, are used to capture and verify the causal and temporal aspects of the designs in a unified manner.
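As the thesis builds on the standard two-phase commit protocol, a minimal sketch of that baseline (without the fault-tolerance and real-time extensions developed in the research) may help orient the reader. The participant names and voting logic below are invented for illustration.

```python
from enum import Enum

class Vote(Enum):
    YES = "yes"
    NO = "no"

class Participant:
    """One autonomous process taking part in the coordinated atomic action."""

    def __init__(self, name: str, can_commit: bool):
        self.name = name
        self.can_commit = can_commit
        self.state = "initial"

    def prepare(self) -> Vote:
        # Phase 1: vote on whether the proposed action can be carried out.
        self.state = "prepared" if self.can_commit else "aborted"
        return Vote.YES if self.can_commit else Vote.NO

    def finish(self, decision: str) -> None:
        # Phase 2: apply the coordinator's global decision.
        self.state = decision

def two_phase_commit(participants: list[Participant]) -> str:
    votes = [p.prepare() for p in participants]                    # phase 1
    decision = "committed" if all(v is Vote.YES for v in votes) else "aborted"
    for p in participants:                                         # phase 2
        p.finish(decision)
    return decision

processes = [Participant("valve", True), Participant("pump", True)]
print(two_phase_commit(processes))   # "committed" only if every vote was YES
```

A real implementation must also survive lost messages and crashed participants via timeouts and logging, which is precisely where the extended, real-time variants developed in the thesis depart from this baseline.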
Abstract:
This study is a consumer survey conducted with former Marriage Guidance Council clients. The objectives were to identify and examine why they chose the agency, what their expectations and experiences were of marital counselling, and whether anything was achieved. The material was derived from tape-recorded interviews with 51 former M.G. clients (17 men and 34 women) from 42 marriages and with 21 counsellors; data from written material and a card-sort completed by the research sample; and the case record sheets of the research population (174 cases). The results from the written data of clients showed that 49% were satisfied with counselling, 25.5% were satisfied in some ways but not in others, and 25.5% were dissatisfied. Forty-six percent rated that they had benefited from counselling, either a great deal or to some degree, 4% were neutral, and 50% recorded that they had not benefited. The counsellors' assessments, however, were more optimistic. It was also ascertained that 50% of the research sample eventually separated or divorced subsequent to counselling. A cross-check revealed that the majority who rated themselves satisfied with counselling were those who remained married, whilst dissatisfied clients were the ones who unwillingly separated or divorced. The study then describes, discusses, and assesses the experiences of clients in the light of these findings on a number of dimensions. From this it was possible to construct a summary profile of a "successful" client, describing the features which would contribute to "success". Two key themes emerged from the data: (1) the discrepancy between clients' expectations and the counselling offered, which included a mismatch over the aims and methods of counselling and over problem definition; and (2) the importance of the client/counsellor relationship. The various implications for the agency are then discussed, including recommendations on policy, the training of counsellors, and further research.
Abstract:
Tonal, textural and contextual properties are used in the manual photointerpretation of remotely sensed data. This study has used these three attributes to produce a lithological map of semi-arid northwest Argentina by semi-automatic computer classification of remotely sensed data. Three different types of satellite data were investigated: LANDSAT MSS, LANDSAT TM, and SIR-A imagery. Supervised classification using tonal features alone produced poor results: LANDSAT MSS gave classification accuracies in the range of 40 to 60%, while accuracies of 50 to 70% were achieved using LANDSAT TM data. The addition of SIR-A data increased the classification accuracy. The improvement of TM over MSS is due to the better discrimination of geological materials afforded by the middle-infrared bands of the TM sensor. The maximum likelihood classifier consistently produced classification accuracies 10 to 15% higher than either the minimum-distance-to-means or the decision tree classifier; this improved accuracy was obtained at the cost of greatly increased processing time. A new type of classifier, the spectral shape classifier, which is computationally as fast as a minimum-distance-to-means classifier, is described. However, the results for this classifier were disappointing, being lower in most cases than those of the minimum distance or decision tree procedures. The classification results using only tonal features were felt to be unacceptably poor, so textural attributes were investigated. Texture is an important attribute used by photogeologists to discriminate lithology. In the case of TM data, texture measures were found to increase the classification accuracy by up to 15%; in the case of the LANDSAT MSS data, however, texture measures did not provide any significant increase in accuracy. For TM data, it was found that second-order texture, especially the SGLDM-based measures, produced the highest classification accuracy. Contextual post-processing was found to increase classification accuracy and to improve the visual appearance of the classified output by removing isolated misclassified pixels, which tend to clutter classified images. Simple contextual features, such as mode filters, were found to outperform more complex features such as the gravitational filter or minimal-area-replacement methods; generally, the larger the filter, the greater the increase in accuracy. Production rules were used to build a knowledge-based system which used tonal and textural features to identify sedimentary lithologies in each of the two test sites. The knowledge-based system was able to identify six out of ten lithologies correctly.
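Of the classifiers compared, the minimum-distance-to-means rule is the simplest, which is why it serves as the speed baseline above. The sketch below shows the idea on synthetic data; the band values and class means are random placeholders, not the Argentine test-site data.

```python
import numpy as np

def min_distance_classify(image: np.ndarray, class_means: np.ndarray) -> np.ndarray:
    """Assign each pixel to the class whose mean spectrum is nearest.

    image:       (rows, cols, bands) array of pixel spectra
    class_means: (n_classes, bands) array of per-class training means
    """
    pixels = image.reshape(-1, image.shape[-1])                      # (N, bands)
    # Squared Euclidean distance from every pixel to every class mean:
    d2 = ((pixels[:, None, :] - class_means[None, :, :]) ** 2).sum(axis=2)
    return d2.argmin(axis=1).reshape(image.shape[:2])

rng = np.random.default_rng(1)
img = rng.random((64, 64, 6))        # e.g. six reflective TM bands
means = rng.random((4, 6))           # four hypothetical lithology classes
labels = min_distance_classify(img, means)
```

The contextual post-processing described above then amounts to replacing each label with the modal label of its neighbourhood, which removes the isolated misclassified pixels.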
Abstract:
In the case of surgical scalpels, blade retraction and disposability have been incorporated into a number of commercial designs to address sharps-injury and infection-transmission issues. Despite these new designs, the traditional metal reusable scalpel is still extensively used, and this paper attempts to determine whether the introduction of safety features has compromised the ergonomics, and hence potentially the take-up, of the newer designs. Examples of scalpels have been analysed to determine the ergonomic impact of these design changes. Trials and questionnaires were carried out using both clinical and non-clinical user groups, with the trials making use of incision-quality assessment, cutting-force measurement, electromyography, and video monitoring. The results showed that ergonomic performance was altered by the design changes and that, while these could be for the worse, the introduction of safety features could act as a catalyst to encourage re-evaluation of the ergonomic demands of a highly traditional product.
Abstract:
MRI of fluids containing lipid-coated microbubbles has been shown to be an effective tool for measuring the local fluid pressure. However, the intrinsically buoyant nature of these microbubbles precludes lengthy measurements due to their vertical migration under gravity and pressure-induced coalescence. A novel preparation is presented which is shown to minimize both these effects for at least 25 min. By using a 2% polysaccharide gel base with a small concentration of glycerol and 1,2-distearoyl-sn-glycero-3-phosphocholine-coated gas microbubbles, MR measurements are made for pressures between 0.95 and 1.44 bar. The signal drifts due to migration and amalgamation are shown to be minimized for such an experiment, whilst yielding very high NMR sensitivities of up to 38% signal change per bar.
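To put the quoted sensitivity in context, and assuming a linear signal-pressure response (an assumption for illustration, not a claim from the abstract), the figures imply a maximum observable change of roughly 19% across the measured range:

```python
sensitivity = 0.38           # fractional signal change per bar (quoted)
p_min, p_max = 0.95, 1.44    # bar, measurement range from the abstract
delta_signal = sensitivity * (p_max - p_min)
print(f"{delta_signal:.1%}")  # ~18.6% signal change over the full range
```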
Abstract:
This paper investigates empirically the importance of technological catch-up in explaining productivity growth in a sample of countries since the 1960s. New proxies for a country's absorptive capability, based on data for students studying abroad, telecommunications, and publications, are tested in regression models. The results indicate that absorptive capability is a factor in explaining growth, with the most robust finding being that countries with relatively high numbers of students studying science or engineering abroad experience faster subsequent growth. However, the paper also shows that the significance of coefficients varies across specifications and samples, suggesting caution in focusing on individual results.
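A stylized version of this kind of catch-up regression is sketched below. The variable names, generated data, and specification are hypothetical, indicative only of the technology-gap literature rather than of the paper's exact model: growth is regressed on the productivity gap, an absorptive-capability proxy, and their interaction.

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(2)
n = 80                                    # hypothetical country sample
gap = rng.uniform(0.1, 2.0, n)            # log productivity gap to the frontier
students = rng.uniform(0.0, 1.0, n)       # proxy: science/engineering students abroad
growth = 0.01 + 0.02 * gap * students + rng.normal(0.0, 0.01, n)

# Catch-up enters through the gap interacted with absorptive capability.
X = sm.add_constant(np.column_stack([gap, students, gap * students]))
model = sm.OLS(growth, X).fit()
print(model.summary())                    # constant, gap, proxy, interaction
```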
Abstract:
The accurate identification of T-cell epitopes remains a principal goal of bioinformatics within immunology. As the immunogenicity of peptide epitopes is dependent on their binding to major histocompatibility complex (MHC) molecules, the prediction of binding affinity is a prerequisite to the reliable prediction of epitopes. The iterative self-consistent (ISC) partial-least-squares (PLS)-based additive method is a recently developed bioinformatic approach for predicting class II peptide-MHC binding affinity. The ISC-PLS method overcomes many of the conceptual difficulties inherent in the prediction of class II peptide-MHC affinity, such as the binding of a mixed population of peptide lengths due to the open-ended class II binding site. The method has applications in both the accurate prediction of class II epitopes and the manipulation of affinity for heteroclitic and competitor peptides. The method is applied here to six class II mouse alleles (I-Ab, I-Ad, I-Ak, I-As, I-Ed, and I-Ek) and includes peptides of up to 25 amino acids in length. A series of regression equations highlighting the quantitative contributions of individual amino acids at each peptide position was established. The initial model for each allele exhibited only moderate predictivity. Once the set of selected peptide subsequences had converged, the final models exhibited satisfactory predictive power. Convergence was reached between the 4th and 17th iterations, and the leave-one-out cross-validation statistics (q2, SEP, and NC) ranged between 0.732 and 0.925, 0.418 and 0.816, and 1 and 6, respectively. The non-cross-validated statistics r2 and SEE ranged between 0.98 and 0.995 and between 0.089 and 0.180, respectively. The peptides used in this study are available from the AntiJen database (http://www.jenner.ac.uk/AntiJen). The PLS method is available commercially in the SYBYL molecular modeling software package. The resulting models, which can be used for accurate T-cell epitope prediction, will be made freely available online (http://www.jenner.ac.uk/MHCPred).
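The cross-validation statistics quoted (q2 and SEP) can be reproduced mechanically with any PLS implementation. The sketch below uses scikit-learn rather than the SYBYL package named in the abstract, and the indicator matrix is random placeholder data standing in for position-wise amino-acid encodings of the peptides.

```python
import numpy as np
from sklearn.cross_decomposition import PLSRegression
from sklearn.model_selection import LeaveOneOut, cross_val_predict

rng = np.random.default_rng(0)
X = rng.integers(0, 2, size=(60, 25 * 20)).astype(float)  # position x amino acid
y = rng.normal(size=60)                                    # log binding affinity

pls = PLSRegression(n_components=3)                        # NC, chosen by validation
y_cv = cross_val_predict(pls, X, y, cv=LeaveOneOut()).ravel()

press = np.sum((y - y_cv) ** 2)            # predictive residual sum of squares
ss = np.sum((y - y.mean()) ** 2)
q2 = 1.0 - press / ss                      # leave-one-out predictive power
sep = np.sqrt(press / len(y))              # standard error of prediction
print(q2, sep)
```

The ISC part of the published method additionally re-selects the aligned peptide subsequences between PLS fits until they converge, which is the iteration whose 4th-to-17th-cycle convergence is reported above.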
Abstract:
Inspired by the human visual cognition mechanism, this paper first presents a scene classification method based on an improved standard model feature. Compared with state-of-the-art efforts in scene classification, the newly proposed method is more robust, more selective, and of lower complexity. These advantages are demonstrated by two sets of experiments on both our own database and standard public ones. Furthermore, the occlusion and disorder problems in scene classification for video surveillance are also studied for the first time in this paper. © 2010 IEEE.
Abstract:
This paper proposes a novel dc-dc converter topology that achieves an ultrahigh step-up ratio while maintaining a high conversion efficiency. It adopts a three-degree-of-freedom approach in the circuit design and combines the features of modularity, electrical isolation, soft switching, and low voltage stress on the switching devices; it is thus considered an improvement over traditional dc-dc converter topologies. New control strategies, including two-section output voltage control and cell idle control, are also developed to improve the converter performance. With the cell idle control, the secondary winding inductance of the idle module is bypassed to decrease its power loss. A 400-W dc-dc converter is prototyped and tested to verify the proposed techniques, in addition to a simulation study. The step-up conversion ratio can reach 1:14 with a peak efficiency of 94%, and the proposed techniques can be applied to a wide range of high-voltage, high-power distributed generation and dc power transmission applications.
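The headline figures can be cross-checked with simple arithmetic; the input voltage below is a hypothetical example chosen to be consistent with the quoted 1:14 ratio, not a value from the paper.

```python
p_out = 400.0            # W, prototype rating
efficiency = 0.94        # quoted peak efficiency
p_in = p_out / efficiency
print(f"input {p_in:.1f} W, loss {p_in - p_out:.1f} W")   # ~425.5 W in, ~25.5 W lost

v_in = 28.6              # V, hypothetical input
v_out = v_in * 14.0      # 1:14 step-up ratio
print(f"{v_in} V -> {v_out:.0f} V")                        # ~400 V output
```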
Abstract:
In ensuring the quality of learning and teaching in Higher Education, self-evaluation is an important component of the process. An example is the approach taken within the CDIO community, whereby self-evaluation against the CDIO standards is part of the quality assurance process. Eight European universities (Reykjavik University, Iceland; Turku University of Applied Sciences, Finland; Aarhus University, Denmark; Helsinki Metropolia University of Applied Sciences, Finland; Umeå University, Sweden; Telecom Bretagne, France; Aston University, United Kingdom; Queen's University Belfast, United Kingdom) are engaged in an EU-funded Erasmus+ project that is exploring the quality assurance process associated with active learning. The project is developing a new self-evaluation framework that feeds into a 'Marketplace', where participating institutions can be paired up and then engage in peer evaluation and sharing around each institution's approach to, and implementation of, active learning. All of the partner institutions apply CDIO within their engineering programmes, and this has provided a common starting point for the partnership to form and the project to be developed. Although the initial focus is CDIO, the longer-term aim is that the approach could be of value beyond CDIO and within other disciplines. The focus of this paper is the process by which the self-evaluation framework is being developed and the form of the draft framework. In today's Higher Education environment, the need to comply with quality assurance standards is an ever-present feature of programme development and review. When engaging in a project that spans several countries, the wealth of applicable standards and guidelines is significant. In working towards the development of a robust self-evaluation framework for this project, the project team decided to take a wide view of the available resources to ensure a full consideration of different requirements and practices. The approach to developing the framework considered: (a) institutional standards and processes; (b) national standards and processes, e.g. the QAA in the UK; (c) documents relating to regional/global accreditation schemes, e.g. ABET; and (d) requirements and guidelines relating to particular learning and teaching frameworks, e.g. CDIO. The resulting draft self-evaluation framework will first be implemented within the project team to support the initial 'Marketplace' pairing process. Following this initial work, changes will be considered before a final version is made available as part of the project outputs. Particular consideration has been paid to the extent of the framework, as a key objective of the project is to ensure that the approach to quality assurance has impact but is not overly demanding in terms of time or paperwork; in other words, that it is focused on action and on value added to staff, students, and the programmes being considered.
Abstract:
In this paper we compare the robustness of several types of stylistic markers for discriminating authorship at sentence level. We train an SVM-based classifier using each set of features separately and perform sentence-level authorship analysis over a corpus of editorials published in a Portuguese quality newspaper. Results show that features based on POS information, punctuation, and word/sentence length contribute to a more robust sentence-level authorship analysis. © Springer-Verlag Berlin Heidelberg 2010.
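A minimal sentence-level pipeline in the spirit of the paper, an SVM over punctuation and length features, might look like the sketch below. The toy sentences, labels, and feature set are illustrative only, and the POS features used in the paper are omitted for brevity.

```python
import numpy as np
from sklearn.svm import LinearSVC

def stylistic_features(sentence: str) -> list[float]:
    """Crude stylistic markers: lengths and punctuation counts."""
    words = sentence.split()
    mean_word_len = float(np.mean([len(w) for w in words])) if words else 0.0
    return [
        float(len(words)),                               # sentence length in words
        mean_word_len,                                   # mean word length
        float(sum(sentence.count(c) for c in ",;:")),    # internal punctuation
        float(sentence.count("!") + sentence.count("?")),
    ]

sentences = ["Indeed, the committee erred; gravely so.",
             "Prices rose again.",
             "Who benefits? Not the reader!",
             "The budget, however, remains opaque; oddly."]
authors = [0, 1, 1, 0]                                   # toy author labels

clf = LinearSVC().fit([stylistic_features(s) for s in sentences], authors)
print(clf.predict([stylistic_features("Costs, alas, climbed; steadily.")]))
```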