23 results for Hardware and Architecture

in Aston University Research Archive


Relevance:

90.00%

Publisher:

Abstract:

The integration of a microprocessor and a medium-power stepper motor in one control system brings together two quite different disciplines. Various methods of interfacing are examined and the problems involved in both hardware and software manipulation are investigated. Microprocessor open-loop control of the stepper motor is considered. The possible advantages of microprocessor closed-loop control are examined and the development of a system is detailed. The system uses position feedback to initiate each motor step. Results of the dynamic response of the system are presented and its performance discussed. Applications of the static torque characteristic of the stepper motor are considered, followed by a review of methods of predicting the characteristic. This shows that accurate results are possible only when the effects of magnetic saturation are avoided or when the machine is available for magnetic circuit tests to be carried out. A new method of predicting the static torque characteristic is explained in detail. The method described uses the machine geometry and the magnetic characteristics of the types of iron used in the machine. From this information the permeance of each iron component of the machine is calculated and, by using the equivalent magnetic circuit of the machine, the total torque produced is predicted. It is shown how this new method is implemented on a digital computer and how the model may be used to investigate further aspects of the stepper motor in addition to the static torque.
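
By way of illustration only, the sketch below implements the permeance-based torque idea in miniature for a single phase; every machine parameter is invented, whereas the thesis derives component permeances from the actual machine geometry and iron characteristics:

```python
import numpy as np

# Toy equivalent-magnetic-circuit model of one phase acting across a
# tooth-modulated air gap. All parameter values below are invented;
# the thesis computes the permeance of each iron component from the
# real machine geometry and iron magnetisation characteristics.

N_TURNS = 200                    # winding turns (assumed)
I_PHASE = 1.5                    # phase current, A (assumed)
TEETH = 50                       # rotor tooth count (typical hybrid stepper)
P_MAX, P_MIN = 2.0e-6, 0.4e-6    # aligned/unaligned gap permeance, H (assumed)

def gap_permeance(theta):
    """Air-gap permeance vs rotor angle, modelled as sinusoidal -- a
    common first approximation when magnetic saturation is avoided."""
    return 0.5 * (P_MAX + P_MIN) + 0.5 * (P_MAX - P_MIN) * np.cos(TEETH * theta)

# At constant current, static torque is the derivative of magnetic
# coenergy W = 0.5 * (N*I)^2 * P(theta) with respect to rotor angle.
theta = np.linspace(0.0, 2 * np.pi / TEETH, 2000)   # one tooth pitch
coenergy = 0.5 * (N_TURNS * I_PHASE) ** 2 * gap_permeance(theta)
torque = np.gradient(coenergy, theta)

print(f"peak static torque ~ {np.abs(torque).max():.2f} N*m")
```

In this linear (unsaturated) approximation the permeance variation alone fixes the torque waveshape, which is why the abstract notes that accurate prediction is possible only when saturation effects are avoided or measured.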

Relevance:

90.00%

Publisher:

Abstract:

The matched filter detector is well known as the optimum detector for use in communication, as well as in radar systems, for signals corrupted by Additive White Gaussian Noise (A.W.G.N.). Non-coherent F.S.K. and differentially coherent P.S.K. (D.P.S.K.) detection schemes, which employ a new approach in realizing the matched filter processor, are investigated. The new approach utilizes pulse compression techniques, well known in radar systems, to facilitate the implementation of the matched filter in the form of the Pulse Compressor Matched Filter (P.C.M.F.). Both detection schemes feature a mixer-P.C.M.F. compound as their predetector processor. The compound is utilized to convert F.S.K. modulation into pulse position modulation, and P.S.K. modulation into pulse polarity modulation. The mechanisms of both detection schemes are studied through examining the properties of the autocorrelation function (A.C.F.) at the output of the P.C.M.F. The effects produced by time delay and carrier interference on the output A.C.F. are determined. Work related to the F.S.K. detection scheme is mostly confined to verifying its validity, whereas the D.P.S.K. detection scheme has not been reported before. Consequently, an experimental system was constructed, which utilized combined hardware and software, and operated under the supervision of a microprocessor system. The experimental system was used to develop error-rate models for both detection schemes under investigation. Performances of both F.S.K. and D.P.S.K. detection schemes were established in the presence of A.W.G.N., practical imperfections, time delay, and carrier interference. The results highlight the candidacy of both detection schemes for use in the field of digital data communication and, in particular, the D.P.S.K. detection scheme, which performed very close to optimum in a background of A.W.G.N.
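
The following toy sketch, with invented waveform parameters, shows the pulse-compression principle in software: correlating the received signal against a known chirp reference compresses it into its autocorrelation function, whose peak timing and polarity carry the information after the mixer stage.

```python
import numpy as np

# Toy software analogue of a pulse-compressor matched filter (P.C.M.F.):
# correlate the received signal with a known linear-FM (chirp) reference.
# The compressed output approximates the reference's autocorrelation
# function; its peak position and polarity are the quantities the
# abstract's detection schemes exploit. All parameters are invented.

fs = 1.0e6                        # sample rate, Hz
T = 1.0e-3                        # pulse duration, s
t = np.arange(0.0, T, 1.0 / fs)
f0, bw = 50e3, 100e3              # chirp start frequency and sweep, Hz

chirp = np.cos(2 * np.pi * (f0 * t + 0.5 * (bw / T) * t ** 2))

rng = np.random.default_rng(0)
rx = chirp + 0.5 * rng.standard_normal(chirp.size)   # A.W.G.N. channel

acf = np.correlate(rx, chirp, mode="full")           # matched filtering
peak = int(np.argmax(np.abs(acf)))
delay = peak - (chirp.size - 1)                      # 0 == perfect alignment

print(f"compressed peak at delay {delay} samples, "
      f"polarity {'+' if acf[peak] > 0 else '-'}")
```

A pulse-position shift of the peak would correspond to F.S.K. after the mixer stage, and a polarity flip to P.S.K., which is the conversion the mixer-P.C.M.F. compound performs.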

Relevance:

90.00%

Publisher:

Abstract:

The 5-HT3 receptors are members of the cys-loop family of ligand-gated ion channels. Two functional subtypes are known, the homomeric 5-HT3A and the heteromeric 5-HT3A/B receptors, which exhibit distinct biophysical characteristics but are difficult to differentiate pharmacologically. Atomic force microscopy has been used to determine the stoichiometry and architecture of the heteromeric 5-HT3A/B receptor. Each subunit was engineered to express a unique C-terminal epitope tag, together with six sequential histidine residues to facilitate nickel affinity purification. The 5-HT3 receptors, ectopically expressed in HEK293 cells, were solubilised, purified and decorated with antibodies to the subunit-specific epitope tags. Imaging of individual receptors by atomic force microscopy revealed a pentameric arrangement of subunits in the order BBABA, reading anti-clockwise when viewed from the extracellular face. Homology models for the heteromeric receptor were then constructed using both the electron microscopic structure of the nicotinic acetylcholine receptor, from Torpedo marmorata, and the X-ray crystallographic structure of the soluble acetylcholine binding protein, from Lymnaea stagnalis, as templates. These homology models were used, together with equivalent models constructed for the homomeric receptor, to interpret mutagenesis experiments designed to explore the minimal recognition differences of both the natural agonist, 5-HT, and the competitive antagonist, granisetron, for the two human receptor subtypes. The results of this work revealed that the 5-HT3B subunit residues within the ligand binding site, for both the agonist and antagonist, are accommodating to conservative mutations. They are consistent with the view that the 5-HT3A subunit provides the principal and the 5-HT3B subunit the complementary recognition interactions at the binding interface.

Relevance:

90.00%

Publisher:

Abstract:

This work attempts to create a systemic design framework for man-machine interfaces which is self-consistent, compatible with other concepts, and applicable to real situations. This is tackled by examining the current architecture of computer applications packages. The treatment in the main is philosophical and theoretical, and analyses the origins, assumptions and current practice of the design of applications packages. It proposes that the present form of packages is fundamentally contradictory to the notion of packaging itself, because as an indivisible ready-to-implement solution, current package architecture displays the following major disadvantages. First, it creates problems as a result of user-package interactions, in which the designer tries to mould all potential individual users, no matter how diverse they are, into one model. This is worsened by minimal provision, if any, of important properties such as flexibility, independence and impartiality. Second, it displays a rigid structure that reduces the variety and/or multi-use of the component parts of such a package. Third, it dictates specific hardware and software configurations, which tends to reduce the number of degrees of freedom of its user. Fourth, it increases the dependence of its user upon its supplier through inadequate documentation and understanding of the package. Fifth, it tends to cause a degeneration of the design expertise of data processing practitioners. In view of this understanding, an alternative methodological design framework, consistent both with the systems approach and with the role of a package in its likely context, is proposed. The proposition is based upon an extension of the identified concept of the hierarchy of holons, which facilitates the examination of the complex relationships of a package with its two principal environments: first, the user characteristics and decision-making practice and procedures, implying an examination of the user's M.I.S. network; second, the software environment and its influence upon a package regarding support, control and operation of the package. The framework is built gradually as discussion advances around the central theme of a compatible M.I.S., software and model design. This leads to the formation of an alternative package architecture based upon the design of a number of independent, self-contained small parts. This is believed to constitute a nucleus around which not only can packages be more effectively designed, but which is also applicable to the design of many man-machine systems.

Relevance:

90.00%

Publisher:

Abstract:

The fossil arthropod Class Trilobita is characterised by the possession of a highly mineralised dorsal exoskeleton with an incurved marginal flange (doublure). This cuticle is usually the only part of the organism to be preserved. Despite the common occurrence of trilobites in Palaeozoic sediments, the original exoskeletal mineralogy has not been determined previously. Petrographic data involving over seventy trilobite species, ranging in age from Cambrian to Devonian, together with atomic absorption and stable isotope analyses, indicate a primary low-magnesian calcite composition. Trilobite cuticles exhibit a variety of preservational textures which are related to the different diagenetic realms through which they have passed. A greater knowledge of post-depositional processes, and the specific features they produce, has enabled post-mortem artefacts to be distinguished from primary cuticular microstructures. Alterations of the cuticle can either enhance or destroy primary features, and their effects are best observed in thin-sections, both under transmitted light and cathodoluminescence. Well-preserved trilobites often retain primary microstructures such as laminations, canals, and tubercles. These have been examined in stained thin-sections and by scanning electron microscopy, from as wide a range of trilobites as possible. Construction of sensory field maps has shown that although the basic organisation of the exoskeleton is the same in all trilobites, the types of microstructures found, and their distribution, are species-specific. The composition, microstructure, and architecture of the trilobite exoskeleton have also been studied from a biomechanical viewpoint. Total cuticle thickness and the relative proportions of the different layers, together with the overall architecture, all affected the mechanical properties of the exoskeleton.

Relevance:

90.00%

Publisher:

Abstract:

This thesis describes an investigation of methods by which both repetitive and non-repetitive electrical transients in an HVDC converter station may be controlled for minimum overall cost. Several methods of inrush control are proposed and studied. The preferred method, whose development is reported in this thesis, would utilize two magnetic materials, one of which is assumed to be lossless while the other has controlled eddy-current losses. Mathematical studies are performed to assess the optimum characteristics of these materials, such that inrush current is suitably controlled for a minimum saturation flux requirement. Subsequent evaluation of the cost of hardware and capitalized losses of the proposed inrush control indicates that a cost reduction of approximately 50% is achieved, in comparison with the inrush control hardware for the Sellindge converter station. Further mathematical studies are carried out to prove the adequacy of the proposed inrush control characteristics for controlling voltage and current transients during both repetitive and non-repetitive operating conditions. The results of these proving studies indicate that no change in the proposed characteristics is required to ensure that the integrity of the thyristors is maintained.
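
A hedged numerical sketch of the underlying principle follows, with invented circuit values; the thesis optimises real two-material core characteristics, which this toy single-material model does not attempt:

```python
# Minimal numerical sketch, with invented values, of why a saturable
# reactor controls thyristor inrush: the core presents a high incremental
# inductance until the applied volt-seconds drive its flux linkage to
# saturation, which caps di/dt during the critical turn-on interval.

V_STEP = 1000.0     # voltage impressed across the reactor, V (assumed)
FLUX_SAT = 0.05     # saturation flux linkage, V*s (assumed)
L_UNSAT = 0.1       # unsaturated incremental inductance, H (assumed)

dt = 1.0e-7         # integration step, s
t = i = flux = 0.0
while flux < FLUX_SAT:              # simple Euler integration to saturation
    i += (V_STEP / L_UNSAT) * dt    # di/dt limited by the unsaturated core
    flux += V_STEP * dt             # flux linkage integrates applied volts
    t += dt

print(f"core saturates after {t * 1e6:.0f} us at i = {i:.2f} A "
      f"(pre-saturation di/dt = {V_STEP / L_UNSAT:.0f} A/s)")
```

The trade-off the thesis optimises is visible even here: a larger saturation flux protects the thyristor for longer but costs more core material, hence the search for characteristics giving adequate control at minimum saturation flux.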

Relevance:

90.00%

Publisher:

Abstract:

Information technology is at the centre of today’s business environment. The increasing importance of e-commerce and the integration of information systems in all areas of a business mean it is crucial for managers to understand and implement information systems (IS). This major text, now in its second edition, provides the skills and knowledge necessary to choose the right systems, and to develop and manage them effectively. Business Information Systems: Technology, Development and Management assumes no prior knowledge of IS or IT, and emphasises the importance of IS to management decision making. It takes a three-part structure: Part One covers hardware and software technologies; Part Two looks at information systems analysis and design; and Part Three describes the strategic management of IS. This format allows each section to be studied alongside individual modules, and enables students to focus clearly on specific areas and use the book for more than one course. The book is suitable for college, undergraduate, and postgraduate students taking courses with modules in the practical IT skills of selection, implementation, management, and use of business information systems (BIS). The practical sections are also of use to managers in industry involved in the development and use of IS.

Relevance:

90.00%

Publisher:

Abstract:

As a new medium for questionnaire delivery, the internet has the potential to revolutionise the survey process. Online (web-based) questionnaires provide several advantages over traditional survey methods in terms of cost, speed, appearance, flexibility, functionality, and usability [1, 2]. For instance, delivery is faster, responses are received more quickly, and data collection can be automated or accelerated [1-3]. Online questionnaires can also provide many capabilities not found in traditional paper-based questionnaires: they can include pop-up instructions and error messages; they can incorporate links; and it is possible to encode difficult skip patterns, making such patterns virtually invisible to respondents. Like many new technologies, however, online questionnaires face criticism despite their advantages. Typically, such criticisms focus on the vulnerability of online questionnaires to the four standard survey error types: namely, coverage, non-response, sampling, and measurement errors. Although, like all survey errors, coverage error (“the result of not allowing all members of the survey population to have an equal or nonzero chance of being sampled for participation in a survey” [2, pg. 9]) also affects traditional survey methods, it is currently exacerbated in online questionnaires as a result of the digital divide. That said, many developed countries have reported substantial increases in computer and internet access and/or are targeting this as part of their immediate infrastructural development [4, 5]. These trends indicate that familiarity with information technologies is increasing, and suggest that coverage error will rapidly diminish to an acceptable level (for the developed world at least) in the near future, and in so doing positively reinforce the advantages of online questionnaire delivery. The second error type – the non-response error – occurs when individuals fail to respond to the invitation to participate in a survey or abandon a questionnaire before it is completed. Given today’s societal trend towards self-administration [2], the former is inevitable, irrespective of delivery mechanism. Conversely, non-response as a consequence of questionnaire abandonment can be relatively easily addressed. Unlike traditional questionnaires, the delivery mechanism for online questionnaires makes estimation of questionnaire length and time required for completion difficult, thus increasing the likelihood of abandonment. By incorporating a range of features into the design of an online questionnaire, it is possible to facilitate such estimation – and indeed, to provide respondents with context-sensitive assistance during the response process – and thereby reduce abandonment while eliciting feelings of accomplishment [6]. For online questionnaires, sampling error (“the result of attempting to survey only some, and not all, of the units in the survey population” [2, pg. 9]) can arise when all but a small portion of the anticipated respondent set is alienated (and so fails to respond) as a result of, for example, disregard for varying connection speeds, bandwidth limitations, browser configurations, monitors, hardware, and user requirements during the questionnaire design process. Similarly, measurement errors (“the result of poor question wording or questions being presented in such a way that inaccurate or uninterpretable answers are obtained” [2, pg. 11]) will lead to respondents becoming confused and frustrated.
Sampling, measurement, and non-response errors are likely to occur when an online questionnaire is poorly designed. Individuals will answer questions incorrectly, abandon questionnaires, and may ultimately refuse to participate in future surveys; thus, the benefit of online questionnaire delivery will not be fully realised. To prevent errors of this kind, and their consequences, it is extremely important that practical, comprehensive guidelines exist for the design of online questionnaires. Many design guidelines exist for paper-based questionnaires (e.g. [7-14]); the same is not true for the design of online questionnaires [2, 15, 16]. The research presented in this paper is a first attempt to address this discrepancy. Section 2 describes the derivation of a comprehensive set of guidelines for the design of online questionnaires and briefly (given space restrictions) outlines the essence of the guidelines themselves. Although online questionnaires reduce traditional delivery costs (e.g. paper, mail-out, and data entry), set-up costs can be high, given the need either to adopt and acquire training in questionnaire development software or to secure the services of a web developer. Neither approach, however, guarantees a good questionnaire (often because the person designing the questionnaire lacks relevant knowledge of questionnaire design). Drawing on existing software evaluation techniques [17, 18], we assessed the extent to which current questionnaire development applications support our guidelines; Section 3 describes the framework used for the evaluation, and Section 4 discusses our findings. Finally, Section 5 concludes with a discussion of further work.

Relevance:

90.00%

Publisher:

Abstract:

Changes in the international economic scenario in recent years have made it necessary for both industrial and service firms to reformulate their strategies, with a strong focus on the resources required for successful implementation. In this scenario, information and communication technologies (ICT) have a potentially vital role to play, both as a key resource for re-engineering business processes within a framework of direct connection between suppliers and customers, and as a source of cost optimisation. There have also been innovations in the logistics and freight transport industry in relation to ICT diffusion. The implementation of such systems by third-party logistics providers (3PL) allows the real-time exchange of information between supply chain partners, thereby improving planning capability and customer service. Nevertheless, the logistics and freight transport industry is lagging somewhat behind other sectors in ICT diffusion. This situation is to be attributed to a series of both industry-specific and other factors, such as: (a) traditional resistance to change on the part of transport and logistics service providers; (b) the small size of firms, which places considerable constraints upon investment in ICT; (c) the relative shortage of user-friendly applications; (d) the diffusion of internal standards on the part of the main providers in the industry, whose aim is to protect company information, preventing its dissemination among customers and suppliers; (e) insufficient professional skills for using such technologies among staff in such firms. The latter point is of critical importance insofar as the adoption of ICT is making it increasingly necessary both to develop new technical skills to use different hardware and new software tools, and to be able to plan processes of communication so as to allow the optimal use of ICT. The aim of this paper is to assess the impact of ICT on the transport and logistics industry and to highlight how the use of such new technologies is affecting providers' training needs. The first part provides a conceptual framework of the impact of ICT on the transport and logistics industry. In the second part, the state of ICT dissemination in the Italian and Irish third-party logistics industry is outlined. In the third part, the impact of ICT on the training needs of transport and logistics service providers, based on case studies in both countries, is discussed. The implications of the foregoing for the development of appropriate training policies are considered. For the covering abstract see ITRD E126595.

Relevance:

90.00%

Publisher:

Abstract:

Cloud computing is a new technological paradigm offering computing infrastructure, software and platforms as a pay-as-you-go, subscription-based service. Many potential customers of cloud services require essential cost assessments to be undertaken before transitioning to the cloud. Current assessment techniques are imprecise as they rely on simplified specifications of resource requirements that fail to account for probabilistic variations in usage. In this paper, we address these problems and propose a new probabilistic pattern modelling (PPM) approach to cloud costing and resource usage verification. Our approach is based on a concise expression of probabilistic resource usage patterns translated to Markov decision processes (MDPs). Key costing and usage queries are identified and expressed in a probabilistic variant of temporal logic and calculated to a high degree of precision using quantitative verification techniques. The PPM cost assessment approach has been implemented as a Java library and validated with a case study and scalability experiments. © 2012 Springer-Verlag Berlin Heidelberg.
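
As a hedged illustration of the costing idea, not the paper's implementation, the sketch below encodes a toy resource-usage pattern as a discrete-time Markov chain, a special case of the MDPs used in the paper, and computes the expected total cost to completion by fixed-point iteration:

```python
import numpy as np

# Hedged toy version of the costing idea: a resource-usage pattern as a
# discrete-time Markov chain (a special case of the paper's MDPs), with
# the expected total cost to completion found by fixed-point iteration.
# States, probabilities, and hourly prices are all invented.

states = ["idle", "normal", "peak", "done"]
P = np.array([
    [0.60, 0.35, 0.00, 0.05],   # idle
    [0.20, 0.60, 0.15, 0.05],   # normal
    [0.00, 0.50, 0.45, 0.05],   # peak
    [0.00, 0.00, 0.00, 1.00],   # done (absorbing, free)
])
price = np.array([0.02, 0.10, 0.40, 0.00])   # cost per hour in each state

# Expected cost-to-absorption c satisfies c = price + P @ c, c[done] = 0.
c = np.zeros(len(states))
for _ in range(100_000):
    c_next = price + P @ c
    c_next[3] = 0.0
    if np.max(np.abs(c_next - c)) < 1e-12:
        c = c_next
        break
    c = c_next

print({s: round(v, 4) for s, v in zip(states, c)})
```

A quantitative verifier generalises this to nondeterministic choices (true MDPs) and to queries phrased in probabilistic temporal logic, which is what gives the paper's approach its precision guarantees.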

Relevance:

90.00%

Publisher:

Abstract:

This paper proposes an in situ diagnostic and prognostic (D&P) technology to monitor the health condition of insulated gate bipolar transistors (IGBTs) used in electric vehicles (EVs), with a focus on the IGBTs' solder-layer fatigue. The IGBTs' thermal impedance and junction temperature can be used as health indicators for through-life condition monitoring (CM), where the terminal characteristics are measured and the devices' internal temperature-sensitive parameters are employed as temperature sensors to estimate the junction temperature. An auxiliary power supply unit, which can be derived from the battery's 12-V dc supply, provides power to the in situ test circuits, and CM data can be stored in the on-board data-logger for further offline analysis. The proposed method is experimentally validated on the developed test circuitry and also compared with finite-element thermoelectrical simulation. The test results from thermal cycling are also compared with acoustic microscope and thermal images. The developed circuitry proves effective in detecting solder fatigue, while each IGBT in the converter can be examined sequentially during red-light stops or servicing. The D&P circuitry can utilize existing on-board hardware and be embedded in the IGBT's gate drive unit.
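
A minimal sketch of the temperature-sensitive-parameter principle the paper builds on, with invented calibration values:

```python
# Sketch of the temperature-sensitive-parameter (TSEP) idea behind the
# paper's condition monitoring. At a small sense current the IGBT's
# on-state voltage falls roughly linearly with junction temperature, so
# a two-point calibration lets the device act as its own thermometer.
# All calibration numbers below are invented but of a typical order.

T1, V1 = 25.0, 0.650     # calibration point 1: deg C, V (assumed)
T2, V2 = 125.0, 0.430    # calibration point 2: deg C, V (assumed)
K = (V2 - V1) / (T2 - T1)    # sensitivity, about -2.2 mV/K here

def junction_temp(v_ce_sense):
    """Invert the linear V_ce(Tj) calibration."""
    return T1 + (v_ce_sense - V1) / K

def thermal_impedance(t_junction, t_case, power_loss):
    """Zth = (Tj - Tcase) / Ploss; an upward Zth trend over thermal
    cycles is the solder-layer fatigue signature being monitored."""
    return (t_junction - t_case) / power_loss

tj = junction_temp(0.540)    # V_ce sampled in situ, e.g. at a red light
print(f"estimated Tj = {tj:.1f} degC")
print(f"Zth = {thermal_impedance(tj, t_case=40.0, power_loss=120.0):.3f} K/W")
```

Because solder delamination raises the junction-to-case thermal path resistance, tracking this estimated Zth over the vehicle's life is what turns a simple voltage measurement into a prognostic indicator.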

Relevance:

80.00%

Publisher:

Abstract:

The development of an information system in Caribbean public sector organisations is usually seen as a matter of installing hardware and software according to a directive from senior management, without much planning. This results in heavy investment in procuring hardware and software without improving overall system performance. Increasingly, Caribbean organisations are looking for assurances on information system performance before making investment decisions, not only to satisfy the funding agencies, but also to be competitive in this dynamic and global business world. This study demonstrates an information system planning approach using a process-reengineering framework. Firstly, the stakeholders for the business functions are identified, along with their relationships and requirements. Secondly, process reengineering is carried out to develop the system requirements. Accordingly, information technology is selected through detailed system requirement analysis. Thirdly, cost-benefit analysis, identification of critical success factors, and risk analysis are carried out to strengthen the selection. The entire methodology has been demonstrated through an information system project in the Barbados Drug Service, a public sector organisation in the Caribbean.

Relevance:

80.00%

Publisher:

Abstract:

Spread spectrum systems make use of radio frequency bandwidths which far exceed the minimum bandwidth necessary to transmit the basic message information. These systems are designed to provide satisfactory communication of the message information under difficult transmission conditions. Frequency-hopped multilevel frequency shift keying (FH-MFSK) is one of the many techniques used in spread spectrum systems. It is a combination of frequency hopping and time hopping. In this system many users share a common frequency band using code division multiplexing. Each user is assigned an address and the message is modulated onto the address. The receiver, knowing the address, decodes the received signal and extracts the message. This technique is suggested for digital mobile telephony. This thesis is concerned with an investigation of the possibility of utilising FH-MFSK for data transmission corrupted by additive white Gaussian noise (A.W.G.N.). Work related to FH-MFSK has so far been mostly confined to verifying its validity, and its performance in the presence of A.W.G.N. has not been reported before. An experimental system was therefore constructed, which utilised combined hardware and software and operated under the supervision of a microprocessor system. The experimental system was used to develop an error-rate model for the system under investigation. The performance of FH-MFSK for data transmission was established in the presence of A.W.G.N. and with deleted and delayed sample effects. Its capability for multiuser applications was determined theoretically. The results show that FH-MFSK is a suitable technique for data transmission in the presence of A.W.G.N.
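
The single-user essence of the address/message mechanism can be sketched as follows; the parameters and the simple majority-vote decoder are illustrative assumptions, not the thesis hardware:

```python
import numpy as np

# Toy single-user sketch of the FH-MFSK address/message mechanism: the
# user's address is a length-L hop sequence over q tones; the message
# symbol m is added modulo q to every chip. The receiver subtracts its
# known address and takes a majority vote, tolerating occasional chips
# corrupted by noise or other users. q, L, and the seed are arbitrary.

q, L = 16, 8                        # tone alphabet and chips per symbol
rng = np.random.default_rng(1)
address = rng.integers(0, q, L)     # the user's assigned hop pattern

def encode(m):
    """Tone index transmitted on each of the L chips."""
    return (address + m) % q

def decode(received):
    """De-hop with the known address, then majority-vote the symbol."""
    votes = (received - address) % q
    return int(np.bincount(votes, minlength=q).argmax())

m = 11
tx = encode(m)
tx[3] = rng.integers(0, q)          # one chip hit by interference
print(f"sent {m} -> decoded {decode(tx)}")
```

Because every chip carries the same symbol relative to the address, a few corrupted chips change isolated votes rather than the decision, which is what makes the scheme robust in a shared band.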

Relevance:

80.00%

Publisher:

Abstract:

The research, which was given the terms of reference "To cut the lead time for getting new products into volume production", was sponsored by a company which develops and manufactures telecommunications equipment. The research described was based on studies made of the development of two processors designed to control telephone exchanges in the public network. It was shown that for each of these products, which were large electronic systems containing both hardware and software, most of the lead time was taken up with development. About half of this time was consumed by activities associated with redesign resulting from changes found to be necessary after the original design had been built. Analysis of the causes of design changes showed the most significant to be Design Faults. The reasons why these predominated were investigated by seeking the collective opinion of design staff and their management using a questionnaire. Using the results of these studies to build upon the work of other authors, a model of the development process of large hierarchical systems is derived. An important feature of this model is its representation of the iterative loops caused by design changes. In order to reduce the development time, two closely related philosophies are proposed: first, that by spending more time at the early stages of development (detecting and remedying faults in the design), even greater savings can be made later on; and second, that the collective performance of the development organisation would be improved by increasing the amount and speed of feedback about that performance. A trial was performed to test these philosophies using readily available techniques for design verification. It showed that a saving of about 11 per cent would be made on the development time and that the philosophies might be applied equally successfully to other products and techniques.
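
The iterative-loop effect can be illustrated with a back-of-envelope model; all numbers below are invented and are not the thesis data:

```python
# Back-of-envelope model, with invented numbers, of the iterative
# redesign loops the thesis identifies. If each build/test pass has
# probability p of exposing a design fault that forces another pass,
# the expected number of passes is 1 / (1 - p); spending more time on
# early design verification lowers p and can shorten the overall lead
# time even though the design phase itself gets longer.

def expected_lead_time(t_design, t_build_test, p_fault):
    """Expected lead time with geometrically distributed redesign loops."""
    expected_passes = 1.0 / (1.0 - p_fault)
    return t_design + expected_passes * t_build_test

baseline = expected_lead_time(t_design=20, t_build_test=30, p_fault=0.5)
verified = expected_lead_time(t_design=26, t_build_test=30, p_fault=0.3)

print(f"baseline {baseline:.0f} weeks -> with early verification "
      f"{verified:.0f} weeks ({100 * (baseline - verified) / baseline:.0f}% saving)")
```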

Relevance:

80.00%

Publisher:

Abstract:

This thesis describes work undertaken in order to fulfil a need experienced in the Department of Educational Enquiry at the University of Aston in Birmingham for speech analysis facilities suitable for use in teaching and research work within the Department. The hardware and software developed during the research project provide displays of speech fundamental frequency and intensity in real time. The system is suitable for the provision of visual feedback of these parameters of a subject's speech in a learning situation, and overcomes the inadequacies of equipment currently used for this task in that it provides a clear indication of fundamental frequency contours as the subject is speaking. The thesis considers the use of such equipment in several related fields, and reviews the approaches that have been reported to one of the major problems of speech analysis, namely pitch-period estimation. A number of different systems are described, and their suitability for the present purposes is discussed. Finally, a novel method of pitch-period estimation is developed, and a speech analysis system incorporating this method is described. Comparison is made between the results produced by this system and those produced by a conventional speech spectrograph.
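
For comparison with the conventional approaches the thesis reviews, here is a minimal autocorrelation-based pitch estimator on a synthetic voiced frame; it is a standard baseline, not the novel method the thesis develops:

```python
import numpy as np

# Baseline autocorrelation pitch estimator, for flavour only: the thesis
# develops its own, different pitch-period estimation method, but this
# standard approach shows the underlying problem. A synthetic 150 Hz
# "voiced" frame stands in for real speech; all parameters are invented.

fs = 8000                              # sample rate, Hz
t = np.arange(0.0, 0.04, 1.0 / fs)    # 40 ms analysis frame
f0_true = 150.0
rng = np.random.default_rng(2)
frame = (np.sin(2 * np.pi * f0_true * t)
         + 0.4 * np.sin(2 * np.pi * 2 * f0_true * t)    # 2nd harmonic
         + 0.05 * rng.standard_normal(t.size))          # light noise

# Autocorrelation peaks at lags equal to the pitch period.
ac = np.correlate(frame, frame, mode="full")[frame.size - 1:]
lo, hi = int(fs / 400), int(fs / 60)   # plausible F0 range: 60-400 Hz
lag = lo + int(np.argmax(ac[lo:hi]))

print(f"estimated F0 = {fs / lag:.1f} Hz (true {f0_true:.0f} Hz)")
```

Frame-based estimators like this one trade latency for reliability, which is precisely the difficulty for real-time visual feedback applications of the kind the thesis addresses.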