Abstract:
This paper is devoted to the creation of cryptographic data security and the realization of a packet mode in a distributed information measurement and control system that implements methods of optical spectroscopy for research in plasma physics and atomic collisions. The system provides remote access to information and hardware resources for the natural sciences within Intranet/Internet networks. Access to physical equipment is realized through standard interface servers (PXI, CAMAC, and GPIB), a server providing access to Ethernet devices, and a communication server that integrates the equipment servers into a unified information system. The system is used to carry out research tasks in optical spectroscopy, as well as to support teaching at the Department of Physics and Engineering of Petrozavodsk State University.
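To make the "packet mode" concrete, the sketch below shows one conventional way a communication server could frame request/response packets to an equipment server over TCP. It is illustrative only: the length-prefixed layout, host name, port and command string are hypothetical and are not taken from the system described above.

```python
# Illustrative sketch (not the system's actual protocol): length-prefixed
# packet framing between a client and an equipment server over TCP.
import socket
import struct

def send_packet(sock: socket.socket, payload: bytes) -> None:
    # 4-byte big-endian length header followed by the payload.
    sock.sendall(struct.pack(">I", len(payload)) + payload)

def _recv_exactly(sock: socket.socket, n: int) -> bytes:
    buf = b""
    while len(buf) < n:
        chunk = sock.recv(n - len(buf))
        if not chunk:
            raise ConnectionError("peer closed the connection")
        buf += chunk
    return buf

def recv_packet(sock: socket.socket) -> bytes:
    (length,) = struct.unpack(">I", _recv_exactly(sock, 4))
    return _recv_exactly(sock, length)

if __name__ == "__main__":
    # Hypothetical equipment-server address and command, for illustration only.
    with socket.create_connection(("equipment-server.example", 5025)) as s:
        send_packet(s, b"READ_SPECTRUM channel=1")
        print(recv_packet(s))
```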
Abstract:
Various digital watermarking (WM) techniques for still images have been studied in the last several years. Recently, many new WM schemes have been proposed for other types of digital multimedia data, such as text, audio and video. This paper presents a brief overview of existing digital video WM techniques. We classify WM techniques and discuss the properties of video WM. Since each WM application has its own specific requirements, WM design must take the intended application into consideration. Video WM applications are also discussed in the paper. The features of software and hardware video WM implementations, and the differences between them, are presented through descriptions of four examples of existing work.
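As a concrete illustration of the embed/extract idea that all WM schemes share, here is a toy spatial-domain LSB sketch in Python/NumPy. It is not one of the surveyed schemes; the array shapes and bit-plane choice are assumptions for illustration, and practical video WM uses transform-domain embedding and perceptual models for robustness.

```python
# Toy illustration only: naive LSB watermark embedded into every frame of a
# video stored as a (T, H, W) uint8 array of luma samples.
import numpy as np

def embed_lsb(frames: np.ndarray, watermark_bits: np.ndarray) -> np.ndarray:
    """frames: (T, H, W) uint8; watermark_bits: (H, W) array of 0/1."""
    wm = watermark_bits.astype(np.uint8)
    # Clear the least significant bit of every pixel, then write the mark.
    return (frames & 0xFE) | wm

def extract_lsb(frames: np.ndarray) -> np.ndarray:
    # Recover the bit plane carrying the mark.
    return frames & 0x01

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    video = rng.integers(0, 256, size=(8, 64, 64), dtype=np.uint8)
    mark = rng.integers(0, 2, size=(64, 64), dtype=np.uint8)
    marked = embed_lsb(video, mark)
    assert np.array_equal(extract_lsb(marked)[0], mark)
```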
Abstract:
BACKGROUND: Standardised packaging (SP) of tobacco products is an innovative tobacco control measure opposed by transnational tobacco companies (TTCs) whose responses to the UK government's public consultation on SP argued that evidence was inadequate to support implementing the measure. The government's initial decision, announced 11 months after the consultation closed, was to wait for 'more evidence', but four months later a second 'independent review' was launched. In view of the centrality of evidence to debates over SP and TTCs' history of denying harms and manufacturing uncertainty about scientific evidence, we analysed their submissions to examine how they used evidence to oppose SP. METHODS AND FINDINGS: We purposively selected and analysed two TTC submissions using a verification-oriented cross-documentary method to ascertain how published studies were used and interpretive analysis with a constructivist grounded theory approach to examine the conceptual significance of TTC critiques. The companies' overall argument was that the SP evidence base was seriously flawed and did not warrant the introduction of SP. However, this argument was underpinned by three complementary techniques that misrepresented the evidence base. First, published studies were repeatedly misquoted, distorting the main messages. Second, 'mimicked scientific critique' was used to undermine evidence; this form of critique insisted on methodological perfection, rejected methodological pluralism, adopted a litigation (not scientific) model, and was not rigorous. Third, TTCs engaged in 'evidential landscaping', promoting a parallel evidence base to deflect attention from SP and excluding company-held evidence relevant to SP. The study's sample was limited to sub-sections of two out of four submissions, but leaked industry documents suggest at least one other company used a similar approach. CONCLUSIONS: The TTCs' claim that SP will not lead to public health benefits is largely without foundation. The tools of Better Regulation, particularly stakeholder consultation, provide an opportunity for highly resourced corporations to slow, weaken, or prevent public health policies.
Abstract:
To explore the views of pharmacy and rheumatology stakeholders about system-related barriers to medicines optimisation activities with young people with long-term conditions. A three-phase consensus-building study comprising (1) focus groups with community and hospital pharmacists; (2) semi-structured telephone interviews with lay and professional adolescent rheumatology stakeholders and pharmacy policymakers, and (3) multidisciplinary discussion groups with community and hospital pharmacists and rheumatology staff. Qualitative verbatim transcripts from phases 1 and 2 were subjected to framework analysis. Themes from phase 1 underpinned a briefing for phase 2 interviewees. Themes from phases 1 and 2 generated elements of good pharmacy practice and current/future pharmacy roles for ranking in phase 3. Results from phase 3 prioritisation and ranking exercises were captured on self-completion data collection forms, entered into an Excel spreadsheet and subjected to descriptive statistical analysis. Institutional ethical approval was given by Aston University Health and Life Sciences Research Ethics Committee. Four focus groups were conducted with 18 pharmacists across England, Scotland and Wales (7 hospital, 10 community and 1 community/public health). Fifteen stakeholders took part in telephone interviews (3 pharmacist commissioners; 2 pharmacist policymakers; 2 pharmacy staff members (1 community and 1 hospital); 4 rheumatologists; 1 specialist nurse, and 3 lay juvenile arthritis advocates). Twenty-five participants took part in three discussion groups in adolescent rheumatology centres across England and Scotland (9 community pharmacists; 4 hospital pharmacists; 6 rheumatologists; 5 specialist nurses, and 1 physiotherapist). In all phases of the study, system-level issues were acknowledged as barriers to more engagement with young people and families. Community pharmacists in the focus groups reported that opportunities for engaging with young people were low if parents collected prescriptions alone, a view shared by other stakeholders. Moreover, institutional/company prescription collection policies, under which a young person under 16 may not generally collect prescriptions without an accompanying parent, were identified by hospital and community pharmacists as barriers to open discussion and engagement. Few community pharmacists reported using the Medicines Use Review (England/Wales) or Chronic Medication Service (Scotland) as a medicines optimisation activity with young people; many were unsure about consent procedures. Despite these limitations, rheumatology stakeholders ranked highly the potential of pharmacists empowering young people with general health care skills, such as repeat prescription ordering. The pharmacy profession lacks vision for its role in the care of young people with long-term conditions. Pharmacists and rheumatology stakeholders identified system-level barriers to more engagement with young people who take medicines regularly. We acknowledge that the modest number of participants may have had a specific interest in, and thus bias towards, the topic, but this underscores their frank admission of the challenges. Professional guidance and policy, practice frameworks and institutional/company policies must promote flexibility for pharmacy staff to recognise and empower young people who are able to give consent and take responsibility for medicines activities. This will increase mutual confidence and trust, and foster pharmacy's role in teaching general health care skills. In this way, pharmacists will be able to build long-term relationships with young people and families.
Abstract:
Three new technologies have been brought together to develop a miniaturized radiation monitoring system. The research involved (1) investigation of a new HgI$_2$ detector, (2) VHDL modeling, (3) FPGA implementation, and (4) in-circuit verification. The packages used included an EG&G HgI$_2$ crystal manufactured at zero gravity, Viewlogic's VHDL and synthesis tools, Xilinx's technology library and FPGA implementation tool, and a high-density device (XC4003A). The results show: (1) reduced cycle time between design and hardware implementation; (2) unlimited re-design and implementation using static RAM technology; (3) customer-based design, verification, and system construction; (4) suitability for intelligent systems. These advantages surpass conventional chip design technologies and methods in ease of use, cycle time, and price for medium-sized VLSI applications. It is also expected that the density of these devices will improve radically in the near future.
Abstract:
With advances in science and technology, computing and business intelligence (BI) systems are steadily becoming more complex, with an increasing variety of heterogeneous software and hardware components. They are thus becoming progressively more difficult to monitor, manage and maintain. Traditional approaches to system management have largely relied on domain experts through a knowledge acquisition process that translates domain knowledge into operating rules and policies. This process is widely acknowledged to be cumbersome, labor-intensive, and error-prone, and it is difficult to keep up with rapidly changing environments. In addition, many traditional business systems deliver primarily pre-defined historic metrics for long-term strategic or mid-term tactical analysis, and lack the flexibility to support evolving metrics or data collection for real-time operational analysis. There is thus a pressing need for automatic and efficient approaches to monitor and manage complex computing and BI systems. To realize the goal of autonomic management and enable self-management capabilities, we propose to mine the historical log data generated by computing and BI systems and automatically extract actionable patterns from this data. This dissertation focuses on the development of data mining techniques to extract actionable patterns from various types of log data in computing and BI systems. Four key problems are studied: log data categorization and event summarization, leading indicator identification, pattern prioritization by exploring link structures, and tensor models for three-way log data. Case studies and comprehensive experiments on real application scenarios and datasets are conducted to show the effectiveness of the proposed approaches.
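As a toy illustration of the kind of preprocessing that log data categorization and event summarization start from (not the dissertation's actual algorithms), the sketch below tags raw log lines with hypothetical categories and counts events per time window.

```python
# Toy illustration: regex-based categorization of log lines plus event counts
# per fixed time window. Categories and log formats are hypothetical.
import re
from collections import Counter
from datetime import datetime

PATTERNS = {
    "disk_error": re.compile(r"I/O error|disk failure", re.I),
    "oom":        re.compile(r"out of memory", re.I),
    "login":      re.compile(r"session opened", re.I),
}

def categorize(message: str) -> str:
    for label, pattern in PATTERNS.items():
        if pattern.search(message):
            return label
    return "other"

def summarize(lines, window_minutes=5):
    """Count events per (window start, category)."""
    counts = Counter()
    for line in lines:
        ts = datetime.strptime(line[:19], "%Y-%m-%d %H:%M:%S")
        window = ts.replace(minute=ts.minute - ts.minute % window_minutes, second=0)
        counts[(window, categorize(line[20:]))] += 1
    return counts

if __name__ == "__main__":
    sample = [
        "2024-05-01 10:01:12 kernel: I/O error on sda",
        "2024-05-01 10:03:40 sshd: session opened for user ops",
        "2024-05-01 10:07:55 java: out of memory in worker-3",
    ]
    for key, n in sorted(summarize(sample).items()):
        print(key, n)
```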
Design optimization of modern machine drive systems for maximum fault tolerant and optimal operation
Abstract:
Modern electric machine drives, particularly three-phase permanent magnet machine drive systems, represent an indispensable part of high-power-density products. Such products include hybrid electric vehicles, large propulsion systems, and automation products. The reliability and cost of these products are directly related to the reliability and cost of these drive systems. The compatibility of the electric machine and its drive system for optimal cost and operation has been a major challenge in industrial applications. The main objective of this dissertation is to find a design and control scheme that achieves the best compromise between the reliability and optimality of the electric machine-drive system. The effort presented here is motivated by the need for new techniques to connect the design and control of electric machines and drive systems. A highly accurate and computationally efficient modeling process was developed to monitor the magnetic, thermal, and electrical aspects of the electric machine in its operational environments. The modeling process was also utilized in the design process in the form of a finite-element-based optimization process, including a hardware-in-the-loop finite-element-based optimization process. The modeling process was later employed in the design of highly accurate and efficient physics-based customized observers required for fault diagnosis as well as sensorless rotor position estimation. Two test setups with different ratings and topologies were numerically and experimentally tested to verify the effectiveness of the proposed techniques. The modeling process was also employed in the real-time demagnetization control of the machine. Various real-time scenarios were successfully verified. It was shown that this process offers the potential to optimally redefine the assumptions made in sizing the permanent magnets of the machine and the DC bus voltage of the drive for the worst operating conditions. The mathematical development and stability criteria of the physics-based modeling of the machine, the design optimization, the physics-based fault diagnosis, and the physics-based sensorless technique are described in detail. To investigate the performance of the developed design test-bed, software and hardware setups were constructed first. Several topologies of the permanent magnet machine were optimized inside the optimization test-bed. To investigate the performance of the developed sensorless control, a test-bed including a 0.25 kW surface-mounted permanent magnet synchronous machine was created. The verification of the proposed technique in a range from medium to very low speed effectively shows the intelligent design capability of the proposed system. Additionally, to investigate the performance of the developed fault diagnosis system, a test-bed including a 0.8 kW surface-mounted permanent magnet synchronous machine with trapezoidal back electromotive force was created. The results verify that the proposed technique, under dynamic eccentricity, DC bus voltage variations, and harmonic loading conditions, makes the system an ideal candidate for propulsion systems.
Abstract:
Today, modern System-on-a-Chip (SoC) systems have grown rapidly in processing power while maintaining the size of the hardware circuit. The number of transistors on a chip continues to increase, but current SoC designs may not be able to exploit the potential performance, especially with energy consumption and chip area becoming two major concerns. Traditional SoC designs usually separate software and hardware, so the process of improving system performance is a complicated task for both software and hardware designers. The aim of this research is to develop a hardware acceleration workflow for software applications, so that system performance can be improved under constraints on energy consumption and on-chip resource costs. The characteristics of software applications can be identified by using profiling tools. Hardware acceleration can yield significant performance improvement for highly mathematical calculations or repeated functions. The performance of SoC systems can then be improved if the hardware acceleration method is used to accelerate the elements that incur performance overheads. The concepts presented in this study can be easily applied to a variety of sophisticated software applications. The contributions of SoC-based hardware acceleration in the hardware-software co-design platform include the following: (1) Software profiling methods are applied to an H.264 Coder-Decoder (CODEC) core. The hotspot function of the target application is identified by using critical attributes such as cycles per loop, loop rounds, etc. (2) A hardware acceleration method based on a Field-Programmable Gate Array (FPGA) is used to resolve system bottlenecks and improve system performance. The identified hotspot function is converted to a hardware accelerator and mapped onto the hardware platform. Two types of hardware acceleration methods, central bus design and co-processor design, are implemented for comparison in the proposed architecture. (3) System specifications, such as performance, energy consumption, and resource costs, are measured and analyzed; the trade-off among these three factors is compared and balanced, and different hardware accelerators are implemented and evaluated based on system requirements. (4) The system verification platform is designed based on an Integrated Circuit (IC) workflow, and hardware optimization techniques are used for higher performance and lower resource costs. Experimental results show that the proposed hardware acceleration workflow for software applications is an efficient technique. The system reaches 2.8X performance improvement and saves 31.84% of energy consumption with the Bus-IP design; the co-processor design achieves 7.9X performance and saves 75.85% of energy consumption.
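The value of accelerating a profiled hotspot can be estimated with Amdahl's law. The sketch below uses illustrative numbers (not the paper's measurements) to show how the hotspot's share of runtime bounds the whole-system speedup that any accelerator can deliver.

```python
# Back-of-the-envelope sketch with illustrative numbers, not the paper's data:
# Amdahl's-law estimate of whole-application speedup when a profiled hotspot
# is moved to an FPGA accelerator.
def system_speedup(hotspot_fraction: float, accel_speedup: float) -> float:
    """hotspot_fraction: share of runtime spent in the accelerated function."""
    return 1.0 / ((1.0 - hotspot_fraction) + hotspot_fraction / accel_speedup)

if __name__ == "__main__":
    # e.g. a hotspot taking 70% of runtime, accelerated 20x in hardware:
    print(f"{system_speedup(0.70, 20.0):.2f}x")  # roughly 3x overall
```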
Abstract:
The increasing need for computational power in areas such as weather simulation, genomics or Internet applications has led to the sharing of geographically distributed and heterogeneous resources from commercial data centers and scientific institutions. Research in the areas of utility, grid and cloud computing, together with improvements in network and hardware virtualization, has resulted in methods to locate and use resources to rapidly provision virtual environments in a flexible manner, while lowering costs for consumers and providers. However, there is still a lack of methodologies to enable efficient and seamless sharing of resources among institutions. In this work, we concentrate on the problem of executing parallel scientific applications across distributed resources belonging to separate organizations. Our approach can be divided into three main points. First, we define and implement an interoperable grid protocol to distribute job workloads among partners with different middleware and execution resources. Second, we research and implement different policies for virtual resource provisioning and job-to-resource allocation, taking advantage of their cooperation to improve execution cost and performance. Third, we explore the consequences of on-demand provisioning and allocation for the problem of site selection for the execution of parallel workloads, and propose new strategies to reduce job slowdown and overall cost.
Abstract:
The effects of joint hardware impairments on the performance of fixed-gain amplify-and-forward (AF) relaying are studied. By considering IQ imbalance at the source and destination and nonlinearity at the relay, the outage probability over Nakagami-m fading channels is derived, and the effects of fading and hardware impairments on the system are analysed. The analytical results are verified by Monte Carlo simulations.
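A hedged Monte Carlo sketch of the kind used for such verification is shown below. The end-to-end SNDR expression and the parameter values are simplified assumptions for illustration; they do not reproduce the paper's exact analytical model with IQ imbalance and relay nonlinearity.

```python
# Illustrative Monte Carlo estimate of outage probability for a dual-hop
# fixed-gain AF link over Nakagami-m fading with an assumed aggregate
# hardware-distortion level kappa (simplified stand-in model).
import numpy as np

rng = np.random.default_rng(0)

def nakagami_power_gain(m, omega, size):
    # For Nakagami-m fading, the power gain |h|^2 is Gamma(shape=m, mean=omega).
    return rng.gamma(shape=m, scale=omega / m, size=size)

def outage_probability(avg_snr_db, m1=2.0, m2=2.0, kappa=0.1, C=1.5,
                       gamma_th=1.0, trials=200_000):
    snr = 10.0 ** (avg_snr_db / 10.0)
    g1 = snr * nakagami_power_gain(m1, 1.0, trials)  # first-hop instantaneous SNR
    g2 = snr * nakagami_power_gain(m2, 1.0, trials)  # second-hop instantaneous SNR
    # Assumed simplified end-to-end SNDR: the distortion term kappa**2 caps the
    # SNDR near 1/kappa**2 at high SNR, mimicking a hardware-impairment ceiling.
    sndr = (g1 * g2) / (kappa**2 * g1 * g2 + g2 + C)
    return float(np.mean(sndr < gamma_th))

if __name__ == "__main__":
    for db in (0, 10, 20, 30):
        print(f"{db:2d} dB -> outage ~ {outage_probability(db):.4f}")
```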
Abstract:
Computers have invaded our offices, our homes, our cars and our coffee-pots; they have become ubiquitous. However, the advance of computing technologies is associated with an increasing lack of “visibility” of the underlying software and hardware technologies. While we use and accept the computer, we know neither its history nor its functionality. In this paper, we argue that this is not a healthy situation. Moreover, recruitment onto UK Computing degree courses is steadily falling; these courses appear less attractive to school-leavers, which may be associated with this increasing ubiquity. In this paper we reflect on an MSc module, Concepts and Philosophy of Computing, and a BSc module, Computer Games Development, developed at the University of Worcester, which address these issues. We propose that the elements of these modules form a necessary part of the education of all citizens, and we suggest how this may be realized. We also suggest how to re-enthuse our youth about computing as a discipline and halt the drop in recruitment.
Abstract:
Developing a theoretical framework for pervasive information environments is an enormous goal. This paper aims to provide a small step towards such a goal. The following pages report on our initial investigations to devise a framework that will continue to support locative, experiential and evaluative data from ‘user feedback’ in an increasingly pervasive information environment. We loosely outline this framework by developing a methodology capable of moving from rapid deployment of software and hardware technologies towards the goal of a realistic immersive experience of pervasive information. We propose various technical solutions and address a range of problems, such as information capture through a novel model of sensing, processing, visualization and cognition.
Abstract:
Advances in FPGA technology and higher processing capability requirements have led to the emergence of All Programmable Systems-on-Chip, which incorporate a hard processing system and programmable logic that enable the development of specialized computer systems for a wide range of practical applications, including data and signal processing, high-performance computing and embedded systems, among many others. To provide an infrastructure capable of exploiting the benefits of such a reconfigurable system, the main goal of this thesis is to implement an infrastructure composed of hardware, software and network resources that incorporates the necessary services for the operation, management and interfacing of peripherals, which compose the basic building blocks for the execution of applications. The project will be developed using a chip from the Zynq-7000 All Programmable System-on-Chip family.
Abstract:
This project, realized at the company ABER Ltd, describes the process followed to develop an electronic control system for a hydraulic elevator. The previous control system was based on relay logic, and the company wanted to migrate it to a microcontroller-based technology. Different approaches were studied, and the Raspberry Pi was finally selected as the development platform. Afterwards, the software needed for all the elevator types was developed and the interface hardware was selected. Finally, several tests were performed to adjust the software and the hardware and to verify the correct operation of the system.
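A minimal sketch of such a microcontroller-style control loop on the Raspberry Pi is shown below. The GPIO pin map, the active-low wiring and the single-call logic are hypothetical stand-ins, not ABER's actual elevator software or interface board.

```python
# Minimal control-loop sketch (hypothetical pin map and logic): a Raspberry Pi
# reads a call button and a floor-level sensor and drives the pump relay of a
# hydraulic lift through the interface hardware.
import time
import RPi.GPIO as GPIO

CALL_BUTTON = 17    # BCM numbering; assumed wiring, adjust to the real interface board
FLOOR_SENSOR = 27   # assumed to be pulled low when the car is level with the floor
PUMP_RELAY = 22     # assumed: energise to run the pump and raise the car

GPIO.setmode(GPIO.BCM)
GPIO.setup([CALL_BUTTON, FLOOR_SENSOR], GPIO.IN, pull_up_down=GPIO.PUD_UP)
GPIO.setup(PUMP_RELAY, GPIO.OUT, initial=GPIO.LOW)

try:
    while True:
        if GPIO.input(CALL_BUTTON) == GPIO.LOW:       # button pressed (active low)
            GPIO.output(PUMP_RELAY, GPIO.HIGH)        # start raising the car
            while GPIO.input(FLOOR_SENSOR) == GPIO.HIGH:
                time.sleep(0.01)                      # wait until the level sensor trips
            GPIO.output(PUMP_RELAY, GPIO.LOW)         # stop at the floor
        time.sleep(0.05)                              # simple polling loop
finally:
    GPIO.cleanup()
```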