915 results for drives
Abstract:
It is often postulated that an increased hip to shoulder differential angle ('X-Factor') during the early downswing better utilises the stretch-shorten cycle and improves golf performance. The current study aims to examine the potential relationship between the X-Factor and performance during the tee-shot. Seven golfers with handicaps between 0 and 10 strokes comprised the low-handicap group, whilst the high-handicap group consisted of eight golfers with handicaps between 11 and 20 strokes. The golfers performed 20 drives and three-dimensional kinematic data were used to quantify hip and shoulder rotation and the subsequent X-Factor. Compared with the low-handicap group, the high-handicap golfers tended to demonstrate greater hip rotation at the top of the backswing and recorded reduced maximum X-Factor values. The inconsistencies evident in the literature may suggest that a universal method of measuring rotational angles during the golf swing would be beneficial for future studies, particularly when considering potential injury.
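For readers unfamiliar with the term, the X-Factor is conventionally the instantaneous difference between shoulder and hip axial rotation angles; a minimal statement of this convention, in generic notation rather than the study's own, is:

```latex
X(t) = \theta_{\text{shoulder}}(t) - \theta_{\text{hip}}(t),
\qquad
X_{\max} = \max_{t} X(t)
```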
Abstract:
This article considers what drives donors to leave charitable bequests. Building on theories of charitable bequest giving, we consider two types of motivations for leaving a bequest: attitudinal and structural motivations. Using unique Australian data, we show that a strong belief in the efficacy of charitable organisations has a significant positive effect on the likelihood of leaving a bequest, as does past giving behaviour and having no children. As bequests constitute an important income stream for charitable organisations, this research can help fundraisers better target their marketing strategies towards those most likely to plan their estates and motivate these people to make bequests.
Abstract:
Computer forensics is the process of gathering and analysing evidence from computer systems to aid in the investigation of a crime. Typically, such investigations are undertaken by human forensic examiners using purpose-built software to discover evidence from a computer disk. This process is a manual one, and the time it takes for a forensic examiner to conduct such an investigation is proportional to the storage capacity of the computer's disk drives. The heterogeneity and complexity of various data formats stored on modern computer systems compounds the problems posed by the sheer volume of data. The decision to undertake a computer forensic examination of a computer system is a decision to commit significant quantities of a human examiner's time. Where there is no prior knowledge of the information contained on a computer system, this commitment of time and energy occurs with little idea of the potential benefit to the investigation. The key contribution of this research is the design and development of an automated process to describe a computer system and its activity for the purposes of a computer forensic investigation. The term proposed for this process is computer profiling. A model of a computer system and its activity has been developed over the course of this research. Using this model a computer system, which is the subject of investigation, can be automatically described in terms useful to a forensic investigator. The computer profiling process is resilient to attempts to disguise malicious computer activity. This resilience is achieved by detecting inconsistencies in the information used to infer the apparent activity of the computer. The practicality of the computer profiling process has been demonstrated by a proof-of-concept software implementation. The model and the prototype implementation utilising the model were tested with data from real computer systems. The resilience of the process to attempts to disguise malicious activity has also been demonstrated with practical experiments conducted with the same prototype software implementation.
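As an illustration of the kind of inconsistency detection the abstract describes, the sketch below cross-checks file-system timestamps against a log-derived activity timeline and flags files whose modification times no logged activity can account for. The data structures, threshold and sample data are hypothetical, not the thesis's model:

```python
from dataclasses import dataclass
from datetime import datetime, timedelta

@dataclass
class FileRecord:
    path: str
    modified: datetime  # file-system modification timestamp

@dataclass
class LogEvent:
    timestamp: datetime  # a moment when the system was demonstrably active

def find_inconsistencies(files, events, slack=timedelta(minutes=5)):
    """Flag files modified at times no logged activity can explain.

    A file whose modification time is not within `slack` of any logged
    event is inconsistent with the apparent activity of the machine --
    a possible sign of tampering or timestamp forgery.
    """
    suspicious = []
    for f in files:
        explained = any(abs(f.modified - e.timestamp) <= slack for e in events)
        if not explained:
            suspicious.append(f)
    return suspicious

if __name__ == "__main__":
    files = [FileRecord("C:/docs/report.doc", datetime(2024, 1, 3, 14, 0)),
             FileRecord("C:/tools/wiper.exe", datetime(2024, 1, 4, 3, 30))]
    events = [LogEvent(datetime(2024, 1, 3, 13, 58))]
    for f in find_inconsistencies(files, events):
        print("unexplained modification:", f.path, f.modified)
```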
Abstract:
In this paper, several high-frequency issues in modern AC motor drive systems, such as common-mode voltage, shaft voltage and the resultant bearing currents, and leakage currents, are discussed. Conducted emission is a major problem in modern motor drives, producing undesirable effects on electronic devices. In modern power electronic systems, increasing power density and decreasing system cost and size are market requirements. Switching losses, harmonics and EMI are the key factors that should be considered at the beginning stage of a design to optimise a drive system.
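For context, the common-mode voltage that gives rise to these effects in a two-level three-phase inverter is the average of the three pole voltages referred to the DC-link midpoint; because the switched pole voltages never sum to zero, it steps at every switching instant:

```latex
v_{cm} = \frac{v_{ao} + v_{bo} + v_{co}}{3}
\in \left\{ \pm\frac{V_{dc}}{6},\ \pm\frac{V_{dc}}{2} \right\}
```

These steps couple through parasitic capacitances (winding-to-frame, bearing lubricant films) and drive the shaft voltages and bearing and leakage currents described above.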
Abstract:
Credence goods markets are characterized by asymmetric information between sellers and consumers that may give rise to inefficiencies, such as under- and overtreatment or market break-down. In a large experiment with 936 participants, we study the determinants of efficiency in credence goods markets. While theory predicts that either liability or verifiability yields efficiency, we find that liability has a crucial effect but verifiability only a minor one. Allowing sellers to build up reputation has little influence, as predicted. Seller competition drives down prices and yields maximal trade, but does not lead to higher efficiency in the absence of liability.
Abstract:
Physical activity has the potential to modulate appetite control by improving the sensitivity of the physiological satiety signalling system, by adjusting macronutrient preferences or food choices and by altering the hedonic response to food. There is evidence for all these actions. Concerning the impact of physical activity on energy balance, there exists a belief that physical activity drives up hunger and increases food intake, thereby rendering it futile as a method of weight control.
Abstract:
Given the present worldwide epidemic of obesity, it is pertinent to ask how effective exercise could be in helping people to lose weight or to prevent weight gain. There is a widely held belief that exercise is futile for weight reduction because any energy expended in exercise is automatically compensated for by a corresponding increase in energy intake (EI). In other words, exercise elevates the intensity of hunger and drives food consumption. This “commonsense” view appears to originate in an energy-balance model of appetite control, which stipulates that energy expended will drive EI as a consequence of the regulation of energy balance. However, it is very clear that EI (food consumption or eating) is not just a biological matter. Eating does not occur solely to rectify some internal need state. Indeed, an examination of the relation between exercise and appetite control has shown a very weak coupling; most studies have demonstrated that food intake does not immediately rise after exercise, even after very high energy expenditure (EE).[1] The processes of exercise-induced EE and food consumption do not appear to be tightly linked. After exercise, there is only slow and partial compensation for the energy expended. Therefore, exercise can be very useful in helping to bring about weight loss and is even more important in preventing weight gain or weight regain. This editorial explores this issue.
Abstract:
Objective: The evidence was reviewed on how physical activity could influence the regulation of food intake, either by adjusting the sensitivity of appetite control mechanisms or by generating an energy deficit that could adjust the drive to eat. Design: Interventionist and correlational studies bearing significantly on the relationship between physical activity and food intake were reviewed. Interventionist studies involve a deliberate imposition of physical activity with subsequent monitoring of the eating response. Correlational studies make use of naturally occurring differences in the levels of physical activity (between and within subjects) with simultaneous assessment of energy expenditure and intake. Subjects: Studies using lean, overweight, and obese men and women were included. Results: Only 19% of interventionist studies report an increase in energy intake after exercise; 65% show no change and 16% show a decrease in appetite. Of the correlational studies, approximately half show no relationship between energy expenditure and intake. These data indicate a rather loose coupling between energy expenditure and intake. A common sense view is that exercise is futile as a form of weight control because the energy deficit drives a compensatory increase in food intake. However, evidence shows that this is not generally true. One positive aspect of this is that raising energy expenditure through physical activity (or maintaining an active lifestyle) can cause weight loss or prevent weight gain. A negative feature is that when people become sedentary after a period of high activity, food intake is not “down-regulated” to balance a reduced energy expenditure. Conclusion: Evidence suggests that a high level of physical activity can aid weight control either by improving the matching of food intake to energy expenditure (regulation) or by raising expenditure so that it is difficult for people to eat themselves into a positive energy balance.
Abstract:
This thesis examines the theory of technological determinism, which espouses the view that technological change drives social change, through an analysis of the impact of new media on higher education models in the United States of America. In so doing, it explores the impacts of new media technologies on higher education in particular, and society in general. The thesis reviews the theoretical shape of the discourse surrounding new media technologies before narrowing in on utopian claims about the impact of new media technologies on education. It tests these claims through a specific case study of higher education in the USA. The study investigates whether 'new' media technologies (e.g. the Internet) are resulting in new forms of higher education in the USA and whether the blurring of information and entertainment technologies has caused a similar blurring in education and entertainment providers. It uses primary data gathered by the author in a series of interviews with key education, industry and media representatives in North America in 1997. Chapter 2 looks at the literature and history surrounding several topics central to the thesis: the discourses of technological determinism, the history of technology use and adoption in education, and the impacts of new media technologies on education. Chapter 3 presents the findings of the American case study on the relationship between media and higher education, and Chapter 4 concludes and synthesises the investigation.
Abstract:
Neural networks (NNs) are discussed in connection with their possible use in induction machine drives. The mathematical model of the NN as well as a commonly used learning algorithm is presented. Possible applications of NNs to induction machine control are discussed. A simulation of an NN successfully identifying the nonlinear multivariable model of an induction-machine stator transfer function is presented. Previously published applications are discussed, and some possible future applications are proposed.
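The abstract does not give the network structure or training details; as a hedged stand-in, the sketch below trains a small one-hidden-layer network with plain backpropagation (a commonly used learning algorithm of the kind the paper presumably presents) to identify a made-up nonlinear two-input mapping standing in for the stator model:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical nonlinear multivariable plant standing in for the
# induction-machine stator model (the real model is not in the abstract).
def plant(x):
    return np.tanh(1.5 * x[:, :1]) * x[:, 1:2] + 0.3 * x[:, :1] ** 2

X = rng.uniform(-1, 1, size=(500, 2))
Y = plant(X)

# One-hidden-layer network trained by batch gradient descent (backprop).
n_hidden, lr = 16, 0.05
W1 = rng.normal(0, 0.5, (2, n_hidden)); b1 = np.zeros(n_hidden)
W2 = rng.normal(0, 0.5, (n_hidden, 1)); b2 = np.zeros(1)

for epoch in range(3000):
    H = np.tanh(X @ W1 + b1)          # hidden activations
    Y_hat = H @ W2 + b2               # linear output layer
    err = Y_hat - Y
    # Backpropagate the mean-squared-error gradient.
    dW2 = H.T @ err / len(X); db2 = err.mean(0)
    dH = (err @ W2.T) * (1 - H ** 2)  # tanh derivative
    dW1 = X.T @ dH / len(X); db1 = dH.mean(0)
    W2 -= lr * dW2; b2 -= lr * db2
    W1 -= lr * dW1; b1 -= lr * db1

print("final MSE:", float((err ** 2).mean()))
```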
Abstract:
The design and implementation of a high-power (2 MW peak) vector control drive is described. The inverter switching frequency is low, resulting in high-harmonic-content current waveforms. A block diagram of the physical system is given, and each component is described in some detail. The problem of commanded slip noise sensitivity, inherent in high-power vector control drives, is discussed, and a solution is proposed. Results are given which demonstrate the successful functioning of the system.
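For context on the slip-noise sensitivity mentioned above: in indirect (feedforward) rotor-flux-oriented control the slip frequency is computed open-loop from the current commands and the rotor time constant τ_r, and is integrated into the field orientation angle, so noise on the commanded slip accumulates directly as orientation error. In standard textbook notation (not necessarily the paper's):

```latex
\omega_{sl}^{*} = \frac{i_{qs}^{*}}{\tau_r \, i_{ds}^{*}},
\qquad
\theta_e = \int \left( \omega_r + \omega_{sl}^{*} \right) dt
```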
Abstract:
Probabilistic load flow techniques have been adopted in AC electrified railways to study the load demand under various train service conditions. This paper highlights the differences in probabilistic load flow analysis between conventional power systems and the power supply systems of AC railways, discusses the possible difficulties in problem formulation, and presents the link between train movement and the corresponding power demand for load flow calculation.
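As a hedged illustration of the probabilistic approach, the sketch below Monte Carlo-samples the positions and power demands of trains on a single-end-fed AC feeder section and accumulates the distribution of substation power. The feeder resistance, train counts and demand statistics are invented for the example, not taken from the paper:

```python
import numpy as np

rng = np.random.default_rng(1)

V0 = 25_000.0   # nominal feeding voltage of a 25 kV AC section, V
R_KM = 0.2      # hypothetical feeder resistance, ohm/km
LENGTH = 20.0   # section length, km

def substation_power(positions_km, demands_w):
    """Single-end-fed section, trains as constant-power loads at nominal
    voltage (unity power factor): substation supplies demand plus I^2*R loss."""
    order = np.argsort(positions_km)
    pos = np.asarray(positions_km)[order]
    cur = np.asarray(demands_w)[order] / V0      # train currents, A
    p_loss, prev, flowing = 0.0, 0.0, cur.sum()  # current seen by each segment
    for x, i in zip(pos, cur):
        p_loss += flowing ** 2 * R_KM * (x - prev)  # loss in segment, W
        prev, flowing = x, flowing - i
    return float(sum(demands_w) + p_loss)

samples = []
for _ in range(5000):
    n = rng.integers(1, 6)                               # trains in section
    pos = rng.uniform(0.0, LENGTH, n)                    # positions, km
    dem = np.clip(rng.normal(3e6, 1e6, n), 0.2e6, None)  # demands, W
    samples.append(substation_power(pos, dem))

samples = np.array(samples)
print(f"mean {samples.mean()/1e6:.1f} MW, "
      f"95th percentile {np.percentile(samples, 95)/1e6:.1f} MW")
```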
Abstract:
With the advances in computer hardware and software development techniques in the past 25 years, digital computer simulation of train movement and traction systems has been widely adopted as a standard computer-aided engineering tool [1] during the design and development stages of existing and new railway systems. Simulators of different approaches and scales are used extensively to carry out various kinds of system studies. Simulation is now proven to be the cheapest means of performance prediction and system behaviour characterisation. When computers were first used to study railway systems, they were mainly employed to perform repetitive but time-consuming computational tasks, such as matrix manipulations for power network solution and exhaustive searches for optimal braking trajectories. With only simple high-level programming languages available at the time, full advantage of the computing hardware could not be taken. Hence, structured simulations of the whole railway system were not very common. Most applications focused on isolated parts of the railway system. It is more appropriate to regard those applications as primarily mechanised calculations rather than simulations. However, a railway system consists of a number of subsystems, such as train movement, power supply and traction drives, which inevitably contain many complexities and diversities. These subsystems interact frequently with each other while the trains are moving, and they have their special features in different railway systems. To further complicate the simulation requirements, constraints like track geometry, speed restrictions and friction have to be considered, not to mention possible non-linearities and uncertainties in the system. In order to provide a comprehensive and accurate account of system behaviour through simulation, a large amount of data has to be organised systematically to ensure easy access and efficient representation, and the interactions and relationships among the subsystems should be defined explicitly. These requirements call for sophisticated and effective simulation models for each component of the system. The software development techniques available nowadays allow the evolution of such simulation models. Not only can the applicability of the simulators be greatly enhanced by advanced software design, but maintainability and modularity for easy understanding and further development, and portability across hardware platforms, are also encouraged. The objective of this paper is to review the development of a number of approaches to simulation models. Attention is, in particular, given to models for train movement, power supply systems and traction drives. These models have been successfully used to enable various ‘what-if’ issues to be resolved effectively in a wide range of applications, such as speed profiles, energy consumption and run times.
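As a minimal illustration of the train-movement component of such simulators, the sketch below integrates the longitudinal equation of motion with a Davis-type resistance law under made-up coefficients; the simulators reviewed in the paper also model track geometry, speed restrictions and the traction drive itself:

```python
# Minimal single-train movement simulation: forward Euler integration of
# F = m*a, with tractive effort limited by power and adhesion, and a
# Davis resistance law. All coefficients are illustrative only.

M = 400_000.0          # train mass, kg
P_MAX = 4.0e6          # rated traction power, W
F_MAX = 300_000.0      # adhesion-limited tractive effort, N
A, B, C = 3_000.0, 110.0, 7.0   # Davis coefficients: N, N/(m/s), N/(m/s)^2
DT = 0.5               # time step, s

def tractive_effort(v):
    """Constant effort at low speed, constant power above the knee."""
    return min(F_MAX, P_MAX / max(v, 0.1))

def resistance(v):
    return A + B * v + C * v * v   # Davis equation

t, v, x, energy = 0.0, 0.0, 0.0, 0.0
while x < 5_000.0:                 # run until 5 km covered
    F = tractive_effort(v)
    a = (F - resistance(v)) / M
    energy += F * v * DT           # traction energy at the wheel, J
    v = max(v + a * DT, 0.0)
    x += v * DT
    t += DT

print(f"5 km in {t:.0f} s, final speed {v*3.6:.0f} km/h, "
      f"energy {energy/3.6e6:.1f} kWh")
```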
Abstract:
Across Australia, construction and redevelopment of public infrastructure continues to be a key factor in economic development. Within this context, road transport has been identified as a key building block of Queensland's future prosperity. However, since the late twentieth century, there has been a shift away from delivery of large infrastructure, including road networks, exclusively by the state. Subsequently, a range of alternative models have emerged in infrastructure project delivery. Among these, governance networks have become a widespread mechanism for planning and delivering infrastructure. However, despite substantial public investments in road infrastructure that are made through governance networks, little is known about how these networks engage with stakeholders who are potentially affected by road infrastructure projects. Although governance networks undertake management functions, it is unclear what drives stakeholder engagement within this networked environment and how stakeholder relationship management is operationalised. This paper proposes that network management functions undertaken by governance networks incorporate stakeholder engagement and that network managers play a key role in creating and sustaining connections between governance networks and their stakeholders. Drawing on stakeholder theory and governance network theory, this paper contributes to the literature by showing that stakeholder engagement is embedded within network management and by identifying the critical role of network managers in establishing and maintaining stakeholder engagement.
Abstract:
The present rate of technological advance continues to place significant demands on data storage devices. The sheer amount of digital data being generated each year, along with consumer expectations, fuels these demands. At present, most digital data is stored magnetically, in the form of hard disk drives or on magnetic tape. The increase in areal density (AD) of magnetic hard disk drives over the past 50 years has been of the order of 100 million times, and current devices are storing data at ADs of the order of hundreds of gigabits per square inch. However, it has been known for some time that the progress in this form of data storage is approaching fundamental limits. The main limitation relates to the lower size limit that an individual bit can have for stable storage. Various techniques for overcoming these fundamental limits are currently the focus of considerable research effort. Most attempt to improve current data storage methods, or modify these slightly for higher density storage. Alternatively, three dimensional optical data storage is a promising field for the information storage needs of the future, offering very high density, high speed memory. There are two ways in which data may be recorded in a three dimensional optical medium: either bit-by-bit (similar in principle to an optical disc medium such as CD or DVD) or by using pages of bit data. Bit-by-bit techniques for three dimensional storage offer high density but are inherently slow due to the serial nature of data access. Page-based techniques, where a two-dimensional page of data bits is written in one write operation, can offer significantly higher data rates, due to their parallel nature. Holographic Data Storage (HDS) is one such page-oriented optical memory technique. This field of research has been active for several decades, but with few commercial products presently available. Another page-oriented optical memory technique involves recording pages of data as phase masks in a photorefractive medium. A photorefractive material is one in which the refractive index can be modified by light of the appropriate wavelength and intensity, and this property can be used to store information in these materials. In phase mask storage, two dimensional pages of data are recorded into a photorefractive crystal as refractive index changes in the medium. A low-intensity readout beam propagating through the medium will have its intensity profile modified by these refractive index changes, and a CCD camera can be used to monitor the readout beam and thus read the stored data. The main aim of this research was to investigate data storage using phase masks in the photorefractive crystal lithium niobate (LiNbO3). Firstly the experimental methods for storing the two dimensional pages of data (a set of vertical stripes of varying lengths) in the medium are presented. The laser beam used for writing, whose intensity profile is modified by an amplitude mask which contains a pattern of the information to be stored, illuminates the lithium niobate crystal, and the photorefractive effect causes the patterns to be stored as refractive index changes in the medium. These patterns are read out non-destructively using a low intensity probe beam and a CCD camera. A common complication of information storage in photorefractive crystals is the issue of destructive readout. This is a problem particularly for holographic data storage, where the readout beam should be at the same wavelength as the beam used for writing.
Since the charge carriers in the medium are still sensitive to the read light field, the readout beam erases the stored information. A method to avoid this is by using thermal fixing. Here the photorefractive medium is heated to temperatures above 150 °C; this process forms an ionic grating in the medium. This ionic grating is insensitive to the readout beam and therefore the information is not erased during readout. A non-contact method for determining temperature change in a lithium niobate crystal is presented in this thesis. The temperature-dependent birefringent properties of the medium cause intensity oscillations to be observed for a beam propagating through the medium during a change in temperature. It is shown that each oscillation corresponds to a particular temperature change, and by counting the number of oscillations observed, the temperature change of the medium can be deduced (a sketch of this counting procedure follows this abstract). The presented technique for measuring temperature change could easily be applied to a situation where thermal fixing of data in a photorefractive medium is required. Furthermore, by using an expanded beam and monitoring the intensity oscillations over a wide region, it is shown that the temperature in various locations of the crystal can be monitored simultaneously. This technique could be used to deduce temperature gradients in the medium. It is shown that the three dimensional nature of the recording medium causes interesting degradation effects to occur when the patterns are written for a longer-than-optimal time. This degradation results in the splitting of the vertical stripes in the data pattern, and for long writing exposure times this process can result in the complete deterioration of the information in the medium. It is shown that simply by using incoherent illumination, the original pattern can be recovered from the degraded state. The reason for the recovery is that the refractive index changes causing the degradation are of a smaller magnitude, since they are induced by the write field components scattered from the written structures. During incoherent erasure, the lower magnitude refractive index changes are neutralised first, allowing the original pattern to be recovered. The degradation process is shown to be reversed during the recovery process, and a simple relationship is found relating the time at which particular features appear during degradation and recovery. A further outcome of this work is that a minimum stripe width of 30 µm is required for accurate storage and recovery of the information in the medium; any size smaller than this results in incomplete recovery. The degradation and recovery process could find application in image scrambling or cryptography for optical information storage. A two dimensional numerical model based on the finite-difference beam propagation method (FD-BPM) is presented and used to gain insight into the pattern storage process. The model shows that the degradation of the patterns is due to the complicated path taken by the write beam as it propagates through the crystal, and in particular the scattering of this beam from the induced refractive index structures in the medium. The model indicates that the highest quality pattern storage would be achieved with a thin 0.5 mm medium; however, this type of medium would also remove the degradation property of the patterns and the subsequent recovery process.
To overcome the simplistic treatment of the refractive index change in the FD-BPM model, a fully three dimensional photorefractive model developed by Devaux is presented. This model gives significant insight into the pattern storage, particularly for the degradation and recovery process, and confirms the theory that the recovery of the degraded patterns is possible since the refractive index changes responsible for the degradation are of a smaller magnitude. Finally, detailed analysis of the pattern formation and degradation dynamics for periodic patterns of various periodicities is presented. It is shown that stripe widths in the write beam of greater than 150 µm result in the formation of different types of refractive index changes, compared with the stripes of smaller widths. As a result, it is shown that the pattern storage method discussed in this thesis has an upper feature size limit of 150 µm for accurate and reliable pattern storage.
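The temperature-measurement technique in this abstract lends itself to a short sketch: count the intensity oscillations of the probe beam and multiply by a calibrated temperature change per fringe. The hysteresis-based peak counting and the per-fringe increment below are illustrative assumptions, not the thesis's calibration:

```python
import numpy as np

DT_PER_FRINGE = 2.0  # assumed calibration: temperature change (deg C) per oscillation

def count_oscillations(trace, hi=0.5, lo=-0.5):
    """Count fringes with a hysteresis comparator so small noise on the
    normalised intensity trace is not miscounted as extra oscillations."""
    s = np.asarray(trace, dtype=float)
    s = (s - s.mean()) / (s.std() + 1e-12)   # zero-mean, unit-variance
    count, armed = 0, True
    for v in s:
        if armed and v > hi:                 # rising through the high threshold
            count, armed = count + 1, False
        elif not armed and v < lo:           # re-arm after falling through low
            armed = True
    return count

def temperature_change(trace):
    return count_oscillations(trace) * DT_PER_FRINGE

if __name__ == "__main__":
    # Synthetic probe-beam trace: 7 birefringence fringes plus detector noise.
    t = np.linspace(0.0, 1.0, 2000)
    trace = np.sin(2 * np.pi * 7 * t) \
        + 0.05 * np.random.default_rng(2).normal(size=t.size)
    print("estimated temperature change ~", temperature_change(trace), "deg C")
```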