919 results for "Very long path length"


Relevance: 30.00%

Publisher:

Abstract:

This thesis consists of a novel written with the express purpose of exploring what practices and strategies are most useful in writing novel-length fiction as well as an exegesis which discusses the process. By its very nature, an undergraduate degree in Creative Writing is broad and general in approach. The Creative Writing undergraduate is being trained to manage many and varying writing tasks but none of them larger than can be readily marked and assessed in class quantities. This does not prepare the writing graduate for the gargantuan task of managing a project as large as a single title novel which can be up to 100,000 words and often is more. This study explores the question of what writing tools and practices best equip an emerging writer to begin, write and manage a long narrative within a deadline.

Relevance: 30.00%

Publisher:

Abstract:

This thesis aimed to investigate the way in which distance runners modulate their speed in an effort to understand the key processes and determinants of speed selection when encountering hills in natural outdoor environments. One factor which has limited the expansion of knowledge in this area has been a reliance on the motorized treadmill, which constrains runners to constant speeds and gradients and only linear paths. Conversely, limits in the portability or storage capacity of available technology have restricted field research to brief durations and level courses. Therefore, another aim of this thesis was to evaluate the capacity of lightweight, portable technology to measure running speed in outdoor undulating terrain. The first study of this thesis assessed the validity of a non-differential GPS to measure speed, displacement and position during human locomotion. Three healthy participants walked and ran over straight and curved courses for 59 and 34 trials, respectively. A non-differential GPS receiver provided speed data by Doppler shift and by change in GPS position over time, which were compared with actual speeds determined by chronometry. Displacement data from the GPS were compared with a surveyed 100 m section, while static positions were collected for 1 hour and compared with the known geodetic point. GPS speed values on the straight course were found to be closely correlated with actual speeds (Doppler shift: r = 0.9994, p < 0.001; Δ GPS position/time: r = 0.9984, p < 0.001). Actual speed errors were lowest using the Doppler shift method (90.8% of values within ±0.1 m·s⁻¹). Speed was slightly underestimated on a curved path, though still highly correlated with actual speed (Doppler shift: r = 0.9985, p < 0.001; Δ GPS distance/time: r = 0.9973, p < 0.001). Distance measured by GPS was 100.46 ± 0.49 m, while 86.5% of static points were within 1.5 m of the actual geodetic point (mean error: 1.08 ± 0.34 m, range 0.69-2.10 m).
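The Δ GPS position/time method described above reduces to dividing the great-circle distance between successive fixes by the sampling interval. A minimal sketch of that calculation (the coordinates, 1 Hz sampling rate and haversine distance model below are illustrative assumptions, not the study's data or processing chain):

```python
import math

def haversine_m(lat1, lon1, lat2, lon2):
    """Great-circle distance in metres between two latitude/longitude fixes."""
    R = 6371000.0  # mean Earth radius, metres
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp = math.radians(lat2 - lat1)
    dl = math.radians(lon2 - lon1)
    a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    return 2 * R * math.asin(math.sqrt(a))

def speeds_from_fixes(fixes):
    """fixes: list of (t_seconds, lat_deg, lon_deg); returns m/s between successive fixes."""
    out = []
    for (t0, la0, lo0), (t1, la1, lo1) in zip(fixes, fixes[1:]):
        out.append(haversine_m(la0, lo0, la1, lo1) / (t1 - t0))
    return out

# Hypothetical 1 Hz fixes moving 0.0001 degrees of latitude per second (~11.1 m)
fixes = [(0, -27.4700, 153.0200), (1, -27.4699, 153.0200), (2, -27.4698, 153.0200)]
print(speeds_from_fixes(fixes))  # two values of roughly 11.1 m/s
```

The Doppler-shift method, by contrast, comes directly from the receiver's measured carrier frequency shifts and needs no position differencing, which is consistent with its lower errors reported above.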
Non-differential GPS demonstrated a highly accurate estimation of speed across a wide range of human locomotion velocities using only the raw signal data, with a minimal decrease in accuracy around bends. This high level of resolution was matched by accurate displacement and position data. Coupled with reduced size, cost and ease of use, a non-differential receiver offers a valid alternative to differential GPS in the study of overground locomotion. The second study of this dissertation examined speed regulation during overground running on a hilly course. Following an initial laboratory session to calculate physiological thresholds (VO2 max and ventilatory thresholds), eight experienced long distance runners completed a self-paced time trial over three laps of an outdoor course involving uphill, downhill and level sections. A portable gas analyser, GPS receiver and activity monitor were used to collect physiological, speed and stride frequency data. Participants ran 23% slower on uphills and 13.8% faster on downhills compared with level sections. Speeds on level sections were significantly different for 78.4 ± 7.0 seconds following an uphill and 23.6 ± 2.2 seconds following a downhill. Speed changes were primarily regulated by stride length, which was 20.5% shorter uphill and 16.2% longer downhill, while stride frequency was relatively stable. Oxygen consumption averaged 100.4% of runners' individual ventilatory thresholds on uphills, 78.9% on downhills and 89.3% on level sections. Group-level speed was highly predicted using a modified gradient factor (r² = 0.89). Individuals adopted distinct pacing strategies, both across laps and as a function of gradient. Speed was best predicted using a weighted factor to account for prior and current gradients. Oxygen consumption (VO2) limited runners' speeds only on uphill sections, and was maintained in line with individual ventilatory thresholds.
Running speed showed larger individual variation on downhill sections, while speed on the level was systematically influenced by the preceding gradient. Runners who varied their pace more as a function of gradient showed a more consistent level of oxygen consumption. These results suggest that optimising time on the level sections after hills offers the greatest potential to minimise overall time when running over undulating terrain. The third study of this thesis investigated the effect of implementing an individualised pacing strategy on running performance over an undulating course. Six trained distance runners completed three trials involving four laps (9968 m) of an outdoor course involving uphill, downhill and level sections. The initial trial was self-paced in the absence of any temporal feedback. For the second and third field trials, runners were paced for the first three laps (7476 m) according to two different regimes (Intervention or Control), by matching desired goal times for subsections within each gradient. The fourth lap (2492 m) was completed without pacing. Goals for the Intervention trial were based on findings from study two, using a modified gradient factor and elapsed distance to predict the time for each section. To maintain the same overall time across all paced conditions, times were proportionately adjusted according to split times from the self-paced trial. The alternative pacing strategy (Control) used the original split times from this initial trial. Five of the six runners increased their range of uphill to downhill speeds on the Intervention trial by more than 30%, but this did not produce a more consistent level of oxygen consumption; only one runner showed a change of more than 10%. Group-level adherence to the Intervention strategy was lowest on downhill sections. Three runners successfully adhered to the Intervention pacing strategy, gauged by a low Root Mean Square error across subsections and gradients.
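Adherence gauged by a low Root Mean Square error across subsections can be computed directly from paired goal and actual split times. A small sketch with entirely hypothetical splits (the study's actual subsection times are not given here):

```python
import math

def rmse(goal_times, actual_times):
    """Root Mean Square error (seconds) between goal and actual subsection times."""
    assert len(goal_times) == len(actual_times)
    se = [(g - a) ** 2 for g, a in zip(goal_times, actual_times)]
    return math.sqrt(sum(se) / len(se))

# Hypothetical goal vs actual split times (s) for uphill/level/downhill subsections
goal   = [95.0, 80.0, 70.0, 96.0, 81.0, 69.0]
actual = [97.5, 79.0, 71.0, 94.0, 83.0, 70.5]
print(round(rmse(goal, actual), 2))  # -> 1.76
```

A runner with a lower RMSE followed the prescribed section-by-section pacing more closely, which is how adherence was distinguished between runners above.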
Of these three, the two who had the largest change in uphill-downhill speeds ran their fastest overall times. This suggests that, for some runners, the strategy of varying speeds systematically to account for gradients and transitions may benefit race performances on courses involving hills. In summary, a non-differential receiver was found to offer highly accurate measures of speed, distance and position across the range of human locomotion speeds. Self-selected speed was found to be best predicted using a weighted factor to account for prior and current gradients. Oxygen consumption limited runners' speeds only on uphills, speed on the level was systematically influenced by preceding gradients, and there was much larger individual variation on downhill sections. Individuals were found to adopt distinct but unrelated pacing strategies as a function of durations and gradients, while runners who varied pace more as a function of gradient showed a more consistent level of oxygen consumption. Finally, the implementation of an individualised pacing strategy to account for gradients and transitions greatly increased runners' range of uphill-downhill speeds and improved performance in some runners. The efficiency of various gradient-speed trade-offs and the factors limiting faster downhill speeds will, however, require further investigation to improve the effectiveness of the suggested strategy.

Relevance: 30.00%

Publisher:

Abstract:

The paper explores the way in which the life of concrete sleepers can be dramatically affected by two important factors, namely impact forces and fatigue cycles. Drawing on the very limited experimental and field data currently available about these two factors, the paper describes detailed simulations of sleepers in a heavy haul track in Queensland, Australia over a period of 100 years. The simulation uses real wheel/rail impact force records from that track, together with data on static bending tests of similar sleepers and preliminary information on their impact versus static strength. The simulations suggest that, despite successful performance over many decades, large unplanned replacement costs could be imminent, especially considering the increasingly demanding operational conditions sleepers have sustained over their lives. The paper also discusses the key factors track owners need to consider in attempting to plan for these developments.
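The paper's simulation procedure is not reproduced here, but a standard way to combine a measured load spectrum with strength data into a life estimate is Palmgren-Miner linear damage accumulation. The sketch below uses that generic rule with hypothetical cycle counts and fatigue lives, not the paper's model or data:

```python
def miner_damage(load_histogram, cycles_to_failure):
    """Palmgren-Miner cumulative damage: sum of n_i / N_i over load levels.
    Failure is predicted when the accumulated sum reaches 1.0."""
    return sum(n / cycles_to_failure[level] for level, n in load_histogram.items())

# Hypothetical annual wheel/rail load spectrum for one sleeper
cycles_per_year = {"nominal": 2_000_000, "moderate_impact": 20_000, "severe_impact": 200}
# Hypothetical cycles to failure at each load level (from bending/impact strength data)
N_failure = {"nominal": 1e9, "moderate_impact": 5e6, "severe_impact": 1e5}

d_year = miner_damage(cycles_per_year, N_failure)
print(f"damage/year = {d_year:.4f}, predicted life = {1/d_year:.0f} years")  # 0.0080, 125 years
```

The toy numbers illustrate the paper's central point: rare severe impacts can contribute as much damage as millions of nominal cycles, so worsening operational conditions can sharply shorten remaining life.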

Relevance: 30.00%

Publisher:

Abstract:

Background: Heavy vehicle transportation continues to grow internationally; yet crash rates are high, and the risk of injury and death extends to all road users. The work environment of the heavy vehicle driver poses many challenges; conditions such as scheduling and payment are proposed risk factors for crash, yet their precise effect remains to be quantified. Other risk factors, such as sleep disorders including obstructive sleep apnoea, have been shown to increase crash risk in motor vehicle drivers; however, the risk of heavy vehicle crash from this and related health conditions needs detailed investigation. Methods and Design: The proposed case-control study will recruit 1034 long distance heavy vehicle drivers: 517 who have crashed and 517 who have not. All participants will be interviewed at length regarding their driving and crash history, typical workloads, scheduling and payment, trip history over several days, sleep patterns, health, and substance use. All participants will be fitted with a nasal flow monitor for the detection of obstructive sleep apnoea. Discussion: Significant attention has been paid to the enforcement of legislation aiming to deter problems such as excess loading, speeding and substance use; however, there is inconclusive evidence as to the direction and strength of associations of many other postulated risk factors for heavy vehicle crashes. The influence of factors such as remuneration and scheduling on crash risk is unclear, as is the association between sleep apnoea and the risk of heavy vehicle driver crash. Contributory factors such as sleep quality and quantity, body mass and health status will be investigated. Quantifying the effect of these factors on the heavy vehicle driver will inform policy development aimed at safer driving practices and a reduction in heavy vehicle crashes, protecting the lives of many on the road network.
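A case-control design of this kind typically quantifies the association between an exposure (for example, obstructive sleep apnoea) and crash as an odds ratio from a 2×2 table. A sketch using the standard Woolf confidence-interval method, with hypothetical counts rather than study results:

```python
import math

def odds_ratio_ci(a, b, c, d, z=1.96):
    """Odds ratio and 95% CI (Woolf's method) for a 2x2 table:
    a = exposed cases, b = unexposed cases, c = exposed controls, d = unexposed controls."""
    or_ = (a * d) / (b * c)
    se_log = math.sqrt(1 / a + 1 / b + 1 / c + 1 / d)
    lo = math.exp(math.log(or_) - z * se_log)
    hi = math.exp(math.log(or_) + z * se_log)
    return or_, lo, hi

# Hypothetical apnoea-positive vs apnoea-negative counts among 517 cases and 517 controls
or_, lo, hi = odds_ratio_ci(a=120, b=397, c=70, d=447)
print(f"OR = {or_:.2f} (95% CI {lo:.2f} to {hi:.2f})")
```

With these invented counts the odds ratio is about 1.93, i.e. crash-involved drivers would have roughly twice the odds of the exposure; the study's actual estimates would come from its recruited sample.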

Relevance: 30.00%

Publisher:

Abstract:

The present rate of technological advance continues to place significant demands on data storage devices. The sheer amount of digital data being generated each year, along with consumer expectations, fuels these demands. At present, most digital data is stored magnetically, in the form of hard disk drives or on magnetic tape. The increase in areal density (AD) of magnetic hard disk drives over the past 50 years has been of the order of 100 million times, and current devices store data at ADs of the order of hundreds of gigabits per square inch. However, it has been known for some time that progress in this form of data storage is approaching fundamental limits. The main limitation relates to the lower size limit that an individual bit can have for stable storage. Various techniques for overcoming these fundamental limits are currently the focus of considerable research effort. Most attempt to improve current data storage methods, or modify them slightly for higher density storage. Alternatively, three-dimensional optical data storage is a promising field for the information storage needs of the future, offering very high density, high speed memory. There are two ways in which data may be recorded in a three-dimensional optical medium: either bit-by-bit (similar in principle to an optical disc medium such as CD or DVD) or by using pages of bit data. Bit-by-bit techniques for three-dimensional storage offer high density but are inherently slow due to the serial nature of data access. Page-based techniques, where a two-dimensional page of data bits is written in one write operation, can offer significantly higher data rates due to their parallel nature. Holographic Data Storage (HDS) is one such page-oriented optical memory technique. This field of research has been active for several decades, but few commercial products are presently available.
Another page-oriented optical memory technique involves recording pages of data as phase masks in a photorefractive medium. A photorefractive material is one in which the refractive index can be modified by light of the appropriate wavelength and intensity, and this property can be used to store information in these materials. In phase mask storage, two-dimensional pages of data are recorded into a photorefractive crystal as refractive index changes in the medium. A low-intensity readout beam propagating through the medium will have its intensity profile modified by these refractive index changes, and a CCD camera can be used to monitor the readout beam and thus read the stored data. The main aim of this research was to investigate data storage using phase masks in the photorefractive crystal lithium niobate (LiNbO3). Firstly, the experimental methods for storing the two-dimensional pages of data (a set of vertical stripes of varying lengths) in the medium are presented. The laser beam used for writing, whose intensity profile is modified by an amplitude mask which contains a pattern of the information to be stored, illuminates the lithium niobate crystal, and the photorefractive effect causes the patterns to be stored as refractive index changes in the medium. These patterns are read out non-destructively using a low intensity probe beam and a CCD camera. A common complication of information storage in photorefractive crystals is the issue of destructive readout. This is a problem particularly for holographic data storage, where the readout beam should be at the same wavelength as the beam used for writing. Since the charge carriers in the medium are still sensitive to the read light field, the readout beam erases the stored information. A method to avoid this is thermal fixing. Here the photorefractive medium is heated to temperatures above 150 °C; this process forms an ionic grating in the medium.
This ionic grating is insensitive to the readout beam and therefore the information is not erased during readout. A non-contact method for determining temperature change in a lithium niobate crystal is presented in this thesis. The temperature-dependent birefringent properties of the medium cause intensity oscillations to be observed for a beam propagating through the medium during a change in temperature. It is shown that each oscillation corresponds to a particular temperature change, and by counting the number of oscillations observed, the temperature change of the medium can be deduced. The presented technique for measuring temperature change could easily be applied to a situation where thermal fixing of data in a photorefractive medium is required. Furthermore, by using an expanded beam and monitoring the intensity oscillations over a wide region, it is shown that the temperature in various locations of the crystal can be monitored simultaneously. This technique could be used to deduce temperature gradients in the medium. It is shown that the three-dimensional nature of the recording medium causes interesting degradation effects to occur when the patterns are written for a longer-than-optimal time. This degradation results in the splitting of the vertical stripes in the data pattern, and for long writing exposure times this process can result in the complete deterioration of the information in the medium. It is shown that simply by using incoherent illumination, the original pattern can be recovered from the degraded state. The reason for the recovery is that the refractive index changes causing the degradation are of a smaller magnitude, since they are induced by the write field components scattered from the written structures. During incoherent erasure, the lower magnitude refractive index changes are neutralised first, allowing the original pattern to be recovered.
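The oscillation-counting idea can be illustrated in a few lines of code: count mean-level crossings in the sampled transmitted intensity and scale by a per-oscillation calibration constant. The synthetic signal and the 2 K per oscillation constant below are illustrative assumptions, not the thesis's calibration:

```python
import math

def count_oscillations(intensity):
    """Count full oscillations as pairs of sign changes about the mean level."""
    mean = sum(intensity) / len(intensity)
    crossings = sum(
        1 for y0, y1 in zip(intensity, intensity[1:])
        if (y0 - mean) * (y1 - mean) < 0
    )
    return crossings / 2  # two mean-level crossings per full oscillation

def temperature_change(intensity, dT_per_osc):
    """dT_per_osc: calibration constant (K per oscillation) for the crystal/wavelength."""
    return count_oscillations(intensity) * dT_per_osc

# Synthetic transmitted intensity: 5 full oscillations as the temperature ramps
samples = [0.5 + 0.5 * math.cos(2 * math.pi * 5 * t / 1000) for t in range(1001)]
print(temperature_change(samples, dT_per_osc=2.0))  # -> 10.0 (K)
```

Running the same counter on signals from several regions of an expanded beam would give the spatially resolved temperatures mentioned above.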
The degradation process is shown to be reversed during the recovery process, and a simple relationship is found relating the times at which particular features appear during degradation and recovery. A further outcome of this work is that a minimum stripe width of 30 µm is required for accurate storage and recovery of the information in the medium; any smaller size results in incomplete recovery. The degradation and recovery process could be applied to image scrambling or cryptography for optical information storage. A two-dimensional numerical model based on the finite-difference beam propagation method (FD-BPM) is presented and used to gain insight into the pattern storage process. The model shows that the degradation of the patterns is due to the complicated path taken by the write beam as it propagates through the crystal, and in particular the scattering of this beam from the induced refractive index structures in the medium. The model indicates that the highest quality pattern storage would be achieved with a thin 0.5 mm medium; however, this type of medium would also remove the degradation property of the patterns and the subsequent recovery process. To overcome the simplistic treatment of the refractive index change in the FD-BPM model, a fully three-dimensional photorefractive model developed by Devaux is presented. This model provides significant insight into the pattern storage, particularly the degradation and recovery process, and confirms the theory that the recovery of the degraded patterns is possible because the refractive index changes responsible for the degradation are of a smaller magnitude. Finally, detailed analysis of the pattern formation and degradation dynamics for periodic patterns of various periodicities is presented. It is shown that stripe widths in the write beam of greater than 150 µm result in the formation of different types of refractive index changes, compared with stripes of smaller widths.
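The thesis uses a finite-difference BPM; sketched below is a split-step Fourier variant of paraxial beam propagation, chosen because it is compact and illustrates the same physics (a diffraction step in Fourier space plus a refractive-index phase step in real space). All parameters are illustrative, and with dn = 0 the propagation should conserve beam power exactly:

```python
import numpy as np

def bpm_split_step(E0, dn, wavelength, n0, dx, dz, steps):
    """Paraxial split-step beam propagation: apply diffraction in Fourier space,
    then the refractive-index phase k0*dn*dz in real space, for each z step."""
    k0 = 2 * np.pi / wavelength
    kx = 2 * np.pi * np.fft.fftfreq(E0.size, d=dx)
    diffract = np.exp(-1j * kx**2 * dz / (2 * k0 * n0))
    E = E0.astype(complex)
    for _ in range(steps):
        E = np.fft.ifft(np.fft.fft(E) * diffract)
        E *= np.exp(1j * k0 * dn * dz)
    return E

# Gaussian beam through a uniform medium (dn = 0): power should be conserved
x = np.linspace(-500e-6, 500e-6, 1024)   # 1 mm transverse window
E0 = np.exp(-(x / 50e-6) ** 2)           # 50 um beam waist
E = bpm_split_step(E0, dn=np.zeros_like(x), wavelength=532e-9,
                   n0=2.2, dx=x[1] - x[0], dz=10e-6, steps=100)
print(np.allclose(np.sum(np.abs(E)**2), np.sum(np.abs(E0)**2)))  # True
```

In a photorefractive simulation, dn would itself be updated from the local intensity each step, which is what produces the beam-scattering and pattern-degradation effects described above.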
As a result, it is shown that the pattern storage method discussed in this thesis has an upper feature size limit of 150 µm for accurate and reliable pattern storage.

Relevance: 30.00%

Publisher:

Abstract:

The main objective of this paper is to detail the development of a feasible hardware design based on Evolutionary Algorithms (EAs) to determine flight path planning for Unmanned Aerial Vehicles (UAVs) navigating terrain with obstacle boundaries. The design architecture includes hardware implementations of the Light Detection And Ranging (LiDAR) terrain and EA population memories, as well as the EA search and evaluation algorithms used in the optimizing stage of path planning. A synthesisable Very-high-speed integrated circuit Hardware Description Language (VHDL) implementation of the design was developed for realisation on a Field Programmable Gate Array (FPGA) platform. Simulation results show significant speedup compared with an equivalent software implementation written in C++, suggesting that the present approach is well suited to UAV real-time path planning applications.
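Stripped of the hardware specifics, the EA loop being accelerated (evaluate path fitness over a terrain grid with obstacles, keep the fittest, mutate) can be sketched in software. The grid size, obstacle layout and operators below are illustrative assumptions, not the thesis design:

```python
import random

random.seed(1)
GRID = 16
OBSTACLES = {(x, 7) for x in range(3, 13)}   # a wall across the middle, gaps at both edges
START, GOAL = (0, 0), (15, 15)

def cells(p0, p1):
    """Grid cells visited moving first along x, then along y (L-shaped segment)."""
    (x0, y0), (x1, y1) = p0, p1
    sx = 1 if x1 >= x0 else -1
    sy = 1 if y1 >= y0 else -1
    return [(x, y0) for x in range(x0, x1 + sx, sx)] + \
           [(x1, y) for y in range(y0, y1 + sy, sy)]

def fitness(path):
    """Lower is better: Manhattan path length plus a heavy penalty per obstacle cell crossed."""
    pts = [START] + path + [GOAL]
    length = sum(abs(x1 - x0) + abs(y1 - y0) for (x0, y0), (x1, y1) in zip(pts, pts[1:]))
    hits = sum(1 for a, b in zip(pts, pts[1:]) for c in cells(a, b) if c in OBSTACLES)
    return length + 1000 * hits

def random_path(n=4):
    return [(random.randrange(GRID), random.randrange(GRID)) for _ in range(n)]

def mutate(path):
    p = list(path)
    p[random.randrange(len(p))] = (random.randrange(GRID), random.randrange(GRID))
    return p

pop = [random_path() for _ in range(60)]
for _ in range(200):                 # truncation selection with elitism, mutation only
    pop.sort(key=fitness)
    pop = pop[:20] + [mutate(random.choice(pop[:20])) for _ in range(40)]

best = min(pop, key=fitness)
print(best, fitness(best))           # the Manhattan lower bound for this grid is 30
```

The FPGA design's advantage is that the fitness evaluations, the dominant cost of this loop, can be performed in parallel against the on-chip LiDAR terrain memory.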

Relevance: 30.00%

Publisher:

Abstract:

Real-time kinematic (RTK) GPS techniques have been extensively developed for applications including surveying, structural monitoring, and machine automation. Limitations of the existing RTK techniques that hinder their application for geodynamics purposes are twofold: (1) the achievable RTK accuracy is on the level of a few centimeters, and the uncertainty of the vertical component is 1.5-2 times worse than that of the horizontal components; and (2) the RTK position uncertainty grows in proportion to the base-to-rover distance. The key limiting factor behind these problems is the significant effect of residual tropospheric errors on the positioning solutions, especially on the highly correlated height component. This paper develops a geometry-specified troposphere decorrelation strategy to achieve subcentimeter kinematic positioning accuracy in all three components. The key is to set up a relative zenith tropospheric delay (RZTD) parameter to absorb the residual tropospheric effects, and to solve the established model as an ill-posed problem using the regularization method. In order to compute a reasonable regularization parameter to obtain an optimal regularized solution, the covariance matrix of positional parameters estimated without the RZTD parameter, which is characterized by observation geometry, is used to replace the quadratic matrix of their "true" values. As a result, the regularization parameter is adaptively computed with variation of the observation geometry. The experimental results show that the new method can efficiently alleviate the model's ill condition and stabilize the solution from a single data epoch. Compared to the results from the conventional least squares method, the new method can improve the long-range RTK solution precision from several centimeters to the subcentimeter level in all components. More significantly, the precision of the height component is even higher.
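The regularization step described above is, in essence, Tikhonov-regularized least squares. A sketch with a deliberately ill-conditioned toy model in which a hypothetical RZTD column is nearly collinear with the height column (the design matrix and noise levels are illustrative, not GPS observation equations):

```python
import numpy as np

def regularized_lsq(A, b, lam):
    """Tikhonov-regularized least squares: minimise ||Ax - b||^2 + lam * ||x||^2."""
    n = A.shape[1]
    return np.linalg.solve(A.T @ A + lam * np.eye(n), A.T @ b)

# Hypothetical single-epoch model: three position components plus one RZTD
# parameter, with the RZTD column nearly collinear with the height column,
# mimicking the height/troposphere correlation described above.
rng = np.random.default_rng(0)
A = rng.standard_normal((8, 4))
A[:, 3] = A[:, 2] + 1e-3 * rng.standard_normal(8)    # RZTD ~ height: ill-posed
x_true = np.array([0.01, -0.02, 0.05, 0.03])
b = A @ x_true + 1e-4 * rng.standard_normal(8)        # noisy observations

x_plain = np.linalg.solve(A.T @ A, A.T @ b)           # unregularized normal equations
x_reg = regularized_lsq(A, b, lam=1e-4)
print("condition number:", f"{np.linalg.cond(A.T @ A):.1e}")
print("plain:      ", np.round(x_plain, 3))
print("regularized:", np.round(x_reg, 3))
```

The paper's contribution is in how lam is chosen: adaptively, from the covariance matrix of the position parameters estimated without the RZTD term, rather than as a fixed constant like the one used here.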
Several geosciences applications that require subcentimeter real‐time solutions can largely benefit from the proposed approach, such as monitoring of earthquakes and large dams in real‐time, high‐precision GPS leveling and refinement of the vertical datum. In addition, the high‐resolution RZTD solutions can contribute to effective recovery of tropospheric slant path delays in order to establish a 4‐D troposphere tomography.

Relevance: 30.00%

Publisher:

Abstract:

Introduction: Why we need to base children's sport and physical education on the principles of dynamical systems theory and ecological psychology. As the childhood years are crucial for developing many physical skills as well as establishing the groundwork leading to lifelong participation in sport and physical activities (Orlick & Botterill, 1977, p. 11), it is essential to examine current practice to make sure it is meeting the needs of children. In recent papers (e.g. Renshaw, Davids, Chow & Shuttleworth, in press; Renshaw, Davids, Chow & Hammond, in review; Chow et al., 2009) we have highlighted that a guiding theoretical framework is needed to provide a principled approach to teaching and coaching, and that the approach must be evidence-based and focused on mechanism, not just on operational issues such as practice, competition and programme management (Lyle, 2002). There is a need to demonstrate how nonlinear pedagogy underpins teaching and coaching practice for children, given that some of the current approaches underpinning children's sport and P.E. may not be leading to optimal results. For example, little time is spent undertaking physical activities (Tinning, 2006), and much of this practice is not representative of the competition demands of the performance environment (Kirk & McPhail, 2002; Renshaw et al., 2008). Proponents of a non-linear pedagogy advocate the design of practice by applying key concepts such as the mutuality of the performer and environment, the tight coupling of perception and action, and the emergence of movement solutions due to self-organisation under constraints (see Renshaw et al., in press). As skills are shaped by the unique interacting individual, task and environmental constraints in these learning environments, small changes to individual structural (e.g.
factors such as height or limb length) or functional constraints (e.g. factors such as motivation, perceptual skills, or strength that can be acquired), task rules, equipment, or environmental constraints can lead to dramatic changes in the movement patterns adopted by learners to solve performance problems. The aim of this chapter is to provide real life examples for teachers and coaches who wish to adopt the ideas of non-linear pedagogy in their practice. Specifically, I will provide examples of issues related to individual constraints in children, and in particular the unique challenges facing coaches when individual constraints are changing due to growth and development. Part two focuses on understanding how cultural environmental constraints impact on children's sport. This is an area that has received very little attention but plays a very important part in the long-term development of sporting expertise. Finally, I will look at how coaches can manipulate task constraints to create effective learning environments for young children.

Relevance: 30.00%

Publisher:

Abstract:

Humankind has been dealing with all kinds of disasters since the dawn of time. The risk and impact of disasters producing mass casualties worldwide is increasing, due partly to global warming as well as to increased population growth, increased density and the aging population. China, as a country with a large population, vast territory, and complex climatic and geographical conditions, has been plagued by all kinds of disasters. Disaster health management has traditionally been a relatively arcane discipline within public health. However, SARS, Avian Influenza, earthquakes and floods, along with the need to be better prepared for the Olympic Games in China, have brought disasters, their management and their potential for large scale health consequences on populations to the attention of the public, the government and the international community alike. As a result, significant improvements were made to the disaster management policy framework, as well as changes to systems and structures to incorporate an improved disaster management focus. This involved the upgrade of the Centres for Disease Control and Prevention (CDC) throughout China to monitor and better control the health consequences, particularly of infectious disease outbreaks. However, as can be seen in the Southern China Snow Storm and Wenchuan Earthquake in 2008, there remains a lack of integrated disaster management and efficient medical rescue, which has been costly in terms of economics and health for China. In the context of a very large and complex country, there is a need to better understand whether these changes have resulted in effective management of the health impacts of such incidents. To date, the health consequences of disasters, particularly in China, have not been a major focus of study. The main aim of this study is to analyse and evaluate disaster health management policy in China and, in particular, its ability to effectively manage the health consequences of disasters.
Flooding has been selected for this study as it is a common and significant disaster type in China and throughout the world. This information will then be used to guide conceptual understanding of the health consequences of floods. A secondary aim of the study is to compare disaster health management in China and Australia, as these countries differ in their length of experience in having a formalised policy response. The final aim of the study is to determine the extent to which Walt and Gilson's (1994) model of policy explains how disaster management policy in China was developed and implemented from SARS in 2003 to the present day. This study has utilised a case study methodology. A document analysis and literature search of Chinese and English sources was undertaken to analyse and produce a chronology of disaster health management policy in China. Additionally, three detailed case studies of flood health management in China were undertaken, along with three case studies in Australia, in order to examine the policy response and any health consequences stemming from the floods. A total of 30 key international disaster health management experts were surveyed to identify fundamental elements and principles of a successful policy framework for disaster health management. Key policy ingredients were identified from the literature, the case studies and the survey of experts. Walt and Gilson's (1994) policy model, which focuses on the actors, content, context and process of policy, was found to be a useful model for analysing disaster health management policy development and implementation in China. This thesis is divided into four parts. Part 1 is a brief overview of the issues and context to set the scene. Part 2 examines the conceptual and operational context, including the international literature, government documents and the operational environment for disaster health management in China. Part 3 examines primary sources of information to inform the analysis.
This involves two key studies:

• A comparative analysis of the management of floods in China and Australia
• A survey of international experts in the field of disaster management, to inform the evaluation of the policy framework in existence in China and the criteria against which the expression of that policy could be evaluated

Part 4 describes the key outcomes of this research, which include:

• A conceptual framework for describing the health consequences of floods
• A conceptual framework for disaster health management
• An evaluation of the disaster health management policy and its implementation in China

The research outcomes clearly identified that the most significant improvements are to be derived from improvements in the generic management of disasters, rather than the health aspects alone. Thus, the key findings and recommendations tend to focus on generic issues. The key findings of this research include the following:

• The health consequences of floods may be described in terms of time as 'immediate', 'medium term' and 'long term', and in relation to causation as 'direct' and 'indirect' consequences of the flood. These two aspects form a matrix which in turn guides management responses.
• Disaster health management in China requires a more comprehensive response throughout the cycle of prevention, preparedness, response and recovery, but it also requires a more concentrated effort on policy implementation to ensure the translation of the policy framework into effective incident management.
• The policy framework in China is largely of international standard with a sound legislative base. In addition, the development of the Centres for Disease Control and Prevention has provided the basis for a systematic approach to health consequence management. However, the key weaknesses in the current system include:
  o The lack of a key central structure to provide the infrastructure with vital support for policy development, implementation and evaluation.
  o The lack of well-prepared local response teams similar to local government based volunteer groups in Australia.
• The system lacks structures to coordinate government action at the local level. The result is a poorly coordinated local response and a lack of clarity regarding the point at which escalation of the response to higher levels of government is advisable. These shortcomings result in higher levels of risk and negative health impacts.

The key recommendations arising from this study are:

1. Disaster health management policy in China should be enhanced by incorporating disaster management considerations into policy development, and by requiring a disaster management risk analysis and disaster management impact statement for development proposals.
2. China should transform existing organizations to establish a central organisation similar to the Federal Emergency Management Agency (FEMA) in the USA or Emergency Management Australia (EMA). This organisation would be responsible for leading nationwide preparedness through planning, standards development, education and incident evaluation, and for providing operational support to national and local government bodies in the event of a major incident.
3. China should review national and local plans to reflect consistency in planning, and to emphasise the advantages of the integrated planning process.
4. Enhance community resilience through community education and the development of a local volunteer organisation. China should develop a national strategy which sets direction and standards for education and training, and requires system testing through exercises. Other initiatives may include the development of a local volunteer capability, with appropriate training, to assist professional response agencies such as police and fire services in a major incident. An existing organisation such as the Communist Party may be an appropriate structure to provide this response in a cost-effective manner.
5. Continue development of professional emergency services, particularly ambulance, to ensure an effective infrastructure is in place to support the emergency response in disasters.
6. Funding for disaster health management should be enhanced, not only from government but also from other sources such as donations and insurance. A more transparent mechanism is needed to ensure funding is disseminated according to the needs of the people affected.
7. Emphasis should be placed on prevention and preparedness, especially on effective disaster warnings.
8. China should develop local disaster health management infrastructure utilising existing resources wherever possible. Strategies for enhancing local infrastructure could include the identification of local resources (including military resources) which could be made available to support disaster responses, along with operational procedures to access those resources.

Implementation of these recommendations should better position China to reduce the significant health consequences experienced each year from major incidents such as floods, and to provide an increased level of confidence to the community about the country's capacity to manage such events.

Relevância:

30.00%

Publicador:

Resumo:

This article examines the current transfer pricing regime to consider whether it is a sound model to be applied to modern multinational entities. The arm's length price methodology is examined to enable a discussion of the arguments in favour of such a regime. The article then refutes these arguments, concluding that, contrary to the very reason multinational entities exist, applying arm's length rules involves a legal fiction of imagining transactions between unrelated parties. Multinational entities exist to operate in ways that independent entities would not, which the arm's length rules fail to take into account. As such, there is clearly an air of artificiality in applying the arm's length standard. To demonstrate this artificiality with respect to modern multinational entities, multinational banks are used as an example. The article concludes that the separate-entity paradigm adopted by the traditional transfer pricing regime is incongruous with the economic theory of modern multinational enterprises.

Relevância:

30.00%

Publicador:

Resumo:

Total hip arthroplasty (THA) has a proven clinical record for providing pain relief and return of function to patients with disabling arthritis. There are many successful options for femoral implant design and fixation. Cemented, polished, tapered femoral implants have been shown to have excellent results in national joint registries and long-term clinical series. These implants are usually 150mm long at their lateral aspect. Because of this length, these implants cannot be offered to all patients, owing to variations in femoral anatomy. Polished, tapered implants as short as 95mm exist; however, their small proximal geometry (neck offset and body size) limits their use to smaller-stature patients. There is a group of patients for whom a shorter implant with a maintained proximal body size would be advantageous. There are also potential benefits to a shorter implant in standard patient populations, such as reduced bone removal due to reduced reaming, favourable loading of the proximal femur, and the ability to revise into good proximal bone stock if required. These factors potentially make a shorter implant an option for all patient populations. The role of implant length in determining the stability of a cemented, polished, tapered femoral implant is not well defined in the literature. Before changes in implant design can be made, a better understanding of the role of each region in determining performance is required. The aim of the thesis was to describe how implant length affects the stability of a cemented, polished, tapered femoral implant. This has been determined through an extensive body of laboratory testing. The major findings are that, for a given proximal body size, a reduction in implant length has no effect on the torsional stability of a polished, tapered design, while a small reduction in axial stability should be expected.
These findings are important because the literature suggests that torsional stability is the major determinant of long-term clinical performance of a THA system. Furthermore, a polished, tapered design is known to be forgiving of cement-implant interface micromotion due to its favourable wear characteristics. Together these findings suggest that a shorter polished, tapered implant may be well tolerated. The effect of a change in implant length on the geometric characteristics of a polished, tapered design was also determined and applied to the mechanical testing. Importantly, interface area does play a role in the stability of the system; however, it is the distribution of the interface, and not the magnitude of the area, that defines stability. Taper angle (at least in the range of angles seen in this work) was shown not to be a determinant of axial or torsional stability. A range of implants were tested, comparing variations in length, neck offset and indication (primary versus cement-in-cement revision). At their manufactured length, the 125mm implants were similar to their longer 150mm counterparts, suggesting that they may be similarly well tolerated in the clinical environment. However, the slimmer cement-in-cement revision implant was shown to have a poorer mechanical performance, suggesting that its use in higher-demand patients may be hazardous. An implant length of 125mm has been shown to be quite stable, and the results suggest that a further reduction to 100mm may be tolerated. However, further work is required. A shorter implant with maintained proximal body size would be useful for the group of patients who are unable to access the current standard-length implants because of variations in femoral anatomy. Extending the findings further, the similar function and potential benefits of a shorter implant make its application to all patients appealing.

Relevância:

30.00%

Publicador:

Resumo:

3D models of long bones are utilised in a number of fields, including orthopaedic implant design. Accurate reconstruction of 3D models is of utmost importance for designing accurate implants that allow a good alignment between two bone fragments to be achieved. For this purpose, CT scanners are employed to acquire accurate bone data, exposing the individual to a high dose of ionising radiation. Magnetic resonance imaging (MRI) has been shown to be a potential alternative to computed tomography (CT) for scanning volunteers for 3D reconstruction of long bones, essentially avoiding the high radiation dose of CT. In MR imaging of long bones, artefacts due to random movements of the skeletal system create challenges for researchers, as they generate inaccuracies in 3D models reconstructed from data sets containing such artefacts. One defect observed during an initial study is a lateral shift artefact in the reconstructed 3D models. This artefact is believed to result from the volunteer moving the leg between two successive scanning stages (the lower limb has to be scanned in at least five stages because of the limited scanning length of the scanner). As this artefact creates inaccuracies in implants designed using these models, it needs to be corrected before the 3D models are applied to implant design. Therefore, this study aimed to correct the lateral shift artefact using 3D modelling techniques. The femora of five ovine hind limbs were scanned with a 3T MRI scanner using a 3D VIBE-based protocol. The scanning was conducted in two halves, while maintaining a good overlap between them. A lateral shift was generated by moving the limb several millimetres between the two scanning stages. The 3D models were reconstructed using a multi-threshold segmentation method.
The correction of the artefact was achieved by aligning the two halves using the robust iterative closest point (ICP) algorithm, with the help of the overlapping region between the two. The models with the corrected artefact were compared with the reference model generated by CT scanning of the same sample. The results indicate that the correction of the artefact was achieved with an average deviation of 0.32 ± 0.02 mm between the corrected model and the reference model. In comparison, the model obtained from a single MRI scan generated an average error of 0.25 ± 0.02 mm when compared with the reference model. An average deviation of 0.34 ± 0.04 mm was seen when the models generated after the table was moved were compared to the reference models; thus, the movement of the table is also a contributing factor to the motion artefacts.
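The alignment step described above lends itself to a compact illustration. Below is a minimal point-to-point ICP in Python/NumPy; the study used a robust ICP variant on the overlapping region, whereas this simplified sketch assumes noise-free, fully overlapping point sets, and all names and numbers are illustrative rather than taken from the study:

```python
import numpy as np

def best_rigid_transform(src, dst):
    """Least-squares rotation R and translation t mapping src onto dst (Kabsch/SVD)."""
    c_src, c_dst = src.mean(axis=0), dst.mean(axis=0)
    H = (src - c_src).T @ (dst - c_dst)
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:           # guard against reflections
        Vt[-1] *= -1.0
        R = Vt.T @ U.T
    return R, c_dst - R @ c_src

def icp(moving, fixed, iters=50, tol=1e-10):
    """Brute-force nearest-neighbour ICP; returns `moving` aligned onto `fixed`."""
    cur, prev_err = moving.copy(), np.inf
    for _ in range(iters):
        # nearest fixed point for every moving point (O(n^2); fine for a sketch)
        d2 = ((cur[:, None, :] - fixed[None, :, :]) ** 2).sum(-1)
        nn = fixed[d2.argmin(axis=1)]
        R, t = best_rigid_transform(cur, nn)
        cur = cur @ R.T + t
        err = np.sqrt(d2.min(axis=1)).mean()
        if abs(prev_err - err) < tol:
            break
        prev_err = err
    return cur

# Simulate a small lateral shift between two scanning stages and correct it.
rng = np.random.default_rng(0)
fixed = rng.random((200, 3)) * 50.0                   # overlap region, stage 1 (mm)
theta = np.deg2rad(1.0)                               # small accidental rotation
Rz = np.array([[np.cos(theta), -np.sin(theta), 0.0],
               [np.sin(theta),  np.cos(theta), 0.0],
               [0.0,            0.0,           1.0]])
moving = fixed @ Rz.T + np.array([2.0, 0.5, 0.0])     # stage 2, shifted a few mm
aligned = icp(moving, fixed)
```

For real femoral surface models, a k-d tree (e.g. `scipy.spatial.cKDTree`) would replace the brute-force nearest-neighbour search, and an outlier-rejecting correspondence step would make the alignment robust to partial overlap.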

Relevância:

30.00%

Publicador:

Resumo:

The average structure (C1̄) of a volcanic plagioclase megacryst with composition Ano, from the Hogarth Ranges, Australia, has been determined using three-dimensional, single-crystal neutron and X-ray diffraction data. Least-squares refinements, incorporating anisotropic thermal motion of all atoms and an extinction correction, resulted in weighted R factors (based on intensities) of 0.076 and 0.056, respectively, for the neutron and X-ray data. Very weak e reflections could be detected in long-exposure X-ray and electron diffraction photographs of this crystal, but the refined average structure is believed to be unaffected by the presence of such a weak superstructure. The ratio of the scattering power of Na to that of Ca is different for X-ray and neutron radiation, and this radiation-dependence of scattering power has been used to determine the distribution of Na and Ca over a split-atom M site (two sites designated M′ and M″) in this Ano, plagioclase. Relative peak-height ratios M′/M″, revealed in difference Fourier sections calculated from neutron and X-ray data, formed the basis for the cation-distribution analysis. As neutron and X-ray data sets were directly compared in this analysis, it was important that systematic bias between refined neutron and X-ray positional parameters could be demonstrated to be absent. In summary, with an M-site model constrained only by the electron-microprobe-determined bulk composition of the crystal, the following values were obtained for the M-site occupancies: Na(M′) = 0.29(7), Na(M″) = 0.23(7), Ca(M′) = 0.15(4), and Ca(M″) = 0.33(4). These results indicate that restrictive assumptions about M sites, on which previous plagioclase refinements have been based, are not applicable to this Ano, and possibly not to the entire compositional range. T-site ordering determined by ⟨T–O⟩ bond-length variation (t₁o = 0.51(1), t₁m = t₂o = t₂m = 0.32(1)) is weak, as might be expected from the volcanic origin of this megacryst.
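The two-radiation occupancy analysis reduces to a small linear system: two bulk-composition constraints plus one peak-height-ratio equation per radiation. The sketch below uses illustrative scattering powers (coherent neutron scattering lengths b(Na) = 3.63 fm, b(Ca) = 4.70 fm; X-ray factors approximated by ionic electron counts), not the values actually used in the refinement, and models each sub-site peak height as the occupancy-weighted sum of scattering powers:

```python
import numpy as np

# Illustrative scattering powers (assumptions, not taken from the study):
B_NA, B_CA = 3.63, 4.70     # neutron scattering lengths b(Na), b(Ca) in fm
F_NA, F_CA = 10.0, 18.0     # X-ray factors approximated as f(Na+), f(Ca2+)

def m_site_occupancies(x_na, x_ca, r_neutron, r_xray):
    """Solve for (Na', Na'', Ca', Ca'') given the bulk composition and the
    M'/M'' peak-height ratios observed with the two radiations.

    Each measured ratio R = (height at M') / (height at M'') cross-multiplies
    into one linear equation in the four occupancies."""
    # Unknown vector: [Na', Na'', Ca', Ca'']
    A = np.array([
        [1.0,  1.0,               0.0,  0.0              ],  # Na' + Na'' = x_na
        [0.0,  0.0,               1.0,  1.0              ],  # Ca' + Ca'' = x_ca
        [B_NA, -r_neutron * B_NA, B_CA, -r_neutron * B_CA],  # neutron M'/M''
        [F_NA, -r_xray * F_NA,    F_CA, -r_xray * F_CA   ],  # X-ray  M'/M''
    ])
    b = np.array([x_na, x_ca, 0.0, 0.0])
    return np.linalg.solve(A, b)
```

The system is solvable precisely because the Na/Ca scattering-power ratio differs between the two radiations; with a single radiation the third and fourth rows would be proportional and the split-site occupancies indeterminate.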

Relevância:

30.00%

Publicador:

Resumo:

Ocean gliders constitute an important advance in the highly demanding ocean-monitoring scenario. Their efficiency, endurance and increasing robustness make these vehicles an ideal observing platform for many long-term oceanographic applications. However, they have also proved useful in the opportunistic short-term characterization of dynamic structures. Among these, mesoscale eddies are of particular interest due to the relevance they have in many oceanographic processes.

Relevância:

30.00%

Publicador:

Resumo:

Wing length is a key character for essential flight-related behaviours in birds, such as migration and foraging. In the present study, we initiate the search for the genes underlying wing length in birds by studying a long-distance migrant, the great reed warbler (Acrocephalus arundinaceus). In this species wing length is an evolutionarily interesting trait, with a pronounced latitudinal gradient and sex-specific selection regimes in local populations. We performed a quantitative trait locus (QTL) scan for wing length in great reed warblers using phenotypic, genotypic, pedigree and linkage-map data from our long-term study population in Sweden. We applied the linkage-analysis mapping method implemented in GridQTL (a new web-based software package) and detected a genome-wide significant QTL for wing length on chromosome 2; to our knowledge, this is the first QTL detected in wild birds. The QTL extended over 25 cM and accounted for a substantial part (37%) of the phenotypic variance of the trait. A genome scan for tarsus length (a body-size-related trait) did not show any signal, implying that the wing-length QTL on chromosome 2 was not associated with body size. Our results provide a first important step towards understanding the genetic architecture of avian wing length, and give opportunities to study the evolutionary dynamics of wing length at the locus level. © 2010 The Royal Society.
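The "proportion of phenotypic variance explained" figure can be illustrated with a toy scan. GridQTL performs pedigree-based variance-component linkage analysis; the sketch below substitutes a much simpler single-marker regression on simulated data (marker count, sample size and effect size are all illustrative) purely to show how a per-locus variance-explained value is computed:

```python
import numpy as np

def marker_scan(phenotype, genotypes):
    """Single-marker regression scan: regress the phenotype on allele dosage
    (0/1/2) at each marker and return the proportion of phenotypic variance
    explained (r^2) per marker. A deliberately simplified stand-in for the
    pedigree-based linkage analysis implemented in GridQTL."""
    y = phenotype - phenotype.mean()
    r2 = np.empty(genotypes.shape[1])
    for j in range(genotypes.shape[1]):
        g = genotypes[:, j] - genotypes[:, j].mean()
        denom = (g ** 2).sum() * (y ** 2).sum()
        r2[j] = 0.0 if denom == 0 else (g @ y) ** 2 / denom
    return r2

# Simulate 500 birds and 10 unlinked markers; marker 3 is causal for wing
# length and is tuned to explain roughly 37% of the variance, echoing the
# reported QTL effect size.
rng = np.random.default_rng(1)
geno = rng.binomial(2, 0.5, size=(500, 10)).astype(float)
wing = geno[:, 3] + rng.normal(0.0, 0.92, size=500)   # var(g)=0.5, noise var≈0.85
r2 = marker_scan(wing, geno)
```

In a real pedigree-based analysis, identity-by-descent probabilities at each map position replace the raw dosages, which is what allows a QTL to be localised between markers along the 25 cM interval.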