366 results for DEGREE OF CONVERSION
Abstract:
The concept of radar was developed for the estimation of the distance (range) and velocity of a target from a receiver. The distance measurement is obtained by measuring the time taken for the transmitted signal to propagate to the target and return to the receiver. The target's velocity is determined by measuring the Doppler-induced frequency shift of the returned signal, caused by the rate of change of the time-delay from the target. As researchers further developed conventional radar systems it became apparent that additional information was contained in the backscattered signal and that this information could in fact be used to describe the shape of the target itself. This is because a target can be considered to be a collection of individual point scatterers, each of which has its own velocity and time-delay. Delay-Doppler parameter estimation of each of these point scatterers thus corresponds to a mapping of the target's range and cross-range, producing an image of the target. Much research has been done in this area since the early radar imaging work of the 1960s. At present there are two main categories into which radar imaging falls. The first relates to the case where the backscattered signal is considered to be deterministic; the second to the case where the backscattered signal is of a stochastic nature. In both cases the information which describes the target's scattering function is extracted by the use of the ambiguity function, a function which correlates the backscattered signal in time and frequency with the transmitted signal. In practical situations, it is often necessary to have the transmitter and the receiver of the radar system sited at different locations. The problem in these situations is that a reference signal must then be present in order to calculate the ambiguity function.
This causes an additional problem in that detailed phase information about the transmitted signal is then required at the receiver. It is this latter problem which has led to the investigation of radar imaging using time-frequency distributions. As will be shown in this thesis, the phase information about the transmitted signal can be extracted from the backscattered signal using time-frequency distributions. The principal aim of this thesis was the development and discussion of the theory of radar imaging using time-frequency distributions. Consideration is first given to the case where the target is diffuse, i.e. where the backscattered signal has temporal stationarity and a spatially white power spectral density. The complementary situation is also investigated, i.e. where the target is no longer diffuse, but some degree of correlation exists between the time-frequency points. Computer simulations are presented to demonstrate the concepts and theories developed in the thesis. For the proposed radar system to be practically realisable, both the time-frequency distributions and the associated algorithms developed must be able to be implemented in a timely manner. For this reason an optical architecture is proposed. This architecture is specifically designed to obtain the required time and frequency resolution when using laser radar imaging. The complex light amplitude distributions produced by this architecture have been computer simulated using an optical compiler.
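The ambiguity-function processing described in this abstract can be sketched numerically. The following is a hedged illustration, not code from the thesis: it correlates a received signal against time- and frequency-shifted copies of a transmitted linear chirp and locates a point scatterer's delay-Doppler cell at the correlation peak. All signal parameters are assumed for the example.

```python
import numpy as np

def ambiguity(tx, rx, fs, delays, dopplers):
    """Cross-ambiguity magnitude |A(tau, fd)| of rx against tx.
    tx, rx: complex baseband signals sampled at fs (Hz).
    delays: candidate delays in samples; dopplers: Doppler shifts in Hz."""
    n = np.arange(len(tx))
    A = np.zeros((len(delays), len(dopplers)))
    for i, d in enumerate(delays):
        shifted = np.roll(rx, -d)                        # undo the candidate time-delay
        for j, fd in enumerate(dopplers):
            ref = tx * np.exp(2j * np.pi * fd * n / fs)  # frequency-shifted reference
            A[i, j] = abs(np.vdot(ref, shifted))         # correlate in time and frequency
    return A

# Toy example: a chirp returned from a single point target with known delay/Doppler.
fs, N = 1e4, 1024
t = np.arange(N) / fs
tx = np.exp(1j * np.pi * 4e4 * t**2)       # linear FM chirp (assumed transmit waveform)
true_delay, true_fd = 40, 500.0            # samples, Hz
rx = np.roll(tx, true_delay) * np.exp(2j * np.pi * true_fd * np.arange(N) / fs)

delays = np.arange(0, 80, 4)
dopplers = np.arange(0, 1000, 50)
A = ambiguity(tx, rx, fs, delays, dopplers)
i, j = np.unravel_index(np.argmax(A), A.shape)
print(delays[i], dopplers[j])              # peak locates the scatterer's delay/Doppler cell
```

The peak of the ambiguity surface recovers the scatterer's delay-Doppler parameters; a real target, being a collection of such scatterers, would produce one peak per scattering centre, which is the mapping to range and cross-range described above.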
Abstract:
The studies in the thesis were derived from a program of research focused on centre-based child care in Australia. The studies constituted an ecological analysis, as they examined proximal and distal factors which have the potential to affect children's developmental opportunities (Bronfenbrenner, 1979). The project was conducted in thirty-two child care centres located in south-east Queensland. Participants in the research included staff members at the centres, families using the centres and their children. The first study described the personal and professional characteristics of one hundred and forty-four child care workers, as well as their job satisfaction and job commitment. Factors impinging on the stability of care afforded to children were examined, specifically child care workers' intentions to leave their current position and actual staff turnover at a twelve-month follow-up. This was an exosystem analysis (Bronfenbrenner & Crouter, 1983), as it examined the world of work for carers: a setting not directly involving the developing child, but one which has implications for children's experiences. Staff job satisfaction centred on working with children and other adults, including parents and colleagues. Involvement with children was reported as the most rewarding aspect of the work. This intrinsic satisfaction was enough to sustain caregivers' efforts to maintain their employment in child care programs. It was found that, while improving working conditions may help to reduce turnover, it is likely that moderate turnover rates will remain, as child care staff work in relatively small centres and leave in order to improve career prospects. Departure from a child care job appeared to be as much about improving career opportunities or changing personal circumstances as it was about poor wages and working conditions. In the second study, factors that influence maternal satisfaction with child care arrangements were examined.
The focus included examination of the nature and qualities of parental interaction with staff. This was a mesosystem analysis (Bronfenbrenner & Crouter, 1983), as it considered the links between family and child care settings. Two hundred and twenty-two questionnaires were returned from mothers whose children were enrolled in the participating centres. It was found that maternal satisfaction with child care encompassed the domains of child-centred and parent-centred satisfaction. The nature and range of responses in the quantitative and qualitative data indicated that these parents were genuinely satisfied with their children's care. In the prediction of maternal satisfaction with child care, single parents, mothers with high role satisfaction, and mothers who were satisfied with the frequency of staff contact and degree of supportive communication had higher levels of satisfaction with their child care arrangements. The third study described the structural and process variations within child care programs and examined program differences for compliance with regulations and differences by profit status of the centre, as a microsystem analysis (Bronfenbrenner, 1979). Observations were made in eighty-three programs which served children from two to five years. The results of the study affirmed beliefs that nonprofit centres are superior in the quality of care provided, although this was not to a level which meant that the care in for-profit centres was inadequate. Regulation of structural features of child care programs, per se, did not guarantee higher quality child care as measured by global or process indicators. The final study represented an integration of a range of influences in child care and family settings which may impact on development. Features of child care programs which predict children's social and cognitive development, while taking into account child and family characteristics, were identified. 
Results were consistent with other research findings which show that child and family characteristics and child care quality predict children's development. Child care quality was more important to the prediction of social development, while family factors appeared to be more predictive of cognitive/language development. An influential variable predictive of development was the period of time which the child had been in the centre. This highlighted the importance of the stability of child care arrangements. Child care quality features which had most influence were global ratings of the qualities of the program environment. However, results need to be interpreted cautiously as the explained variance in the predictive models developed was low. The results of these studies are discussed in terms of the implications for practice and future research. Considerations for an expanded view of ecological approaches to child care research are outlined. Issues discussed include the need to generate child care research which is relevant to social policy development, the implications of market driven policies for child care services, professionalism and professionalisation of child care work, and the need to reconceptualise child care research when the goal is to develop greater theoretical understanding about child care environments and developmental processes.
Abstract:
Computer-aided technologies, medical imaging, and rapid prototyping have created new possibilities in biomedical engineering. The systematic variation of scaffold architecture, as well as the mineralization inside a scaffold/bone construct, can be studied using computer imaging technology, CAD/CAM and micro-computed tomography (micro-CT). In this paper, the potential of combining these technologies has been exploited in the study of scaffolds and osteochondral repair. Porosity, surface area per unit volume and the degree of interconnectivity were evaluated through imaging and computer-aided manipulation of the scaffold scan data. For the osteochondral model, the spatial distribution and the degree of bone regeneration were evaluated. The study also assessed the versatility of two software packages: Mimics (Materialise), and CTan with 3D realistic visualization (Skyscan).
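As a loose illustration of the kind of computer-aided manipulation of scan data mentioned above (not the Mimics/CTan workflow itself), porosity and surface area per unit volume can be estimated from a binarised voxel array. The scaffold geometry here is a made-up toy example.

```python
import numpy as np

def porosity(voxels):
    """Fraction of void voxels in a binary array (1 = material, 0 = pore)."""
    return 1.0 - voxels.mean()

def surface_area_per_volume(voxels, voxel_size=1.0):
    """Crude surface estimate: count exposed voxel faces along each axis."""
    faces = 0
    for axis in range(voxels.ndim):
        # A face is exposed wherever material and pore voxels are adjacent.
        diff = np.abs(np.diff(voxels.astype(np.int8), axis=axis))
        faces += diff.sum()
    total_volume = voxels.size * voxel_size**3
    return faces * voxel_size**2 / total_volume

# Toy scaffold: a 20x20x20 solid cube with a 10x10x10 cubic pore in the middle.
scaffold = np.ones((20, 20, 20), dtype=np.uint8)
scaffold[5:15, 5:15, 5:15] = 0
print(porosity(scaffold))                  # 1000 pore voxels / 8000 total = 0.125
print(surface_area_per_volume(scaffold))   # 6 pore faces of 10x10 = 600 / 8000 = 0.075
```

In practice the binarised array would come from thresholded micro-CT slices, and interconnectivity would additionally require a connected-component analysis of the pore phase.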
Abstract:
Knowledge of differences in the demographics of contact lens prescribing between nations, and changes over time, can assist (a) the contact lens industry in developing and promoting various product types in different world regions, and (b) practitioners in understanding their prescribing habits in an international context. Data that we have gathered from annual contact lens fitting surveys conducted in Australia, Canada, Japan, the Netherlands, Norway, the UK and the USA between 2000 and 2008 reveal an ageing demographic, with Japan being the most youthful. The majority of fits are to females, with statistically significant differences between nations, ranging from 62 per cent of fits in Norway to 68 per cent in Japan. The small overall decline in the proportion of new fits, and commensurate increase in refits, over the survey period may indicate a growing rate of conversion of lens wearers to more advanced lens types, such as silicone hydrogels. © 2009 British Contact Lens Association.
Abstract:
A degree of judicial caution in accepting the assertion of a plaintiff as to what he or she would have done, if fully informed of risks, is clearly evident upon a review of decisions applying the common law. Civil liability legislation in some jurisdictions now precludes assertion evidence by a plaintiff. Although this legislative change was seen as creating a significant challenge for plaintiffs seeking to discharge the onus of proof of establishing causation in such cases, recent decisions suggest a more limited practical effect. While a plaintiff’s ex post facto assertions as to what he or she would have done if fully informed of risks may now be inadmissible, objective and subjective evidence as to the surrounding facts and circumstances, in particular the plaintiff’s prior attitudes and conduct, and the assertion evidence of others remains admissible. Given the court’s reliance on both objective and subjective evidence, statistical evidence may be of increasing importance.
Abstract:
Principal Topic: Venture ideas are at the heart of entrepreneurship (Davidsson, 2004). However, we are yet to learn what factors drive entrepreneurs' perceptions of the attractiveness of venture ideas, and what the relative importance of these factors is for the decision to pursue an idea. The expected financial gain is one factor that will obviously influence the perceived attractiveness of a venture idea (Shepherd & DeTienne, 2005). In addition, the degree of novelty of venture ideas along one or more dimensions, such as new products/services, new methods of production, entry into new markets/customers and new methods of promotion, may affect their attractiveness (Schumpeter, 1934). Further, according to the notion of an individual-opportunity nexus, venture ideas are closely associated with certain individual characteristics (relatedness). Shane (2000) empirically identified that an individual's prior knowledge is closely associated with the recognition of venture ideas. Sarasvathy's (2001; 2008) effectuation theory proposes a high degree of relatedness between venture ideas and the resource position of the individual. This study examines how entrepreneurs weigh considerations of different forms of novelty and relatedness, as well as potential financial gain, in assessing the attractiveness of venture ideas. Method: I use conjoint analysis to determine how expert entrepreneurs develop preferences for venture ideas involving different degrees of novelty, relatedness and potential gain. The conjoint analysis estimates respondents' preferences in terms of utilities (or part-worths) for each level of novelty, relatedness and potential gain of venture ideas. A sample of 32 expert entrepreneurs, each the recipient of a young entrepreneurship award, was selected for the study. Each respondent was interviewed and presented with 32 scenarios, each explicating a different combination of these attribute levels, for consideration.
Results and Implications: Results indicate that while the respondents do not prefer mere imitation, they derive higher utility from low to medium degrees of newness, suggesting that high degrees of newness are fraught with greater risk and/or greater resource needs. Respondents place considerable weight on alignment with the knowledge and skills they already possess in choosing a particular venture idea. The initial resource position of entrepreneurs is not equally important. Even though expected potential financial gain yields substantial utility, results indicate that it is not a dominant factor in the attractiveness of a venture idea.
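The part-worth utilities that conjoint analysis estimates can be sketched as follows. This is a hypothetical illustration, not the study's instrument or data: the attribute levels are invented, a respondent's ratings are simulated from assumed true part-worths, and the utilities are recovered by ordinary least squares on dummy-coded attributes.

```python
import numpy as np

# Assumed attributes and levels (illustrative, not the study's design).
# Part-worths used to simulate one respondent's ratings; the first level
# of each attribute is the baseline with utility 0.
true = {"novelty": [0.0, 1.2, 0.4],      # none / low-medium / high
        "relatedness": [0.0, 0.8],       # low / high
        "gain": [0.0, 0.6]}              # low / high

profiles = [(i, j, k) for i in range(3) for j in range(2) for k in range(2)]

def design_row(p):
    """Dummy-code one profile: intercept plus indicator per non-baseline level."""
    i, j, k = p
    return [1.0,
            float(i == 1), float(i == 2),   # novelty low-medium / high vs none
            float(j == 1),                  # relatedness high vs low
            float(k == 1)]                  # gain high vs low

rng = np.random.default_rng(0)
X = np.array([design_row(p) for p in profiles])
y = np.array([true["novelty"][i] + true["relatedness"][j] + true["gain"][k]
              for i, j, k in profiles]) + rng.normal(0, 0.05, len(profiles))

# OLS recovers the part-worths (approximately, given the rating noise).
beta, *_ = np.linalg.lstsq(X, y, rcond=None)
print(np.round(beta[1:], 2))   # estimates close to [1.2, 0.4, 0.8, 0.6]
```

A pattern like the simulated one, where the low-to-medium novelty level carries the largest part-worth, is the shape of preference the results above describe.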
Abstract:
The widespread use of business planning, in combination with the mixed theoretical and empirical support for its effect, suggests research is needed that takes a deeper look into the quality of plans and how they are used. In this study we longitudinally examine use vs. non-use, degree of formalization, and revision of plans, as well as moderation of planning effects by product novelty, among nascent firms. We relate these to attainment of profitability after 12 months. We find that business planning is negatively related to profitability, but that revising plans is positively related to profitability. Both effects are stronger under conditions of high product novelty.
Abstract:
One of the main causes of above-knee or transfemoral amputation (TFA) in the developed world is trauma to the limb. The number of people undergoing TFA due to limb trauma, particularly due to war injuries, has been increasing. Typically, trauma amputees, including war-related amputees, are otherwise healthy and active and desire to return to employment and their usual lifestyle. Consequently, there is a growing need to restore long-term mobility and limb function to this population. Traditionally, transfemoral amputees are provided with an artificial or prosthetic leg that consists of a fabricated socket, knee joint mechanism and a prosthetic foot. Amputees have reported several problems related to the socket of their prosthetic limb, including pain in the residual limb, poor socket fit, discomfort and poor mobility. Removing the socket from the prosthetic limb could eliminate or reduce these problems. A solution is the direct attachment of the prosthesis to the residual bone (femur) inside the residual limb. This technique has been used on a small population of transfemoral amputees since 1990. A threaded titanium implant is screwed into the shaft of the femur, and a second component connects between the implant and the prosthesis. A period of time is required for the implant to become fully attached to the bone, a process called osseointegration (OI), and able to withstand applied load; the prosthesis can then be attached. The advantages of transfemoral osseointegration (TFOI) over conventional prosthetic sockets include better hip mobility, sitting comfort and prosthetic retention and fewer skin problems on the residual limb. However, due to the length of time required for OI to progress and to complete the rehabilitation exercises, it can take up to twelve months after implant insertion for an amputee to be able to load bear and to walk unaided.
The long rehabilitation time is a significant disadvantage of TFOI and may be impeding the wider adoption of the technique. There is a need for a non-invasive method of assessing the degree of osseointegration between the bone and the implant. If such a method were capable of determining the progression of TFOI and assessing when the implant was able to withstand physiological load, it could reduce the overall rehabilitation time. Vibration analysis has been suggested as a potential technique: it is a non-destructive method of assessing the dynamic properties of a structure. Changes in the physical properties of a structure can be identified from changes in its dynamic properties. Consequently, vibration analysis, both experimental and computational, has been used to assess bone fracture healing, prosthetic hip loosening and dental implant OI with varying degrees of success. More recently, experimental vibration analysis has been used in TFOI. However, further work is needed to assess the potential of the technique and fully characterise the femur-implant system. The overall aim of this study was to develop physical and computational models of the TFOI femur-implant system and to use these models to investigate the feasibility of vibration analysis for detecting the progression of OI. Femur-implant physical models were developed and manufactured using synthetic materials to represent four key stages of OI development (identified from a physiological model), simulated using different interface conditions between the implant and femur. Experimental vibration analysis (modal analysis) was then conducted using the physical models. The femur-implant models, representing stages one to four of OI development, were excited and the modal parameters obtained over the range 0-5 kHz. The results indicated the technique had limited capability in distinguishing between different interface conditions. The fundamental bending mode did not alter with interfacial changes.
However higher modes were able to track chronological changes in interface condition by the change in natural frequency, although no one modal parameter could uniquely distinguish between each interface condition. The importance of the model boundary condition (how the model is constrained) was the key finding; variations in the boundary condition altered the modal parameters obtained. Therefore the boundary conditions need to be held constant between tests in order for the detected modal parameter changes to be attributed to interface condition changes. A three dimensional Finite Element (FE) model of the femur-implant model was then developed and used to explore the sensitivity of the modal parameters to more subtle interfacial and boundary condition changes. The FE model was created using the synthetic femur geometry and an approximation of the implant geometry. The natural frequencies of the FE model were found to match the experimental frequencies within 20% and the FE and experimental mode shapes were similar. Therefore the FE model was shown to successfully capture the dynamic response of the physical system. As was found with the experimental modal analysis, the fundamental bending mode of the FE model did not alter due to changes in interface elastic modulus. Axial and torsional modes were identified by the FE model that were not detected experimentally; the torsional mode exhibited the largest frequency change due to interfacial changes (103% between the lower and upper limits of the interface modulus range). Therefore the FE model provided additional information on the dynamic response of the system and was complementary to the experimental model. The small changes in natural frequency over a large range of interface region elastic moduli indicated the method may only be able to distinguish between early and late OI progression. 
The boundary conditions applied to the FE model influenced the modal parameters to a far greater extent than the interface condition variations. Therefore the FE model, as well as the experimental modal analysis, indicated that the boundary conditions need to be held constant between tests in order for the detected changes in modal parameters to be attributed to interface condition changes alone. The results of this study suggest that in a clinical setting it is unlikely that the in vivo boundary conditions of the amputated femur could be adequately controlled or replicated over time and consequently it is unlikely that any longitudinal change in frequency detected by the modal analysis technique could be attributed exclusively to changes at the femur-implant interface. Therefore further development of the modal analysis technique would require significant consideration of the clinical boundary conditions and investigation of modes other than the bending modes.
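A minimal lumped-parameter sketch (not the thesis's FE or physical model) of why natural frequencies track interface stiffness: idealise the femur and implant as two masses coupled through an interface spring and solve the undamped eigenvalue problem. All masses and stiffnesses are assumed values.

```python
import numpy as np

def natural_frequencies(m_femur, m_implant, k_femur, k_interface):
    """Undamped natural frequencies (Hz) of a 2-DOF idealisation:
    ground--k_femur--m_femur--k_interface--m_implant."""
    M = np.diag([m_femur, m_implant])
    K = np.array([[k_femur + k_interface, -k_interface],
                  [-k_interface,           k_interface]])
    # Generalised eigenproblem K v = w^2 M v, solved via M^-1 K.
    w2 = np.linalg.eigvals(np.linalg.inv(M) @ K)
    return np.sort(np.sqrt(np.abs(w2))) / (2 * np.pi)

# Sweep the interface stiffness from a soft (early OI) to a stiff (late OI)
# interface; all values are assumptions for illustration only.
for k_int in [1e5, 1e6, 1e7, 1e8]:      # N/m
    print(k_int, np.round(natural_frequencies(0.4, 0.1, 1e7, k_int), 1))
```

In this toy model the frequencies rise as the interface stiffens and then saturate once the interface spring dominates the femur's own stiffness, which mirrors the finding above that frequency changes may only distinguish early from late OI progression.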
Abstract:
From 27 January to 8 February during the summer of 2009, southern Australia experienced one of the nation's most severe heatwaves. Governments, councils, utilities, hospitals, emergency response organisations and the community were largely underprepared for an extreme event of this magnitude. This case study targets the experience and challenges faced by decision makers and policy makers and focuses on the major metropolitan areas affected by the heatwave: Melbourne and Adelaide. The study examines the 2009 heatwave's characteristics; its impacts (on human health, infrastructure and human services); the degree of adaptive capacity (vulnerability and resilience) of various sectors, communities and individuals; and the reactive responses of government and emergency and associated services and their effectiveness. Barriers and challenges to adaptation and increasing resilience are also identified, and further areas for research are suggested. This study does not include details of the heatwave's effects beyond Victoria and South Australia, its economic impacts, or Victoria's 'Black Saturday' bushfires.
Abstract:
In mobile videos, small viewing size and bitrate limitations often cause an unpleasant viewing experience, a problem that is particularly acute for fast-moving sports videos. To optimize the overall user experience of viewing sports videos on mobile phones, this paper explores the benefits of emphasizing the Region of Interest (ROI) by 1) zooming in and 2) enhancing its quality. The main goal is to measure the effectiveness of these two approaches and determine which is more effective. To obtain a more comprehensive understanding of the overall user experience, the study considers the user's interest in the video content and the user's acceptance of the perceived video quality, and compares the user experience for sports videos with other content types such as talk shows. The results of a user study with 40 subjects demonstrate that zooming and ROI enhancement are both effective in improving the overall user experience for talk show and mid-shot soccer videos. However, for full-shot scenes in soccer videos, only zooming is effective, while ROI enhancement has a negative effect. Moreover, the user's interest in the video content directly affects not only the user experience and the acceptance of video quality, but also the effect of content type on the user experience. Finally, the overall user experience is closely related to the degree of acceptance of video quality and the degree of interest in the video content. This study is valuable in identifying effective approaches to improve user experience, especially in mobile sports video streaming contexts, where the available bandwidth is usually low or limited. It also provides further understanding of the factors influencing user experience.
Abstract:
Little is known about the psychological underpinnings of young people’s mobile phone behaviour. In the present research, 292 young Australians, aged 16–24 years, completed an online survey assessing the effects of self-identity, in-group norm, the need to belong, and self-esteem on their frequency of mobile phone use and mobile phone involvement, conceptualised as people’s degree of cognitive and behavioural association with their mobile phone. Structural equation modelling revealed that age (younger) and self-identity significantly predicted the frequency of mobile phone use. In contrast, age (younger), gender (female), self-identity and in-group norm predicted young people’s mobile phone involvement. Neither self-esteem nor the need to belong significantly predicted mobile phone behaviour. The present study contributes to our understanding of this phenomenon and provides an indication of the characteristics of young people who may become highly involved with their mobile phone.
Abstract:
It has now been over a decade since the concept of creative industries was first put into the public domain through the Creative Industries Mapping Documents developed by the Blair Labour government in Britain. The concept has developed traction globally, but it has also been understood and developed in different ways in Europe, Asia, Australia, New Zealand and North America, as well as through international bodies such as UNCTAD and UNESCO. A review of the policy literature reveals that while questions and issues remain around definitional coherence, there is some degree of consensus emerging about the size, scope and significance of the sectors in question in both advanced and developing economies. At the same time, debate about the concept remains highly animated in media, communication and cultural studies, with its critics dismissing the concept outright as a harbinger of neo-liberal ideology in the cultural sphere. This paper couches such critiques in light of recent debates surrounding the intellectual coherence of the concept of neo-liberalism, arguing that this term itself possesses problems when taken outside of the Anglo-American context in which it originated. It is argued that issues surrounding the nature of participatory media culture, the relationship between cultural production and economic innovation, and the future role of public cultural institutions can be developed from within a creative industries framework, and that writing off such arguments as a priori ideological and flawed does little to advance debates about 21st century information and media culture.
Abstract:
It is widely contended that we live in a 'world risk society', where risk plays a central and ubiquitous role in contemporary social life. A seminal contributor to this view is Ulrich Beck, who claims that our world is governed by dangers that cannot be calculated or insured against. For Beck, risk is an inherently unrestrained phenomenon, emerging from a core and pouring out from and under national borders, unaffected by state power. Beck's focus on risk's ubiquity and uncontrollability at an infra-global level means that there is a necessary evenness to the expanse of risk: a 'universalization of hazards', which possesses an inbuilt tendency towards globalisation. While sociological scholarship has examined the reach and impact of globalisation processes on the role and power of states, Beck's argument that economic risk is without territory and resistant to domestic policy has come under less appraisal. This is contestable: what are often described as global economic processes, on closer inspection, reveal degrees of territorial embeddedness. This not only suggests that 'global' flows could sometimes be more appropriately explained as international, regional or even local processes, formed from and responsive to state strategies, but also demonstrates what can be missed if we overinflate the global. This paper briefly introduces two key principles of Beck's theory of risk society and positions them within a review of literature debating the novelty and degree of global economic integration and its impact on states pursuing domestic economic policies. In doing so, this paper highlights the value for future research of engaging with questions such as 'is economic risk really without territory' and 'does risk produce convergence', not so much as a means of reducing Beck's thesis to a purely empirical analysis, but rather to avoid limiting our scope in understanding the complex relationship between risk and state.
Abstract:
Impedance cardiography is an application of bioimpedance analysis primarily used in a research setting to determine cardiac output. It is a non-invasive technique that measures the change in the impedance of the thorax which is attributed to the ejection of a volume of blood from the heart. The cardiac output is calculated from the measured impedance using the parallel conductor theory and a constant value for the resistivity of blood. However, the resistivity of blood has been shown to be velocity dependent due to changes in the orientation of red blood cells induced by changing shear forces during flow. The overall goal of this thesis was to study the effect that flow deviations have on the electrical impedance of blood, both experimentally and theoretically, and to apply the results to a clinical setting. The resistivity of stationary blood is isotropic, as the red blood cells are randomly orientated due to Brownian motion. In the case of blood flowing through rigid tubes, the resistivity is anisotropic due to the biconcave discoidal shape and orientation of the cells. The generation of shear forces across the width of the tube during flow causes the cells to align with their minimal cross-sectional area facing the direction of flow, in order to minimise the shear stress experienced by the cells. This in turn results in a larger cross-sectional area of plasma and a reduction in the resistivity of the blood as the flow increases. Understanding the contribution of this effect to the thoracic impedance change is a vital step in achieving clinical acceptance of impedance cardiography. Published literature investigates the resistivity variations for constant blood flow. In this case, the shear forces are constant and the impedance remains constant during flow, at a magnitude which is less than that for stationary blood.
The research presented in this thesis, however, investigates the variations in resistivity of blood during pulsatile flow through rigid tubes and the relationship between impedance, velocity and acceleration. Using rigid tubes isolates the impedance change to variations associated with changes in cell orientation only. The implications of red blood cell orientation changes for clinical impedance cardiography were also explored. This was achieved through measurement and analysis of the experimental impedance of pulsatile blood flowing through rigid tubes in a mock circulatory system. A novel theoretical model including cell orientation dynamics was developed for the impedance of pulsatile blood through rigid tubes. The impedance of flowing blood was theoretically calculated using analytical methods for flow through straight tubes and the numerical Lattice Boltzmann method for flow through complex geometries such as aortic valve stenosis. The result of the analytical theoretical model was compared to the experimental impedance measurements through rigid tubes. The impedance calculated for flow through a stenosis using the Lattice Boltzmann method provides results for comparison with impedance cardiography measurements collected as part of a pilot clinical trial to assess the suitability of using bioimpedance techniques to assess the presence of aortic stenosis. The experimental and theoretical impedance of blood was shown to inversely follow the blood velocity during pulsatile flow, with correlations of -0.72 and -0.74 respectively. The results of both the experimental and theoretical investigations demonstrate that the acceleration of the blood is an important factor in determining the impedance, in addition to the velocity. During acceleration, the relationship between impedance and velocity is linear (r2 = 0.98 experimental, r2 = 0.94 theoretical).
The relationship between the impedance and velocity during the deceleration phase is characterised by a time decay constant, τ, ranging from 10 to 50 s. The high level of agreement between the experimental and theoretically modelled impedance demonstrates the accuracy of the model developed here. An increase in the haematocrit of the blood resulted in an increase in the magnitude of the impedance change due to changes in the orientation of red blood cells. The time decay constant was shown to decrease linearly with haematocrit for both experimental and theoretical results, although the slope of this decrease was larger in the experimental case. The radius of the tube influences the experimental and theoretical impedance for the same flow velocity. However, when the velocity was divided by the radius of the tube (termed the reduced average velocity), the impedance response was the same for two experimental tubes of different radii but equivalent reduced average velocity. The temperature of the blood was also shown to affect the impedance, with the impedance decreasing as the temperature increased. These results are the first published for the impedance of pulsatile blood. The experimental impedance change measured orthogonal to the direction of flow is in the opposite direction to that measured in the direction of flow. These results indicate that the impedance of blood flowing through rigid cylindrical tubes is axisymmetric along the radius. This had not previously been verified experimentally. Time-frequency analysis of the experimental results demonstrated that the measured impedance contains the same frequency components, occurring at the same points in the cycle, as the velocity signal. This suggests that the impedance captures many of the fluctuations of the velocity signal.
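A time decay constant of this kind can be recovered from measured impedance by a log-linear fit to the deceleration-phase decay. A sketch on synthetic data — the exponential model form and all values here are illustrative assumptions, not the thesis's fitting procedure:

```python
import numpy as np

# Synthetic deceleration-phase impedance, assumed exponential:
# Z(t) = Z_inf + dZ * exp(-t / tau); tau chosen within the 10-50 s range reported
tau_true, z_inf, dz = 30.0, 100.0, 5.0
t = np.linspace(0.0, 60.0, 200)
Z = z_inf + dz * np.exp(-t / tau_true)

# Log-linear least-squares fit: log(Z - Z_inf) = log(dZ) - t / tau
slope, intercept = np.polyfit(t, np.log(Z - z_inf), 1)
tau_fit = -1.0 / slope
print(round(tau_fit, 3))   # recovers tau = 30.0 on noise-free data
```

On real, noisy data a nonlinear least-squares fit (with Z_inf also estimated) would be more robust than this log-linear shortcut.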
Application of a theoretical steady-flow model to pulsatile flow, presented here, verified that the steady-flow model is not adequate for calculating the impedance of pulsatile blood flow. The success of the new theoretical model over the steady-flow model demonstrates that the velocity profile is important in determining the impedance of pulsatile blood. The clinical application of the impedance of blood flow through a stenosis was theoretically modelled using the Lattice Boltzmann method (LBM) for fluid flow through complex geometries. The impedance of blood exiting a narrow orifice was calculated for varying degrees of stenosis. Clinical impedance cardiography measurements were also recorded for both aortic valvular stenosis patients (n = 4) and control subjects (n = 4) with structurally normal hearts. This pilot trial was used to corroborate the results of the LBM. Results from both investigations showed that the decay time constant for impedance has potential in the assessment of aortic valve stenosis. In the theoretically modelled case (LBM results), the decay time constant increased with the degree of stenosis. The clinical results also showed a statistically significant difference in the time decay constant between control and test subjects (P = 0.03). The time decay constant calculated for test subjects (τ = 180-250 s) is consistently larger than that determined for control subjects (τ = 50-130 s). This difference is thought to be due to differences in the orientation response of the cells as blood flows through the stenosis. Such a non-invasive technique, using the time decay constant for screening of aortic stenosis, provides additional information to that currently given by impedance cardiography techniques and improves the value of the device to practitioners. However, the results still need to be verified in a larger study.
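The abstract names the lattice Boltzmann method without implementation detail. As a generic illustration only — not the thesis's stenosis geometry, boundary conditions, or its coupling of flow to blood impedance, and with arbitrary parameters — a minimal D2Q9 BGK solver for force-driven channel flow might look like:

```python
import numpy as np

# Minimal D2Q9 BGK lattice Boltzmann sketch: force-driven flow in a 2D
# channel, periodic in x, bounce-back (no-slip) walls at top and bottom.
c = np.array([[0, 0], [1, 0], [0, 1], [-1, 0], [0, -1],
              [1, 1], [-1, 1], [-1, -1], [1, -1]])       # lattice velocities
w = np.array([4/9, 1/9, 1/9, 1/9, 1/9, 1/36, 1/36, 1/36, 1/36])
opp = [0, 3, 4, 1, 2, 7, 8, 5, 6]                        # opposite directions

nx, ny = 64, 33          # solid walls at rows y = 0 and y = ny - 1
tau = 0.8                # BGK relaxation time; kinematic viscosity = (tau - 0.5)/3
g = 1e-5                 # small body force driving flow along +x

f = np.tile(w[:, None, None], (1, ny, nx))   # start at rest with rho = 1
solid = np.zeros(ny, dtype=bool)
solid[[0, -1]] = True

for step in range(3000):
    rho = f.sum(axis=0)
    ux = np.einsum('iyx,i->yx', f, c[:, 0]) / rho
    uy = np.einsum('iyx,i->yx', f, c[:, 1]) / rho
    usq = ux**2 + uy**2
    fpost = np.empty_like(f)
    for i in range(9):
        cu = c[i, 0] * ux + c[i, 1] * uy
        feq = w[i] * rho * (1 + 3*cu + 4.5*cu**2 - 1.5*usq)  # equilibrium
        fpost[i] = f[i] - (f[i] - feq) / tau + 3 * w[i] * c[i, 0] * g
        fpost[i][solid, :] = f[opp[i]][solid, :]   # full-way bounce-back at walls
    for i in range(9):                             # streaming step
        f[i] = np.roll(fpost[i], shift=(c[i, 1], c[i, 0]), axis=(0, 1))

profile = ux[1:-1, nx // 2]   # near-parabolic velocity profile across the channel
```

In the thesis the LBM resolves flow through stenosed geometries, and the resulting flow field feeds the impedance calculation; only the fluid-solver skeleton is sketched here.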
While impedance cardiography has not been widely adopted clinically, it is research such as this that will enable future acceptance of the method.
Abstract:
Driver aggression is an increasing concern for motorists, with some research suggesting that drivers who behave aggressively perceive their actions as justified by the poor driving of others. Attributions may therefore play an important role in understanding driver aggression. A convenience sample of 193 drivers (aged 17-36), randomly assigned to two separate roles (‘perpetrators’ and ‘victims’), responded to eight scenarios of driver aggression. Drivers also completed the Aggression Questionnaire and Driving Anger Scale. Consistent with the actor-observer bias, ‘victims’ (or recipients) in this study were significantly more likely than ‘perpetrators’ (or instigators) to endorse inadequacies in the instigator’s driving skills as the cause of driver aggression. Instigators were significantly more likely to attribute the depicted behaviours to external but temporary causes (lapses in judgement or errors) rather than to stable causes. This suggests that instigators recognised drivers as responsible for driving aggressively but downplayed this somewhat in comparison to ‘victims’/recipients. Recipients and instigators agreed that the behaviours were examples of aggressive driving, but instigators appeared to focus on the degree of intentionality of the driver in making their assessments, while recipients appeared to focus on the safety implications. Contrary to expectations, instigators gave mean ratings of the emotional impact of driving aggression on recipients that were higher in all cases than the mean ratings given by the recipients. Drivers appear to perceive aggressive behaviours as modifiable, with the implication that interventions could appeal to drivers’ sense of self-efficacy to suggest strategies for overcoming plausible and modifiable attributions (e.g. lapses in judgement; errors) underpinning behaviours perceived as aggressive.