Abstract:
Impedance cardiography is an application of bioimpedance analysis primarily used in a research setting to determine cardiac output. It is a non-invasive technique that measures the change in the impedance of the thorax, which is attributed to the ejection of a volume of blood from the heart. The cardiac output is calculated from the measured impedance using the parallel conductor theory and a constant value for the resistivity of blood. However, the resistivity of blood has been shown to be velocity dependent due to changes in the orientation of red blood cells induced by changing shear forces during flow. The overall goal of this thesis was to study the effect that flow deviations have on the electrical impedance of blood, both experimentally and theoretically, and to apply the results to a clinical setting. The resistivity of stationary blood is isotropic, as the red blood cells are randomly orientated due to Brownian motion. In the case of blood flowing through rigid tubes, the resistivity is anisotropic due to the biconcave discoidal shape and orientation of the cells. The shear forces generated across the width of the tube during flow cause the cells to align with their minimal cross-sectional area facing the direction of flow, in order to minimise the shear stress they experience. This in turn results in a larger cross-sectional area of plasma and a reduction in the resistivity of the blood as the flow increases. Understanding the contribution of this effect to the thoracic impedance change is a vital step in achieving clinical acceptance of impedance cardiography. Published literature investigates the resistivity variations for constant blood flow. In this case, the shear forces are constant and the impedance remains constant during flow, at a magnitude less than that for stationary blood. 
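The abstract does not give the specific equation used; as an illustrative sketch only, the classic Kubicek formulation is a widely cited instance of the parallel conductor approach, in which stroke volume is estimated from a constant blood resistivity, electrode separation, baseline thoracic impedance, the peak rate of impedance change, and the ejection time. The parameter values below are representative assumptions, not results from the thesis:

```python
def stroke_volume_kubicek(rho_blood, length, z0, dzdt_max, lvet):
    """Kubicek-style stroke volume estimate (mL) from thoracic impedance.

    rho_blood : assumed constant blood resistivity (ohm*cm)
    length    : distance between voltage-sensing electrodes (cm)
    z0        : baseline thoracic impedance (ohm)
    dzdt_max  : peak rate of impedance change during ejection (ohm/s)
    lvet      : left-ventricular ejection time (s)
    """
    return rho_blood * (length / z0) ** 2 * dzdt_max * lvet


def cardiac_output(sv_ml, heart_rate_bpm):
    """Cardiac output in L/min from stroke volume (mL) and heart rate (bpm)."""
    return sv_ml * heart_rate_bpm / 1000.0


# Representative (illustrative) values only
sv = stroke_volume_kubicek(rho_blood=150.0, length=30.0, z0=25.0,
                           dzdt_max=1.2, lvet=0.3)
co = cardiac_output(sv, 70)  # -> roughly 5.4 L/min, a plausible resting value
```

Note that the constant `rho_blood` is exactly the assumption the thesis challenges: if resistivity varies with velocity, the computed stroke volume inherits that error.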
The research presented in this thesis, however, investigates the variations in resistivity of blood during pulsatile flow through rigid tubes and the relationship between impedance, velocity and acceleration. Using rigid tubes isolates the impedance change to variations associated with changes in cell orientation only. The implications of red blood cell orientation changes for clinical impedance cardiography were also explored. This was achieved through measurement and analysis of the experimental impedance of pulsatile blood flowing through rigid tubes in a mock circulatory system. A novel theoretical model including cell orientation dynamics was developed for the impedance of pulsatile blood through rigid tubes. The impedance of flowing blood was theoretically calculated using analytical methods for flow through straight tubes and the numerical Lattice Boltzmann method for flow through complex geometries such as aortic valve stenosis. The result of the analytical theoretical model was compared to the experimental impedance measurements through rigid tubes. The impedance calculated for flow through a stenosis using the Lattice Boltzmann method provides results for comparison with impedance cardiography measurements collected as part of a pilot clinical trial to assess the suitability of using bioimpedance techniques to detect the presence of aortic stenosis. The experimental and theoretical impedance of blood was shown to inversely follow the blood velocity during pulsatile flow, with correlations of -0.72 and -0.74 respectively. The results of both the experimental and theoretical investigations demonstrate that the acceleration of the blood is an important factor in determining the impedance, in addition to the velocity. During acceleration, the relationship between impedance and velocity is linear (r² = 0.98 experimental; r² = 0.94 theoretical). 
The relationship between the impedance and velocity during the deceleration phase is characterised by a time decay constant, τ, ranging from 10 to 50 s. The high level of agreement between the experimental and theoretically modelled impedance demonstrates the accuracy of the model developed here. An increase in the haematocrit of the blood resulted in an increase in the magnitude of the impedance change due to changes in the orientation of red blood cells. The time decay constant was shown to decrease linearly with the haematocrit for both experimental and theoretical results, although the slope of this decrease was larger in the experimental case. The radius of the tube influences the experimental and theoretical impedance for the same flow velocity. However, when the velocity was divided by the radius of the tube (termed the reduced average velocity), the impedance response was the same for two experimental tubes with equivalent reduced average velocity but different radii. The temperature of the blood was also shown to affect the impedance, with the impedance decreasing as the temperature increased. These results are the first published for the impedance of pulsatile blood. The experimental impedance change measured orthogonal to the direction of flow is in the opposite direction to that measured in the direction of flow. These results indicate that the impedance of blood flowing through rigid cylindrical tubes is axisymmetric along the radius. This has not previously been verified experimentally. Time-frequency analysis of the experimental results demonstrated that the measured impedance contains the same frequency components, occurring at the same time points in the cycle, as the velocity signal. This suggests that the impedance captures many of the fluctuations of the velocity signal. 
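A time decay constant of the kind described is conventionally extracted by fitting an exponential relaxation, Z(t) = Z∞ + ΔZ·exp(−t/τ), to the deceleration-phase impedance. The thesis's exact fitting procedure is not stated in the abstract; the sketch below assumes the model form and estimates τ with a simple log-linear least-squares fit on noise-free synthetic data:

```python
import numpy as np

def fit_decay_constant(t, z, z_inf):
    """Estimate tau from Z(t) = z_inf + dz * exp(-t / tau).

    Subtracting the asymptote z_inf and taking logs linearises the model:
    log(Z - z_inf) = log(dz) - t / tau, so the fitted slope is -1/tau.
    """
    y = np.log(z - z_inf)
    slope, _intercept = np.polyfit(t, y, 1)
    return -1.0 / slope

# Synthetic deceleration-phase impedance with an assumed tau of 30 s,
# inside the 10-50 s range reported in the abstract
tau_true = 30.0
t = np.linspace(0.0, 60.0, 200)
z = 100.0 + 5.0 * np.exp(-t / tau_true)   # ohms; illustrative magnitudes
tau_est = fit_decay_constant(t, z, z_inf=100.0)
```

On measured data, z_inf would itself have to be estimated (e.g. via nonlinear least squares), since the log-linear trick requires knowing the asymptote.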
Application of a theoretical steady flow model to pulsatile flow, presented here, has verified that the steady flow model is not adequate for calculating the impedance of pulsatile blood flow. The success of the new theoretical model over the steady flow model demonstrates that the velocity profile is important in determining the impedance of pulsatile blood. The clinical application of the impedance of blood flow through a stenosis was theoretically modelled using the Lattice Boltzmann method (LBM) for fluid flow through complex geometries. The impedance of blood exiting a narrow orifice was calculated for varying degrees of stenosis. Clinical impedance cardiography measurements were also recorded for both aortic valvular stenosis patients (n = 4) and control subjects (n = 4) with structurally normal hearts. This pilot trial was used to corroborate the results of the LBM. Results from both investigations showed that the decay time constant for impedance has potential in the assessment of aortic valve stenosis. In the theoretically modelled case (LBM results), the decay time constant increased with the degree of stenosis. The clinical results also showed a statistically significant difference in the time decay constant between control and test subjects (P = 0.03). The time decay constant calculated for test subjects (τ = 180–250 s) is consistently larger than that determined for control subjects (τ = 50–130 s). This difference is thought to be due to differences in the orientation response of the cells as blood flows through the stenosis. Such a non-invasive technique using the time decay constant for screening of aortic stenosis provides additional information to that currently given by impedance cardiography techniques and improves the value of the device to practitioners. However, the results still need to be verified in a larger study. 
While impedance cardiography has not been widely adopted clinically, it is research such as this that will enable future acceptance of the method.
Abstract:
Over the last three years, in our Early Algebra Thinking Project, we have been studying Years 3 to 5 students’ ability to generalise in a variety of situations, namely, compensation principles in computation, the balance principle in equivalence and equations, change and inverse change rules with function machines, and pattern rules with growing patterns. In these studies, we have attempted to involve a variety of models and representations and to build students’ abilities to switch between them (in line with the theories of Dreyfus, 1991, and Duval, 1999). The results have shown the negative effect of closure on generalisation in symbolic representations, the predominance of single variance generalisation over covariant generalisation in tabular representations, and the reduced ability to readily identify commonalities and relationships in enactive and iconic representations. This chapter uses the results to explore the interrelation between generalisation and verbal and visual comprehension of context. The studies evidence the importance of understanding and communicating aspects of representational forms that allow commonalities to be seen across or between representations. Finally, the chapter explores the implications of the studies for a theory that describes a growth in integration of models and representations that leads to generalisation.
Abstract:
Gen Y students are digital natives (Prensky 2001) who learn in complex and diverse ways, with a variety of learning styles apparent in any given course. This paper proposes a Web 2.0 conceptual learning solution, online student videos, to respond to the different learning styles that exist in the classroom.
Abstract:
Neo-liberalism has become one of the boom concepts of our time. From its original reference point as a descriptor of the economics of the ‘Chicago School’ or authors such as Friedrich von Hayek, neo-liberalism has become an all-purpose concept, explanatory device and basis for social critique. This presentation evaluates Michel Foucault’s 1978–79 lectures, published as The Birth of Biopolitics, to consider how he used the term neo-liberalism, and how this equates with its current uses in critical social and cultural theory. It will be argued that Foucault did not understand neo-liberalism as a dominant ideology in these lectures, but rather as marking a point of inflection in the historical evolution of liberal political philosophies of government. It will also be argued that his interpretation of neo-liberalism was more nuanced and more comparative than more recent contributions. The article points towards an attempt to theorize comparative historical models of liberal capitalism.
Abstract:
This paper identifies factors underpinning the emergence of citizen journalism, including the rise of Web 2.0, rethinking journalism as a professional ideology, the decline of ‘high modernist’ journalism, divergence between elite and popular opinion, changing revenue bases for news production, and the decline of deference in democratic societies. It will connect these issues to wider debates about the implications of journalism and news production increasingly going into the Internet environment.
Abstract:
There are at least four key challenges in the online news environment that computational journalism may address. Firstly, news providers operate in a rapidly evolving environment, and larger businesses are typically slower to adapt to market innovations. Secondly, news consumption patterns have changed, and news providers need to find new ways to capture and retain digital users. Thirdly, declining financial performance has led to cost cuts in mass market newspapers. Finally, investigative reporting is typically slow, high cost and may be tedious, yet is valuable to the reputation of a news provider. Computational journalism involves the application of software and technologies to the activities of journalism, and it draws from the fields of computer science, social science and communications. New technologies may enhance the traditional aims of journalism, or may require “a new breed of people who are midway between technologists and journalists” (Irfan Essa in Mecklin 2009: 3). Historically referred to as ‘computer-assisted reporting’, the use of software in online reportage is increasingly valuable due to three factors: larger datasets are becoming publicly available; software is becoming more sophisticated and ubiquitous; and the Australian digital economy is developing. This paper introduces key elements of computational journalism: it describes why it is needed, what it involves, and its benefits and challenges, and provides a case study and examples. Computational techniques, correctly used, can quickly provide a solid factual basis for original investigative journalism and may increase interaction with readers. This is a major opportunity to enhance the delivery of original investigative journalism, which ultimately may attract and retain readers online.
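The kind of computer-assisted reporting described above often begins with nothing more than aggregating a public dataset to establish a factual starting point. A minimal sketch, using entirely hypothetical data (the dataset, names and figures below are invented for illustration):

```python
import csv
import io
from collections import Counter

# Hypothetical extract of a public grants dataset (invented values)
raw = """recipient,state,amount
Acme Pty Ltd,QLD,120000
Beta Media,NSW,80000
Acme Pty Ltd,QLD,40000
Gamma News,VIC,55000
"""

# Total grant money per state: a simple, verifiable fact a reporter
# could use as the basis for follow-up questions
totals = Counter()
for row in csv.DictReader(io.StringIO(raw)):
    totals[row["state"]] += int(row["amount"])

ranked = totals.most_common()  # states ordered by total amount
```

The same few lines scale to datasets far too large to inspect by hand, which is where the cost and speed advantages claimed for computational journalism come from.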
Abstract:
The purpose of this paper is to highlight important issues in the study of dysfunctional customer behavior and to provide a research agenda to inspire, guide, and enthuse. Through a critical evaluation of existing research, the aim is to highlight key issues and to present potentially worthy avenues for future study.
Abstract:
The present study tested the utility of an extended version of the theory of planned behaviour that included a measure of planning, in the prediction of eating foods low in saturated fats among adults diagnosed with Type 2 diabetes and/or cardiovascular disease. Participants (N = 184) completed questionnaires assessing standard theory of planned behaviour measures (attitude, subjective norm, and perceived behavioural control) and the additional volitional variable of planning in relation to eating foods low in saturated fats. Self-reported consumption of foods low in saturated fats was assessed 1 month later. In partial support of the theory of planned behaviour, results indicated that attitude and subjective norm predicted intentions to eat foods low in saturated fats, and intentions and perceived behavioural control predicted the consumption of foods low in saturated fats. As an additional variable, planning predicted the consumption of foods low in saturated fats directly and also mediated the intention–behaviour and perceived behavioural control–behaviour relationships, suggesting an important role for planning as a post-intentional construct determining healthy eating choices. Suggestions are offered for interventions designed to improve adherence to healthy eating recommendations for people diagnosed with these chronic conditions, with a specific emphasis on the steps and activities that are required to promote a healthier lifestyle.
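The mediation pattern reported above, in which planning carries part of the intention–behaviour relationship, can be sketched as a comparison of two regressions: the effect of intention on behaviour shrinks once planning is added as a predictor. The data, effect sizes and variable construction below are synthetic assumptions for illustration, not the study's results:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 500

# Assumed causal structure: intention -> planning -> behaviour,
# plus a small direct intention -> behaviour path
intention = rng.normal(size=n)
planning = 0.8 * intention + rng.normal(scale=0.5, size=n)
behaviour = 0.7 * planning + 0.1 * intention + rng.normal(scale=0.5, size=n)

def ols(y, *xs):
    """Ordinary least squares; returns slope coefficients (intercept dropped)."""
    X = np.column_stack([np.ones(len(y)), *xs])
    return np.linalg.lstsq(X, y, rcond=None)[0][1:]

(total,) = ols(behaviour, intention)                        # total effect
direct, via_planning = ols(behaviour, intention, planning)  # controlling for planning
# direct < total: planning mediates part of the intention-behaviour link
```

In the actual study the inference would additionally require a test of the indirect effect (e.g. a Sobel test or bootstrapped confidence interval), which this sketch omits.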
Abstract:
This article examines the moment of exchange between artist, audience and culture in Live Art. Drawing on historical and contemporary examples, including examples from the Exist in 08 Live Art Event in Brisbane, Australia, in October 2008, it argues that Live Art - be it body art, activist art, site-specific performance, or other sorts of performative intervention in the public sphere - is characterised by a common set of claims about activating audiences, asking them to reflect on cultural norms challenged in the work. Live Art presents risky actions, in a context that blurs the boundaries between art and reality, to position audients as ‘witnesses’ who are personally implicated in, and responsible for, the actions unfolding before them. This article problematises assumptions about the way the uncertainties embedded in the Live Art encounter contribute to its deconstructive agenda. It uses the ethical theory of Emmanuel Levinas, Hans-Thies Lehmann and Dwight Conquergood to examine the mechanics of reductive, culturally-recuperative readings that can limit the efficacy of the Live Art encounter. It argues that, though ‘witnessing’ in Live Art depends on a relation to the real - real people, taking real risks, in real places - if it fails to foreground the theatrical frame it is difficult for audients to develop the dual consciousness of the content, and their complicity in that content, that is the starting point for reflexivity, and response-ability, in the ethical encounter.
Abstract:
This paper discusses the content, origin and development of Tendering Theory as a theory of price determination. It demonstrates how tendering theory determines market prices and how it differs from game and decision theories, and shows that in a tendering process with non-cooperative, simultaneous, single sealed bids with individual private valuations, extensive public information, a large number of bidders and a long sequence of tendering occasions, a competitive equilibrium develops. The development of a competitive equilibrium means that the concept of the tender as the sum of a valuation and a strategy, which is at the core of tendering theory, cannot be supported, and that there are serious empirical, theoretical and methodological inconsistencies in the theory.
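The erosion of the strategic component under competition can be illustrated with a stylised textbook model, not the paper's own formulation: in a symmetric lowest-bid-wins tender with private costs drawn uniformly on [0, 1], the standard equilibrium bid is b(c) = c + (1 − c)/n, so the markup added to the bidder's valuation shrinks toward zero as the number of bidders n grows, and the tender collapses toward pure valuation:

```python
import random

def equilibrium_bid(cost, n_bidders):
    """Symmetric equilibrium bid in a lowest-bid-wins tender with private
    costs drawn uniformly on [0, 1]: b(c) = c + (1 - c) / n."""
    return cost + (1.0 - cost) / n_bidders

def expected_winning_markup(n_bidders, trials=20000, seed=1):
    """Monte Carlo estimate of the winner's markup over cost."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(trials):
        costs = [rng.random() for _ in range(n_bidders)]
        winner = min(costs)  # in equilibrium the lowest-cost bidder wins
        total += equilibrium_bid(winner, n_bidders) - winner
    return total / trials

# Markup over cost shrinks as the number of bidders grows
markups = [expected_winning_markup(n) for n in (2, 5, 10, 50)]
```

Analytically the expected winning markup in this model is 1/(n + 1), which the simulation approximates; the paper's point is that once this strategic term vanishes, the valuation-plus-strategy decomposition has no content.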
Abstract:
Background: A State-based industry in Australia is in the process of developing a programme to prevent AOD impairment in the workplace. The objective of this study was to determine whether the Theory of Planned Behaviour can help explain the mechanisms by which behaviour change occurs with regard to AOD impairment in the workplace. ---------- Method: A survey of 1165 employees of a State-based industry in Australia was conducted, and a response rate of 98% was achieved. The survey included questions relevant to the Theory of Planned Behaviour: behaviour; behavioural intentions; attitude; perceptions of social pressure; and perceived behavioural control with regard to workplace AOD impairment. ---------- Findings: Less than 3% of participants reported coming to work impaired by AODs. Fewer than 2% of participants reported that they intended to come to work impaired by AODs. The majority of participants (over 80%) reported unfavourable attitudes toward AOD impairment at work. Logistic regression analyses suggest that, consistent with the theory of planned behaviour: attitudes, perceptions of social pressure, and perceived behavioural control with regard to workplace AOD impairment, all predict behavioural intentions (P < .001); and behavioural intentions predict (self-reported) behaviour regarding workplace AOD impairment (P < .001). ---------- Conclusions: The Theory of Planned Behaviour appears to assist with understanding the mechanisms by which behaviour change occurs with regard to AOD impairment in the workplace. An occupational AOD programme which targets those mechanisms for change may improve its impact in preventing workplace AOD impairment.
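The logistic regression analyses described above regress a binary outcome (e.g. intention to come to work impaired) on the three Theory of Planned Behaviour constructs. A minimal sketch on synthetic data; the predictors, coefficients and sample are illustrative assumptions, not the survey's data:

```python
import numpy as np

rng = np.random.default_rng(42)
n = 1000

# Synthetic standardised predictors: attitude, social pressure, control
X = rng.normal(size=(n, 3))
true_logits = 1.5 * X[:, 0] + 1.0 * X[:, 1] + 0.8 * X[:, 2]
intent = (rng.random(n) < 1 / (1 + np.exp(-true_logits))).astype(float)

# Logistic regression fitted by plain gradient descent on the log-loss
Xb = np.column_stack([np.ones(n), X])  # prepend intercept column
w = np.zeros(4)
for _ in range(2000):
    p = 1 / (1 + np.exp(-Xb @ w))          # predicted probabilities
    w -= 0.1 * Xb.T @ (p - intent) / n     # average log-loss gradient step

# w[1:] recovers a positive association for each TPB construct
```

A real analysis would use a fitted package (e.g. statsmodels or R's glm) and report odds ratios and P-values; the hand-rolled fit above only shows the shape of the model.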
Abstract:
Mechanical control systems have become a part of our everyday life. Systems such as automobiles, robot manipulators, mobile robots, satellites, buildings with active vibration controllers and air conditioning systems make life easier and safer, as well as help us explore the world we live in and exploit its available resources. In this chapter, we examine a specific example of a mechanical control system: the Autonomous Underwater Vehicle (AUV). Our contribution to the advancement of AUV research is in the area of guidance and control. We present innovative techniques to design and implement control strategies that consider the optimization of time and/or energy consumption. Recent advances in robotics, control theory, portable energy sources and automation increase our ability to create more intelligent robots and allow us to conduct more explorations by use of autonomous vehicles. This facilitates access to higher-risk areas, longer time underwater, and more efficient exploration as compared to human-occupied vehicles. The use of underwater vehicles is expanding in every area of ocean science. Such vehicles are used by oceanographers, archaeologists, geologists, ocean engineers, and many others. These vehicles are designed to be agile, versatile and robust, and thus their usage has gone from novelty to necessity for any ocean expedition.
Abstract:
This paper serves as a first study on the implementation of control strategies developed using a kinematic reduction onto test-bed autonomous underwater vehicles (AUVs). The equations of motion are presented in the framework of differential geometry, including external dissipative forces, as a forced affine connection control system. We show that the hydrodynamic drag forces can be included in the affine connection, resulting in an affine connection control system. The definitions of kinematic reduction and decoupling vector field are thus extended from the ideal fluid scenario. Control strategies are computed using this new extension and are reformulated for implementation onto a test-bed AUV. We compare these geometrically computed controls to time and energy optimal controls for the same trajectory, which are computed using a previously developed algorithm. Through this comparison we are able to validate our theoretical results based on the experiments conducted using the time and energy efficient strategies.
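As a sketch of the framework referred to above, in notation standard in the geometric control literature (assumed here rather than taken from the paper), a forced affine connection control system on a configuration manifold has the form

```latex
% Forced affine connection control system: trajectory \gamma(t),
% drag force Y_{\mathrm{drag}}, input vector fields Y_a, controls u^a
\nabla_{\gamma'(t)}\,\gamma'(t)
  = Y_{\mathrm{drag}}\bigl(\gamma'(t)\bigr)
  + \sum_{a=1}^{m} u^{a}(t)\, Y_a\bigl(\gamma(t)\bigr)
```

and the paper's construction amounts to absorbing the (velocity-quadratic) drag term into a modified connection $\tilde{\nabla}$, giving the homogeneous form $\tilde{\nabla}_{\gamma'}\gamma' = \sum_a u^a Y_a(\gamma)$, to which the machinery of kinematic reductions and decoupling vector fields applies.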
Abstract:
This dissertation is based on theoretical study and experiments which extend geometric control theory to practical applications within the field of ocean engineering. We present a method for path planning and control design for underwater vehicles by use of the architecture of differential geometry. In addition to the theoretical design of the trajectory and control strategy, we demonstrate the effectiveness of the method via the implementation onto a test-bed autonomous underwater vehicle. Bridging the gap between theory and application is the ultimate goal of control theory. Major developments have occurred recently in the field of geometric control which narrow this gap and which promote research linking theory and application. In particular, Riemannian and affine differential geometry have proven to be a very effective approach to the modeling of mechanical systems such as underwater vehicles. In this framework, the application of a kinematic reduction allows us to calculate control strategies for fully and under-actuated vehicles via kinematic decoupled motion planning. However, this method has not yet been extended to account for external forces such as dissipative viscous drag and buoyancy induced potentials acting on a submerged vehicle. To fully bridge the gap between theory and application, this dissertation addresses the extension of this geometric control design method to include such forces. We incorporate the hydrodynamic drag experienced by the vehicle by modifying the Levi-Civita affine connection and demonstrate a method for the compensation of potential forces experienced during a prescribed motion. We present the design method for multiple different missions and include experimental results which validate both the extension of the theory and the ability to implement control strategies designed through the use of geometric techniques. 
By use of the extension presented in this dissertation, the underwater vehicle application successfully demonstrates the applicability of geometric methods to design implementable motion planning solutions for complex mechanical systems having equal or fewer input forces than available degrees of freedom. Thus, we provide another tool with which to further increase the autonomy of underwater vehicles.