726 results for Additional somatosensory information
Contextualizing the tensions and weaknesses of information privacy and data breach notification laws
Abstract:
Data breach notification laws have detailed numerous failures relating to the protection of personal information that have blighted both corporate and governmental institutions. There are obvious parallels between data breach notification and information privacy law as they both involve the protection of personal information. However, a closer examination of both laws reveals conceptual differences that give rise to vertical tensions between each law and shared horizontal weaknesses within both laws. Tensions emanate from conflicting approaches to the implementation of information privacy law that results in different regimes and the implementation of different types of protections. Shared weaknesses arise from an overt focus on specified types of personal information which results in ‘one size fits all’ legal remedies. The author contends that a greater contextual approach which promotes the importance of social context is required and highlights the effect that contextualization could have on both laws.
Abstract:
The concept of sustainable urban development has been pushed to the forefront of policy-making and politics as the world wakes up to the impacts of climate change and the effects of modern urban lifestyles. Today, sustainable development has become a very prominent element in the day-to-day debate on urban policy and the expression of that policy in urban planning and development decisions. As a result of this, during the last few years, sustainable development automation applications such as sustainable urban development decision support systems have become popular tools as they offer new opportunities for local governments to realise their sustainable development agendas. This chapter explores a range of issues associated with the application of information and communication technologies and decision support systems in the process of underpinning sustainable urban development. The chapter considers how information and communication technologies can be applied to enhance urban planning, raise environmental awareness, share decisions and improve public participation. It introduces and explores three web-based geographical information systems projects as best practice. These systems are developed as support tools to include public opinion in the urban planning and development processes, and to provide planners with comprehensive tools for the analysis of sustainable urban development variants in order to prepare the best plans for constructing sustainable urban communities and futures.
Abstract:
Most information retrieval (IR) models treat the presence of a term within a document as an indication that the document is somehow "about" that term; they do not take into account when a term might be explicitly negated. Medical data, by its nature, contains a high frequency of negated terms - e.g. "review of systems showed no chest pain or shortness of breath". This paper presents a study of the effects of negation on information retrieval. We present a number of experiments to determine whether negation has a significant negative effect on IR performance and whether language models that take negation into account might improve performance. We use a collection of real medical records as our test corpus. Our findings are that negation has some effect on system performance, but this will likely be confined to domains such as medical data where negation is prevalent.
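As a minimal illustration of the kind of negation handling the paper studies, the sketch below flags terms that fall inside a negation scope before they reach a simple presence-based index; the NegEx-style trigger list and the fixed scope window are invented for the example rather than taken from the paper.

```python
# Minimal sketch, not the paper's language model: NegEx-style trigger words
# and a fixed scope window (both invented here) are used to flag terms that
# appear inside a negation scope, so they can be excluded from a simple
# presence-based index.
import re

NEGATION_TRIGGERS = {"no", "not", "without", "denies", "denied"}
SCOPE_WINDOW = 6  # number of tokens after a trigger treated as negated


def tokenize(text):
    return re.findall(r"[a-z']+", text.lower())


def mark_negated_terms(text):
    """Return (term, is_negated) pairs for a document."""
    marked = []
    scope_remaining = 0
    for tok in tokenize(text):
        if tok in NEGATION_TRIGGERS:
            scope_remaining = SCOPE_WINDOW
            continue
        marked.append((tok, scope_remaining > 0))
        if scope_remaining > 0:
            scope_remaining -= 1
    return marked


def index_terms(text):
    """Keep only terms that fall outside any negation scope."""
    return [term for term, negated in mark_negated_terms(text) if not negated]


doc = "Review of systems showed no chest pain or shortness of breath."
print(index_terms(doc))  # the negated symptoms are dropped from the index
```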
Abstract:
The development of locally-based healthcare initiatives, such as community health coalitions that focus on capacity building programs and multi-faceted responses to long-term health problems, has become an increasingly important part of the public health landscape. As a result of their complexity and the level of investment, it has become necessary to develop innovative ways to help manage these new healthcare approaches. Geographical Information Systems (GIS) have been suggested as one of the innovative approaches that will allow community health coalitions to better manage and plan their activities. The focus of this paper is to provide a commentary on the use of GIS as a tool for community coalitions and to discuss some of the potential benefits and issues surrounding the development of these tools.
Informed learning: a pedagogical construct attending simultaneously to information use and learning.
Abstract:
The idea of informed learning, applicable in academic, workplace and community settings, has been derived largely from a program of phenomenographic research in the field of information literacy, which has illuminated the experience of using information to learn. Informed learning is about simultaneous attention to information use and learning, where both information and learning are considered to be relational; and is built upon a series of key concepts such as second–order perspective, simultaneity, awareness, and relationality. Informed learning also relies heavily on reflection as a strategy for bringing about learning. As a pedagogical construct, informed learning supports inclusive curriculum design and implementation. This paper reports aspects of the informed learning research agenda which are currently being pursued at the Queensland University of Technology (QUT). The first part elaborates the idea of informed learning, examines the key concepts underpinning this pedagogical construct, and explains its emergence from the research base of the QUT Information Studies research team. The second presents a case, which demonstrates the ongoing development of informed learning theory and practice, through the development of inclusive informed learning for a culturally diverse higher education context.
Abstract:
One of the main causes of above knee or transfemoral amputation (TFA) in the developed world is trauma to the limb. The number of people undergoing TFA due to limb trauma, particularly due to war injuries, has been increasing. Typically the trauma amputee population, including war-related amputees, is otherwise healthy, active and desires to return to employment and their usual lifestyle. Consequently there is a growing need to restore long-term mobility and limb function to this population. Traditionally transfemoral amputees are provided with an artificial or prosthetic leg that consists of a fabricated socket, knee joint mechanism and a prosthetic foot. Amputees have reported several problems related to the socket of their prosthetic limb. These include pain in the residual limb, poor socket fit, discomfort and poor mobility. Removing the socket from the prosthetic limb could eliminate or reduce these problems. A solution to this is the direct attachment of the prosthesis to the residual bone (femur) inside the residual limb. This technique has been used on a small population of transfemoral amputees since 1990. A threaded titanium implant is screwed into the shaft of the femur and a second component connects between the implant and the prosthesis. A period of time is required to allow the implant to become fully attached to the bone, called osseointegration (OI), and be able to withstand applied load; then the prosthesis can be attached. The advantages of transfemoral osseointegration (TFOI) over conventional prosthetic sockets include better hip mobility, sitting comfort and prosthetic retention and fewer skin problems on the residual limb. However, due to the length of time required for OI to progress and to complete the rehabilitation exercises, it can take up to twelve months after implant insertion for an amputee to be able to bear load and to walk unaided. The long rehabilitation time is a significant disadvantage of TFOI and may be impeding the wider adoption of the technique. There is a need for a non-invasive method of assessing the degree of osseointegration between the bone and the implant. If such a method was capable of determining the progression of TFOI and assessing when the implant was able to withstand physiological load, it could reduce the overall rehabilitation time. Vibration analysis has been suggested as a potential technique: it is a non-destructive method of assessing the dynamic properties of a structure. Changes in the physical properties of a structure can be identified from changes in its dynamic properties. Consequently vibration analysis, both experimental and computational, has been used to assess bone fracture healing, prosthetic hip loosening and dental implant OI with varying degrees of success. More recently experimental vibration analysis has been used in TFOI. However further work is needed to assess the potential of the technique and fully characterise the femur-implant system. The overall aim of this study was to develop physical and computational models of the TFOI femur-implant system and use these models to investigate the feasibility of vibration analysis to detect the process of OI. Femur-implant physical models were developed and manufactured using synthetic materials to represent four key stages of OI development (identified from a physiological model), simulated using different interface conditions between the implant and femur. Experimental vibration analysis (modal analysis) was then conducted using the physical models.
The femur-implant models, representing stage one to stage four of OI development, were excited and the modal parameters obtained over the range 0-5 kHz. The results indicated the technique had limited capability in distinguishing between different interface conditions. The fundamental bending mode did not alter with interfacial changes. However higher modes were able to track chronological changes in interface condition by the change in natural frequency, although no one modal parameter could uniquely distinguish between each interface condition. The importance of the model boundary condition (how the model is constrained) was the key finding; variations in the boundary condition altered the modal parameters obtained. Therefore the boundary conditions need to be held constant between tests in order for the detected modal parameter changes to be attributed to interface condition changes. A three-dimensional Finite Element (FE) model of the femur-implant model was then developed and used to explore the sensitivity of the modal parameters to more subtle interfacial and boundary condition changes. The FE model was created using the synthetic femur geometry and an approximation of the implant geometry. The natural frequencies of the FE model were found to match the experimental frequencies within 20% and the FE and experimental mode shapes were similar. Therefore the FE model was shown to successfully capture the dynamic response of the physical system. As was found with the experimental modal analysis, the fundamental bending mode of the FE model did not alter due to changes in interface elastic modulus. Axial and torsional modes were identified by the FE model that were not detected experimentally; the torsional mode exhibited the largest frequency change due to interfacial changes (103% between the lower and upper limits of the interface modulus range). Therefore the FE model provided additional information on the dynamic response of the system and was complementary to the experimental model. The small changes in natural frequency over a large range of interface region elastic moduli indicated the method may only be able to distinguish between early and late OI progression. The boundary conditions applied to the FE model influenced the modal parameters to a far greater extent than the interface condition variations. Therefore the FE model, as well as the experimental modal analysis, indicated that the boundary conditions need to be held constant between tests in order for the detected changes in modal parameters to be attributed to interface condition changes alone. The results of this study suggest that in a clinical setting it is unlikely that the in vivo boundary conditions of the amputated femur could be adequately controlled or replicated over time and consequently it is unlikely that any longitudinal change in frequency detected by the modal analysis technique could be attributed exclusively to changes at the femur-implant interface. Therefore further development of the modal analysis technique would require significant consideration of the clinical boundary conditions and investigation of modes other than the bending modes.
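To make the link between interface stiffness and natural frequency concrete, here is a toy lumped-parameter sketch (not the thesis FE model) in which the femur and implant become two masses joined by an 'interface' spring standing in for the degree of osseointegration; the undamped natural frequencies come from an eigen-analysis, and every numerical value is illustrative only.

```python
# Toy sketch, not the thesis FE model: the femur and implant are reduced to
# two lumped masses, coupled by an "interface" spring whose stiffness k_int
# stands in for the degree of osseointegration, with the femur also tied to
# a support spring. All numerical values are illustrative only; the point is
# how the undamped natural frequencies shift as the interface stiffens.
import numpy as np


def natural_frequencies_hz(k_interface, m_femur=0.5, m_implant=0.1,
                           k_support=1.0e5):
    """Undamped natural frequencies of a support--femur--interface--implant chain."""
    M = np.diag([m_femur, m_implant])
    K = np.array([[k_support + k_interface, -k_interface],
                  [-k_interface,             k_interface]])
    # Eigenvalues of M^-1 K are the squared angular natural frequencies.
    eigvals = np.linalg.eigvals(np.linalg.solve(M, K))
    omega = np.sqrt(np.sort(eigvals.real))
    return omega / (2.0 * np.pi)


for k_int in (1e4, 1e5, 1e6, 1e7):  # 'early' to 'late' integration
    f1, f2 = natural_frequencies_hz(k_int)
    print(f"k_int = {k_int:.0e} N/m -> f1 = {f1:7.1f} Hz, f2 = {f2:8.1f} Hz")
```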
Abstract:
Increasingly, celebrities appear not only as endorsers for products but are apparently engaged in entrepreneurial roles as initiators, owners and perhaps even managers in the ventures that market the products they promote. Despite being extensively referred to in popular media, scholars have been slow to recognise the importance of this new phenomenon. This thesis argues theoretically and shows empirically that celebrity entrepreneurs are more effective communicators than typical celebrity endorsers because of their increased engagement with ventures. I theorise that greater engagement increases the celebrity's emotional involvement as perceived by consumers. This is an endorser quality thus far neglected in the marketing communications literature. In turn, emotional involvement, much like the empirically established dimensions trustworthiness, expertise and attractiveness, should affect traditional outcome variables such as attitude towards the advertisement and brand. On the downside, increases in celebrity engagement may lead to relatively stronger and worsening changes in attitudes towards the brand if and when negative information about the celebrity is revealed. A series of eight experiments was conducted on 781 Swedish and Baltic students and 151 Swedish retirees. Though there were nuanced differences and additional complexities in each experiment, participants' reactions to advertisements containing a celebrity portrayed as a typical endorser or entrepreneur were recorded. The overall results of these experiments suggest that emotional involvement can be successfully operationalised as distinct from variables previously known to influence communication effectiveness. In addition, emotional involvement has positive effects on attitudes toward the advertisement and brand that are as strong as the predictors traditionally applied in the marketing communications literature. Moreover, the celebrity entrepreneur condition in the experimental manipulation consistently led to an increase in emotional involvement and, to a lesser extent, trustworthiness, but not expertise and attractiveness. Finally, negative celebrity information led to a change in participants' attitudes towards the brand which were more strongly negative for celebrity entrepreneurs than for celebrity endorsers. In addition, the effect of negative celebrity information on a company's brand is worse when the company supports the celebrity rather than fires them. However, this effect did not appear to interact with the celebrity's purported engagement.
Abstract:
This article explores the quality of accounting information in listed family firms. The authors exploit the features of the Italian equity market, characterized by high ownership concentration across all types of firms, to disentangle the effects of family ownership from other major block holders on the quality of accounting information. The findings document that family firms convey financial information of higher quality compared to their nonfamily peers. Furthermore, the authors provide evidence that the determinants of accounting quality differ across family and nonfamily firms.
Abstract:
There is currently a migration trend from traditional electrical supervisory control and data acquisition (SCADA) systems towards a smart grid based approach to critical infrastructure management. This project provides an evaluation of existing and proposed implementations for both traditional electrical SCADA and smart grid based architectures, and proposes a set of reference requirements which test bed implementations should implement. A high-level design for smart grid test beds is proposed and an initial implementation performed, based on the proposed design, using open source and freely available software tools. The project examines the move towards smart grid based critical infrastructure management and illustrates the increased security requirements. The implemented test bed provides a basic framework for testing network requirements in a smart grid environment, as well as a platform for further research and development, particularly to develop, implement and test network security capabilities such as intrusion detection and network forensics. The project proposes and develops an architecture for emulating some smart grid functionality. The Common Open Research Emulator (CORE) platform was used to emulate the communication network of the smart grid; specifically, CORE was used to virtualise and emulate the TCP/IP networking stack. This is intended to be used for further evaluation and analysis, for example the analysis of application protocol messages. As a proof of concept, software libraries were designed, developed and documented to enable and support the design and development of further emulated smart grid components, such as reclosers, switches and smart meters. As part of the testing and evaluation, a Modbus based smart meter emulator was developed to provide the basic functionality of a smart meter. Further code was developed to send Modbus request messages to the emulated smart meter and receive Modbus responses from it. Although the functionality of the emulated components was limited, it does provide a starting point for further research and development, and the design is extensible to enable the implementation of additional SCADA protocols. The project also defines evaluation criteria for the implemented test bed, and experiments are designed to evaluate the test bed according to the defined criteria. The results of the experiments are collated and presented, and conclusions are drawn from the results to facilitate discussion of the test bed implementation. The discussion also presents possible future work.
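As a rough illustration of the Modbus traffic described above, the following sketch builds a single Modbus TCP 'read holding registers' request, sends it to an emulated smart meter and parses the response using only the Python standard library, so it assumes no particular Modbus package; the host, port, unit id and register layout are placeholders rather than details from the project.

```python
# Hedged sketch (not the project's code): builds a single Modbus TCP
# "read holding registers" request, sends it to an emulated smart meter and
# parses the response, using only the standard library so no particular
# Modbus package is assumed. Host, port, unit id and register layout are
# placeholders, not details taken from the test bed.
import socket
import struct


def _recv_exact(sock, n):
    """Read exactly n bytes or raise if the peer closes early."""
    data = b""
    while len(data) < n:
        chunk = sock.recv(n - len(data))
        if not chunk:
            raise ConnectionError("socket closed before full response")
        data += chunk
    return data


def read_holding_registers(host, port=502, unit_id=1, start_addr=0, count=2,
                           transaction_id=1):
    # PDU: function code 0x03 (read holding registers), start address, quantity
    pdu = struct.pack(">BHH", 0x03, start_addr, count)
    # MBAP header: transaction id, protocol id (0), remaining byte count, unit id
    mbap = struct.pack(">HHHB", transaction_id, 0, len(pdu) + 1, unit_id)
    with socket.create_connection((host, port), timeout=5) as sock:
        sock.sendall(mbap + pdu)
        _, _, length, _ = struct.unpack(">HHHB", _recv_exact(sock, 7))
        body = _recv_exact(sock, length - 1)      # function code + data
    if body[0] & 0x80:                            # exception response
        raise RuntimeError(f"Modbus exception code {body[1]}")
    byte_count = body[1]
    return struct.unpack(f">{byte_count // 2}H", body[2:2 + byte_count])


# Example call against a local emulator (placeholder address and port):
# print(read_holding_registers("127.0.0.1", port=5020, start_addr=0, count=4))
```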
Abstract:
Aim. This paper is a report of a review conducted to identify (a) best practice in information transfer from the emergency department for multi-trauma patients; (b) conduits and barriers to information transfer in trauma care and related settings; and (c) interventions that have an impact on information communication at handover and beyond. Background. Information transfer is integral to effective trauma care, and communication breakdown results in important challenges to this. However, evidence of adequacy of structures and processes to ensure transfer of patient information through the acute phase of trauma care is limited. Data sources. Papers were sourced from a search of 12 online databases and scanning references from relevant papers for 1990–2009. Review methods. The review was conducted according to the University of York’s Centre for Reviews and Dissemination guidelines. Studies were included if they concerned issues that influenced information transfer for patients in healthcare settings. Results. Forty-five research papers, four literature reviews and one policy statement were found to be relevant to parts of the topic, but not all of it. The main issues emerging concerned the impact of communication breakdown in some form, and included communication issues within trauma team processes, lack of structure and clarity during handovers including missing, irrelevant and inaccurate information, distractions and poorly documented care. Conclusion. Many factors influence information transfer but are poorly identified in relation to trauma care. The measurement of information transfer, which is integral to patient handover, has not been the focus of research to date. Nonetheless, documented patient information is considered evidence of care and a resource that affects continuing care.
Abstract:
This research shows that gross pollutant traps (GPTs) continue to play an important role in preventing visible street waste—gross pollutants—from contaminating the environment. The demand for these GPTs calls for stringent quality control and this research provides a foundation to rigorously examine the devices. A novel and comprehensive testing approach to examine a dry sump GPT was developed. The GPT is designed with internal screens to capture gross pollutants—organic matter and anthropogenic litter. This device has not been previously investigated. Apart from the review of GPTs and gross pollutant data, the testing approach includes four additional aspects to this research, which are: field work and an historical overview of street waste/stormwater pollution, calibration of equipment, hydrodynamic studies and gross pollutant capture/retention investigations. This work is the first comprehensive investigation of its kind and provides valuable practical information for the current research and any future work pertaining to the operations of GPTs and management of street waste in the urban environment. Gross pollutant traps—including patented and registered designs developed by industry—have specific internal configurations and hydrodynamic separation characteristics which demand individual testing and performance assessments. Stormwater devices are usually evaluated by environmental protection agencies (EPAs), professional bodies and water research centres. In the USA, the American Society of Civil Engineers (ASCE) and the Environmental Water Resource Institute (EWRI) are examples of professional and research organisations actively involved in these evaluation/verification programs. These programs largely rely on field evaluations alone that are limited in scope, mainly for cost and logistical reasons. In Australia, evaluation/verification programs of new devices in the stormwater industry are not well established. The current limitations in the evaluation methodologies of GPTs have been addressed in this research by establishing a new testing approach. This approach uses a combination of physical and theoretical models to examine in detail the hydrodynamic and capture/retention characteristics of the GPT. The physical model consisted of a 50% scale model GPT rig with screen blockages varying from 0 to 100%. This rig was placed in a 20 m flume and various inlet and outflow operating conditions were modelled on observations made during the field monitoring of GPTs. Due to infrequent cleaning, the retaining screens inside the GPTs were often observed to be blocked with organic matter. Blocked screens can radically change the hydrodynamic and gross pollutant capture/retention characteristics of a GPT as shown from this research. This research involved the use of equipment, such as acoustic Doppler velocimeters (ADVs) and dye concentration (Komori) probes, which were deployed for the first time in a dry sump GPT. Hence, it was necessary to rigorously evaluate the capability and performance of these devices, particularly in the case of the custom made Komori probes, about which little was known. The evaluation revealed that the Komori probes have a frequency response of up to 100 Hz —which is dependent upon fluid velocities—and this was adequate to measure the relevant fluctuations of dye introduced into the GPT flow domain. The outcome of this evaluation resulted in establishing methodologies for the hydrodynamic measurements and gross pollutant capture/retention experiments. 
The hydrodynamic measurements consisted of point-based acoustic Doppler velocimeter (ADV) measurements, flow field particle image velocimetry (PIV) capture, head loss experiments and computational fluid dynamics (CFD) simulation. The gross pollutant capture/retention experiments included the use of anthropogenic litter components, tracer dye and custom modified artificial gross pollutants. Anthropogenic litter was limited to tin cans, bottle caps and plastic bags, while the artificial pollutants consisted of 40 mm spheres with a range of four buoyancies. The hydrodynamic results led to the definition of global and local flow features. The gross pollutant capture/retention results showed that when the internal retaining screens are fully blocked, the capture/retention performance of the GPT rapidly deteriorates. The overall results showed that the GPT will operate efficiently until at least 70% of the screens are blocked, particularly at high flow rates. This important finding indicates that cleaning operations could be more effectively planned when the GPT capture/retention performance deteriorates. At lower flow rates, the capture/retention performance trends were reversed. There is little difference in the poor capture/retention performance between a fully blocked GPT and a partially filled or empty GPT with 100% screen blockages. The results also revealed that the GPT is designed with an efficient high flow bypass system to avoid upstream blockages. The capture/retention performance of the GPT at medium to high inlet flow rates is close to maximum efficiency (100%). With regard to the design appraisal of the GPT, a raised inlet offers a better capture/retention performance, particularly at lower flow rates. Further design appraisals of the GPT are recommended.
Abstract:
Information Overload and Mismatch are two fundamental problems affecting the effectiveness of information filtering systems. Even though both term-based and pattern-based approaches have been proposed to address the problems of overload and mismatch, neither of these approaches alone can provide a satisfactory solution to these problems. This paper presents a novel two-stage information filtering model which combines the merits of term-based and pattern-based approaches to effectively filter a sheer volume of information. In particular, the first filtering stage is supported by a novel rough analysis model which efficiently removes a large number of irrelevant documents, thereby addressing the overload problem. The second filtering stage is empowered by a semantically rich pattern taxonomy mining model which effectively fetches incoming documents according to the specific information needs of a user, thereby addressing the mismatch problem. The experimental results based on the RCV1 corpus show that the proposed two-stage filtering model significantly outperforms both the term-based and pattern-based information filtering models.
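A schematic sketch of the two-stage idea, with the paper's rough analysis and pattern taxonomy mining models replaced by deliberately simple stand-ins: stage one drops documents whose term overlap with a user profile falls below a threshold (overload), and stage two re-ranks the survivors by how many multi-term patterns they satisfy (mismatch); the profile terms, patterns and threshold are invented.

```python
# Illustrative sketch only: the paper's rough analysis and pattern taxonomy
# mining models are replaced by deliberately simple stand-ins. Stage one
# discards documents whose term overlap with the user profile is below a
# threshold (overload); stage two re-ranks the survivors by how many
# multi-term "patterns" they contain (mismatch). Profile terms, patterns and
# the threshold are invented for the example.
def stage1_term_filter(docs, profile_terms, threshold=0.2):
    """Keep documents whose overlap with the profile terms meets the threshold."""
    kept = []
    for doc in docs:
        tokens = set(doc.lower().split())
        overlap = len(tokens & profile_terms) / len(profile_terms)
        if overlap >= threshold:
            kept.append(doc)
    return kept


def stage2_pattern_rank(docs, patterns):
    """Rank the remaining documents by the number of term patterns they satisfy."""
    scored = []
    for doc in docs:
        tokens = set(doc.lower().split())
        score = sum(1 for pattern in patterns if pattern <= tokens)
        scored.append((score, doc))
    return sorted(scored, reverse=True)


profile_terms = {"pattern", "mining", "information", "filtering", "taxonomy"}
patterns = [{"pattern", "mining"}, {"information", "filtering"}]
docs = [
    "a pattern mining approach to adaptive information filtering",
    "taxonomy construction from web documents",
    "recipe for sourdough bread",
]
print(stage2_pattern_rank(stage1_term_filter(docs, profile_terms), patterns))
```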
Abstract:
Aims: Telemonitoring (TM) and structured telephone support (STS) have the potential to deliver specialised management to more patients with chronic heart failure (CHF), but their efficacy is still to be proven. Objectives: To review randomised controlled trials (RCTs) of TM or STS on all-cause mortality and all-cause and CHF-related hospitalisations in patients with CHF, as a non-invasive remote model of specialised disease-management intervention. Methods and results: Data sources: We searched 15 electronic databases and hand-searched bibliographies of relevant studies, systematic reviews, and meeting abstracts. Two reviewers independently extracted all data. Study eligibility and participants: We included any randomised controlled trials (RCTs) comparing TM or STS to usual care of patients with CHF. Studies that included intensified management with additional home or clinic visits were excluded. Synthesis: Primary outcomes (mortality and hospitalisations) were analysed; secondary outcomes (cost, length of stay, quality of life) were tabulated. Results: Thirty RCTs of STS and TM were identified (25 peer-reviewed publications (n=8,323) and five abstracts (n=1,482)). Of the 25 peer-reviewed studies, 11 evaluated TM (2,710 participants), 16 evaluated STS (5,613 participants) and two tested both interventions. TM reduced all-cause mortality (risk ratio (RR) 0.66 [95% CI 0.54-0.81], p<0.0001) and STS showed similar trends (RR 0.88 [95% CI 0.76-1.01], p=0.08). Both TM (RR 0.79 [95% CI 0.67-0.94], p=0.008) and STS (RR 0.77 [95% CI 0.68-0.87], p<0.0001) reduced CHF-related hospitalisations. Both interventions improved quality of life, reduced costs, and were acceptable to patients. Improvements in prescribing, patient knowledge and self-care, and functional class were observed. Conclusion: TM and STS both appear effective interventions to improve outcomes in patients with CHF.
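For readers unfamiliar with the effect measure quoted above, here is a small worked sketch of how a risk ratio and its 95% confidence interval are computed from event counts in two trial arms using the standard log-risk-ratio normal approximation; the counts are invented purely to show the arithmetic and are not data from the review.

```python
# Worked sketch of the effect measure reported above: a risk ratio and its
# 95% confidence interval from event counts in two trial arms, using the
# standard log-risk-ratio normal approximation. The counts below are invented
# purely to show the arithmetic; they are not data from the review.
import math


def risk_ratio_ci(events_tx, n_tx, events_ctrl, n_ctrl, z=1.96):
    rr = (events_tx / n_tx) / (events_ctrl / n_ctrl)
    se_log_rr = math.sqrt(1 / events_tx - 1 / n_tx + 1 / events_ctrl - 1 / n_ctrl)
    lower = math.exp(math.log(rr) - z * se_log_rr)
    upper = math.exp(math.log(rr) + z * se_log_rr)
    return rr, lower, upper


rr, lower, upper = risk_ratio_ci(events_tx=80, n_tx=1000, events_ctrl=120, n_ctrl=1000)
print(f"RR = {rr:.2f} (95% CI {lower:.2f}-{upper:.2f})")
```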
Abstract:
Universities want to drive research performance to new levels to increase competitiveness and secure additional research funding. Information technology departments, libraries and research offices are being tasked with the triple role of developing infrastructure, introducing new services, and raising researchers’ awareness and skill levels in the uptake of these services and related eResearch concepts in order to achieve institutional goals. The purpose of this poster is to provide an overview of the coordinated approach to the provision of research skills workshops and seminars provided to researchers and higher degree research (HDR) students at QUT. Seminars and workshops are provided by the Library in collaboration with High Performance Computing and Research Support (HPC) and the Research Students Centre. The sessions are findable and bookable via the Library’s KickStart system. A list of session topics is provided. The Research Support services web site brings together information on a range of research support services provided by the Library and HPC. Seminars and workshops are also available via a research training calendar system into which all sessions are populated, regardless of the provider. The Library and HPC are also undertaking a train-the-trainer program.
Abstract:
Impedance cardiography is an application of bioimpedance analysis primarily used in a research setting to determine cardiac output. It is a non-invasive technique that measures the change in the impedance of the thorax which is attributed to the ejection of a volume of blood from the heart. The cardiac output is calculated from the measured impedance using the parallel conductor theory and a constant value for the resistivity of blood. However, the resistivity of blood has been shown to be velocity dependent due to changes in the orientation of red blood cells induced by changing shear forces during flow. The overall goal of this thesis was to study the effect that flow deviations have on the electrical impedance of blood, both experimentally and theoretically, and to apply the results to a clinical setting. The resistivity of stationary blood is isotropic as the red blood cells are randomly orientated due to Brownian motion. In the case of blood flowing through rigid tubes, the resistivity is anisotropic due to the biconcave discoidal shape and orientation of the cells. The generation of shear forces across the width of the tube during flow causes the cells to align with the minimal cross sectional area facing the direction of flow. This is in order to minimise the shear stress experienced by the cells. This in turn results in a larger cross sectional area of plasma and a reduction in the resistivity of the blood as the flow increases. Understanding the contribution of this effect on the thoracic impedance change is a vital step in achieving clinical acceptance of impedance cardiography. Published literature investigates the resistivity variations for constant blood flow. In this case, the shear forces are constant and the impedance remains constant during flow at a magnitude which is less than that for stationary blood. The research presented in this thesis, however, investigates the variations in resistivity of blood during pulsatile flow through rigid tubes and the relationship between impedance, velocity and acceleration. Using rigid tubes isolates the impedance change to variations associated with changes in cell orientation only. The implications of red blood cell orientation changes for clinical impedance cardiography were also explored. This was achieved through measurement and analysis of the experimental impedance of pulsatile blood flowing through rigid tubes in a mock circulatory system. A novel theoretical model including cell orientation dynamics was developed for the impedance of pulsatile blood through rigid tubes. The impedance of flowing blood was theoretically calculated using analytical methods for flow through straight tubes and the numerical Lattice Boltzmann method for flow through complex geometries such as aortic valve stenosis. The result of the analytical theoretical model was compared to the experimental impedance measurements through rigid tubes. The impedance calculated for flow through a stenosis using the Lattice Boltzmann method provides results for comparison with impedance cardiography measurements collected as part of a pilot clinical trial to assess the suitability of using bioimpedance techniques to assess the presence of aortic stenosis. The experimental and theoretical impedance of blood was shown to inversely follow the blood velocity during pulsatile flow with correlations of -0.72 and -0.74 respectively.
The results for both the experimental and theoretical investigations demonstrate that the acceleration of the blood is an important factor in determining the impedance, in addition to the velocity. During acceleration, the relationship between impedance and velocity is linear (r² = 0.98, experimental and r² = 0.94, theoretical). The relationship between the impedance and velocity during the deceleration phase is characterised by a time decay constant, τ, ranging from 10 to 50 s. The high level of agreement between the experimental and theoretically modelled impedance demonstrates the accuracy of the model developed here. An increase in the haematocrit of the blood resulted in an increase in the magnitude of the impedance change due to changes in the orientation of red blood cells. The time decay constant was shown to decrease linearly with the haematocrit for both experimental and theoretical results, although the slope of this decrease was larger in the experimental case. The radius of the tube influences the experimental and theoretical impedance given the same velocity of flow. However, when the velocity was divided by the radius of the tube (labelled the reduced average velocity) the impedance response was the same for two experimental tubes with equivalent reduced average velocity but with different radii. The temperature of the blood was also shown to affect the impedance, with the impedance decreasing as the temperature increased. These results are the first published for the impedance of pulsatile blood. The experimental impedance change measured orthogonal to the direction of flow is in the opposite direction to that measured in the direction of flow. These results indicate that the impedance of blood flowing through rigid cylindrical tubes is axisymmetric along the radius. This has not previously been verified experimentally. Time frequency analysis of the experimental results demonstrated that the measured impedance contains the same frequency components occurring at the same time point in the cycle as the velocity signal contains. This suggests that the impedance contains many of the fluctuations of the velocity signal. Application of a theoretical steady flow model to pulsatile flow presented here has verified that the steady flow model is not adequate in calculating the impedance of pulsatile blood flow. The success of the new theoretical model over the steady flow model demonstrates that the velocity profile is important in determining the impedance of pulsatile blood. The clinical application of the impedance of blood flow through a stenosis was theoretically modelled using the Lattice Boltzmann method (LBM) for fluid flow through complex geometries. The impedance of blood exiting a narrow orifice was calculated for varying degrees of stenosis. Clinical impedance cardiography measurements were also recorded for both aortic valvular stenosis patients (n = 4) and control subjects (n = 4) with structurally normal hearts. This pilot trial was used to corroborate the results of the LBM. Results from both investigations showed that the decay time constant for impedance has potential in the assessment of aortic valve stenosis. In the theoretically modelled case (LBM results), the decay time constant increased with an increase in the degree of stenosis. The clinical results also showed a statistically significant difference in time decay constant between control and test subjects (P = 0.03).
The time decay constant calculated for test subjects (τ = 180-250 s) is consistently larger than that determined for control subjects (τ = 50-130 s). This difference is thought to be due to differences in the orientation response of the cells as blood flows through the stenosis. Such a non-invasive technique using the time decay constant for screening of aortic stenosis provides additional information to that currently given by impedance cardiography techniques and improves the value of the device to practitioners. However, the results still need to be verified in a larger study. While impedance cardiography has not been widely adopted clinically, it is research such as this that will enable future acceptance of the method.
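As a toy illustration of the decay-time-constant analysis described above (not the thesis pipeline), the sketch below recovers tau by a log-linear least-squares fit to a synthetic impedance deviation that decays exponentially during deceleration; the signal, noise level and true tau are all made up.

```python
# Toy sketch, not the thesis analysis pipeline: the decay time constant tau is
# estimated by a log-linear least-squares fit to an impedance deviation that
# decays as delta_Z(t) = A * exp(-t / tau) during the deceleration phase. The
# signal, noise level and the "true" tau of 30 s are all synthetic.
import numpy as np


def fit_decay_time_constant(t, delta_z):
    """Fit delta_z(t) = A * exp(-t / tau) and return tau (log-linear fit)."""
    slope, _intercept = np.polyfit(t, np.log(delta_z), 1)
    return -1.0 / slope


rng = np.random.default_rng(0)
t = np.linspace(0.0, 60.0, 120)
true_tau = 30.0
delta_z = 0.5 * np.exp(-t / true_tau) * (1.0 + 0.01 * rng.standard_normal(t.size))
print(f"fitted tau = {fit_decay_time_constant(t, delta_z):.1f} s")  # close to 30 s
```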