939 results for Time components


Relevance: 20.00%

Abstract:

To date, biodegradable networks and particularly their kinetic chain lengths have been characterized by analysis of their degradation products in solution. We characterize the network itself by NMR analysis in the solvent-swollen state under magic angle spinning conditions. The networks were prepared by photoinitiated cross-linking of poly(dl-lactide)−dimethacrylate macromers (5 kg/mol) in the presence of an unreactive diluent. Using diffusion filtering and 2D correlation spectroscopy techniques, all network components are identified. By quantification of network-bound photoinitiator fragments, an average kinetic chain length of 9 ± 2 methacrylate units is determined. The PDLLA macromer solution was also used with a dye to prepare computer-designed structures by stereolithography. For these network structures, the average kinetic chain length is 24 ± 4 methacrylate units. In all cases the calculated molecular weights of the polymethacrylate chains after degradation are at most 8.8 kg/mol, which is far below the threshold for renal clearance. Upon incubation in phosphate buffered saline at 37 °C, the networks show a mass loss profile over time similar to that of linear high-molecular-weight PDLLA (HMW PDLLA). The mechanical properties are preserved longer for the PDLLA networks than for HMW PDLLA: the initial tensile strength of 47 ± 2 MPa does not decrease significantly for the first 15 weeks, whereas HMW PDLLA loses 85 ± 5% of its strength within 5 weeks. The physical properties, kinetic chain length, and degradation profile of these photo-cross-linked PDLLA networks make them well-suited materials for orthopedic applications and for use in (bone) tissue engineering.

Relevance: 20.00%

Abstract:

Cell invasion involves a population of cells which are motile and proliferative. Traditional discrete models of proliferation involve agents depositing daughter agents on nearest-neighbor lattice sites. Motivated by time-lapse images of cell invasion, we propose and analyze two new discrete proliferation models in the context of an exclusion process with an undirected motility mechanism. These discrete models are related to a family of reaction-diffusion equations and can be used to make predictions over a range of scales appropriate for interpreting experimental data. The new proliferation mechanisms are biologically relevant and mathematically convenient as the continuum-discrete relationship is more robust for the new proliferation mechanisms relative to traditional approaches.
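A minimal sketch of the kind of lattice model described above may help: a 1D exclusion process in which, each time step, every agent attempts an unbiased move to a random nearest-neighbour site and independently attempts to deposit a daughter next door, with either event aborted when the target site is occupied. The lattice size, rates and seeding below are illustrative choices, not the paper's parameters.

```python
import numpy as np

rng = np.random.default_rng(0)

def step(lattice, pm, pp):
    """One step of a 1D exclusion process with motility and proliferation.

    lattice : 0/1 array (1 = occupied site), periodic boundaries
    pm      : probability an agent attempts a move this step
    pp      : probability an agent attempts to proliferate this step
    """
    L = lattice.size
    for i in rng.permutation(np.flatnonzero(lattice)):
        # Motility: move to a random nearest neighbour if it is empty.
        if rng.random() < pm:
            t = (i + rng.choice((-1, 1))) % L
            if lattice[t] == 0:
                lattice[i], lattice[t] = 0, 1
                i = t  # the agent now sits at the new site
        # Proliferation: deposit a daughter on an empty neighbour site.
        if rng.random() < pp:
            t = (i + rng.choice((-1, 1))) % L
            if lattice[t] == 0:
                lattice[t] = 1
    return lattice

# Usage: seed the centre of the lattice and let the population invade.
lattice = np.zeros(200, dtype=int)
lattice[90:110] = 1
for _ in range(100):
    lattice = step(lattice, pm=1.0, pp=0.05)
print(lattice.sum(), "agents after 100 steps")
```

Averaging many such realisations on a coarse grid is what connects a discrete model of this type to its reaction-diffusion continuum limit.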

Relevance: 20.00%

Abstract:

In this paper we present a novel distributed coding protocol for multi-user cooperative networks. The proposed protocol exploits existing orthogonal space-time block codes to achieve higher diversity gain by repeating the code across time and space (the available relay nodes). The achievable diversity gain depends on the number of relay nodes that can fully decode the signal from the source. These relay nodes then form space-time codes to cooperatively relay to the destination over a number of time slots. However, the improved diversity gain is achieved at the expense of the transmission rate. The design principles of the proposed space-time distributed code and the issues related to the trade-off between transmission rate and diversity are discussed in detail. We show that the proposed distributed space-time coding protocol outperforms existing distributed codes with a variable transmission rate.
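The paper's full variable-rate protocol is not reproduced here, but its basic building block, an orthogonal space-time block code formed across two decoding relays, can be sketched. The following is a generic two-relay Alamouti example with linear combining at the destination, assuming the relay-to-destination channels are known at the receiver; it is an illustration, not the proposed protocol itself.

```python
import numpy as np

rng = np.random.default_rng(1)

def alamouti_relay(s1, s2, h, noise_std):
    """Two decoding relays transmit an Alamouti space-time block over
    two time slots; the destination applies linear combining.
    h : complex channel gains (relay -> destination), known at the Rx.
    """
    h1, h2 = h
    n = noise_std * (rng.standard_normal(2) + 1j * rng.standard_normal(2)) / np.sqrt(2)
    r1 = h1 * s1 + h2 * s2 + n[0]                      # slot 1: relays send s1, s2
    r2 = -h1 * np.conj(s2) + h2 * np.conj(s1) + n[1]   # slot 2: relays send -s2*, s1*
    g = abs(h1) ** 2 + abs(h2) ** 2                    # combining gain (diversity order 2)
    s1_hat = (np.conj(h1) * r1 + h2 * np.conj(r2)) / g
    s2_hat = (np.conj(h2) * r1 - h1 * np.conj(r2)) / g
    return s1_hat, s2_hat

# Usage: QPSK symbols through a random Rayleigh relay-destination channel.
symbols = np.array([1 + 1j, -1 + 1j]) / np.sqrt(2)
h = (rng.standard_normal(2) + 1j * rng.standard_normal(2)) / np.sqrt(2)
print(alamouti_relay(symbols[0], symbols[1], h, noise_std=0.1))
```

Repeating such a block across additional relays and time slots is what trades transmission rate for diversity, as discussed in the abstract.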

Relevance: 20.00%

Abstract:

Curriculum demands continue to increase on school education systems, with teachers at the forefront of implementing syllabus requirements. Education is frequently reported as a solution to most societal problems and, as a result of the world’s information explosion, teachers are expected to cover more and more within teaching programs. How can teachers combine subjects in order to capitalise on the competing educational agendas within school timeframes? Fusing curricula requires the bonding of standards from two or more syllabuses. Both technology and ICT complement the learning of science. This study analyses selected examples of preservice teachers’ overviews for fusing science, technology and ICT. These program overviews focused on primary students and the achievement of two standards (one from science and one from either technology or ICT). These primary preservice teachers’ fused-curricula overviews included scientific concepts and related technology and/or ICT skills and knowledge. Findings indicated a range of innovative curriculum plans for teaching primary science through technology and ICT, demonstrating that these subjects can form cohesive links towards achieving the respective learning standards. Teachers can work more astutely by fusing curricula; however, further professional development may be required to advance thinking about these processes. Bonding subjects through their learning standards can extend beyond previous integration or thematic work where standards may not have been assessed. Education systems need to articulate through syllabus documents how effective fusing of curricula can be achieved. It appears that education is a key avenue for addressing societal needs, problems and issues. Education is promoted as a universal solution, which has resulted in curriculum overload (Dare, Durand, Moeller, & Washington, 1997; Vinson, 2001). Societal and curriculum demands have placed added pressure on teachers, with many extenuating education issues increasing teachers’ workloads (Mobilise for Public Education, 2002). For example, as Australia has weather conducive to outdoor activities, social problems and issues arise that are reported through the media with calls for action; consequently schools have become involved in swimming programs, road and bicycle safety programs, and a wide range of activities that had been considered a parental responsibility in the past. Teachers are expected to plan, implement and assess these extra-curricular activities within their already overcrowded timetables. At the same time, key learning areas (KLAs) such as science and technology are mandatory requirements within all Australian education systems. These systems have syllabuses outlining levels of content and the anticipated learning outcomes (also known as standards, essential learnings, and frameworks). Time allocated for teaching science is obviously an issue. In 2001, it was estimated that the average time spent teaching science in Australian primary schools was almost an hour per week (Goodrum, Hackling, & Rennie, 2001). More recently, a study undertaken in the U.S. reported a similar finding: more than 80% of the teachers in K-5 classrooms spent less than an hour teaching science (Dorph, Goldstein, Lee, et al., 2007). More importantly, 16% spent no time at all teaching science in their classrooms. Teachers need to learn to work smarter by optimising the use of their in-class time.
Integration is proposed as one of the ways to address the issue of curriculum overload (Venville & Dawson, 2005; Vogler, 2003). Even though there may be a lack of definition for integration (Hurley, 2001), curriculum integration aims at covering key concepts in two or more subject areas within the same lesson (Buxton & Whatley, 2002). This implies covering the curriculum in less time than if the subjects were taught separately; therefore teachers should have more time to cover other educational issues. In practice, however, the reality can be decidedly different (e.g., Brophy & Alleman, 1991; Venville & Dawson, 2005). Nevertheless, teachers report that students expand their knowledge and skills as a result of subject integration (James, Lamb, Householder, & Bailey, 2000). There seems to be considerable value in integrating science with other KLAs besides aiming to address teaching workloads. Over two decades ago, Cohen and Staley (1982) claimed that integration can bring a subject into the primary curriculum that may otherwise be left out. Integrating science education aims to develop a more holistic perspective. Indeed, life is not neat components of stand-alone subjects; life integrates subject content in numerous ways, and curriculum integration can assist students to make these real-life connections (Burnett & Wichman, 1997). Science integration can provide the scope for real-life learning and the possibility of targeting students’ learning styles more effectively by providing more than one perspective (Hudson & Hudson, 2001). To illustrate, technology is essential to science education (Blueford & Rosenbloom, 2003; Board of Studies, 1999; Penick, 2002), and constructing technology immediately evokes a social purpose for such construction (Marker, 1992). For example, building a model windmill requires science and technology (Zubrowski, 2002) but has a key focus on sustainability and the social sciences. Science has the potential to be integrated with all KLAs (e.g., Cohen & Staley, 1982; Dobbs, 1995; James et al., 2000). Yet “integration” appears to be a confusing term. Integration also has an educational meaning focused on special education students being assimilated into mainstream classrooms. The word integration was used in the late seventies and generally centred on thematic approaches to teaching. For instance, a science theme about flight needed only a student drawing a picture of a plane to show integration; it did not connect the anticipated outcomes from science and art. The term “fusing curricula” presents a seamless bonding between two subjects; hence standards (or outcomes) need to be linked from both subjects. This also goes beyond just embedding one subject within another. Embedding implies that one subject is dominant, while fusing curricula proposes an equal mix of learning within both subject areas. Primary education in Queensland has eight KLAs, each with its established content and each with a proposed structure for levels of learning. Primary teachers attempt to cover these syllabus requirements across the eight KLAs in less than five hours a day, and between the many extra-curricular activities occurring throughout a school year (e.g., Easter activities, Education Week, concerts, excursions, performances). In Australia, education systems have developed standards for all KLAs (e.g., Education Queensland, NSW Department of Education and Training, Victorian Education), usually designated by a code.
In the late 1990s, Queensland introduced “core learning outcomes” for strands across all KLAs. For example, LL2.1 in the Queensland Education science syllabus means Life and Living at Level 2, standard number 1. Thus, a teacher’s planning requires the inclusion of standards as indicated by the presiding syllabus. More recently, the core learning outcomes were replaced by “essential learnings”. They specify “what students should be taught and what is important for students to have opportunities to know, understand and be able to do” (Queensland Studies Authority, 2009, para. 1). Fusing science education with other KLAs may facilitate more efficient use of time and resources; however, this type of planning needs to combine standards from two syllabuses. To further assist in facilitating sound pedagogical practices, there are models proposed for learning science, technology and other KLAs, such as Bloom’s Taxonomy (Bloom, 1956), Productive Pedagogies (Education Queensland, 2004), de Bono’s Six Hats (de Bono, 1985), and Gardner’s Multiple Intelligences (Gardner, 1999), that imply, warrant, or necessitate fused curricula. Bybee’s 5Es model (Bybee, 1997), for example, with its five phases of learning (engage, explore, explain, elaborate, and evaluate), has the potential to support the fusing of science and ICT standards.

Relevance: 20.00%

Abstract:

Background: Many studies have illustrated that ambient air pollution negatively impacts on health. However, little evidence is available for the effects of air pollution on cardiovascular mortality (CVM) in Tianjin, China. Also, no study has examined which strata length for the time-stratified case–crossover analysis gives estimates that most closely match the estimates from time series analysis. Objectives: The purpose of this study was to estimate the effects of air pollutants on CVM in Tianjin, China, and to compare time-stratified case–crossover and time series analyses. Methods: A time-stratified case–crossover design and a generalized additive model (time series) were applied to examine the impact of air pollution on CVM from 2005 to 2007. Four time-stratified case–crossover analyses were used, varying the stratum length (calendar month, 28, 21 or 14 days). Jackknifing was used to compare the methods. Residual analysis was used to check whether the models fitted well. Results: Both case–crossover and time series analyses show that air pollutants (PM10, SO2 and NO2) were positively associated with CVM. The estimates from the time-stratified case–crossover analyses varied greatly with changing strata length. The estimates from the time series analyses varied slightly with changing degrees of freedom per year for time. The residuals from the time series analyses had less autocorrelation than those from the case–crossover analyses, indicating a better fit. Conclusion: Air pollution was associated with an increased risk of CVM in Tianjin, China. Time series analyses performed better than the time-stratified case–crossover analyses in terms of residual checking.
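For readers unfamiliar with the design, a small sketch of time-stratified referent selection may help: each case day is compared only with days in the same stratum, here the same calendar month and day of the week. The dates are illustrative; for the 28-, 21- or 14-day variants the stratum would instead be formed from fixed-length blocks of days combined with day of week.

```python
import pandas as pd

# Calendar days spanning the study period (illustrative range).
days = pd.DataFrame({"date": pd.date_range("2005-01-01", "2007-12-31")})

# Calendar-month strata: referents share year, month and day-of-week,
# giving each case day 3-4 control days chosen bidirectionally around it.
days["stratum"] = (days.date.dt.year.astype(str) + "-"
                   + days.date.dt.month.astype(str).str.zfill(2) + "-dow"
                   + days.date.dt.dayofweek.astype(str))

case_day = pd.Timestamp("2006-07-12")
stratum = days.loc[days.date == case_day, "stratum"].item()
referents = days[(days.stratum == stratum) & (days.date != case_day)]
print(referents.date.dt.strftime("%Y-%m-%d").tolist())
```

Because referents are drawn symmetrically within fixed strata, this design avoids the overlap bias of unidirectional referent selection, which is one reason it is compared against time series models here.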

Relevance: 20.00%

Abstract:

Background: A number of epidemiological studies have been conducted to research the adverse effects of air pollution on mortality and morbidity. Hypertension is the most important risk factor for cardiovascular mortality. However, few previous studies have examined the relationship between gaseous air pollution and morbidity for hypertension. Methods: Daily data on emergency hospital visits (EHVs) for hypertension were collected from the Peking University Third Hospital. Daily data on gaseous air pollutants (sulfur dioxide (SO2) and nitrogen dioxide (NO2)) and particulate matter less than 10 μm in aerodynamic diameter (PM10) were collected from the Beijing Municipal Environmental Monitoring Center. A time-stratified case-crossover design was used to evaluate the relationship between urban gaseous air pollution and EHVs for hypertension, controlling for temperature and relative humidity. Results: In the single-pollutant models, a 10 μg/m3 increase in SO2 or NO2 was significantly associated with EHVs for hypertension. The odds ratios (ORs) were 1.037 (95% confidence interval (CI): 1.004-1.071) for SO2 at lag 0 days, and 1.101 (95% CI: 1.038-1.168) for NO2 at lag 3 days. After controlling for PM10, the ORs associated with SO2 and NO2 were 1.025 (95% CI: 0.987-1.065) and 1.114 (95% CI: 1.037-1.195), respectively. Conclusion: Elevated urban gaseous air pollution was associated with increased EHVs for hypertension in Beijing, China.
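As a worked note on how such estimates are reported: the conditional logistic model yields a coefficient per unit of pollutant, and the quoted OR for a 10 μg/m3 increase is exp(10β). A small sketch using the reported SO2 point estimate:

```python
import math

# The reported OR per 10 ug/m3 implies the fitted coefficient per 1 ug/m3,
# since OR_10 = exp(10 * beta). Rescaling to other increments follows.
or_so2 = 1.037                     # reported OR for SO2 at lag 0
beta = math.log(or_so2) / 10       # implied coefficient per ug/m3
print(f"beta = {beta:.5f} per ug/m3")
print(f"OR for a 25 ug/m3 increase = {math.exp(25 * beta):.3f}")
```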

Relevance: 20.00%

Abstract:

Background: Palliative care should be provided according to the individual needs of the patient, caregiver and family, so that the type and level of care provided, as well as the setting in which it is delivered, are dependent on the complexity and severity of individual needs, rather than prognosis or diagnosis. This paper presents a study designed to assess the feasibility and efficacy of an intervention to assist in the allocation of palliative care resources according to need, within the context of a population of people with advanced cancer. Methods/design: People with advanced cancer and their caregivers completed bi-monthly telephone interviews over a period of up to 18 months to assess unmet needs, anxiety and depression, quality of life, satisfaction with care and service utilisation. The intervention, introduced after at least two baseline phone interviews, involved (a) training medical, nursing and allied health professionals at each recruitment site on the use of the Palliative Care Needs Assessment Guidelines and the Needs Assessment Tool: Progressive Disease - Cancer (NAT: PD-C); and (b) health professionals completing the NAT: PD-C with participating patients approximately monthly for the rest of the study period. Changes in outcomes will be compared pre- and post-intervention. Discussion: The study will determine whether the routine, systematic and regular use of the Guidelines and NAT: PD-C in a range of clinical settings is a feasible and effective strategy for facilitating the timely provision of needs-based care.

Relevance: 20.00%

Abstract:

A number of advanced driver assistance systems (ADAS) are currently being released on the market, providing safety functions to drivers such as collision avoidance, adaptive cruise control or enhanced night-vision. These systems, however, are inherently limited by their sensory range: they cannot gather information from outside this range, also called their “perceptive horizon”. Cooperative systems are a developing research avenue that aims at providing extended safety and comfort functionalities by introducing vehicle-to-vehicle (V2V) and vehicle-to-infrastructure (V2I) wireless communications to the road actors. This paper presents the challenges of cooperative systems, their advantages and contributions to road safety, and discusses some limitations related to market penetration, sensor accuracy and communications scalability. It examines the issues involved in implementing extended perception, a central contribution of cooperative systems. The initial steps of an evaluation of data fusion architectures for extended perception are presented.
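Extended perception ultimately requires fusing a local track with a track received over V2V for the same object. A minimal sketch of one naive option, inverse-covariance weighting of two Gaussian position estimates, is shown below; the numbers are made up, and a real architecture (the subject of the paper's evaluation) must also handle data association, communication latency and correlation between estimates.

```python
import numpy as np

def fuse(z1, P1, z2, P2):
    """Fuse two Gaussian estimates of the same object's 2D position
    (e.g., the ego vehicle's local track and a V2V-shared track) by
    inverse-covariance weighting; when only the remote estimate exists,
    the object lies beyond the ego vehicle's perceptive horizon.
    """
    P1i, P2i = np.linalg.inv(P1), np.linalg.inv(P2)
    P = np.linalg.inv(P1i + P2i)        # fused covariance (never worse than either input)
    z = P @ (P1i @ z1 + P2i @ z2)       # fused mean, weighted toward the better sensor
    return z, P

# Usage: a local radar track vs. a position received over V2V.
z_local, P_local = np.array([42.0, 3.1]), np.diag([0.25, 0.25])
z_v2v,   P_v2v   = np.array([42.6, 2.8]), np.diag([1.0, 1.0])
print(fuse(z_local, P_local, z_v2v, P_v2v)[0])
```

Because a shared track may already contain information correlated with the local one, techniques such as covariance intersection are often preferred over this independent-estimates assumption in practice.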

Relevance: 20.00%

Abstract:

The present rate of technological advance continues to place significant demands on data storage devices. The sheer amount of digital data being generated each year, along with consumer expectations, fuels these demands. At present, most digital data is stored magnetically, in the form of hard disk drives or on magnetic tape. The increase in areal density (AD) of magnetic hard disk drives over the past 50 years has been of the order of 100 million times, and current devices are storing data at ADs of the order of hundreds of gigabits per square inch. However, it has been known for some time that the progress in this form of data storage is approaching fundamental limits. The main limitation relates to the lower size limit that an individual bit can have for stable storage. Various techniques for overcoming these fundamental limits are currently the focus of considerable research effort. Most attempt to improve current data storage methods, or modify these slightly for higher density storage. Alternatively, three dimensional optical data storage is a promising field for the information storage needs of the future, offering very high density, high speed memory. There are two ways in which data may be recorded in a three dimensional optical medium: either bit-by-bit (similar in principle to an optical disc medium such as CD or DVD) or by using pages of bit data. Bit-by-bit techniques for three dimensional storage offer high density but are inherently slow due to the serial nature of data access. Page-based techniques, where a two-dimensional page of data bits is written in one write operation, can offer significantly higher data rates, due to their parallel nature. Holographic Data Storage (HDS) is one such page-oriented optical memory technique. This field of research has been active for several decades, but with few commercial products presently available. Another page-oriented optical memory technique involves recording pages of data as phase masks in a photorefractive medium. A photorefractive material is one in which the refractive index can be modified by light of the appropriate wavelength and intensity, and this property can be used to store information in these materials. In phase mask storage, two dimensional pages of data are recorded into a photorefractive crystal, as refractive index changes in the medium. A low-intensity readout beam propagating through the medium will have its intensity profile modified by these refractive index changes, and a CCD camera can be used to monitor the readout beam, and thus read the stored data. The main aim of this research was to investigate data storage using phase masks in the photorefractive crystal lithium niobate (LiNbO3). Firstly, the experimental methods for storing the two dimensional pages of data (a set of vertical stripes of varying lengths) in the medium are presented. The laser beam used for writing, whose intensity profile is modified by an amplitude mask which contains a pattern of the information to be stored, illuminates the lithium niobate crystal, and the photorefractive effect causes the patterns to be stored as refractive index changes in the medium. These patterns are read out non-destructively using a low intensity probe beam and a CCD camera. A common complication of information storage in photorefractive crystals is the issue of destructive readout. This is a problem particularly for holographic data storage, where the readout beam should be at the same wavelength as the beam used for writing.
Since the charge carriers in the medium are still sensitive to the read light field, the readout beam erases the stored information. A method to avoid this is by using thermal fixing. Here the photorefractive medium is heated to temperatures above 150 °C; this process forms an ionic grating in the medium. This ionic grating is insensitive to the readout beam and therefore the information is not erased during readout. A non-contact method for determining temperature change in a lithium niobate crystal is presented in this thesis. The temperature-dependent birefringent properties of the medium cause intensity oscillations to be observed for a beam propagating through the medium during a change in temperature. It is shown that each oscillation corresponds to a particular temperature change, and by counting the number of oscillations observed, the temperature change of the medium can be deduced. The presented technique for measuring temperature change could easily be applied to a situation where thermal fixing of data in a photorefractive medium is required. Furthermore, by using an expanded beam and monitoring the intensity oscillations over a wide region, it is shown that the temperature in various locations of the crystal can be monitored simultaneously. This technique could be used to deduce temperature gradients in the medium. It is shown that the three dimensional nature of the recording medium causes interesting degradation effects to occur when the patterns are written for a longer-than-optimal time. This degradation results in the splitting of the vertical stripes in the data pattern, and for long writing exposure times this process can result in the complete deterioration of the information in the medium. It is shown that simply by using incoherent illumination, the original pattern can be recovered from the degraded state. The reason for the recovery is that the refractive index changes causing the degradation are of a smaller magnitude, since they are induced by the write field components scattered from the written structures. During incoherent erasure, the lower magnitude refractive index changes are neutralised first, allowing the original pattern to be recovered. The degradation process is shown to be reversed during the recovery process, and a simple relationship is found relating the time at which particular features appear during degradation and recovery. A further outcome of this work is that a minimum stripe width of 30 µm is required for accurate storage and recovery of the information in the medium; any size smaller than this results in incomplete recovery. The degradation and recovery process could find application in image scrambling or cryptography for optical information storage. A two dimensional numerical model based on the finite-difference beam propagation method (FD-BPM) is presented and used to gain insight into the pattern storage process. The model shows that the degradation of the patterns is due to the complicated path taken by the write beam as it propagates through the crystal, and in particular the scattering of this beam from the induced refractive index structures in the medium. The model indicates that the highest quality pattern storage would be achieved with a thin 0.5 mm medium; however this type of medium would also remove the degradation property of the patterns and the subsequent recovery process.
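The oscillation-counting idea lends itself to a small worked example: between crossed polarizers, the transmitted intensity completes one full oscillation each time the temperature-induced change in birefringence shifts the optical retardation by one wavelength, so the temperature step per oscillation is λ/(L·|d(Δn)/dT|). The material constants below are placeholders, not values from the thesis.

```python
# Minimal sketch of the oscillation-counting temperature estimate.
# One intensity oscillation <=> retardation change of one wavelength:
#   dT_per_oscillation = lambda / (L * |d(delta_n)/dT|)
wavelength = 633e-9          # probe wavelength (m), assumed He-Ne
crystal_length = 10e-3       # optical path length in the crystal (m), placeholder
dbirefringence_dT = 4e-5     # |d(delta n)/dT| (1/K), placeholder value

dT_per_oscillation = wavelength / (crystal_length * dbirefringence_dT)
n_oscillations = 25          # counted from the recorded intensity trace
print(f"temperature change ~ {n_oscillations * dT_per_oscillation:.2f} K")
```

Counting oscillations in several regions of an expanded beam, as described above, applies the same arithmetic per region to map temperature gradients.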
To overcome the simplistic treatment of the refractive index change in the FD-BPM model, a fully three dimensional photorefractive model developed by Devaux is presented. This model provides significant insight into the pattern storage, particularly for the degradation and recovery process, and confirms the theory that the recovery of the degraded patterns is possible because the refractive index changes responsible for the degradation are of a smaller magnitude. Finally, detailed analysis of the pattern formation and degradation dynamics for periodic patterns of various periodicities is presented. It is shown that stripe widths in the write beam of greater than 150 µm result in the formation of different types of refractive index changes, compared with the stripes of smaller widths. As a result, it is shown that the pattern storage method discussed in this thesis has an upper feature size limit of 150 µm, for accurate and reliable pattern storage.

Relevance: 20.00%

Abstract:

Using sculpture and drawing as my primary methods of investigation, this research explores ways of shifting the emphasis of my creative visual arts practice from object to process whilst still maintaining a primacy of material outcomes. My motivation was to locate ways of developing a sustained practice shaped as much by new works, as by a creative flow between works. I imagined a practice where a logic of structure within discrete forms and a logic of the broader practice might be developed as mutually informed processes. Using basic structural components of multiple wooden curves and linear modes of deployment – in both sculptures and drawings – I have identified both emergence theory and the image of rhizomic growth (Deleuze and Guattari, 1987) as theoretically integral to this imagining of a creative practice, both in terms of critiquing and developing works. Whilst I adopt a formalist approach for this exegesis, the emergence and rhizome models allow it to work as a critique of movement, of becoming and changing, rather than merely a formalism of static structure. In these models, therefore, I have identified a formal approach that can be applied not only to objects, but to practice over time. The thorough reading and application of these ontological models (emergence and rhizome) to visual arts practice, in terms of processes, objects and changes, is the primary contribution of this thesis. The works that form the major component of the research develop, reflect and embody these notions of movement and change.

Relevance: 20.00%

Abstract:

This research shows that gross pollutant traps (GPTs) continue to play an important role in preventing visible street waste—gross pollutants—from contaminating the environment. The demand for these GPTs calls for stringent quality control, and this research provides a foundation for rigorously examining the devices. A novel and comprehensive testing approach to examine a dry sump GPT was developed. The GPT is designed with internal screens to capture gross pollutants—organic matter and anthropogenic litter. This device has not been previously investigated. Apart from the review of GPTs and gross pollutant data, the testing approach includes four additional aspects of this research, which are: field work and an historical overview of street waste/stormwater pollution, calibration of equipment, hydrodynamic studies, and gross pollutant capture/retention investigations. This work is the first comprehensive investigation of its kind and provides valuable practical information for the current research and any future work pertaining to the operation of GPTs and the management of street waste in the urban environment. Gross pollutant traps—including patented and registered designs developed by industry—have specific internal configurations and hydrodynamic separation characteristics which demand individual testing and performance assessments. Stormwater devices are usually evaluated by environmental protection agencies (EPAs), professional bodies and water research centres. In the USA, the American Society of Civil Engineers (ASCE) and the Environmental Water Resource Institute (EWRI) are examples of professional and research organisations actively involved in these evaluation/verification programs. These programs largely rely on field evaluations alone, which are limited in scope, mainly for cost and logistical reasons. In Australia, evaluation/verification programs of new devices in the stormwater industry are not well established. The current limitations in the evaluation methodologies of GPTs have been addressed in this research by establishing a new testing approach. This approach uses a combination of physical and theoretical models to examine in detail the hydrodynamic and capture/retention characteristics of the GPT. The physical model consisted of a 50% scale model GPT rig with screen blockages varying from 0 to 100%. This rig was placed in a 20 m flume, and various inlet and outflow operating conditions were modelled on observations made during the field monitoring of GPTs. Due to infrequent cleaning, the retaining screens inside the GPTs were often observed to be blocked with organic matter. Blocked screens can radically change the hydrodynamic and gross pollutant capture/retention characteristics of a GPT, as shown by this research. This research involved the use of equipment, such as acoustic Doppler velocimeters (ADVs) and dye concentration (Komori) probes, which were deployed for the first time in a dry sump GPT. Hence, it was necessary to rigorously evaluate the capability and performance of these devices, particularly in the case of the custom-made Komori probes, about which little was known. The evaluation revealed that the Komori probes have a frequency response of up to 100 Hz (dependent upon fluid velocities), which was adequate to measure the relevant fluctuations of dye introduced into the GPT flow domain. The outcome of this evaluation resulted in establishing methodologies for the hydrodynamic measurements and gross pollutant capture/retention experiments.
The hydrodynamic measurements consisted of point-based acoustic Doppler velocimeter (ADV) measurements, flow field particle image velocimetry (PIV) capture, head loss experiments and computational fluid dynamics (CFD) simulation. The gross pollutant capture/retention experiments included the use of anthropogenic litter components, tracer dye and custom modified artificial gross pollutants. Anthropogenic litter was limited to tin cans, bottle caps and plastic bags, while the artificial pollutants consisted of 40 mm spheres with a range of four buoyancies. The hydrodynamic results led to the definition of global and local flow features. The gross pollutant capture/retention results showed that when the internal retaining screens are fully blocked, the capture/retention performance of the GPT rapidly deteriorates. The overall results showed that the GPT will operate efficiently until at least 70% of the screens are blocked, particularly at high flow rates. This important finding indicates that cleaning operations could be more effectively planned when the GPT capture/retention performance deteriorates. At lower flow rates, the capture/retention performance trends were reversed. There is little difference in the poor capture/retention performance between a fully blocked GPT and a partially filled or empty GPT with 100% screen blockages. The results also revealed that the GPT is designed with an efficient high flow bypass system to avoid upstream blockages. The capture/retention performance of the GPT at medium to high inlet flow rates is close to maximum efficiency (100%). With regard to the design appraisal of the GPT, a raised inlet offers a better capture/retention performance, particularly at lower flow rates. Further design appraisals of the GPT are recommended.
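For concreteness, the capture/retention performance reported above reduces to a simple ratio per operating condition. The sketch below uses hypothetical counts (not the study's data) for the 40 mm artificial pollutants at one flow rate:

```python
# Illustrative capture/retention efficiency calculation; the counts are
# hypothetical placeholders, not measurements from this research.
introduced = 40  # artificial 40 mm spheres released per test run
retained_by_blockage = {0: 40, 70: 38, 100: 9}  # screen blockage (%) -> retained

for blockage, retained in retained_by_blockage.items():
    efficiency = 100 * retained / introduced
    print(f"{blockage:3d}% screens blocked: efficiency = {efficiency:.0f}%")
```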

Relevance: 20.00%

Abstract:

Aims: Telemonitoring (TM) and structured telephone support (STS) have the potential to deliver specialised management to more patients with chronic heart failure (CHF), but their efficacy is still to be proven. Objectives: To review randomised controlled trials (RCTs) of TM or STS on all-cause mortality and all-cause and CHF-related hospitalisations in patients with CHF, as a non-invasive remote model of specialised disease-management intervention. Methods and results: Data sources: We searched 15 electronic databases and hand-searched bibliographies of relevant studies, systematic reviews, and meeting abstracts. Two reviewers independently extracted all data. Study eligibility and participants: We included any randomised controlled trial (RCT) comparing TM or STS to usual care of patients with CHF. Studies that included intensified management with additional home or clinic visits were excluded. Synthesis: Primary outcomes (mortality and hospitalisations) were analysed; secondary outcomes (cost, length of stay, quality of life) were tabulated. Results: Thirty RCTs of STS and TM were identified (25 peer-reviewed publications (n=8,323) and five abstracts (n=1,482)). Of the 25 peer-reviewed studies, 11 evaluated TM (2,710 participants), 16 evaluated STS (5,613 participants) and two tested both interventions. TM reduced all-cause mortality (risk ratio (RR) 0.66 [95% CI 0.54-0.81], p<0.0001) and STS showed similar trends (RR 0.88 [95% CI 0.76-1.01], p=0.08). Both TM (RR 0.79 [95% CI 0.67-0.94], p=0.008) and STS (RR 0.77 [95% CI 0.68-0.87], p<0.0001) reduced CHF-related hospitalisations. Both interventions improved quality of life, reduced costs, and were acceptable to patients. Improvements in prescribing, patient knowledge and self-care, and functional class were observed. Conclusion: TM and STS both appear to be effective interventions to improve outcomes in patients with CHF.
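A brief sketch of the pooling step behind such figures: trial-level risk ratios are combined by inverse-variance weighting on the log scale, with each standard error recovered from the reported 95% CI. The two input studies below are illustrative, not trials from this review.

```python
import numpy as np

def pool_rr(rrs, ci_los, ci_his):
    """Fixed-effect inverse-variance pooling of risk ratios on the log
    scale, the usual way trial-level RRs are combined in a meta-analysis.
    Returns the pooled RR with its 95% CI.
    """
    log_rr = np.log(rrs)
    se = (np.log(ci_his) - np.log(ci_los)) / (2 * 1.96)  # SE from the 95% CI width
    w = 1 / se**2                                        # inverse-variance weights
    pooled = np.sum(w * log_rr) / np.sum(w)
    pooled_se = np.sqrt(1 / np.sum(w))
    lo, hi = pooled - 1.96 * pooled_se, pooled + 1.96 * pooled_se
    return np.exp([pooled, lo, hi])

# Usage with two made-up trials (RR and 95% CI bounds).
print(pool_rr(rrs=[0.70, 0.62], ci_los=[0.52, 0.45], ci_his=[0.94, 0.85]))
```

A random-effects variant would additionally inflate each weight's variance by a between-trial heterogeneity estimate; the abstract does not state which model was used.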

Relevance: 20.00%

Abstract:

Impedance cardiography is an application of bioimpedance analysis primarily used in a research setting to determine cardiac output. It is a non-invasive technique that measures the change in the impedance of the thorax which is attributed to the ejection of a volume of blood from the heart. The cardiac output is calculated from the measured impedance using the parallel conductor theory and a constant value for the resistivity of blood. However, the resistivity of blood has been shown to be velocity dependent due to changes in the orientation of red blood cells induced by changing shear forces during flow. The overall goal of this thesis was to study the effect that flow deviations have on the electrical impedance of blood, both experimentally and theoretically, and to apply the results to a clinical setting. The resistivity of stationary blood is isotropic as the red blood cells are randomly orientated due to Brownian motion. In the case of blood flowing through rigid tubes, the resistivity is anisotropic due to the biconcave discoidal shape and orientation of the cells. The generation of shear forces across the width of the tube during flow causes the cells to align with the minimal cross-sectional area facing the direction of flow, in order to minimise the shear stress experienced by the cells. This in turn results in a larger cross-sectional area of plasma and a reduction in the resistivity of the blood as the flow increases. Understanding the contribution of this effect to the thoracic impedance change is a vital step in achieving clinical acceptance of impedance cardiography. Published literature investigates the resistivity variations for constant blood flow. In this case, the shear forces are constant and the impedance remains constant during flow, at a magnitude which is less than that for stationary blood. The research presented in this thesis, however, investigates the variations in resistivity of blood during pulsatile flow through rigid tubes and the relationship between impedance, velocity and acceleration. Using rigid tubes isolates the impedance change to variations associated with changes in cell orientation only. The implications of red blood cell orientation changes for clinical impedance cardiography were also explored. This was achieved through measurement and analysis of the experimental impedance of pulsatile blood flowing through rigid tubes in a mock circulatory system. A novel theoretical model including cell orientation dynamics was developed for the impedance of pulsatile blood through rigid tubes. The impedance of flowing blood was theoretically calculated using analytical methods for flow through straight tubes and the numerical Lattice Boltzmann method for flow through complex geometries such as aortic valve stenosis. The result of the analytical theoretical model was compared to the experimental impedance measurements through rigid tubes. The impedance calculated for flow through a stenosis using the Lattice Boltzmann method provides results for comparison with impedance cardiography measurements collected as part of a pilot clinical trial to assess the suitability of using bioimpedance techniques to assess the presence of aortic stenosis. The experimental and theoretical impedance of blood was shown to inversely follow the blood velocity during pulsatile flow, with correlations of -0.72 and -0.74 respectively.
The results for both the experimental and theoretical investigations demonstrate that the acceleration of the blood is an important factor in determining the impedance, in addition to the velocity. During acceleration, the relationship between impedance and velocity is linear (r² = 0.98 experimental; r² = 0.94 theoretical). The relationship between the impedance and velocity during the deceleration phase is characterised by a time decay constant, τ, ranging from 10 to 50 s. The high level of agreement between the experimental and theoretically modelled impedance demonstrates the accuracy of the model developed here. An increase in the haematocrit of the blood resulted in an increase in the magnitude of the impedance change due to changes in the orientation of red blood cells. The time decay constant was shown to decrease linearly with the haematocrit for both experimental and theoretical results, although the slope of this decrease was larger in the experimental case. The radius of the tube influences the experimental and theoretical impedance given the same velocity of flow. However, when the velocity was divided by the radius of the tube (labelled the reduced average velocity), the impedance response was the same for two experimental tubes with equivalent reduced average velocity but different radii. The temperature of the blood was also shown to affect the impedance, with the impedance decreasing as the temperature increased. These results are the first published for the impedance of pulsatile blood. The experimental impedance change measured orthogonal to the direction of flow is in the opposite direction to that measured in the direction of flow. These results indicate that the impedance of blood flowing through rigid cylindrical tubes is axisymmetric along the radius. This has not previously been verified experimentally. Time-frequency analysis of the experimental results demonstrated that the measured impedance contains the same frequency components, occurring at the same time points in the cycle, as the velocity signal. This suggests that the impedance contains many of the fluctuations of the velocity signal. Application of a theoretical steady flow model to pulsatile flow presented here has verified that the steady flow model is not adequate for calculating the impedance of pulsatile blood flow. The success of the new theoretical model over the steady flow model demonstrates that the velocity profile is important in determining the impedance of pulsatile blood. The clinical application of the impedance of blood flow through a stenosis was theoretically modelled using the Lattice Boltzmann method (LBM) for fluid flow through complex geometries. The impedance of blood exiting a narrow orifice was calculated for varying degrees of stenosis. Clinical impedance cardiography measurements were also recorded for both aortic valvular stenosis patients (n = 4) and control subjects (n = 4) with structurally normal hearts. This pilot trial was used to corroborate the results of the LBM. Results from both investigations showed that the decay time constant for impedance has potential in the assessment of aortic valve stenosis. In the theoretically modelled case (LBM results), the decay time constant increased with an increase in the degree of stenosis. The clinical results also showed a statistically significant difference in the time decay constant between control and test subjects (P = 0.03).
The time decay constant calculated for test subjects (τ = 180-250 s) is consistently larger than that determined for control subjects (τ = 50-130 s). This difference is thought to be due to differences in the orientation response of the cells as blood flows through the stenosis. Such a non-invasive technique using the time decay constant for screening of aortic stenosis provides additional information to that currently given by impedance cardiography techniques and improves the value of the device to practitioners. However, the results still need to be verified in a larger study. While impedance cardiography has not been widely adopted clinically, it is research such as this that will enable future acceptance of the method.
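The decay time constant reported here is the kind of parameter that can be extracted with a simple exponential fit to the impedance during the deceleration phase. A minimal sketch on a synthetic trace (standing in for a measured signal) follows:

```python
import numpy as np
from scipy.optimize import curve_fit

# Z(t) = Z_inf + A * exp(-t / tau): a single-exponential decay toward a
# baseline impedance, with tau the decay time constant of interest.
def decay(t, z_inf, a, tau):
    return z_inf + a * np.exp(-t / tau)

t = np.linspace(0, 300, 400)                       # time (s)
z = decay(t, z_inf=100.0, a=2.5, tau=60.0)         # synthetic "measurement"
z += np.random.default_rng(2).normal(0, 0.05, t.size)  # measurement noise

params, _ = curve_fit(decay, t, z, p0=(z.mean(), 1.0, 30.0))
print(f"fitted tau = {params[2]:.1f} s")
```

With real data the fit would be restricted to the deceleration window of each cycle; the values above are placeholders chosen only to make the example run.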

Relevance: 20.00%

Abstract:

Catheter associated urinary tract infections (CAUTI) are a worldwide problem that may lead to increased patient morbidity, cost and mortality [1-3]. The literature is divided on whether there are real effects from CAUTI on length of stay or mortality. Platt [4] found the costs and mortality risks to be large, yet Graves et al. [5] found the opposite. A review of the published estimates of the extra length of stay showed results between zero and 30 days [6]. The differences in estimates may have been caused by the different epidemiological methods applied. Accurately estimating the effects of CAUTI is difficult because it is a time-dependent exposure. This means that standard statistical techniques, such as matched case-control studies, tend to overestimate the increased hospital stay and mortality risk due to infection. The aim of the study was to estimate excess length of stay and mortality in an intensive care unit (ICU) due to a CAUTI, using a statistical model that accounts for the timing of infection. Data collected from ICU units in lower and middle income countries were used for this analysis [7,8]. There has been little research for these settings, hence the need for this paper.
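One way a model can "account for the timing of infection" is as a survival regression with a time-varying exposure: each stay is split at the infection time so the CAUTI covariate only switches on afterwards, avoiding the time-dependent bias of matched designs. The sketch below uses lifelines' CoxTimeVaryingFitter and three made-up patients; the abstract does not state the paper's exact model (it may, for instance, be a multistate model), so this is an illustration of the general approach.

```python
import pandas as pd
from lifelines import CoxTimeVaryingFitter

# Long-format data: one row per interval during which covariates are
# constant. Patient 2's stay is split at the day of infection, so the
# "infected" covariate is 0 before day 5 and 1 afterwards.
rows = [
    # id, start, stop, infected, died
    (1, 0, 12, 0, 0),                    # never infected, discharged day 12
    (2, 0,  5, 0, 0), (2, 5, 20, 1, 1),  # infected on day 5, died day 20
    (3, 0,  9, 0, 1),                    # never infected, died day 9
]
df = pd.DataFrame(rows, columns=["id", "start", "stop", "infected", "died"])

ctv = CoxTimeVaryingFitter()
ctv.fit(df, id_col="id", event_col="died", start_col="start", stop_col="stop")
ctv.print_summary()  # hazard ratio for the time-varying infection exposure
```

With only three illustrative stays the fit is meaningless; real analyses of this kind require the full multi-ICU dataset described in the abstract.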

Relevance: 20.00%

Abstract:

Most research on numerical development in children is behavioural, focusing on accuracy and response time in different problem formats. However, Temple and Posner (1998) used ERPs and the numerical distance task with 5-year-olds to show that the development of numerical representations is difficult to disentangle from the development of the executive components of response organization and execution. Here we use the numerical Stroop paradigm (NSP) and ERPs to study possible executive interference in numerical processing tasks in 6–8-year-old children. In the NSP, the numerical magnitude of the digits is task-relevant and the physical size of the digits is task-irrelevant. We show that younger children are highly susceptible to interference from irrelevant physical information such as digit size, but that access to the numerical representation is almost as fast in young children as in adults. We argue that the developmental trajectories for executive function and numerical processing may act together to determine numerical development in young children.
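For readers unfamiliar with the paradigm, NSP trial construction can be sketched: two digits are shown, and the physically larger and numerically larger digit either coincide (congruent), conflict (incongruent), or the physical sizes match (neutral); the task here is to pick the numerically larger digit while ignoring size. The generic generator below illustrates the design; it is not the study's exact stimulus set.

```python
import itertools
import random

digits = [1, 2, 8, 9]
trials = []
for a, b in itertools.permutations(digits, 2):
    for condition in ("congruent", "incongruent", "neutral"):
        if condition == "congruent":      # numerically larger digit printed larger
            sizes = ("large", "small") if a > b else ("small", "large")
        elif condition == "incongruent":  # numerically larger digit printed smaller
            sizes = ("small", "large") if a > b else ("large", "small")
        else:                             # neutral: physical size uninformative
            sizes = ("medium", "medium")
        trials.append({"left": (a, sizes[0]), "right": (b, sizes[1]),
                       "condition": condition, "task": "pick numerically larger"})

random.shuffle(trials)
print(len(trials), "trials; example:", trials[0])
```

The interference effect discussed above is the response-time and ERP cost on incongruent relative to congruent trials, with the irrelevant dimension (physical size) driving the conflict.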