947 results for REDUCED-ORDER
Abstract:
Background: Despite the recognition of obesity in young people as a key health issue, there is limited evidence to inform health professionals regarding the most appropriate treatment options. The Eat Smart study aims to contribute to the knowledge base of effective dietary strategies for the clinical management of the obese adolescent and to examine the cardiometabolic effects of a reduced carbohydrate diet versus a low fat diet. Methods and design: Eat Smart is a randomised controlled trial and aims to recruit 100 adolescents over a 2½ year period. Families will be invited to participate following referral by their health professional who has recommended weight management. Participants will be overweight as defined by a body mass index (BMI) greater than the 90th percentile, using CDC 2000 growth charts. An accredited 6-week psychological life skills program, ‘FRIENDS for Life’, which is designed to provide behaviour change and coping skills, will be undertaken before volunteers are randomised to a group. The intervention arms include a structured reduced carbohydrate or a structured low fat dietary program based on an individualised energy prescription. The intervention will involve a series of dietetic appointments over 24 weeks. The control group will commence the dietary program of their choice after a 12-week period. Outcome measures will be assessed at baseline, week 12 and week 24. The primary outcome measure will be change in BMI z-score. A range of secondary outcome measures, including body composition, lipid fractions, inflammatory markers, and social and psychological measures, will be assessed. Discussion: The chronic and difficult nature of treating the obese adolescent is increasingly recognised by clinicians and has highlighted the need for research aimed at providing effective intervention strategies, particularly for use in the tertiary setting. A structured reduced carbohydrate approach may provide a dietary pattern that some families will find more sustainable and effective than the conventional low fat dietary approach currently advocated. This study aims to investigate the acceptability and effectiveness of a structured reduced dietary carbohydrate intervention and will compare the outcomes of this approach with those of a structured low fat eating plan. Trial Registration: The protocol for this study is registered with the International Clinical Trials Registry (ISRCTN49438757).
Abstract:
Purpose: We compared subjective blur limits for defocus and the higher-order aberrations of coma, trefoil, and spherical aberration. Methods: Spherical aberration was presented in both Zernike and Seidel forms. Black letter targets (0.1, 0.35, and 0.6 logMAR) on white backgrounds were blurred using an adaptive optics system for six subjects under cycloplegia with 5 mm artificial pupils. Three blur criteria of just noticeable, just troublesome, and just objectionable were used. Results: When expressed as wave aberration coefficients, the just noticeable blur limits for coma and trefoil were similar to those for defocus, whereas the just noticeable limits for Zernike spherical aberration and Seidel spherical aberration (the latter given as an “rms equivalent”) were considerably smaller and larger, respectively, than defocus limits. Conclusions: Blur limits increased more quickly for the higher order aberrations than for defocus as the criterion changed from just noticeable to just troublesome and then to just objectionable.
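The “rms equivalent” for a Seidel term can be illustrated numerically. Below is a minimal sketch (not from the paper) that samples a Seidel spherical-aberration wavefront W = W040·ρ⁴ over a unit circular pupil and reports its RMS wavefront error about the pupil mean; the coefficient value is hypothetical, and only the general relationship RMS = 2·W040/(3√5) is being demonstrated.

```python
import numpy as np

# Hypothetical Seidel spherical-aberration coefficient at the pupil margin (micrometres)
W040 = 0.25

# Sample the Seidel wavefront W = W040 * rho^4 on a dense grid over a unit circular pupil
y, x = np.mgrid[-1:1:1001j, -1:1:1001j]
rho2 = x**2 + y**2
inside = rho2 <= 1.0
w = W040 * rho2[inside]**2          # rho^4 = (rho^2)^2

# RMS wavefront error about the pupil mean ("rms equivalent" of the Seidel term)
rms_equivalent = w.std()
print(f"numerical RMS equivalent: {rms_equivalent:.4f}")
print(f"analytic 2*W040/(3*sqrt(5)): {2 * W040 / (3 * np.sqrt(5)):.4f}")
```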
Abstract:
PURPOSE: To determine whether participants with normal visual acuity, no ophthalmoscopic signs of age-related maculopathy (ARM) in either eye, and who are carriers of the CFH, LOC387715 and HTRA1 high-risk genotypes (“gene-positive”) have impaired rod- and cone-mediated mesopic visual function compared to persons who do not carry the risk genotypes (“gene-negative”). METHODS: Fifty-three Caucasian study participants (mean age 55.8 ± 6.1 years) were genotyped for CFH, LOC387715/ARMS2 and HTRA1 polymorphisms. We genotyped single nucleotide polymorphisms (SNPs) in the CFH (rs380390), LOC387715/ARMS2 (rs10490924) and HTRA1 (rs11200638) genes using Applied Biosystems optimised TaqMan assays. We determined the critical fusion frequency (CFF) mediated by cones alone (long-, middle- and short-wavelength-sensitive cones; LMS) and by the combined activities of cones and rods (LMSR). The stimuli were generated using a 4-primary photostimulator that provides independent control of the photoreceptor excitation under mesopic light levels. Visual function was further assessed using standard clinical tests, flicker perimetry and microperimetry. RESULTS: The mesopic CFF mediated by rods and cones (LMSR) was significantly reduced in gene-positive compared to gene-negative participants after correction for age (p = 0.03). Cone-mediated CFF (LMS) was not significantly different between gene-positive and gene-negative participants. There were no significant associations between flicker perimetry or microperimetry and genotype. CONCLUSIONS: This is the first study to relate ARM risk genotypes to mesopic visual function in clinically normal persons. These preliminary results could become of clinical importance, as mesopic vision may be used to document sub-clinical retinal changes in persons with risk genotypes and to determine whether those persons progress to manifest disease.
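The age-corrected group comparison described in the results can be sketched as an ANCOVA-style linear model. The snippet below is only illustrative: the column names and the handful of data rows are hypothetical placeholders, not the study's data.

```python
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical data: one row per participant with mesopic CFF (Hz), age (years)
# and genotype group coded 1 = gene-positive, 0 = gene-negative.
df = pd.DataFrame({
    "cff_lmsr": [24.1, 22.8, 25.3, 21.9, 23.0, 26.2, 20.7, 24.8],
    "age":      [52,   60,   49,   63,   58,   50,   61,   55],
    "gene_pos": [1,    1,    0,    1,    0,    0,    1,    0],
})

# ANCOVA-style model: genotype effect on the mesopic CFF after adjusting for age
model = smf.ols("cff_lmsr ~ gene_pos + age", data=df).fit()
print("age-adjusted group coefficient:", model.params["gene_pos"])
print("age-adjusted group p-value:    ", model.pvalues["gene_pos"])
```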
Abstract:
Because of the greenhouse gas emissions implications of the market-dominating electric hot water systems, governments in Australia have implemented policies and programs to encourage the uptake of solar water heaters (SWHs) in the residential market as part of climate change adaptation and mitigation strategies. The cost-benefit analysis that usually accompanies all government policy and program design could be simplistically reduced to the ratio of a SWH's expected greenhouse gas reductions to its cost. The national Register of Solar Water Heaters specifies how many renewable energy certificates (RECs) are allocated to complying SWHs according to their expected performance, and hence greenhouse gas reductions, in different climates. Neither REC allocations nor rebates are tied to the actual performance of systems. This paper examines the performance of instantaneous gas-boosted solar water heaters installed in new residences in a housing estate in south-east Queensland in the period 2007–2010. The evidence indicates systemic failures in installation practices, resulting in zero solar performance or dramatic underperformance (estimated average 43% solar contribution). The paper details the faults identified, and how these faults were eventually diagnosed and corrected. The impacts of these system failures on end-use consumers are discussed before concluding with a brief overview of areas where further research is required in order to more fully understand whole-of-supply-chain implications.
Abstract:
The theory of nonlinear dynamic systems provides some new methods to handle complex systems. Chaos theory offers new concepts, algorithms and methods for processing, enhancing and analyzing the measured signals. In recent years, researchers have been applying the concepts from this theory to bio-signal analysis. In this work, the complex dynamics of bio-signals such as the electrocardiogram (ECG) and electroencephalogram (EEG) are analyzed using the tools of nonlinear systems theory. In modern industrialized countries, several hundred thousand people die every year from sudden cardiac death. The electrocardiogram (ECG) is an important biosignal representing the sum total of millions of cardiac cell depolarization potentials. It contains important insight into the state of health and the nature of the disease afflicting the heart. Heart rate variability (HRV) refers to the regulation of the sinoatrial node, the natural pacemaker of the heart, by the sympathetic and parasympathetic branches of the autonomic nervous system. Heart rate variability analysis is an important tool for observing the heart's ability to respond to normal regulatory impulses that affect its rhythm. A computer-based intelligent system for analysis of cardiac states is very useful in diagnostics and disease management. Like many bio-signals, HRV signals are non-linear in nature. Higher order spectral analysis (HOS) is known to be a good tool for the analysis of non-linear systems and provides good noise immunity. In this work, we studied the HOS of the HRV signals of normal heartbeat and four classes of arrhythmia. This thesis presents some general characteristics for each of these classes of HRV signals in the bispectrum and bicoherence plots. Several features were extracted from the HOS and subjected to an analysis of variance (ANOVA) test. The results are very promising for cardiac arrhythmia classification, with a number of features yielding a p-value < 0.02 in the ANOVA test. An automated intelligent system for the identification of cardiac health is very useful in healthcare technology. In this work, seven features were extracted from the heart rate signals using HOS and fed to a support vector machine (SVM) for classification. The performance evaluation protocol in this thesis uses 330 subjects consisting of five different kinds of cardiac disease conditions. The classifier achieved a sensitivity of 90% and a specificity of 89%. This system is ready to run on larger data sets. In EEG analysis, the search for hidden information for the identification of seizures has a long history. Epilepsy is a pathological condition characterized by spontaneous and unforeseeable occurrence of seizures, during which the perception or behavior of patients is disturbed. Automatic early detection of seizure onsets would help patients and observers to take appropriate precautions. Various methods have been proposed to predict the onset of seizures based on EEG recordings. The use of nonlinear features motivated by the higher order spectra (HOS) has been reported to be a promising approach to differentiate between normal, background (pre-ictal) and epileptic EEG signals. In this work, these features are used to train both a Gaussian mixture model (GMM) classifier and a support vector machine (SVM) classifier. Results show that the classifiers were able to achieve 93.11% and 92.67% classification accuracy, respectively, with selected HOS-based features. About 2 hours of EEG recordings from 10 patients were used in this study.
This thesis introduces unique bispectrum and bicoherence plots for various cardiac conditions and for normal, background and epileptic EEG signals. These plots reveal distinct patterns that are useful for visual interpretation by readers without a deep understanding of spectral analysis, such as medical practitioners. The thesis includes original contributions in extracting features from HRV and EEG signals using HOS and entropy, in analyzing the statistical properties of such features on real data, and in automated classification using these features with GMM and SVM classifiers.
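As a rough illustration of the HOS-plus-SVM pipeline described above, the sketch below estimates a crude bispectrum for each heart rate series, reduces it to a few magnitude summaries, and cross-validates an SVM on them. The signals, labels and the particular feature summaries are hypothetical placeholders, not the thesis's actual features or data.

```python
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

def bispectrum_features(x, nperseg=64):
    """Crude direct bispectrum estimate of a 1-D series, reduced to three magnitude summaries.

    B(f1, f2) = segment average of X(f1) * X(f2) * conj(X(f1 + f2)).
    """
    x = np.asarray(x, dtype=float)
    segments = [x[i:i + nperseg] for i in range(0, len(x) - nperseg + 1, nperseg)]
    nf = nperseg // 2
    acc = np.zeros((nf, nf), dtype=complex)
    for s in segments:
        X = np.fft.fft(s - s.mean())
        for f1 in range(nf):
            for f2 in range(nf):
                acc[f1, f2] += X[f1] * X[f2] * np.conj(X[f1 + f2])
    B = np.abs(acc) / max(len(segments), 1)
    return [B.mean(), B.max(), np.log1p(B).sum()]   # simple placeholder summaries

# Placeholder HRV data: one RR-interval series per recording plus a class label
rng = np.random.default_rng(0)
rr_series = rng.normal(0.8, 0.05, size=(40, 256))
labels = rng.integers(0, 2, size=40)

features = np.array([bispectrum_features(s) for s in rr_series])
clf = make_pipeline(StandardScaler(), SVC(kernel="rbf"))
print("cross-validated accuracy:", cross_val_score(clf, features, labels, cv=5).mean())
```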
Abstract:
In 1990 the European Community was taken by surprise by the urgency of demands from the newly-elected Eastern European governments to become member countries. Those governments were honouring the mass social movement of the streets the year before, which had demanded free elections and a liberal economic system associated with “Europe”. The mass movement had actually been accompanied by much activity within institutional politics, in Western Europe, the former “satellite” states, the Soviet Union and the United States, to set up new structures, with German reunification and an expanded EC as the centre-piece. This paper draws on the writer’s doctoral dissertation on mass media in the collapse of the Eastern bloc, focused on the Berlin Wall, documenting both public protests and institutional negotiations. For example, the writer, a correspondent in Europe at that time, recounts interventions of the German Chancellor, Helmut Kohl, at a European summit in Paris nine days after the “Wall”, and separate negotiations with the French President, François Mitterrand, on reunification and EU monetary union after 1992. Through such processes, the “European idea” would receive fresh impetus, though the EU which eventuated came with many altered expectations. It is argued here that, as a result of the shock of 1989, a “social” Europe can be seen emerging as a shared experience of daily life, especially among people born during the last two decades of European consolidation. The paper draws on the author’s major research, in four parts: (1) Field observation from the strategic vantage point of a news correspondent. This includes a treatment of evidence at the time of the wishes and intentions of the mass public (including the unexpected drive to join the European Community) and those of governments (e.g. thoughts of a “Tiananmen Square solution” in East Berlin, versus the non-intervention policies of the Soviet leader, Mikhail Gorbachev). (2) A review of coverage of the crisis of 1989 by major news media outlets, treated as a history of the process. (3) As a comparison, and a test of accuracy and analysis, a review of conventional histories of the crisis appearing a decade later. (4) A further review, and test, provided by journalists responsible for the coverage of the time, as reflection on practice, obtained from semi-structured interviews.
Abstract:
Since its initial proposal in 1998, alkaline hydrothermal processing has rapidly become an established technology for the production of titanate nanostructures. This simple, highly reproducible process has gained a strong research following since its conception. However, complete understanding and elucidation of nanostructure phase and formation have not yet been achieved. Without fully understanding phase, formation, and other important competing effects of the synthesis parameters on the final structure, the maximum potential of these nanostructures cannot be obtained. Therefore this study examined the influence of synthesis parameters on the formation of titanate nanostructures produced by alkaline hydrothermal treatment. The parameters included alkaline concentration, hydrothermal temperature, the precursor material’s crystallite size and also the phase of the titanium dioxide precursor (TiO2, or titania). The nanostructures’ phase and morphology were analysed using X-ray diffraction (XRD), Raman spectroscopy and transmission electron microscopy. X-ray photoelectron spectroscopy (XPS), dynamic light scattering (non-invasive backscattering), nitrogen sorption, and Rietveld analysis were used for phase determination, particle sizing, surface area determination, and establishing phase concentrations, respectively. This project rigorously examined the effect of alkaline concentration and hydrothermal temperature on three commercially sourced and two self-prepared TiO2 powders. These precursors consisted of both pure- or mixed-phase anatase and rutile polymorphs, and were selected to cover a range of phase concentrations and crystallite sizes. Typically, these precursors were treated with 5–10 M sodium hydroxide (NaOH) solutions at temperatures between 100 and 220 °C. Both nanotube and nanoribbon morphologies could be produced depending on the combination of these hydrothermal conditions. Both titania and titanate phases are comprised of TiO6 units which are assembled in different combinations. The arrangement of these atoms affects the binding energy between the Ti–O bonds. Raman spectroscopy and XPS were therefore employed in a preliminary study of phase determination for these materials. The change from a titania to a titanate binding energy was investigated in this study, and the transformation of titania precursor into nanotubes and titanate nanoribbons was directly observed by these methods. Evaluation of the Raman and XPS results indicated a strengthening in the binding energies of both the Ti (2p3/2) and O (1s) bands, which correlated with an increase in strength and decrease in resolution of the characteristic nanotube doublet observed between 320 and 220 cm⁻¹ in the Raman spectra of these products. The effect of phase and crystallite size on nanotube formation was examined over a series of temperatures (100–200 °C in 20 °C increments) at a set alkaline concentration (7.5 M NaOH). These parameters were investigated by employing both pure- and mixed-phase precursors of anatase and rutile. This study indicated that both the crystallite size and phase affect nanotube formation, with rutile requiring a greater driving force (essentially ‘harsher’ hydrothermal conditions) than anatase to form nanotubes, and with larger precursor crystallites also appearing to slightly impede nanotube formation. These parameters were further examined in later studies.
The influence of alkaline concentration and hydrothermal temperature was systematically examined for the transformation of Degussa P25 into nanotubes and nanoribbons, and exact conditions for nanostructure synthesis were determined. Correlation of these data sets resulted in the construction of a morphological phase diagram, which is an effective reference for nanostructure formation. This morphological phase diagram effectively provides a ‘recipe book’ for the formation of titanate nanostructures. Morphological phase diagrams were also constructed for larger, near phase-pure anatase and rutile precursors, to further investigate the influence of hydrothermal reaction parameters on the formation of titanate nanotubes and nanoribbons. The effects of alkaline concentration, hydrothermal temperature, and crystallite phase and size are observed when the three morphological phase diagrams are compared. Through the analysis of these results it was determined that alkaline concentration and hydrothermal temperature affect nanotube and nanoribbon formation independently through a complex relationship, where nanotubes are primarily affected by temperature, whilst nanoribbons are strongly influenced by alkaline concentration. Crystallite size and phase also affected nanostructure formation. Smaller precursor crystallites formed nanostructures at reduced hydrothermal temperature, and rutile displayed a slower rate of precursor consumption compared to anatase, with incomplete conversion observed for most hydrothermal conditions. The incomplete conversion of rutile into nanotubes was examined in detail in the final study, which selectively examined the kinetics of precursor dissolution in order to understand why rutile converted incompletely. This was achieved by selecting a single hydrothermal condition (9 M NaOH, 160 °C) where nanotubes are known to form from both anatase and rutile, and quenching the synthesis after 2, 4, 8, 16 and 32 hours. The influence of precursor phase on nanostructure formation was explicitly determined to be due to different dissolution kinetics, where anatase exhibited zero-order dissolution and rutile second-order. This difference in kinetic order cannot be simply explained by the variation in crystallite size, as the inherent surface areas of the two precursors were determined to have first-order relationships with time. Therefore, the crystallite size (and inherent surface area) does not affect the overall kinetic order of dissolution; rather, it determines the rate of reaction. Finally, nanostructure formation was found to be controlled by the availability of dissolved titanium (Ti4+) species in solution, which is mediated by the dissolution kinetics of the precursor.
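The zero-order versus second-order distinction can be illustrated with a simple linearised fit: under zero-order kinetics the undissolved fraction is linear in time, whereas under second-order kinetics its reciprocal is. The sketch below compares both fits; the quench times match those quoted above, but the concentration values are hypothetical placeholders, not the measured data.

```python
import numpy as np

# Hypothetical fraction of undissolved precursor at each quench time (hours)
t = np.array([2.0, 4.0, 8.0, 16.0, 32.0])
anatase = np.array([0.94, 0.87, 0.74, 0.49, 0.02])   # placeholder values
rutile = np.array([0.93, 0.88, 0.80, 0.70, 0.58])    # placeholder values

def r_squared(y, y_fit):
    ss_res = np.sum((y - y_fit) ** 2)
    ss_tot = np.sum((y - y.mean()) ** 2)
    return 1.0 - ss_res / ss_tot

for name, c in [("anatase", anatase), ("rutile", rutile)]:
    # Zero-order: C = C0 - k*t, so C itself is linear in t
    k0, c0 = np.polyfit(t, c, 1)
    r2_zero = r_squared(c, k0 * t + c0)
    # Second-order: 1/C = 1/C0 + k*t, so the reciprocal of C is linear in t
    k2, inv_c0 = np.polyfit(t, 1.0 / c, 1)
    r2_second = r_squared(1.0 / c, k2 * t + inv_c0)
    print(f"{name}: zero-order R^2 = {r2_zero:.3f}, second-order R^2 = {r2_second:.3f}")
```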
Abstract:
The human knee acts as a sophisticated shock absorber during landing movements. The ability of the knee to perform this function in the real world is remarkable given that the context of the landing movement may vary widely between performances. For this reason, humans must be capable of rapidly adjusting the mechanical properties of the knee under impact load in order to satisfy many competing demands. However, the processes involved in regulating these properties in response to changing constraints remain poorly understood. In particular, the effects of muscle fatigue on knee function during step landing are yet to be fully explored. Fatigue of the knee muscles is significant for 2 reasons. First, it is thought to have detrimental effects on the ability of the knee to act as a shock absorber and is considered a risk factor for knee injury. Second, fatigue of knee muscles provides a unique opportunity to examine the mechanisms by which healthy individuals alter knee function. A review of the literature revealed that the effect of fatigue on knee function during landing has been assessed by comparing pre- and post-fatigue measurements, with fatigue induced by a voluntary exercise protocol. The information is limited by inconsistent results, with key measures such as knee stiffness showing varying responses following fatigue, including increased stiffness, decreased stiffness or failure to detect any change in some experiments. Further consideration of the literature questions the validity of the models used to induce and measure fatigue, as well as the pre-post study design, which may explain the lack of consensus in the results. These limitations cast doubt on the usefulness of the available information and identify a need to investigate alternative approaches. Based on the results of this review, the aims of this thesis were to:
• evaluate the methodological procedures used in validation of a fatigue model
• investigate the adaptation and regulation of post-impact knee mechanics during repeated step landings
• use this new information to test the effects of fatigue on knee function during a step-landing task.
To address the aims of the thesis, 3 related experiments were conducted that collected kinetic, kinematic and electromyographic data from 3 separate samples of healthy male participants. The methodologies involved optoelectronic motion capture (VICON), isokinetic dynamometry (System3 Pro, BIODEX) and wireless surface electromyography (Zerowire, Aurion, Italy). Fatigue indicators and knee function measures used in each experiment were derived from the data. Study 1 compared the validity and reliability of repetitive stepping and isokinetic contractions with respect to fatigue of the quadriceps and hamstrings. Fifteen participants performed 50 repetitions of each exercise twice in randomised order, over 4 sessions. Sessions were separated by a minimum of 1 week’s rest, to ensure full recovery. Validity and reliability depended on a complex interaction between the exercise protocol, the fatigue indicator, the individual and the muscle of interest. Nevertheless, differences between exercise protocols indicated that stepping was less effective in eliciting valid and reliable changes in peak power and spectral compression, compared with isokinetic exercise. A key finding was that fatigue progressed in a biphasic pattern during both exercises.
The point separating the 2 phases, known as the transition point, demonstrated superior between-test reliability during the isokinetic protocol, compared with stepping. However, a correction factor should be used to accurately apply this technique to the study of fatigue during landing. Study 2 examined alterations in knee function during repeated landings, with a different sample (N = 12) performing 60 consecutive step landing trials. Each landing trial was separated by a 1-minute rest period. The results provided new information in relation to the pre-post study design in the context of detecting adjustments in knee function during landing. First, participants significantly increased or decreased pre-impact muscle activity or post-impact mechanics despite environmental and task constraints remaining unchanged. This is the 1st study to demonstrate this effect in healthy individuals without external feedback on performance. Second, single-subject analysis was more effective in detecting alterations in knee function compared to group-level analysis. Finally, repeated landing trials did not reduce inter-trial variability of knee function in some participants, contrary to assumptions underpinning previous studies. The results of studies 1 and 2 were used to modify the design of Study 3 relative to previous research. These alterations included a modified isokinetic fatigue protocol, multiple pre-fatigue measurements and single-subject analysis to detect fatigue-related changes in knee function. The study design incorporated new analytical approaches to investigate fatigue-related alterations in knee function during landing. Participants (N = 16) were measured during multiple pre-fatigue baseline trial blocks prior to the fatigue model. A final block of landing trials was recorded once the participant met the operational fatigue definition that was identified in Study 1. The analysis revealed that the effects of fatigue in this context are heavily dependent on the compensatory response of the individual. A continuum of responses was observed within the sample for each knee function measure. Overall, pre-impact preparation and post-impact mechanics of the knee were altered with highly individualised patterns. Moreover, participants used a range of active or passive pre-impact strategies to adapt post-impact mechanics in response to quadriceps fatigue. The unique patterns identified in the data represented an optimisation of knee function based on the priorities of the individual. The findings of these studies explain the lack of consensus within the literature regarding the effects of fatigue on knee function during landing. First, functional fatigue protocols lack validity in inducing fatigue-related changes in mechanical output and spectral compression of surface electromyography (sEMG) signals, compared with isokinetic exercise. Second, fatigue-related changes in knee function during landing are confounded by inter-individual variation, which limits the sensitivity of group-level analysis. By addressing these limitations, the 3rd study demonstrated the efficacy of new experimental and analytical approaches for observing fatigue-related alterations in knee function during landing. Consequently, this thesis provides new perspectives on the effects of fatigue on knee function during landing.
In conclusion:
• The effects of fatigue on knee function during landing depend on the response of the individual, with considerable variation present between study participants, despite similar physical characteristics.
• In healthy males, adaptation of pre-impact muscle activity and post-impact knee mechanics is unique to the individual and reflects their own optimisation of demands such as energy expenditure, joint stability, sensory information and loading of knee structures.
• The results of these studies should guide future exploration of adaptations in knee function to fatigue. However, research in this area should continue with reduced emphasis on the directional response of the population and a greater focus on individual adaptations of knee function.
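The transition point referred to in Study 1 is essentially a breakpoint in the decline of an sEMG fatigue indicator. The sketch below shows one plausible way to locate such a point, by fitting a two-segment piecewise-linear model to per-repetition median frequencies; the helper names and the data are hypothetical, and this is not the thesis's exact procedure.

```python
import numpy as np

def median_frequency(epoch, fs):
    """Median frequency (Hz) of one sEMG epoch, from its power spectrum."""
    spectrum = np.abs(np.fft.rfft(epoch - np.mean(epoch))) ** 2
    freqs = np.fft.rfftfreq(len(epoch), d=1.0 / fs)
    cumulative = np.cumsum(spectrum)
    return freqs[np.searchsorted(cumulative, cumulative[-1] / 2.0)]

def transition_point(mdf):
    """Breakpoint giving the best two-segment piecewise-linear fit to a median-frequency series."""
    reps = np.arange(len(mdf))
    best_rep, best_sse = None, np.inf
    for b in range(3, len(mdf) - 2):              # keep at least 3 points in each segment
        sse = 0.0
        for seg_x, seg_y in [(reps[:b], mdf[:b]), (reps[b:], mdf[b:])]:
            coef = np.polyfit(seg_x, seg_y, 1)
            sse += np.sum((np.polyval(coef, seg_x) - seg_y) ** 2)
        if sse < best_sse:
            best_rep, best_sse = b, sse
    return best_rep

# Hypothetical per-repetition median frequencies (Hz); in practice these would come
# from applying median_frequency to the sEMG epoch recorded for each repetition.
mdf = np.array([92, 91, 90, 90, 89, 88, 84, 79, 74, 70, 66, 63])
print("estimated transition at repetition:", transition_point(mdf))
```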
Abstract:
Impedance cardiography is an application of bioimpedance analysis primarily used in a research setting to determine cardiac output. It is a non-invasive technique that measures the change in the impedance of the thorax which is attributed to the ejection of a volume of blood from the heart. The cardiac output is calculated from the measured impedance using the parallel conductor theory and a constant value for the resistivity of blood. However, the resistivity of blood has been shown to be velocity dependent due to changes in the orientation of red blood cells induced by changing shear forces during flow. The overall goal of this thesis was to study the effect that flow deviations have on the electrical impedance of blood, both experimentally and theoretically, and to apply the results to a clinical setting. The resistivity of stationary blood is isotropic, as the red blood cells are randomly orientated due to Brownian motion. In the case of blood flowing through rigid tubes, the resistivity is anisotropic due to the biconcave discoidal shape and orientation of the cells. The generation of shear forces across the width of the tube during flow causes the cells to align with their minimal cross-sectional area facing the direction of flow, in order to minimise the shear stress experienced by the cells. This in turn results in a larger cross-sectional area of plasma and a reduction in the resistivity of the blood as the flow increases. Understanding the contribution of this effect to the thoracic impedance change is a vital step in achieving clinical acceptance of impedance cardiography. Published literature investigates the resistivity variations for constant blood flow. In this case, the shear forces are constant and the impedance remains constant during flow, at a magnitude which is less than that for stationary blood. The research presented in this thesis, however, investigates the variations in resistivity of blood during pulsatile flow through rigid tubes and the relationship between impedance, velocity and acceleration. Using rigid tubes isolates the impedance change to variations associated with changes in cell orientation only. The implications of red blood cell orientation changes for clinical impedance cardiography were also explored. This was achieved through measurement and analysis of the experimental impedance of pulsatile blood flowing through rigid tubes in a mock circulatory system. A novel theoretical model including cell orientation dynamics was developed for the impedance of pulsatile blood through rigid tubes. The impedance of flowing blood was theoretically calculated using analytical methods for flow through straight tubes and the numerical Lattice Boltzmann method for flow through complex geometries such as aortic valve stenosis. The result of the analytical theoretical model was compared to the experimental impedance measurements through rigid tubes. The impedance calculated for flow through a stenosis using the Lattice Boltzmann method provides results for comparison with impedance cardiography measurements collected as part of a pilot clinical trial to assess the suitability of using bioimpedance techniques to assess the presence of aortic stenosis. The experimental and theoretical impedance of blood was shown to inversely follow the blood velocity during pulsatile flow, with correlations of −0.72 and −0.74 respectively.
The results of both the experimental and theoretical investigations demonstrate that the acceleration of the blood is an important factor in determining the impedance, in addition to the velocity. During acceleration, the relationship between impedance and velocity is linear (r² = 0.98, experimental, and r² = 0.94, theoretical). The relationship between the impedance and velocity during the deceleration phase is characterised by a time decay constant, τ, ranging from 10 to 50 s. The high level of agreement between the experimental and theoretically modelled impedance demonstrates the accuracy of the model developed here. An increase in the haematocrit of the blood resulted in an increase in the magnitude of the impedance change due to changes in the orientation of red blood cells. The time decay constant was shown to decrease linearly with the haematocrit for both experimental and theoretical results, although the slope of this decrease was larger in the experimental case. The radius of the tube influences the experimental and theoretical impedance given the same velocity of flow. However, when the velocity was divided by the radius of the tube (labelled the reduced average velocity), the impedance response was the same for two experimental tubes with equivalent reduced average velocity but different radii. The temperature of the blood was also shown to affect the impedance, with the impedance decreasing as the temperature increased. These results are the first published for the impedance of pulsatile blood. The experimental impedance change measured orthogonal to the direction of flow is in the opposite direction to that measured in the direction of flow. These results indicate that the impedance of blood flowing through rigid cylindrical tubes is axisymmetric along the radius. This has not previously been verified experimentally. Time-frequency analysis of the experimental results demonstrated that the measured impedance contains the same frequency components, occurring at the same time points in the cycle, as the velocity signal. This suggests that the impedance contains many of the fluctuations of the velocity signal. Application of a theoretical steady flow model to pulsatile flow presented here has verified that the steady flow model is not adequate for calculating the impedance of pulsatile blood flow. The success of the new theoretical model over the steady flow model demonstrates that the velocity profile is important in determining the impedance of pulsatile blood. The clinical application of the impedance of blood flow through a stenosis was theoretically modelled using the Lattice Boltzmann method (LBM) for fluid flow through complex geometries. The impedance of blood exiting a narrow orifice was calculated for varying degrees of stenosis. Clinical impedance cardiography measurements were also recorded for both aortic valvular stenosis patients (n = 4) and control subjects (n = 4) with structurally normal hearts. This pilot trial was used to corroborate the results of the LBM. Results from both investigations showed that the decay time constant for impedance has potential in the assessment of aortic valve stenosis. In the theoretically modelled case (LBM results), the decay time constant increased with an increase in the degree of stenosis. The clinical results also showed a statistically significant difference in the time decay constant between control and test subjects (P = 0.03).
The time decay constant calculated for test subjects (τ = 180–250 s) is consistently larger than that determined for control subjects (τ = 50–130 s). This difference is thought to be due to a difference in the orientation response of the cells as blood flows through the stenosis. Such a non-invasive technique using the time decay constant for screening of aortic stenosis provides additional information to that currently given by impedance cardiography techniques and improves the value of the device to practitioners. However, the results still need to be verified in a larger study. While impedance cardiography has not been widely adopted clinically, it is research such as this that will enable future acceptance of the method.
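The time decay constant reported above is, in essence, the parameter of a first-order decay fitted to the impedance change after peak flow. Below is a minimal curve-fitting sketch of how such a constant might be extracted; the decay model, signal values and time scale are hypothetical placeholders rather than the thesis's measured data or exact model.

```python
import numpy as np
from scipy.optimize import curve_fit

# Hypothetical samples of the impedance change during the deceleration phase of one pulse
t = np.linspace(0.0, 0.6, 30)                                  # seconds from peak flow
dz = 0.12 * np.exp(-t / 0.25) + np.random.default_rng(1).normal(0, 0.003, t.size)

def exp_decay(t, dz0, tau, offset):
    """First-order decay of the impedance change toward a baseline."""
    return dz0 * np.exp(-t / tau) + offset

(dz0, tau, offset), _ = curve_fit(exp_decay, t, dz, p0=(dz.max(), 0.2, 0.0))
print(f"fitted time decay constant tau = {tau:.3f} s")
```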
Abstract:
The significant challenge faced by government in demonstrating value for money in the delivery of major infrastructure revolves around estimating the costs and benefits of alternative modes of procurement. Faced with this challenge, one approach is to focus on a dominant performance outcome visible on the opening day of the asset as the means to select the procurement approach. In this case, value for money becomes a largely nominal concept, determined by the selected procurement mode delivering, or not delivering, the selected performance outcome, notwithstanding possible under-delivery on other desirable performance outcomes, as well as possibly incurring excessive transaction costs. This paper proposes a mind-set change in this particular practice, to an approach in which the analysis commences with the conditions pertaining to the project and proceeds to deploy transaction cost and production cost theory to indicate a procurement approach that can claim superior value for money relative to other competing procurement modes. This approach to delivering value for money in relative terms is developed in a first-order procurement decision-making model outlined in this paper. The model developed could be complementary to the Public Sector Comparator (PSC) in terms of cross-validation, and the model more readily lends itself to public dissemination. As a possible alternative to the PSC, the model could save time and money by requiring project details to be prepared in less depth than the reference project requires, and it may send a stronger signal to the market that may encourage more innovation and competition.
Abstract:
The structures of two polymorphs of the anhydrous cocrystal adduct of bis(quinolinium-2-carboxylate) DL-malic acid, one triclinic, the other monoclinic and disordered, have been determined at 200 K. Crystals of the triclinic polymorph 1 have space group P-1, with Z = 1 in a cell with dimensions a = 4.4854(4), b = 9.8914(7), c = 12.4670(8) Å, α = 79.671(5), β = 83.094(6), γ = 88.745(6)°. Crystals of the monoclinic polymorph 2 have space group P21/c, with Z = 2 in a cell with dimensions a = 13.3640(4), b = 4.4237(12), c = 18.4182(5) Å, β = 100.782(3)°. Both structures comprise centrosymmetric cyclic hydrogen-bonded quinolinic acid zwitterion dimers [graph set R2^2(10)] and 50% disordered malic acid molecules which lie across crystallographic inversion centres. However, the oxygen atoms of the malic acid carboxylic groups in 2 are 50% rotationally disordered, whereas in 1 these are ordered. There are similar primary malic acid carboxyl O-H...quinaldic acid hydrogen-bonding chain interactions in each polymorph, extended into two-dimensional structures, but in 1 this involves centrosymmetric cyclic head-to-head malic acid hydroxyl-carboxyl O-H...O interactions [graph set R2^2(10)] whereas in 2 the links are through single hydroxyl-carboxyl hydrogen bonds.
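As a quick consistency check on the reported cells, the unit-cell volumes can be computed from the quoted lengths and angles with the general triclinic volume formula; the sketch below does this for both polymorphs (this calculation is illustrative and not part of the original report).

```python
import numpy as np

def cell_volume(a, b, c, alpha=90.0, beta=90.0, gamma=90.0):
    """Unit-cell volume (Å^3) from cell lengths (Å) and angles (deg), general triclinic formula."""
    ca, cb, cg = (np.cos(np.radians(x)) for x in (alpha, beta, gamma))
    return a * b * c * np.sqrt(1.0 - ca**2 - cb**2 - cg**2 + 2.0 * ca * cb * cg)

# Cell parameters reported for the two polymorphs
v1 = cell_volume(4.4854, 9.8914, 12.4670, 79.671, 83.094, 88.745)   # triclinic, Z = 1
v2 = cell_volume(13.3640, 4.4237, 18.4182, beta=100.782)            # monoclinic, Z = 2

print(f"polymorph 1: V = {v1:.1f} Å^3, V/Z = {v1:.1f} Å^3")
print(f"polymorph 2: V = {v2:.1f} Å^3, V/Z = {v2 / 2:.1f} Å^3")
```

Broadly similar volumes per formula unit (V/Z) would be the expected outcome for two polymorphs of the same adduct.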
Abstract:
The traditional searching method for model-order selection in linear regression is a nested full-parameter-set searching procedure over the desired orders, which we call full-model order selection. On the other hand, a method for model selection searches for the best sub-model within each order. In this paper, we propose using the model-selection searching method for model-order selection, which we call partial-model order selection. We show by simulations that the proposed searching method gives better accuracies than the traditional one, especially for low signal-to-noise ratios, over a wide range of model-order selection criteria (both information-theoretic and bootstrap-based). Also, we show that for some models the performance of the bootstrap-based criterion improves significantly by using the proposed partial-model selection searching method.
Index Terms: model order estimation, model selection, information theoretic criteria, bootstrap.
1. INTRODUCTION
Several model-order selection criteria can be applied to find the optimal order. Some of the more commonly used information-theoretic procedures include Akaike’s information criterion (AIC) [1], corrected Akaike (AICc) [2], minimum description length (MDL) [3], normalized maximum likelihood (NML) [4], the Hannan-Quinn criterion (HQC) [5], conditional model-order estimation (CME) [6], and the efficient detection criterion (EDC) [7]. From a practical point of view, it is difficult to decide which model-order selection criterion to use. Many of them perform reasonably well when the signal-to-noise ratio (SNR) is high. The discrepancies in their performance, however, become more evident when the SNR is low. In those situations, the performance of a given technique is determined not only by the model structure (say a polynomial trend versus a Fourier series) but, more importantly, by the relative values of the parameters within the model. This makes the comparison between model-order selection algorithms difficult, as within the same model with a given order one could find an example for which one of the methods performs favourably or fails [6, 8]. Our aim is to improve the performance of the model-order selection criteria in cases where the SNR is low by considering a model-selection searching procedure that takes into account not only the full-model order search but also a partial model order search within the given model order. Understandably, the improvement in the performance of the model-order estimation comes at the expense of additional computational complexity.
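A toy sketch of the two search strategies is given below: the traditional nested ("full-model") search adds regressors in a fixed order, whereas the "partial-model" search evaluates every subset of each size and keeps the best one before comparing across orders. The criterion shown is a generic MDL/BIC-style penalty and the design matrix is synthetic; none of this reproduces the paper's exact simulations.

```python
import numpy as np
from itertools import combinations

def ic_score(y, X):
    """Generic MDL/BIC-style criterion for a Gaussian linear model y ~ X @ beta (smaller is better)."""
    n, k = X.shape
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    rss = np.sum((y - X @ beta) ** 2)
    return 0.5 * n * np.log(rss / n) + 0.5 * k * np.log(n)

def full_model_order(y, X_full, max_order):
    """Traditional nested search: order p always uses the first p columns."""
    scores = [ic_score(y, X_full[:, :p]) for p in range(1, max_order + 1)]
    return int(np.argmin(scores)) + 1

def partial_model_order(y, X_full, max_order):
    """Proposed search: for each order p, keep the best subset of p columns, then compare orders."""
    best_per_order = []
    for p in range(1, max_order + 1):
        best = min(ic_score(y, X_full[:, list(cols)])
                   for cols in combinations(range(max_order), p))
        best_per_order.append(best)
    return int(np.argmin(best_per_order)) + 1

# Synthetic example: a sparse polynomial trend buried in heavy noise (low SNR)
rng = np.random.default_rng(2)
t = np.linspace(-1, 1, 120)
X_full = np.vander(t, 6, increasing=True)            # columns 1, t, ..., t^5
y = 0.3 * X_full[:, 1] + 1.0 * X_full[:, 4] + rng.normal(0, 1.0, t.size)

print("full-model order estimate:   ", full_model_order(y, X_full, 6))
print("partial-model order estimate:", partial_model_order(y, X_full, 6))
```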
Abstract:
Corneal-height data are typically measured with videokeratoscopes and modeled using a set of orthogonal Zernike polynomials. We address the estimation of the number of Zernike polynomials, which is formalized as a model-order selection problem in linear regression. Classical information-theoretic criteria tend to overestimate the order of the corneal surface model due to the weakness of their penalty functions, while bootstrap-based techniques tend to underestimate it or require extensive processing. In this paper, we propose to use the efficient detection criterion (EDC), which has the same general form as information-theoretic criteria, as an alternative for estimating the optimal number of Zernike polynomials. We first show, via simulations, that the EDC outperforms a large number of information-theoretic criteria and resampling-based techniques. We then illustrate that using the EDC for real corneas results in models that are in closer agreement with clinical expectations and provides a means of distinguishing normal corneal surfaces from astigmatic and keratoconic surfaces.
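A hedged sketch of the overall recipe follows: fit nested Zernike expansions to sampled corneal heights by least squares and pick the number of terms with an EDC-style criterion of the form n·ln(RSS/n) + k·C_n, where C_n grows faster than ln ln n but slower than n. The Zernike terms listed, the penalty choice C_n = √(n ln n) and the synthetic surface are illustrative assumptions, not the paper's implementation.

```python
import numpy as np

# A selection of low-order Zernike terms (OSA ordering and normalisation) on polar pupil coordinates
ZERNIKE = [
    lambda r, t: np.ones_like(r),                               # piston
    lambda r, t: 2.0 * r * np.sin(t),                           # vertical tilt
    lambda r, t: 2.0 * r * np.cos(t),                           # horizontal tilt
    lambda r, t: np.sqrt(6) * r**2 * np.sin(2 * t),             # oblique astigmatism
    lambda r, t: np.sqrt(3) * (2 * r**2 - 1),                   # defocus
    lambda r, t: np.sqrt(6) * r**2 * np.cos(2 * t),             # with/against-the-rule astigmatism
    lambda r, t: np.sqrt(8) * r**3 * np.sin(3 * t),             # oblique trefoil
    lambda r, t: np.sqrt(8) * (3 * r**3 - 2 * r) * np.sin(t),   # vertical coma
    lambda r, t: np.sqrt(8) * (3 * r**3 - 2 * r) * np.cos(t),   # horizontal coma
    lambda r, t: np.sqrt(8) * r**3 * np.cos(3 * t),             # horizontal trefoil
]

def edc(y, X, c_n):
    """EDC-style criterion n*ln(RSS/n) + k*C_n (smaller is better)."""
    n, k = X.shape
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    rss = np.sum((y - X @ beta) ** 2)
    return n * np.log(rss / n) + k * c_n

# Synthetic corneal-height samples on a unit pupil: defocus + astigmatism + noise
rng = np.random.default_rng(3)
r = np.sqrt(rng.uniform(0, 1, 400))
theta = rng.uniform(0, 2 * np.pi, 400)
height = 1.5 * ZERNIKE[4](r, theta) + 0.4 * ZERNIKE[5](r, theta) + rng.normal(0, 0.2, 400)

n = len(height)
c_n = np.sqrt(n * np.log(n))      # one admissible penalty: C_n/n -> 0 and C_n/ln(ln n) -> infinity
design = np.column_stack([z(r, theta) for z in ZERNIKE])
scores = [edc(height, design[:, :k], c_n) for k in range(1, len(ZERNIKE) + 1)]
print("selected number of Zernike terms:", int(np.argmin(scores)) + 1)
```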
Abstract:
Objective: To understand the levels of substance abuse and dependence among impaired drivers by comparing patients in substance abuse treatment programs with and without a past-year DUI arrest, based on their primary problem substance at admission (alcohol, cocaine, cannabis, or methamphetamine). Method: Records on 345,067 admissions to Texas treatment programs between 2005 and 2008 were analyzed for differences in demographic characteristics, levels of severity, and mental health problems at admission, treatment completion, and 90-day follow-up. Methods included t-tests, chi-square tests, and multivariate logistic regression. Results: The analysis found that DUI arrestees with a primary problem with alcohol were less impaired than non-DUI alcohol patients, had fewer mental health problems, and were more likely to complete treatment. DUI arrestees with a primary problem with cannabis were more impaired than non-DUI cannabis patients, and there was no difference in treatment completion. DUI arrestees with a primary problem with cocaine were less impaired and more likely to complete treatment than other cocaine patients, and there was little difference in levels of mental health problems. DUI arrestees with a primary problem with methamphetamine were more similar to methamphetamine non-arrestees, with no difference in mental health problems and treatment completion. Conclusions: This study provides evidence of the extent of abuse and dependence among DUI arrestees and their need for treatment for their alcohol and drug problems in order to decrease recidivism. Treatment patients with past-year DUI arrests had good treatment outcomes, but closer supervision during the 90-day follow-up after treatment could lead to even better long-term outcomes, including reduced recidivism. Information will be provided on the latest treatment methodologies, including medication-assisted therapies and screening and brief interventions, and on ways impaired-driving programs and substance dependence programs can be integrated to benefit the driver and society.
Abstract:
For many decades, correlation and the power spectrum have been the primary tools for digital signal processing applications in the biomedical area. The information contained in the power spectrum is essentially that of the autocorrelation sequence, which is sufficient for a complete statistical description of Gaussian signals of known mean. However, there are practical situations where one needs to look beyond the autocorrelation of a signal to extract information regarding deviations from Gaussianity and the presence of phase relations. Higher order spectra, also known as polyspectra, are spectral representations of higher order statistics, i.e. moments and cumulants of third order and beyond. HOS (higher order statistics or higher order spectra) can detect deviations from linearity, stationarity or Gaussianity in a signal. Most biomedical signals are non-linear, non-stationary and non-Gaussian in nature, and it can therefore be more advantageous to analyze them with HOS than with second-order correlations and power spectra. In this paper we discuss the application of HOS to different bio-signals. HOS methods of analysis are explained using a typical heart rate variability (HRV) signal, and applications to other signals are reviewed.
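One concrete way HOS exposes non-Gaussian, nonlinear structure is through the bicoherence, a normalised bispectrum that is driven toward zero for Gaussian signals and toward one at frequency pairs with quadratic phase coupling. The sketch below estimates a segment-averaged (squared) bicoherence and contrasts a phase-coupled test signal with Gaussian noise; the signal parameters are hypothetical and the estimator is a generic textbook form, not this paper's specific method.

```python
import numpy as np

def bicoherence(x, nperseg=128):
    """Segment-averaged squared bicoherence b^2(f1, f2), bounded between 0 and 1."""
    x = np.asarray(x, dtype=float)
    segments = [x[i:i + nperseg] for i in range(0, len(x) - nperseg + 1, nperseg)]
    nf = nperseg // 2
    num = np.zeros((nf, nf), dtype=complex)
    den1 = np.zeros((nf, nf))
    den2 = np.zeros((nf, nf))
    for s in segments:
        X = np.fft.fft((s - s.mean()) * np.hanning(nperseg))
        for f1 in range(nf):
            for f2 in range(nf):
                num[f1, f2] += X[f1] * X[f2] * np.conj(X[f1 + f2])
                den1[f1, f2] += np.abs(X[f1] * X[f2]) ** 2
                den2[f1, f2] += np.abs(X[f1 + f2]) ** 2
    return np.abs(num) ** 2 / (den1 * den2 + 1e-12)

# A quadratically phase-coupled signal (12 + 20 = 32 Hz, phases 0.3 + 1.1 = 1.4) versus Gaussian noise
rng = np.random.default_rng(4)
n, fs = 4096, 256.0
t = np.arange(n) / fs
coupled = (np.cos(2 * np.pi * 12 * t + 0.3) + np.cos(2 * np.pi * 20 * t + 1.1)
           + np.cos(2 * np.pi * 32 * t + 1.4) + 0.5 * rng.normal(size=n))
gaussian = rng.normal(size=n)

print("peak bicoherence, coupled signal :", bicoherence(coupled).max())
print("peak bicoherence, Gaussian noise :", bicoherence(gaussian).max())
```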