65 results for Nonmesonic decay


Relevance: 10.00%

Abstract:

Impedance cardiography is an application of bioimpedance analysis primarily used in a research setting to determine cardiac output. It is a non-invasive technique that measures the change in the impedance of the thorax, which is attributed to the ejection of a volume of blood from the heart. The cardiac output is calculated from the measured impedance using the parallel conductor theory and a constant value for the resistivity of blood. However, the resistivity of blood has been shown to be velocity dependent due to changes in the orientation of red blood cells induced by changing shear forces during flow. The overall goal of this thesis was to study the effect that flow deviations have on the electrical impedance of blood, both experimentally and theoretically, and to apply the results to a clinical setting. The resistivity of stationary blood is isotropic, as the red blood cells are randomly orientated due to Brownian motion. In the case of blood flowing through rigid tubes, the resistivity is anisotropic due to the biconcave discoidal shape and orientation of the cells. The generation of shear forces across the width of the tube during flow causes the cells to align with their minimal cross-sectional area facing the direction of flow, in order to minimise the shear stress experienced by the cells. This in turn results in a larger cross-sectional area of plasma and a reduction in the resistivity of the blood as the flow increases. Understanding the contribution of this effect to the thoracic impedance change is a vital step in achieving clinical acceptance of impedance cardiography. Published literature investigates the resistivity variations for constant blood flow. In this case, the shear forces are constant and the impedance remains constant during flow, at a magnitude which is less than that for stationary blood. The research presented in this thesis, however, investigates the variations in resistivity of blood during pulsatile flow through rigid tubes and the relationship between impedance, velocity and acceleration. Using rigid tubes isolates the impedance change to variations associated with changes in cell orientation only. The implications of red blood cell orientation changes for clinical impedance cardiography were also explored. This was achieved through measurement and analysis of the experimental impedance of pulsatile blood flowing through rigid tubes in a mock circulatory system. A novel theoretical model including cell orientation dynamics was developed for the impedance of pulsatile blood through rigid tubes. The impedance of flowing blood was theoretically calculated using analytical methods for flow through straight tubes and the numerical Lattice Boltzmann method for flow through complex geometries such as aortic valve stenosis. The results of the analytical model were compared to the experimental impedance measurements through rigid tubes. The impedance calculated for flow through a stenosis using the Lattice Boltzmann method provides results for comparison with impedance cardiography measurements collected as part of a pilot clinical trial assessing the suitability of bioimpedance techniques for detecting aortic stenosis. The experimental and theoretical impedance of blood was shown to inversely follow the blood velocity during pulsatile flow, with correlations of -0.72 and -0.74 respectively.
The results of both the experimental and theoretical investigations demonstrate that the acceleration of the blood is an important factor in determining the impedance, in addition to the velocity. During acceleration, the relationship between impedance and velocity is linear (r² = 0.98 experimental, r² = 0.94 theoretical). The relationship between the impedance and velocity during the deceleration phase is characterised by a time decay constant, τ, ranging from 10 to 50 s. The high level of agreement between the experimental and theoretically modelled impedance demonstrates the accuracy of the model developed here. An increase in the haematocrit of the blood resulted in an increase in the magnitude of the impedance change due to changes in the orientation of red blood cells. The time decay constant was shown to decrease linearly with the haematocrit for both experimental and theoretical results, although the slope of this decrease was larger in the experimental case. The radius of the tube influences the experimental and theoretical impedance given the same velocity of flow. However, when the velocity was divided by the radius of the tube (labelled the reduced average velocity), the impedance response was the same for two experimental tubes with equivalent reduced average velocity but different radii. The temperature of the blood was also shown to affect the impedance, with the impedance decreasing as the temperature increased. These results are the first published for the impedance of pulsatile blood. The experimental impedance change measured orthogonal to the direction of flow is in the opposite direction to that measured in the direction of flow. These results indicate that the impedance of blood flowing through rigid cylindrical tubes is axisymmetric along the radius. This has not previously been verified experimentally. Time-frequency analysis of the experimental results demonstrated that the measured impedance contains the same frequency components, occurring at the same time points in the cycle, as the velocity signal. This suggests that the impedance captures many of the fluctuations of the velocity signal. Application of a theoretical steady-flow model to the pulsatile flow presented here verified that the steady-flow model is not adequate for calculating the impedance of pulsatile blood flow. The success of the new theoretical model over the steady-flow model demonstrates that the velocity profile is important in determining the impedance of pulsatile blood. The clinical application of the impedance of blood flow through a stenosis was theoretically modelled using the Lattice Boltzmann method (LBM) for fluid flow through complex geometries. The impedance of blood exiting a narrow orifice was calculated for varying degrees of stenosis. Clinical impedance cardiography measurements were also recorded for both aortic valvular stenosis patients (n = 4) and control subjects (n = 4) with structurally normal hearts. This pilot trial was used to corroborate the results of the LBM. Results from both investigations showed that the decay time constant for impedance has potential in the assessment of aortic valve stenosis. In the theoretically modelled case (LBM results), the decay time constant increased with an increase in the degree of stenosis. The clinical results also showed a statistically significant difference in the time decay constant between control and test subjects (P = 0.03).
The time decay constant calculated for test subjects (τ = 180-250 s) is consistently larger than that determined for control subjects (τ = 50-130 s). This difference is thought to be due to differences in the orientation response of the cells as blood flows through the stenosis. Such a non-invasive technique, using the time decay constant for screening of aortic stenosis, provides additional information to that currently given by impedance cardiography techniques and improves the value of the device to practitioners. However, the results still need to be verified in a larger study. While impedance cardiography has not been widely adopted clinically, it is research such as this that will enable future acceptance of the method.
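
As an illustration of how such a time decay constant can be extracted, the minimal sketch below fits an exponential of the form Z(t) = Z_inf + ΔZ·exp(-t/τ) to impedance samples from the deceleration phase. The model form, names and synthetic values are assumptions consistent with the abstract, not the thesis' actual code or data.

    # Minimal sketch: estimating the decay time constant tau by fitting an
    # exponential to impedance samples from the deceleration phase.
    # The model form and the synthetic data are illustrative assumptions.
    import numpy as np
    from scipy.optimize import curve_fit

    def decay_model(t, z_inf, dz, tau):
        # Z(t) = Z_inf + dZ * exp(-t / tau)
        return z_inf + dz * np.exp(-t / tau)

    def fit_decay_constant(t, z):
        """t: time (s) within the deceleration phase; z: measured impedance (ohm)."""
        p0 = (z.min(), z.max() - z.min(), 30.0)  # rough initial guess
        (z_inf, dz, tau), _ = curve_fit(decay_model, t, z, p0=p0)
        return tau

    # Synthetic example only (no real measurements are reproduced here)
    t = np.linspace(0.0, 120.0, 200)
    z = decay_model(t, 95.0, 4.0, 30.0) + np.random.normal(0.0, 0.05, t.size)
    print(f"estimated tau = {fit_decay_constant(t, z):.1f} s")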

Relevance: 10.00%

Abstract:

Two-stroke outboard boat engines using total-loss lubrication deposit a significant proportion of their lubricant and fuel directly into the water. The purpose of this work is to document the velocity and concentration field characteristics of a submerged swirling water jet emanating from a propeller in order to provide information on its fundamental characteristics. The properties of the jet were examined far enough downstream to be relevant to the eventual modelling of the mixing problem. Measurements of the velocity and concentration fields were performed in a turbulent jet generated by a model boat propeller (0.02 m diameter) operating at 1500 rpm and 3000 rpm in a weak co-flow of 0.04 m/s. The measurements were carried out in the Zone of Established Flow, up to 50 propeller diameters downstream of the propeller, which was placed in a glass-walled flume 0.4 m wide with a free-surface depth of 0.15 m. The jet and scalar plume development were compared to those of a classical free round jet. Further, results pertaining to radial distribution, self-similarity, standard deviation growth, maximum value decay and integral fluxes of velocity and concentration were presented and fitted with empirical correlations. Furthermore, propeller-induced mixing and the pollutant source concentration from a two-stroke engine were estimated.
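
For reference, the classical free round jet against which these measurements were compared has well-known far-field decay laws; the form below is the standard textbook relation (with generic decay constants B_u, B_c and virtual origin x_0), not the empirical correlations fitted in this work:

    \frac{U_c(x)}{U_j} = B_u \, \frac{D}{x - x_0}, \qquad
    \frac{C_c(x)}{C_j} = B_c \, \frac{D}{x - x_0}

Here U_c and C_c are the centreline velocity and concentration, U_j and C_j the values at the source, and D the source diameter.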

Relevance: 10.00%

Abstract:

Sample complexity results from computational learning theory, when applied to neural network learning for pattern classification problems, suggest that for good generalization performance the number of training examples should grow at least linearly with the number of adjustable parameters in the network. Results in this paper show that if a large neural network is used for a pattern classification problem and the learning algorithm finds a network with small weights that has small squared error on the training patterns, then the generalization performance depends on the size of the weights rather than the number of weights. For example, consider a two-layer feedforward network of sigmoid units, in which the sum of the magnitudes of the weights associated with each unit is bounded by A and the input dimension is n. We show that the misclassification probability is no more than a certain error estimate (that is related to squared error on the training set) plus A³√((log n)/m) (ignoring log A and log m factors), where m is the number of training patterns. This may explain the generalization performance of neural networks, particularly when the number of training examples is considerably smaller than the number of weights. It also supports heuristics (such as weight decay and early stopping) that attempt to keep the weights small during training. The proof techniques appear to be useful for the analysis of other pattern classifiers: when the input domain is a totally bounded metric space, we use the same approach to give upper bounds on misclassification probability for classifiers with decision boundaries that are far from the training examples.
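
Written out, the bound sketched above has the following form (our paraphrase, with constants and the log A, log m factors suppressed; \hat{E}_m denotes the training-set error estimate referred to in the abstract):

    \Pr[\text{misclassification}] \;\le\; \hat{E}_m + O\!\left( A^{3} \sqrt{\frac{\log n}{m}} \right)

where A bounds the sum of weight magnitudes per unit, n is the input dimension and m is the number of training patterns.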

Relevance: 10.00%

Abstract:

The performance and electron recombination kinetics of dye-sensitized solar cells based on TiO2 films consisting of one-dimensional nanorod arrays (NR-DSSCs), sensitized with the dyes N719, C218 and D205 respectively, have been studied. It has been found that the best efficiency is obtained with the C218-based NR-DSSCs, benefiting from a 40% higher short-circuit photocurrent density. However, the open-circuit photovoltage of the N719-based cell is 40 mV higher than that of the organic dye C218- and D205-based devices. Investigation of the electron recombination kinetics of the NR-DSSCs has revealed that the effective electron lifetime, τn, of the N719-based NR-DSSC is the lowest whereas the τn of the C218-based NR-DSSC is the highest among the three dyes. The higher Voc of the N719-based NR-DSSC originates from the more negative energy level of the conduction band of the TiO2 film. In addition, in comparison to DSSCs with conventional nanocrystalline-particle-based TiO2 films, the NR-DSSCs have shown over two orders of magnitude higher τn when employing N719 as the sensitizer. Nevertheless, the τn of the DSSCs with the C218-based nanorod arrays is only ten-fold higher than that of the nanoparticle-based devices. The remarkable characteristic of the dye C218 in suppressing the electron recombination of DSSCs is discussed.

Relevance: 10.00%

Abstract:

This paper examines changing patterns in the utilisation of and geographic access to health services in Great Britain using National Travel Survey data (1985-2006). The utilisation rate was derived using the proportion of journeys made to access health services. Geographic access was analysed by separating the concept into its accessibility and mobility dimensions. Regression analyses were conducted to investigate the differences between socio-spatial groups in these indicators over the period 1985-2006. This study found that journey distances to health facilities were significantly shorter, and also gradually reduced over the period in question, for Londoners, females, those without a car or on low incomes, and older people. However, most of their rates of utilisation of health services were found to be significantly lower because their journey times were significantly longer and also gradually increased over the period. These findings indicate that the rate of utilisation of health services depends largely on mobility level, although previous research has traditionally overlooked the mobility dimension.

Relevance: 10.00%

Abstract:

Computational fluid dynamics (CFD) models for ultrahigh-velocity waterjets and abrasive waterjets (AWJs) are established using the Fluent 6 flow solver. Jet dynamic characteristics for the flow downstream from a very fine nozzle are then simulated under steady-state, turbulent, two-phase and three-phase flow conditions. Water and particle velocities in a jet are obtained under different input and boundary conditions to provide an insight into the jet characteristics and a fundamental understanding of the kerf formation process in AWJ cutting. For the range of downstream distances considered, the results indicate that a jet is characterised by an initial rapid decay of the axial velocity at the jet centre while the cross-sectional flow evolves towards a top-hat profile downstream.

Relevance: 10.00%

Abstract:

Sunnybank represents a distinctly Australian take on the classic ‘Chinatown’ – or indeed other ethnic community enclaves such as ‘little Italy’, ‘little Bombay’, ‘little Athens’ and so on. In the Northern Hemisphere these tended to grow up in the dense working-class neighbourhoods of industrial cities, especially in port cities like Liverpool, London, New York and San Francisco. The existing Chinatowns of Sydney and Melbourne, and to some extent Brisbane’s Fortitude Valley, are of this variety. In the late 1970s, with the growth of suburbanisation and the de-industrialisation and consequent dereliction of the ‘inner city’, these ethnic communities were one of the few signs of life in the city. Apart from the daily commute into the CBD, business with the city council or a trip to the big shopping streets, these areas were among the few reasons for visiting city centres stigmatised by urban decay and petty crime.

Relevance: 10.00%

Abstract:

Contamination of packaged foods due to micro-organisms entering through air leaks can cause serious public health issues and cost companies large amounts of money due to product recalls, consumer impact and subsequent loss of market share. The main source of contamination is leaks in packaging, which allow air, moisture and micro-organisms to enter the package. In the food processing and packaging industry worldwide, there is an increasing demand for cost-effective, state-of-the-art inspection technologies that are capable of reliably detecting leaky seals and delivering products at six sigma. This work develops non-destructive testing technology using digital imaging and sensing combined with a differential vacuum technique to assess the seal integrity of food packages on a high-speed production line. The cost of leaky packages to Australian food industries is estimated at close to AUD $35 million per year. Flexible plastic packages are widely used and are the least expensive form of retaining the quality of the product. These packages can be sealed to protect, and therefore maximise, the shelf life of both dry and moist products. The seals of food packages need to be airtight so that the food content is not contaminated through contact with micro-organisms that enter as a result of air leakage. Airtight seals also extend the shelf life of packaged foods, and manufacturers attempt to prevent food products with leaky seals from being sold to consumers. Many current NDT (non-destructive testing) methods of testing the seals of flexible packages are best suited to random sampling and laboratory purposes. The three most commonly used methods are vacuum/pressure decay, the bubble test and helium leak detection. Although these methods can detect very fine leaks, they are limited by their high processing time and are not viable on a production line. Two non-destructive in-line packaging inspection machines are currently available and are discussed in the literature review. The detailed design and development of the High-Speed Sensing and Detection System (HSDS) is the fundamental requirement of this project and of the future prototype and production unit. Successful laboratory testing was completed, and a methodical design procedure was needed for a successful concept. The mechanical tests confirmed the vacuum hypothesis and seal integrity with good, consistent results. Electrically, the testing also provided solid results, enabling the researcher to move the project forward with a degree of confidence. The laboratory design testing allowed the researcher to confirm theoretical assumptions before moving into the detailed design phase. Discussion of the development of alternative concepts in both the mechanical and electrical disciplines enabled the researcher to make an informed decision. Each major mechanical and electrical component is detailed through the research and design process. The design procedure methodically works through the major functions from both mechanical and electrical perspectives.
It also opens up alternative ideas for the major components which, although sometimes not practical in this application, show that the researcher has exhausted all engineering and functionality options. Further concepts were then designed and developed for the entire HSDS unit, based on previous practice and theory. In the future, it is envisaged that both the prototype and production versions of the HSDS would utilise standard, industry-available components, manufactured and distributed locally. Future research and testing of the prototype unit could result in a successful trial unit being incorporated into a working food processing production environment. Recommendations and future work are discussed, along with options in other food processing and packaging disciplines and other areas of the non-food processing industry.
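
As background on the vacuum/pressure decay method named above, the sketch below illustrates the generic principle only (it is not the HSDS algorithm developed in this project, and the numbers are made up): a leak rate is inferred from the pressure change of a known test volume over a fixed dwell time, assuming constant temperature.

    # Generic pressure-decay illustration (not the project's HSDS algorithm):
    # the apparent leak rate of a sealed package in a test chamber follows
    # from the pressure change of a known volume over a dwell time,
    # assuming ideal-gas behaviour at constant temperature.
    def leak_rate_pa_m3_per_s(volume_m3, dp_pa, dt_s):
        """Return the apparent leak rate Q = V * dP / dt in Pa*m^3/s,
        where dP is the magnitude of the pressure change over the dwell time."""
        return volume_m3 * dp_pa / dt_s

    # Example: a 0.5 L test chamber whose pressure rises by 20 Pa over a 2 s dwell
    print(leak_rate_pa_m3_per_s(0.0005, 20.0, 2.0))  # 0.005 Pa*m^3/s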

Relevance: 10.00%

Abstract:

ZnO nanowires are normally exposed to an oxygen atmosphere to achieve high performance in UV photodetection. In this work we present results on a UV photodetector fabricated using a flexible ZnO nanowire sheet embedded in polydimethylsiloxane (PDMS), a gas-permeable polymer, showing reproducible UV photoresponse and enhanced photoconduction. The PDMS coating results in a reduced response speed compared to that of a ZnO nanowire film in air. The rising speed is slightly reduced, while the decay time is prolonged by about a factor of four. We conclude that oxygen molecules diffusing in the PDMS are responsible for the UV photoresponse.

Relevance: 10.00%

Abstract:

Studies of molecular evolutionary rates have yielded a wide range of rate estimates for various genes and taxa. Recent studies based on population-level and pedigree data have produced remarkably high estimates of mutation rate, which strongly contrast with substitution rates inferred in phylogenetic (species-level) studies. Using Bayesian analysis with a relaxed-clock model, we estimated rates for three groups of mitochondrial data: avian protein-coding genes, primate protein-coding genes, and primate d-loop sequences. In all three cases, we found a measurable transition between the high, short-term (<1–2 Myr) mutation rate and the low, long-term substitution rate. The relationship between the age of the calibration and the rate of change can be described by a vertically translated exponential decay curve, which may be used for correcting molecular date estimates. The phylogenetic substitution rates in mitochondria are approximately 0.5% per million years for avian protein-coding sequences and 1.5% per million years for primate protein-coding and d-loop sequences. Further analyses showed that purifying selection offers the most convincing explanation for the observed relationship between the estimated rate and the depth of the calibration. We rule out the possibility that it is a spurious result arising from sequence errors, and find it unlikely that the apparent decline in rates over time is caused by mutational saturation. Using a rate curve estimated from the d-loop data, several dates for last common ancestors were calculated: modern humans and Neandertals (354 ka; 222–705 ka), Neandertals (108 ka; 70–156 ka), and modern humans (76 ka; 47–110 ka). If the rate curve for a particular taxonomic group can be accurately estimated, it can be a useful tool for correcting divergence date estimates by taking the rate decay into account. Our results show that it is invalid to extrapolate molecular rates of change across different evolutionary timescales, which has important consequences for studies of populations, domestication, conservation genetics, and human evolution.
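
A minimal sketch of the "vertically translated exponential decay" rate curve described above is given below; the functional form follows the description in the abstract, but the parameter names and numerical values are illustrative assumptions rather than the fitted values from the study.

    # Sketch of a vertically translated exponential decay rate curve:
    # the apparent rate falls from a high short-term (population-level) value
    # towards the long-term (phylogenetic) rate as calibration age increases.
    import math

    def apparent_rate(age_myr, long_term_rate, short_term_excess, decay_const):
        """Apparent rate (% per Myr) as a function of calibration age (Myr)."""
        return long_term_rate + short_term_excess * math.exp(-decay_const * age_myr)

    # Illustrative values only; the long-term rate of ~1.5 %/Myr matches the
    # primate figure quoted above, the other parameters are invented.
    print(apparent_rate(0.1, 1.5, 8.0, 2.0))  # high rate at a recent calibration
    print(apparent_rate(5.0, 1.5, 8.0, 2.0))  # approaches the long-term rate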

Relevance: 10.00%

Abstract:

Objectives: To evaluate differences among patients with different clinical features of ALS, we used our Bayesian method of motor unit number estimation (MUNE). Methods: We performed serial MUNE studies on 42 subjects who fulfilled the diagnostic criteria for ALS during the course of their illness. Subjects were classified into three subgroups according to whether they had typical ALS (with upper and lower motor neurone signs), predominantly upper motor neurone weakness with only minor LMN signs, or predominantly lower motor neurone weakness with only minor UMN signs. In all subjects we calculated the half-life of MUs, defined as the expected time for the number of MUs to halve, in one or more of the abductor digiti minimi (ADM), abductor pollicis brevis (APB) and extensor digitorum brevis (EDB) muscles. Results: The mean half-life of MUs was less in subjects who had typical ALS with both upper and lower motor neurone signs than in those with predominantly upper motor neurone weakness or predominantly lower motor neurone weakness. In 18 subjects we analysed the estimated size of the MUs and demonstrated the appearance of large MUs in subjects with upper or lower motor neurone predominant weakness. We found that the appearance of large MUs was correlated with the half-life of MUs. Conclusions: Patients with different clinical features of ALS have different rates of loss and different sizes of MUs. Significance: These findings could indicate differences in disease pathogenesis.
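
For reference, the half-life quoted here corresponds to the usual exponential-decline relationship (this simply restates the definition above in equation form; the notation is ours, not the paper's):

    N(t) = N_0 \, 2^{-t/T_{1/2}} = N_0 \, e^{-\lambda t}, \qquad T_{1/2} = \frac{\ln 2}{\lambda}

where N(t) is the estimated number of motor units at time t and λ is the corresponding exponential rate of loss.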

Relevance: 10.00%

Abstract:

We derive a semianalytical model to describe the interaction of a single photon emitter and a collection of arbitrarily shaped metal nanoparticles. The theory treats the metal nanoparticles classically within the electrostatic eigenmode method, wherein the surface plasmon resonances of collections of nanoparticles are represented by the hybridization of the plasmon modes of the noninteracting particles. The single photon emitter is represented by a quantum mechanical two-level system that exhibits line broadening due to a finite spontaneous decay rate. Plasmon-emitter coupling is described by solving the resulting Bloch equations. We illustrate the theory by studying model systems consisting of a single emitter coupled to one, two, and three nanoparticles, and we also compare the predictions of our model to published experimental data.
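
As context for the Bloch-equation treatment mentioned above, the generic optical Bloch equations for a driven two-level emitter with spontaneous decay rate γ are shown below in one common convention (real Rabi frequency Ω, detuning Δ = ω_L - ω_0); in the paper's model the plasmonic environment enters through the local driving field and the modified decay rate, details not reproduced here:

    \dot{\rho}_{ee} = -\gamma\,\rho_{ee} + \frac{i\Omega}{2}\left(\rho_{eg} - \rho_{ge}\right), \qquad
    \dot{\rho}_{ge} = -\left(\frac{\gamma}{2} + i\Delta\right)\rho_{ge} - \frac{i\Omega}{2}\left(\rho_{ee} - \rho_{gg}\right)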

Relevance: 10.00%

Abstract:

Herbivory is generally regarded as negatively impacting on host plant fitness. Frugivorous insects, which feed directly on plant reproductive tissues, are predicted to be particularly damaging to hosts. We tested this prediction with the fruit fly, Bactrocera tryoni, by recording the impact of larval feeding on two direct (seed number and germination) and two indirect (fruit decay rate and attraction/deterrence of vertebrate frugivores) measures of host plant fitness. Experiments were done in the laboratory, glasshouse and tropical rainforest. We found no negative impact of larval feeding on seed number or germination for three test plants: tomato, capsicum and eggplant. Further, larval feeding accelerated the initiation of decay and increased the final level of fruit decay in tomatoes, apples, pawpaw and pear, a result considered to be beneficial to the fruit. In rainforest studies, native rodents preferred infested apples and pears over uninfested control fruit; however, there were no differences observed between treatments for tomato and pawpaw. For our study fruits, these results demonstrate that fruit fly larval infestation has neutral or beneficial impacts on the host plant, an outcome which may be largely influenced by the physical properties of the host. These results may contribute to explaining why fruit flies have not evolved the same level of host specialization generally observed for other herbivore groups.

Relevance: 10.00%

Abstract:

The Attentional Control Theory (ACT) proposes that high-anxious individuals maintain performance effectiveness (accuracy) at the expense of processing efficiency (response time), particularly in the two central executive functions of inhibition and shifting. In contrast, research has generally failed to consider the third executive function, updating. In the current study, seventy-five participants completed the Parametric Go/No-Go and n-back tasks, as well as the State-Trait Anxiety Inventory, in order to explore the effects of anxiety on attention. Results indicated that anxiety led to a decay in processing efficiency, but not in performance effectiveness, across all three central executive functions (inhibition, set-shifting and updating). Interestingly, participants with high levels of trait anxiety also exhibited impaired performance effectiveness on the n-back task designed to measure the updating function. Findings are discussed in relation to developing a new model of ACT that also includes the role of preattentive processes and dual-task coordination when exploring the effects of anxiety on task performance.