35 results for Processing methods
Abstract:
We discuss the application of TAP mean field methods, known from the statistical mechanics of disordered systems, to Bayesian classification with Gaussian processes. In contrast to previous applications, no knowledge about the distribution of inputs is needed. Simulation results for the Sonar data set are given.
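The TAP mean-field approximation described above is not available in standard libraries; as a rough stand-in, the sketch below uses scikit-learn's GaussianProcessClassifier (which relies on a Laplace approximation, a different approximate inference scheme) on synthetic data standing in for the 60-feature Sonar set, purely to illustrate the Bayesian GP classification task.

    # Illustrative sketch only: Laplace-approximation GP classification via
    # scikit-learn, NOT the TAP mean-field method of the paper. The synthetic
    # data is a stand-in for the Sonar benchmark (208 samples, 60 features).
    from sklearn.datasets import make_classification
    from sklearn.gaussian_process import GaussianProcessClassifier
    from sklearn.gaussian_process.kernels import RBF
    from sklearn.model_selection import train_test_split

    X, y = make_classification(n_samples=208, n_features=60, n_informative=10,
                               random_state=0)
    X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)

    gpc = GaussianProcessClassifier(kernel=1.0 * RBF(length_scale=1.0))
    gpc.fit(X_tr, y_tr)
    print("test accuracy:", gpc.score(X_te, y_te))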
Abstract:
Background & Aims: Esophageal hypersensitivity is thought to be important in the generation and maintenance of symptoms in noncardiac chest pain (NCCP). In this study, we explored the neurophysiologic basis of esophageal hypersensitivity in a cohort of NCCP patients. Methods: We studied 12 healthy controls (9 women; mean age, 37.1 ± 8.7 y) and 32 NCCP patients (23 women; mean age, 47.2 ± 10 y). All had esophageal manometry, esophageal evoked potentials to electrical stimulation, and NCCP patients had 24-hour ambulatory pH testing. Results: The NCCP patients had reduced pain thresholds (PT) (72.1 ± 19.4 vs 54.2 ± 23.6, P = .02) and increased P1 latencies (P1 = 105.5 ± 11.1 vs 118.1 ± 23.4, P = .02). Subanalysis showed that the NCCP group could be divided into 3 distinct phenotypic classifications. Group 1 had reduced pain thresholds in conjunction with normal/reduced latency P1 latencies (n = 9). Group 2 had reduced pain thresholds in conjunction with increased (>2.5 SD) P1 latencies (n = 7), and group 3 had normal pain thresholds in conjunction with either normal (n = 10) or increased (>2.5 SD, n = 3) P1 latencies. Conclusions: Normal esophageal evoked potential latencies with reduced PT, as seen in group 1 patients, is indicative of enhanced afferent transmission and therefore increased esophageal afferent pathway sensitivity. Increased esophageal evoked potential latencies with reduced PT in group 2 patients implies normal afferent transmission to the cortex but heightened secondary cortical processing of this information, most likely owing to psychologic factors such as hypervigilance. This study shows that NCCP patients with esophageal hypersensitivity may be subclassified into distinct phenotypic subclasses based on sensory responsiveness and objective neurophysiologic profiles. © 2006 by the American Gastroenterological Association.
Abstract:
Background/Aims: Positron emission tomography has been applied to study cortical activation during human swallowing, but it employs radio-isotopes, precluding repeated experiments, and has to be performed supine, making the task of swallowing difficult. Here we describe Synthetic Aperture Magnetometry (SAM) as a novel method of localising and imaging the brain's neuronal activity from magnetoencephalographic (MEG) signals, in order to study the cortical processing of human volitional swallowing in the more physiological seated position. Methods: In 3 healthy male volunteers (age 28–36), 151-channel whole-cortex MEG (Omega-151, CTF Systems Inc.) was recorded whilst seated during the conditions of repeated volitional wet swallowing (5 ml boluses at 0.2 Hz) or rest. SAM analysis was then performed using varying spatial filters (5–60 Hz) before co-registration with individual MRI brain images. Activation areas were then identified using standard stereotactic-space neuro-anatomical maps. In one subject, repeat studies were performed to confirm the initial study findings. Results: In all subjects, cortical activation maps for swallowing could be generated using SAM, the strongest activations being seen with 10–20 Hz filter settings. The main cortical activations associated with swallowing were in: sensorimotor cortex (BA 3,4), insular cortex and lateral premotor cortex (BA 6,8). Of relevance, each cortical region displayed consistent inter-hemispheric asymmetry, to one or other hemisphere, this being different for each region and for each subject. Intra-subject comparisons of activation localisation and asymmetry showed impressive reproducibility. Conclusion: SAM analysis using MEG is an accurate, repeatable, and reproducible method for studying the brain processing of human swallowing in a more physiological manner, and it provides novel opportunities for future studies of the brain-gut axis in health and disease.
Abstract:
In recent work we have developed a novel variational inference method for partially observed systems governed by stochastic differential equations. In this paper we provide a comparison of the Variational Gaussian Process Smoother with an exact solution computed using a Hybrid Monte Carlo approach to path sampling, applied to a stochastic double-well potential model. It is demonstrated that the variational smoother provides a very accurate estimate of the mean path, while the conditional variance is slightly underestimated. We conclude with some remarks on the advantages and disadvantages of the variational smoother. © 2008 Springer Science + Business Media LLC.
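For readers unfamiliar with the benchmark, the sketch below simulates a sample path of a stochastic double-well model with the Euler-Maruyama scheme; the drift 4x(1 - x^2) and the noise level are illustrative choices, not the parameters used in the paper. Noisy, sparse observations of such a path are what both the variational smoother and the Hybrid Monte Carlo reference would condition on.

    import numpy as np

    # Euler-Maruyama simulation of a stochastic double-well model,
    # dx = 4*x*(1 - x**2) dt + sigma dW  (drift and noise values are
    # illustrative, not taken from the paper).
    rng = np.random.default_rng(0)
    dt, T, sigma = 0.01, 20.0, 0.5
    n = int(T / dt)
    x = np.empty(n)
    x[0] = -1.0
    for k in range(n - 1):
        drift = 4.0 * x[k] * (1.0 - x[k] ** 2)
        x[k + 1] = x[k] + drift * dt + sigma * np.sqrt(dt) * rng.standard_normal()
    # x now holds a sample path that occasionally hops between the wells at +/-1.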
Abstract:
One of the major problems associated with communication via a loudspeaking telephone (LST) is that, using analogue processing, duplex transmission is limited to low-loss lines and produces a low acoustic output. An architecture for an instrument has been developed and tested which uses digital signal processing to provide duplex transmission between an LST and a telephone handset over most of the B.T. network. Digital adaptive filters are used in the duplex LST to cancel coupling between the loudspeaker and microphone, and across the transmit-to-receive paths of the 2-to-4-wire converter. Normal movement of a person in the acoustic path causes a loss of stability by increasing the level of coupling from the loudspeaker to the microphone, since there is a lag associated with the adaptive filters learning about a non-stationary path. Control of the loop stability and of the level of sidetone heard by the handset user is provided by a microprocessor, which continually monitors the system and regulates the gain. The result is a system which offers the best compromise available based on a set of measured parameters. A theory has been developed which gives the loop stability requirements based on the error between the parameters of the filter and those of the unknown path. The programme to develop a low-cost adaptive filter for the LST produced a unique architecture which has a number of features not available in any similar system. These include automatic compensation for the rate of adaptation over a 36 dB range of output level, 4 rates of adaptation (with a maximum of 465 dB/s), plus the ability to cascade up to 4 filters without loss of performance. A further theory has been developed to determine the adaptation which can be achieved using finite-precision arithmetic. This enabled the development of an architecture which distributed the normalisation required to achieve the optimum rate of adaptation over the useful input range. Comparison of theory and measurement for the adaptive filter shows very close agreement. A single experimental LST was built and tested on connections to handset telephones over the BT network. The LST demonstrated that duplex transmission was feasible using signal processing and produced a more comfortable means of communication between people than methods employing deep voice-switching to regulate the local-loop gain. However, with the current level of processing power, it is not a panacea, and attention must be directed toward the physical acoustic isolation between loudspeaker and microphone.
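The thesis hardware is described only at the architectural level above; as a purely illustrative sketch of the underlying principle, the normalised LMS loop below adapts a digital filter to cancel the loudspeaker-to-microphone coupling. The filter length, step size and simulated acoustic path are assumptions made for the example, not values from the thesis.

    import numpy as np

    # Minimal NLMS echo-canceller sketch: the loudspeaker signal x couples into
    # the microphone through an unknown acoustic path h; the adaptive filter w
    # learns an estimate of h and subtracts the predicted echo. All parameters
    # are illustrative, not the thesis hardware values.
    rng = np.random.default_rng(1)
    N = 20000                    # samples
    L = 64                       # adaptive filter length (taps)
    mu, eps = 0.5, 1e-6          # NLMS step size and regulariser

    x = rng.standard_normal(N)                                   # far-end (loudspeaker) signal
    h = rng.standard_normal(L) * np.exp(-np.arange(L) / 10.0)    # unknown echo path
    d = np.convolve(x, h)[:N] + 0.01 * rng.standard_normal(N)    # microphone signal

    w = np.zeros(L)
    e = np.zeros(N)
    for n in range(L, N):
        xb = x[n - L + 1:n + 1][::-1]            # most recent L loudspeaker samples
        y = w @ xb                               # predicted echo
        e[n] = d[n] - y                          # residual sent to the far end
        w += mu * e[n] * xb / (xb @ xb + eps)    # normalised LMS update
    # e should decay towards the microphone noise floor as w converges to h.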
Abstract:
With the extensive use of pulse modulation methods in telecommunications, much work has been done in the search for a better utilisation of the transmission channel. The present research is an extension of these investigations. A new modulation method, 'Variable Time-Scale Information Processing' (VTSIP), is proposed. The basic principles of this system have been established, and the main advantages and disadvantages investigated. With the proposed system, comparison circuits detect the instants at which the input signal voltage crosses predetermined amplitude levels. The time intervals between these occurrences are measured digitally and the results are temporarily stored before being transmitted. After reception, an inverse process enables the original signal to be reconstituted. The advantage of this system is that the irregularities in the rate of information contained in the input signal are smoothed out before transmission, allowing the use of a smaller transmission bandwidth. A disadvantage of the system is the time delay necessarily introduced by the storage process. Another disadvantage is a type of distortion caused by the finite store capacity. A simulation of the system has been made using a standard speech signal, to make some assessment of this distortion. It is concluded that the new system should be an improvement on existing pulse transmission systems, allowing the use of a smaller transmission bandwidth, but introducing a time delay.
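A minimal sketch of the level-crossing idea behind VTSIP is given below: comparison levels are fixed, the instants at which the signal crosses them are recorded, and a crude piecewise-linear reconstruction is made from those crossings. The test signal, number of levels and sampling rate are arbitrary choices for illustration; the storage, interval-coding and transmission stages are omitted.

    import numpy as np

    # Level-crossing sketch of the VTSIP principle: record the times at which
    # the input crosses a fixed set of amplitude levels, then reconstruct by
    # interpolation. All values are illustrative.
    fs = 8000.0
    t = np.arange(0, 0.05, 1.0 / fs)
    signal = np.sin(2 * np.pi * 300 * t) + 0.3 * np.sin(2 * np.pi * 800 * t)

    levels = np.linspace(-1.2, 1.2, 9)           # predetermined amplitude levels
    events = []                                  # (sample index, level crossed)
    for lev in levels:
        s = np.sign(signal - lev)
        idx = np.where(np.diff(s) != 0)[0]       # sample just before each crossing
        events.extend((i, lev) for i in idx)
    events.sort()                                # in VTSIP, crossing intervals + levels are transmitted

    # Crude reconstruction: piecewise-linear interpolation through the crossings.
    ti = np.array([t[i] for i, _ in events])
    vi = np.array([lev for _, lev in events])
    recon = np.interp(t, ti, vi)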
Abstract:
The main objectives of this research were to develop optimised chemical compositions and reactive processing conditions for grafting the functional monomer maleic anhydride (MA) onto polypropylene (PP), ethylene propylene diene monomer (EPDM) and mixtures of PP-EPDM, and to optimise synthetic routes for the production of PP/EPDM copolymers for the purpose of compatibilising PP/EPDM blends. The MA-functionalisation was achieved using an internal mixer in the presence of low concentrations (less than 0.01 molar ratio) of a free radical initiator. Various methods were used to purify MA-functionalised PP, and the grafting yield was determined using either FTIR or titrimetry. The grafting yield of MA alone was low, owing to its low free-radical reactivity towards polymer macroradicals, and the reaction was accompanied by severe degradation in the case of PP and crosslinking in the case of EPDM. In the case of MA-functionalised PP/EPDM, both degradation and crosslinking occurred, though not to a great extent. The use of a tri-functional coagent, e.g. trimethylolpropane triacrylate (TRIS), with MA led to a large improvement in the grafting yield of MA on the polymers. This is almost certainly due to the high free-radical activity of TRIS, leading to copolymerisation of MA and TRIS followed by grafting of the copolymer onto the polymer backbone. In the case of PP, the use of the coagent was also found to reduce polymer degradation. PP/EPDM copolymers with optimum tensile properties were synthesised using a 'one-step' continuous reactive processing procedure. This was achieved firstly by functionalisation of a mixture of PP (higher w/w ratio) and EPDM (lower w/w ratio) with MA, in the presence of the coagent TRIS and a small concentration of a free radical initiator. This was then followed by an imidisation reaction with the interlinking agent hexamethylene diamine (HEMDA). Small amounts of copolymer, up to 5 phr, interlinked with up to 15 phr of HEMDA, were sufficient to compatibilise PP/EPDM 75/25 blends, resulting in excellent tensile properties compared to the binary PP/EPDM 75/25 blend. Improvement in blend compatibility and phase stabilisation (observed through tensile and SEM analysis) was shown in all cases, with significant improvement in interphase adhesion between PP and EPDM and a reduction in domain size across the fractured surface, indicating efficient distribution of the compatibiliser.
Abstract:
Reproducible preparation of a number of modified clay and clay-like materials by both conventional and microwave-assisted chemistry, and their subsequent characterisation, has been achieved. These materials are designed as hydrocracking catalysts for the upgrading of liquids obtained by the processing of coal. Contact with both coal-derived liquids and heavy petroleum resids has demonstrated that these catalysts are superior to established proprietary catalysts in terms of both initial activity and deactivation resistance. Particularly active were a chromium-pillared montmorillonite and a tin-intercalated laponite. Layered Double Hydroxides (LDHs) have exhibited encouraging thermal stability. Development of novel methods for hydrocracking coal-derived liquids, using a commercial microwave oven, modified reaction vessels and coal model compounds, has been attempted. Whilst safe and reliable operation of a high-pressure microwave "bomb" apparatus employing hydrogen has been achieved, no hydrotreatment reactions occurred.
Abstract:
Introduction: The requirement for adjuvants in subunit protein vaccination is well known, yet their mechanisms of action remain elusive. Of the numerous mechanisms suggested, cationic liposomes appear to fulfil at least three: the antigen depot effect, the delivery of antigen to antigen presenting cells (APCs) and finally the danger signal. We have investigated the role of the antigen depot effect with the use of dual radiolabelling, whereby adjuvant and antigen presence in tissues can be quantified. In our studies a range of cationic liposomes and different antigens were studied to determine the importance of physical properties such as liposome surface charge, antigen association and inherent lipid immunogenicity. More recently we have investigated the role of liposome size with the cationic liposome formulation DDA:TDB, composed of the cationic lipid dimethyldioctadecylammonium (DDA) and the synthetic mycobacterial glycolipid trehalose 6,6’-dibehenate (TDB). Vesicle size is a frequently investigated parameter which is known to result in different routes of endocytosis. It has been postulated that targeting different routes leads to the activation of different intracellular signalling pathways, and it is certainly true that numerous studies have shown vesicle size to have an effect on the resulting immune responses (e.g. Th1 vs. Th2). Aim: To determine the effect of cationic liposome size on the biodistribution of adjuvant and antigen, the ensuing humoral and cell-mediated immune responses, and the uptake of antigen by, and activation of, APCs including macrophages and dendritic cells. Methods: DDA:TDB liposomes were made to three different sizes (~0.2, 0.5 and 2 µm), followed by the addition of the tuberculosis antigen Ag85B-ESAT-6, resulting in surface adsorption. Liposome formulations were injected into BALB/c or C57BL/6 mice via the intramuscular route. The biodistribution of the liposome formulations was followed using dual radiolabelling. Tissues including muscle from the site of injection and local draining lymph nodes were removed, and liposome and antigen presence quantified. Mice were also immunised with the different vaccine formulations, and cytokine production (from Ag85B-ESAT-6-restimulated splenocytes) and antibody presence in blood were assayed. Furthermore, splenocyte proliferation after restimulation with Ag85B-ESAT-6 was measured. Finally, APCs were compared for their ability to endocytose vaccine formulations, and the effect this had on the maturation status of the cell populations was compared. Flow cytometry and fluorescence labelling were used to investigate maturation marker up-regulation and efficacy of phagocytosis. Results: Our results show that for an efficient Ag85B-ESAT-6 antigen depot at the injection site, liposomes composed of DDA and TDB are required. There is no significant change in the presence of liposome or antigen at 6 h or 24 h p.i., nor does liposome size have an effect. Approximately 0.05% of the injected liposome dose is detected in the local draining lymph node 24 h p.i.; however, protein presence is low (<0.005% of dose). Preliminary in vitro data show liposome and antigen endocytosis by macrophages; further studies on this will be presented in addition to the results of the immunisation study.
Abstract:
Recent advances in technology have produced a significant increase in the availability of free sensor data over the Internet. With affordable weather monitoring stations now available to individual meteorology enthusiasts, a reservoir of real-time data such as temperature, rainfall and wind speed can be obtained for most of the United States and Europe. Despite the abundance of available data, obtaining usable information about the weather in your local neighbourhood requires complex processing that poses several challenges. This paper discusses a collection of technologies and applications that harvest, refine and process this data, culminating in information that has been tailored toward the user. In this case we are particularly interested in allowing a user to make direct queries about the weather at any location, even when this is not directly instrumented, using interpolation methods. We also consider how the uncertainty that the interpolation introduces can then be communicated to the user of the system, using UncertML, a developing standard for uncertainty representation.
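As an indication of the kind of query the paper describes, the sketch below interpolates a temperature estimate at an arbitrary location from nearby stations using inverse-distance weighting, with a crude spread measure as an uncertainty proxy. The station coordinates and readings are invented; the paper's actual interpolation method and UncertML encoding are not reproduced here.

    import numpy as np

    # Illustrative inverse-distance-weighting query: estimate the temperature at
    # an uninstrumented location from surrounding stations. Coordinates and
    # readings are made up for the example.
    stations = np.array([[52.48, -1.90], [52.45, -1.75], [52.60, -1.95], [52.40, -2.05]])
    temps = np.array([11.2, 10.8, 9.9, 11.5])    # degrees C

    def idw(query, coords, values, power=2.0):
        d = np.linalg.norm(coords - query, axis=1)
        if np.any(d < 1e-9):                     # query coincides with a station
            return values[np.argmin(d)], 0.0
        w = 1.0 / d ** power
        w /= w.sum()
        est = w @ values
        # Weighted spread of residuals as a rough uncertainty proxy; UncertML
        # would carry a proper probability distribution instead.
        spread = np.sqrt(w @ (values - est) ** 2)
        return est, spread

    est, spread = idw(np.array([52.50, -1.85]), stations, temps)
    print(f"estimated temperature {est:.1f} C (indicative spread {spread:.1f} C)")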
Abstract:
Removing noise from signals which are piecewise constant (PWC) is a challenging signal processing problem that arises in many practical scientific and engineering contexts. In the first paper (part I) of this series of two, we presented background theory building on results from the image processing community to show that the majority of these algorithms, and more proposed in the wider literature, are each associated with a special case of a generalized functional that, when minimized, solves the PWC denoising problem. That paper also showed how the minimizer can be obtained by a range of computational solver algorithms. In this second paper (part II), using the understanding developed in part I, we introduce several novel PWC denoising methods, which, for example, combine the global behaviour of mean shift clustering with the local smoothing of total variation diffusion, and show example solver algorithms for these new methods. Comparisons between these methods are performed on synthetic and real signals, revealing that our new methods have a useful role to play. Finally, overlaps between the generalized methods of these two papers and others such as wavelet shrinkage, hidden Markov models, and piecewise smooth filtering are touched on.
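For orientation, the sketch below minimises a smoothed 1-D total-variation functional by gradient descent on a noisy piecewise-constant signal. It illustrates only the basic TV term that the paper's hybrid methods build on, not the new methods themselves, and all parameter values are arbitrary.

    import numpy as np

    # Basic PWC denoising sketch: gradient descent on
    # 0.5*||x - y||^2 + lam * sum_i sqrt((x[i+1]-x[i])^2 + eps),
    # a smoothed total-variation functional (illustrative, not the paper's methods).
    def tv_denoise(y, lam=1.0, step=0.01, iters=5000, eps=1e-3):
        x = y.copy()
        for _ in range(iters):
            d = np.diff(x)
            g = d / np.sqrt(d ** 2 + eps)        # derivative of the smoothed |d| terms
            tv_grad = np.concatenate(([-g[0]], g[:-1] - g[1:], [g[-1]]))
            x -= step * ((x - y) + lam * tv_grad)
        return x

    rng = np.random.default_rng(2)
    clean = np.repeat([0.0, 2.0, -1.0, 1.0], 100)           # piecewise-constant signal
    noisy = clean + 0.3 * rng.standard_normal(clean.size)
    denoised = tv_denoise(noisy)                             # approximately recovers the levels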
Abstract:
Recent advances in our ability to watch the molecular and cellular processes of life in action, such as atomic force microscopy, optical tweezers and Förster fluorescence resonance energy transfer, raise challenges for digital signal processing (DSP) of the resulting experimental data. This article explores the unique properties of such biophysical time series that set them apart from other signals, such as the prevalence of abrupt jumps and steps, multi-modal distributions and autocorrelated noise. It exposes the problems with classical linear DSP algorithms applied to this kind of data, and describes new nonlinear and non-Gaussian algorithms that are able to extract information that is of direct relevance to biological physicists. It is argued that these new methods applied in this context typify the nascent field of biophysical DSP. Practical experimental examples are supplied.
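As a small illustration of the nonlinear processing argued for above, the sketch below applies a running-median filter (edge-preserving, unlike a linear low-pass) to a noisy step-like trace and flags large jumps in the filtered output as candidate steps. The trace, window length and threshold are invented for the example and are not from the article.

    import numpy as np
    from scipy.signal import medfilt

    # Illustrative step detection on a noisy, step-like biophysical trace:
    # a running median suppresses noise without blurring abrupt jumps.
    rng = np.random.default_rng(3)
    steps = np.repeat([0.0, 1.0, 3.0, 2.0], 250)             # idealised stepping signal
    trace = steps + 0.4 * rng.standard_normal(steps.size)    # noisy measured trace

    filtered = medfilt(trace, kernel_size=21)                # edge-preserving smoother
    jumps = np.abs(np.diff(filtered))
    candidates = np.where(jumps > 0.5)[0]                    # indices clustered near step edges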
Abstract:
This research investigates specific ash control methods to limit the inorganic content of biomass prior to fast pyrolysis, and the effect of specific ash components on fast pyrolysis processing, mass balance yields, and bio-oil quality and stability. Inorganic content in miscanthus was naturally reduced over the winter period from June (7.36 wt.%) to February (2.80 wt.%) due to a combination of senescence and natural leaching by rain water. The September harvest produced similar mass balance yields, bio-oil quality and stability compared to the February (conventional) harvest, but the nitrogen content in the above-ground crop was too high (208 kg ha-1) to maintain sustainable crop production. Deionised water, 1.00% HCl and 0.10% Triton X-100 washes were used to reduce the inorganic content of miscanthus. Miscanthus washed with 0.10% Triton X-100 resulted in the highest total liquid yield (76.21 wt.%) and the lowest char and reaction water yields (9.77 wt.% and 8.25 wt.% respectively). Concentrations of Triton X-100 were varied to study further effects on mass balance yields and bio-oil stability. All concentrations of Triton X-100 increased the total liquid yield and decreased the char and reaction water yields compared to untreated miscanthus. In terms of bio-oil stability, 1.00% Triton X-100 produced the most stable bio-oil, with the lowest viscosity index (2.43) and lowest water content index (1.01). Beech wood was impregnated with potassium and phosphorus, resulting in lower liquid yields and increased char and gas yields due to their catalytic effect on the fast pyrolysis product distribution. Increased potassium and phosphorus concentrations produced less stable bio-oils, with viscosity and water content indices increasing. Fast pyrolysis processing of phosphorus-impregnated beech wood was problematic, as the reactor bed material agglomerated into large clumps due to char formation within the reactor, affecting fluidisation and heat transfer.
Abstract:
The research presented in this thesis was developed as part of DIBANET, an EC funded project aiming to develop an energetically self-sustainable process for the production of diesel-miscible biofuels (i.e. ethyl levulinate) via acid hydrolysis of selected biomass feedstocks. Three thermal conversion technologies, pyrolysis, gasification and combustion, were evaluated in the present work with the aim of recovering the energy stored in the acid hydrolysis solid residue (AHR). Mainly consisting of lignin and humins, the AHR can contain up to 80% of the energy in the original feedstock. Pyrolysis of AHR proved unsatisfactory, so attention focussed on gasification and combustion with the aim of producing heat and/or power to supply the energy demanded by the ethyl levulinate production process. A thermal processing rig consisting of a Laminar Entrained Flow Reactor (LEFR) equipped with solid and liquid collection and online gas analysis systems was designed and built to explore pyrolysis, gasification and air-blown combustion of AHR. The maximum liquid yield for pyrolysis of AHR was 30 wt% with a volatile conversion of 80%. The gas yield for AHR gasification was 78 wt%, with 8 wt% tar yield and conversion of volatiles close to 100%. 90 wt% of the AHR was transformed into gas by combustion, with volatile conversions above 90%. Gasification with 5 vol% O2 / 95 vol% N2 resulted in a nitrogen-diluted, low heating value gas (2 MJ/m3). Steam and oxygen-blown gasification of AHR were additionally investigated in a batch gasifier at KTH in Sweden. Steam promoted the formation of hydrogen (25 vol%) and methane (14 vol%), improving the gas heating value to 10 MJ/m3, below the typical value for steam gasification due to equipment limitations. Arrhenius kinetic parameters were calculated using data collected with the LEFR to provide reaction rate information for process design and optimisation. The activation energy (EA) and pre-exponential factor (k0 in s-1) for pyrolysis (EA = 80 kJ/mol, ln k0 = 14), gasification (EA = 69 kJ/mol, ln k0 = 13) and combustion (EA = 42 kJ/mol, ln k0 = 8) were calculated after linearly fitting the data using the random pore model. Kinetic parameters for pyrolysis and combustion were also determined by dynamic thermogravimetric analysis (TGA), including studies of the original biomass feedstocks for comparison. Results obtained by differential and integral isoconversional methods for activation energy determination were compared. The activation energy calculated by the Vyazovkin method was 103-204 kJ/mol for pyrolysis of untreated feedstocks and 185-387 kJ/mol for AHRs. The combustion activation energy was 138-163 kJ/mol for biomass and 119-158 kJ/mol for AHRs. The non-linear least squares method was used to determine the reaction model and pre-exponential factor. Pyrolysis and combustion of biomass were best modelled by a combination of third order reaction and 3-dimensional diffusion models, while the AHR decomposed following the third order reaction model for pyrolysis and the 3-dimensional diffusion model for combustion.
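The Arrhenius fitting step mentioned above reduces to a straight-line fit of ln k against 1/T, with slope -EA/R and intercept ln k0. The sketch below shows that calculation on invented rate-constant data, not on the thesis measurements.

    import numpy as np

    # Illustrative Arrhenius fit: ln k = ln k0 - EA/(R*T), so a linear fit of
    # ln k against 1/T gives EA from the slope and ln k0 from the intercept.
    # The k and T values here are invented for the example.
    R = 8.314                                                 # J mol-1 K-1
    T = np.array([700.0, 750.0, 800.0, 850.0, 900.0])         # K
    k = np.array([0.012, 0.035, 0.090, 0.210, 0.450])         # s-1 (illustrative)

    slope, intercept = np.polyfit(1.0 / T, np.log(k), 1)
    EA = -slope * R / 1000.0                                  # kJ/mol
    print(f"EA = {EA:.0f} kJ/mol, ln k0 = {intercept:.1f}")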