30 results for two-Gaussian mixture model
Abstract:
National guidance and clinical guidelines recommended multidisciplinary teams (MDTs) for cancer services in order to bring specialists in relevant disciplines together, to ensure clinical decisions are fully informed, and to coordinate care effectively. However, the effectiveness of cancer teams had not previously been evaluated systematically. A random sample of 72 breast cancer teams in England was studied (548 members in six core disciplines), stratified by region and caseload. Information about team constitution, processes, effectiveness, clinical performance, and members' mental well-being was gathered using appropriate instruments. Two input variables, team workload (P=0.009) and the proportion of breast care nurses (P=0.003), positively predicted overall clinical performance in multivariate analysis using a two-stage regression model. There were significant correlations between individual team inputs, team composition variables, and clinical performance. Some disciplines consistently perceived their team's effectiveness differently from the mean. Teams with shared leadership of their clinical decision-making were most effective. The mental well-being of team members appeared significantly better than in previous studies of cancer clinicians, the NHS, and the general population. This study established that team composition, working methods, and workloads are related to measures of effectiveness, including the quality of clinical care. © 2003 Cancer Research UK.
Abstract:
In experiments reported elsewhere at this conference, we have revealed two striking results concerning binocular interactions in a masking paradigm. First, at low mask contrasts, a dichoptic masking grating produces a small facilitatory effect on the detection of a similar test grating. Second, the psychometric slope for dichoptic masking starts high (Weibull β~4) at detection threshold, becomes low (β~1.2) in the facilitatory region, and then unusually steep at high mask contrasts (β~5.5). Neither of these results is consistent with Legge's (1984 Vision Research 24 385-394) model of binocular summation, but they are predicted by a two-stage gain control model in which interocular suppression precedes binocular summation. Here, we pose a further challenge for this model by using a 'twin-mask' paradigm (cf Foley, 1994 Journal of the Optical Society of America A 11 1710-1719). In 2AFC experiments, observers detected a patch of grating (1 cycle deg^-1, 200 ms) presented to one eye in the presence of a pedestal in the same eye and a spatially identical mask in the other eye. The pedestal and mask contrasts varied independently, producing a two-dimensional masking space in which the orthogonal axes (10×10 contrasts) represent conventional dichoptic and monocular masking. The resulting surface (100 thresholds) confirmed and extended the observations above, and fixed the six parameters in the model, which fitted the data well. With no adjustment of parameters, the model described performance in a further experiment where mask and test were presented to both eyes. Moreover, in both model and data, binocular summation was greater than a factor of √2 at detection threshold. We conclude that this two-stage nonlinear model, with interocular suppression, gives a good account of early binocular processes in the perception of contrast. [Supported by EPSRC Grant Reference: GR/S74515/01]
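A minimal sketch of this kind of two-stage model, with interocular suppression applied before binocular summation. The parameter values, detection criterion, and threshold-search procedure are illustrative assumptions, not the six fitted values referred to above; with these placeholder values the sketch reproduces the qualitative finding that binocular summation at detection threshold exceeds √2.

def two_stage_response(c_left, c_right, m=1.3, s=1.0, w=1.0, p=8.0, q=6.5, z=0.2):
    """Binocular response to contrasts shown to the left and right eyes (placeholder parameters)."""
    # Stage 1: each eye's signal is divisively suppressed by the contrast in the other eye.
    r_left = c_left ** m / (s + c_left + w * c_right)
    r_right = c_right ** m / (s + c_right + w * c_left)
    # Binocular summation of the two monocular outputs.
    b = r_left + r_right
    # Stage 2: accelerating-then-compressive output nonlinearity.
    return b ** p / (z + b ** q)

def detection_threshold(present_to_both_eyes, criterion=0.05):
    """Smallest test contrast whose response change from baseline reaches the criterion."""
    baseline = two_stage_response(0.0, 0.0)
    c = 0.01
    while c < 100.0:
        response = two_stage_response(c, c if present_to_both_eyes else 0.0)
        if response - baseline >= criterion:
            return c
        c *= 1.01
    return float("nan")

mono_thresh = detection_threshold(present_to_both_eyes=False)
bino_thresh = detection_threshold(present_to_both_eyes=True)
print(mono_thresh / bino_thresh)  # summation ratio; above sqrt(2) (~1.41) for these placeholder values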
Abstract:
This work introduces a new variational Bayes data assimilation method for the stochastic estimation of precipitation dynamics using radar observations for short-term probabilistic forecasting (nowcasting). A previously developed spatial rainfall model based on the decomposition of the observed precipitation field using a basis function expansion captures the precipitation intensity from radar images as a set of ‘rain cells’. The prior distributions for the basis function parameters are carefully chosen to have a conjugate structure for the precipitation field model to allow a novel variational Bayes method to be applied to estimate the posterior distributions in closed form, based on solving an optimisation problem, in a spirit similar to 3D-Var analysis, but seeking approximations to the posterior distribution rather than simply the most probable state. A hierarchical Kalman filter is used to estimate the advection field based on the assimilated precipitation fields at two times. The model is applied to tracking precipitation dynamics in a realistic setting, using UK Met Office radar data from both a summer convective event and a winter frontal event. The performance of the model is assessed both traditionally and using probabilistic measures of fit based on ROC curves. The model is shown to provide very good assimilation characteristics, and promising forecast skill. Improvements to the forecasting scheme are discussed.
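As a rough illustration of the kind of basis-function decomposition described, the sketch below represents a radar field as a sum of isotropic Gaussian 'rain cells'. The Gaussian form, parameter names, and values are assumptions for illustration only; the paper's actual basis functions and priors may differ.

import numpy as np

def rain_field(x, y, cells):
    """Precipitation intensity on grid points (x, y); each cell is (amplitude, centre_x, centre_y, width)."""
    field = np.zeros_like(x, dtype=float)
    for amp, cx, cy, width in cells:
        field += amp * np.exp(-((x - cx) ** 2 + (y - cy) ** 2) / (2.0 * width ** 2))
    return field

# Example: two rain cells on a 100 km x 100 km grid with 1 km spacing (illustrative values).
xx, yy = np.meshgrid(np.arange(0.0, 100.0), np.arange(0.0, 100.0))
cells = [(5.0, 30.0, 40.0, 8.0),    # (peak mm/h, x km, y km, width km)
         (2.0, 70.0, 60.0, 12.0)]
print(rain_field(xx, yy, cells).max())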
Abstract:
The paper presents a comparison between the different drag models for granular flows developed in the literature and the effect of each one of them on the fast pyrolysis of wood. The process takes place in a 100 g/h lab-scale bubbling fluidized bed reactor located at Aston University. FLUENT 6.3 is used as the modeling framework of the fluidized bed hydrodynamics, while the fast pyrolysis of the discrete wood particles is incorporated as an external user defined function (UDF) hooked to FLUENT’s main code structure. Three different drag models for granular flows are compared, namely the Gidaspow, Syamlal-O’Brien, and Wen-Yu models, already incorporated in FLUENT’s main code, and their impact on particle trajectory, heat transfer, degradation rate, product yields, and char residence time is quantified. The Eulerian approach is used to model the bubbling behavior of the sand, which is treated as a continuum. Biomass reaction kinetics is modeled according to the literature using a two-stage, semiglobal model that takes into account secondary reactions.
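For reference, a sketch of the Wen-Yu interphase momentum-exchange coefficient in the form commonly quoted for Eulerian granular-flow models. The coefficients should be checked against the FLUENT theory guide for the version in use, and the example property values are illustrative, not the reactor's actual conditions.

def wen_yu_exchange(eps_g, rho_g, mu_g, d_s, slip_velocity):
    """Gas-solid momentum exchange coefficient K_sg (kg m^-3 s^-1) for gas fraction eps_g, particle diameter d_s (m)."""
    eps_s = 1.0 - eps_g                                   # solids volume fraction
    re = rho_g * d_s * abs(slip_velocity) / mu_g          # particle Reynolds number
    re_eff = eps_g * re
    if re_eff < 1000.0:
        c_d = 24.0 / max(re_eff, 1e-12) * (1.0 + 0.15 * re_eff ** 0.687)
    else:
        c_d = 0.44
    return 0.75 * c_d * eps_s * eps_g * rho_g * abs(slip_velocity) / d_s * eps_g ** -2.65

# Example: 440 um sand fluidized by hot gas (all property values are illustrative placeholders).
print(wen_yu_exchange(eps_g=0.6, rho_g=0.45, mu_g=3.5e-5, d_s=440e-6, slip_velocity=0.3))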
Abstract:
With the increasing importance of Foreign Direct Investment (FDI), there have been substantial studies on this issue, both empirical and theoretical. However, most existing studies focus on either the impacts of FDI presence or the determinants of FDI inflows, ignoring the fact that inward FDI and economic development may simultaneously affect each other. This thesis sets out to examine the interactive effects between FDI and economic development. The whole thesis is composed of five chapters. Chapter One is an overall introduction to the thesis. Chapter Two presents a theoretical study, and Chapters Three and Four provide two empirical studies. Chapter Five concludes. Chapter Two presents a theoretical two-sector model that features the importance of human capital in attracting foreign investment. This model theoretically explains why FDI is more likely to occur among countries that are similar in terms of human capital and technology. On the other hand, MNCs must train local employees to work with firm-specific technology and hence improve the technological skills of local workers. In Chapter Three, an empirical model is constructed to detect whether the productivities of foreign and local firms impact each other. The model is tested on China’s data at the industry level. The results indicate that the productivity growth of local and foreign firms is jointly determined. Evidence also suggests that the extent to which spillovers occur varies with the technology levels of local firms. Chapter Four investigates the relationship between FDI and economic growth based on a panel of data for 84 countries over the period 1970-1999. Both equations of FDI inflow and GDP growth are examined. The results indicate that FDI not only directly promotes economic growth by itself, but also indirectly does so via its interaction terms. There is a strong positive interaction effect of FDI with human capital and a strong negative interaction effect of FDI with technology gap on economic growth in developing countries.
Abstract:
Cardiovascular disease (CVD) continues to be one of the top causes of mortality in the world. The World Health Organization (WHO) reported that in 2004, CVD contributed to almost 30% of deaths from an estimated worldwide death figure of 58 million [1]. Heart failure (HF) treatment varies from lifestyle adjustment to heart transplantation; its aims are to reduce HF symptoms, prolong patient survival, and minimize risk [2]. One alternative available on the market for HF treatment is the Left Ventricular Assist Device (LVAD). The Chronic Intermittent Mechanical Support (CIMS) device is a novel LVAD for heart failure treatment using counterpulsation, similar to the Intra-Aortic Balloon Pump (IABP). However, the implantation site of the CIMS balloon is in the ascending aorta, just distal to the aortic valve, in contrast with the IABP in the descending aorta. Counterpulsation coupled with implantation close to the aortic valve enables comparable flow augmentation with reduced balloon volume. Two prototypes of the CIMS balloon were constructed using rapid prototyping: the straight-body model is a cylindrical tube with a silicone membrane lining with zero expansive compliance, while the compliant-body model has a bulging structure that allows the membrane to expand under native systolic pressure, increasing the device’s static compliance to 1.5 mL/mmHg. This study examined the effect of device compliance and vascular compliance on counterpulsating flow augmentation. Both prototypes were tested on a two-element Windkessel model human mock circulatory loop (MCL). The devices were placed just distal to the aortic valve and left coronary artery. The MCL mimicked HF with a cardiac output of 3 L/min, left ventricular pressure of 85/15 mmHg, aortic pressure of 70/50 mmHg, and left coronary artery flow rate of 66 mL/min. The mean arterial pressure (MAP) was calculated to be 57 mmHg. Arterial compliance was set to 1.25 mL/mmHg and 2.5 mL/mmHg. Inflation of the balloon was triggered at the dicrotic notch, while deflation occurred at minimum aortic pressure prior to systole. Important haemodynamic parameters such as left ventricular pressure (LVP), aortic pressure (AoP), cardiac output (CO), left coronary artery flow rate (QcorMean), and dP (peak aortic diastolic augmentation pressure – AoPmax) were simultaneously recorded for both non-assisted and assisted modes. ANOVA was used to analyse the effect of both factors (balloon and arterial compliance) on flow augmentation. The results showed that for cardiac output and left coronary artery flow rate, there were significant differences between balloon and arterial compliance at p < 0.001. Cardiac output recorded a maximum increase of 18% for the compliant body and stiff arterial compliance. Left coronary artery flow rate also recorded around a 20% increase due to the compliant body and stiffer arterial compliance. Resistance to blood ejection recorded the highest difference for the combination of the straight body and stiffer arterial compliance. From these results it is clear that both balloon and arterial compliance are statistically significant factors for flow augmentation in the peripheral artery and reduction of resistance. Although the result for resistance reduction was different from flow augmentation, these results serve as an important aspect which will influence the future design of the CIMS balloon and its control strategy. References: 1. Mathers C, Boerma T, Fat DM. The Global Burden of Disease: 2004 Update. Geneva: World Health Organization; 2008. 2. Jessup M, Brozena S. Heart Failure. N Engl J Med 2003;348:2007-18.
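A minimal sketch of the two-element Windkessel idea behind the mock circulatory loop, integrated with forward Euler. The resistance, compliance, flow waveform, and initial pressure are illustrative placeholders (the resistance is simply chosen so that the mean pressure lands near the 57 mmHg quoted above), not the MCL's actual settings.

import numpy as np

def windkessel_2e(q_in, dt, r, c, p0):
    """Integrate C*dP/dt = Q_in(t) - P/R; P in mmHg, Q in mL/s, R in mmHg.s/mL, C in mL/mmHg."""
    p = np.empty_like(q_in)
    p[0] = p0
    for i in range(1, len(q_in)):
        p[i] = p[i - 1] + dt * (q_in[i - 1] - p[i - 1] / r) / c
    return p

# Example: ~3 L/min (50 mL/s) mean flow delivered as a half-sine ejection during each 0.8 s beat.
dt = 1e-3
t = np.arange(0.0, 10.0, dt)
phase = t % 0.8
amplitude = 50.0 * (np.pi / 2.0) * (0.8 / 0.3)          # scaled so the beat-averaged flow is ~50 mL/s
q = np.where(phase < 0.3, amplitude * np.sin(np.pi * phase / 0.3), 0.0)
print(windkessel_2e(q, dt, r=1.14, c=1.25, p0=60.0).mean())   # approximate mean arterial pressure, mmHg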
Abstract:
The main objective of the project is to enhance the already effective health and usage monitoring system (HUMS) for helicopters by analysing structural vibrations to recognise different flight conditions directly from sensor information. The goal of this paper is to develop a new method to select those sensors and frequency bands that are best for detecting changes in flight conditions. We projected frequency information to a 2-dimensional space in order to visualise flight-condition transitions using the Generative Topographic Mapping (GTM) and a variant which supports simultaneous feature selection. We created an objective measure of the separation between different flight conditions in the visualisation space by calculating the Kullback-Leibler (KL) divergence between Gaussian mixture models (GMMs) fitted to each class: the higher the KL-divergence, the better the interclass separation. To find the optimal combination of sensors, they were considered in pairs, triples and groups of four sensors. The sensor triples provided the best result in terms of KL-divergence. We also found that the use of a variational training algorithm for the GMMs gave more reliable results.
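A sketch of the separation measure described above: fit one GMM per flight condition in the 2-dimensional visualisation space and estimate the KL divergence between the two mixtures by Monte Carlo sampling, since no closed form exists for GMMs. The data, component counts, and use of scikit-learn's GaussianMixture are placeholders standing in for the projected sensor data and the variational training mentioned in the abstract.

import numpy as np
from sklearn.mixture import GaussianMixture

def kl_between_gmms(gmm_p, gmm_q, n_samples=50000):
    """Monte Carlo estimate of KL(p || q) between two fitted GaussianMixture models."""
    samples, _ = gmm_p.sample(n_samples)
    return float(np.mean(gmm_p.score_samples(samples) - gmm_q.score_samples(samples)))

# Placeholder data standing in for the 2-D projections of two flight conditions.
rng = np.random.default_rng(0)
condition_a = rng.normal([0.0, 0.0], 0.5, size=(500, 2))
condition_b = rng.normal([2.0, 1.0], 0.7, size=(500, 2))
gmm_a = GaussianMixture(n_components=2, random_state=0).fit(condition_a)
gmm_b = GaussianMixture(n_components=2, random_state=0).fit(condition_b)
print(kl_between_gmms(gmm_a, gmm_b))    # larger values indicate better separation between the classes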
Abstract:
This paper draws upon activity theory to analyse an empirical investigation of the micro practices of strategy in three UK universities. Activity theory provides a framework of four interactive components from which strategy emerges: the collective structures of the organization, the primary actors, in this research conceptualized as the top management team (TMT), the practical activities in which they interact, and the strategic practices through which interaction is conducted. Using this framework, the paper focuses specifically on the formal strategic practices involved in direction setting, resource allocation, and monitoring and control. These strategic practices are associated with continuity of strategic activity in one case study but are involved in the reinterpretation and change of strategic activity in the other two cases. We model this finding into activity theory-based typologies of the cases that illustrate the way that practices either distribute shared interpretations or mediate between contested interpretations of strategic activity. The typologies explain the relationships between strategic practices and continuity and change of strategy as practice. The paper concludes by linking activity theory to wider change literatures to illustrate its potential as an integrative methodological framework for examining the subjective and emergent processes through which strategic activity is constructed. © Blackwell Publishing Ltd 2003.
Abstract:
Optimal design for parameter estimation in Gaussian process regression models with input-dependent noise is examined. The motivation stems from the area of computer experiments, where computationally demanding simulators are approximated using Gaussian process emulators to act as statistical surrogates. In the case of stochastic simulators, which produce a random output for a given set of model inputs, repeated evaluations are useful, supporting the use of replicate observations in the experimental design. The findings are also applicable to the wider context of experimental design for Gaussian process regression and kriging. Designs are proposed with the aim of minimising the variance of the Gaussian process parameter estimates. A heteroscedastic Gaussian process model is presented which allows for an experimental design technique based on an extension of Fisher information to heteroscedastic models. It is empirically shown that the error of the approximation of the parameter variance by the inverse of the Fisher information is reduced as the number of replicated points is increased. Through a series of simulation experiments on both synthetic data and a systems biology stochastic simulator, optimal designs with replicate observations are shown to outperform space-filling designs both with and without replicate observations. Guidance is provided on best practice for optimal experimental design for stochastic response models. © 2013 Elsevier Inc. All rights reserved.
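A sketch of the Fisher-information criterion underlying such designs: for a zero-mean Gaussian process with covariance K(θ), the information about the covariance parameters is I_ij = 0.5 tr(K⁻¹ ∂K/∂θ_i K⁻¹ ∂K/∂θ_j), and a design can be scored, for example, by the determinant of I (higher is better for parameter estimation). The squared-exponential kernel, finite-difference derivatives, and parameter values below are illustrative; the paper's heteroscedastic model adds an input-dependent noise term.

import numpy as np

def sq_exp_cov(X, theta):
    """Squared-exponential kernel plus nugget; theta = (log variance, log lengthscale, log noise)."""
    var, ell, noise = np.exp(theta)
    d2 = (X[:, None, :] - X[None, :, :]) ** 2
    return var * np.exp(-0.5 * d2.sum(-1) / ell ** 2) + noise * np.eye(len(X))

def fisher_information(X, theta, eps=1e-5):
    K = sq_exp_cov(X, theta)
    K_inv = np.linalg.inv(K)
    # Finite-difference derivatives of K with respect to each covariance parameter.
    grads = [(sq_exp_cov(X, theta + eps * e) - sq_exp_cov(X, theta - eps * e)) / (2.0 * eps)
             for e in np.eye(len(theta))]
    return np.array([[0.5 * np.trace(K_inv @ gi @ K_inv @ gj) for gj in grads] for gi in grads])

# D-optimality style comparison of a space-filling design against one with replicate observations.
theta = np.log([1.0, 0.3, 0.1])
space_filling = np.linspace(0.0, 1.0, 12)[:, None]
replicated = np.repeat(np.linspace(0.0, 1.0, 6), 2)[:, None]
for name, X in [("space-filling", space_filling), ("with replicates", replicated)]:
    print(name, np.linalg.det(fisher_information(X, theta)))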
Abstract:
This thesis presents a two-dimensional water model investigation and the development of a multiscale method for the modelling of large systems, such as a virus in water or a peptide immersed in solvent. We have implemented a two-dimensional ‘Mercedes Benz’ (MB) or BN2D water model using Molecular Dynamics. We have studied the dependence of its dynamical and structural properties on the model’s parameters. For the first time, we derived formulas to calculate thermodynamic properties of the MB model in the microcanonical (NVE) ensemble. We also derived equations of motion in the isothermal–isobaric (NPT) ensemble. We have analysed the rotational degree of freedom of the model in both ensembles. We have developed and implemented a self-consistent multiscale method which is able to communicate between micro- and macro-scales. This multiscale method assumes that matter consists of two phases: one phase is related to the microscale and the other to the macroscale. We simulate the macroscale using Landau-Lifshitz fluctuating hydrodynamics, while we describe the microscale using Molecular Dynamics. We have demonstrated that communication between the disparate scales is possible without the introduction of a fictitious interface or approximations which reduce the accuracy of the information exchange between the scales. We have investigated control parameters, which were introduced to control the contribution of each phase to the matter's behaviour. We have shown that microscales inherit dynamical properties of the macroscales and vice versa, depending on the concentration of each phase. We have shown that the radial distribution function is not altered and that velocity autocorrelation functions are gradually transformed from the Molecular Dynamics to the Fluctuating Hydrodynamics description when the phase balance is changed. In this work we test our multiscale method for liquid argon and the BN2D and SPC/E water models. For the SPC/E water model we investigate microscale fluctuations which are computed using an advanced technique for mapping the small scales to the large scales, developed by Voulgarakis et al.
Abstract:
A new 3D implementation of a hybrid model based on the analogy with two-phase hydrodynamics has been developed for the simulation of liquids at the microscale. The idea of the method is to smoothly combine the atomistic description in the molecular dynamics zone with the Landau-Lifshitz fluctuating hydrodynamics representation in the rest of the system, in the framework of macroscopic conservation laws, through the use of a single "zoom-in" user-defined function s that has the meaning of a partial concentration in the two-phase analogy model. In comparison with our previous works, the implementation has been extended to full 3D simulations for a range of atomistic models in GROMACS, from argon to water, in equilibrium conditions with a constant or a spatially variable function s. Preliminary results of simulating the diffusion of a small peptide in water are also reported.
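A highly simplified illustration of the "zoom-in" blending idea: a user-defined function s(x) equal to 1 in the atomistic core and decaying to 0 outside is used as a partial concentration to mix an atomistic field with its hydrodynamic counterpart. The functional form of s and the placeholder density fields are assumptions for illustration; the actual coupling is formulated within macroscopic conservation laws rather than by simple interpolation.

import numpy as np

def zoom_in_s(x, core_radius=2.0, width=1.0):
    """Partial concentration of the atomistic phase as a function of distance from the zoom centre (nm, illustrative)."""
    return 0.5 * (1.0 - np.tanh((np.abs(x) - core_radius) / width))

x = np.linspace(-6.0, 6.0, 121)
rho_md = 1.02 + 0.05 * np.random.default_rng(1).standard_normal(x.size)   # noisy atomistic density (placeholder)
rho_fh = np.full_like(x, 1.0)                                              # smooth hydrodynamic density (placeholder)
rho_hybrid = zoom_in_s(x) * rho_md + (1.0 - zoom_in_s(x)) * rho_fh         # s-weighted two-phase mixture
print(rho_hybrid[60], rho_hybrid[0])   # atomistic-dominated centre vs hydrodynamics-dominated edge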
Abstract:
A new generation of high-capacity WDM systems with extremely robust performance has been enabled by coherent transmission and digital signal processing. To facilitate widespread deployment of this technology, particularly in the metro space, new photonic components and subsystems are being developed to support cost-effective, compact, and scalable transceivers. We briefly review the recent progress in InP-based photonic components, and report numerical simulation results of an InP-based transceiver comprising a dual-polarization I/Q modulator and a commercial DSP ASIC. Predicted performance penalties due to the nonlinear response, lower bandwidth, and finite extinction ratio of these transceivers are less than 1 and 2 dB for 100-G PM-QPSK and 200-G PM-16QAM, respectively. Using the well-established Gaussian-Noise model, estimated system reach of 100-G PM-QPSK is greater than 600 km for typical ROADM-based metro-regional systems with internode losses up to 20 dB. © 1983-2012 IEEE.
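A back-of-the-envelope sketch in the spirit of the Gaussian-Noise model with incoherent accumulation of nonlinear interference across spans: the launch power is set to its optimum and the reach is the largest span count that still meets the required SNR. The ASE power, NLI coefficient, and required SNR below are illustrative placeholders, not the parameters behind the >600 km figure quoted above.

import math

def max_spans(p_ase, eta, snr_required):
    """p_ase: ASE power per span (W), eta: NLI coefficient (1/W^2), both referred to the signal bandwidth."""
    p_opt = (p_ase / (2.0 * eta)) ** (1.0 / 3.0)            # optimum launch power per channel
    snr_one_span = p_opt / (p_ase + eta * p_opt ** 3)       # SNR after a single span at optimum power
    return math.floor(snr_one_span / snr_required), p_opt   # incoherent accumulation: SNR falls as 1/N

spans, p_opt = max_spans(p_ase=4e-6, eta=1.5e3, snr_required=10 ** (8.5 / 10))
print(spans, "spans at", 1e3 * p_opt, "mW per channel")     # multiply spans by span length for reach in km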
Abstract:
In this talk we investigate the use of spectrally shaped amplified spontaneous emission (ASE) to emulate highly dispersed wavelength division multiplexed (WDM) signals in an optical transmission system. Such a technique offers various simplifications to large-scale WDM experiments. Not only does it offer a reduction in transmitter complexity, removing the need for multiple source lasers, it potentially reduces the test and measurement complexity by requiring only the centre channel of a WDM system to be measured in order to estimate WDM worst-case performance. The use of ASE as a test and measurement tool is well established in optical communication systems and several measurement techniques will be discussed [1, 2]. One of the most prevalent uses of ASE is in the measurement of receiver sensitivity, where ASE is introduced in order to degrade the optical signal to noise ratio (OSNR) and measure the resulting bit error rate (BER) at the receiver. From an analytical point of view, noise has been used to emulate system performance: the Gaussian Noise model is used as an estimate of highly dispersed signals and has had considerable interest [3]. The work to be presented here extends the use of ASE by using it as a metric to emulate highly dispersed WDM signals and in the process reduce WDM transmitter complexity and receiver measurement time in a lab environment. Results thus far have indicated [2] that such a transmitter configuration is consistent with an AWGN model for transmission, with modulation format complexity and nonlinearities playing a key role in estimating the performance of systems utilising the ASE channel emulation technique. We conclude this work by investigating techniques capable of characterising the nonlinear and damage limits of optical fibres and the resultant information capacity limits. REFERENCES: 1. McCarthy, M. E., N. Mac Suibhne, S. T. Le, P. Harper, and A. D. Ellis, “High spectral efficiency transmission emulation for non-linear transmission performance estimation for high order modulation formats,” 2014 European Conference on Optical Communication (ECOC), IEEE, 2014. 2. Ellis, A., N. Mac Suibhne, F. Gunning, and S. Sygletos, “Expressions for the nonlinear transmission performance of multi-mode optical fiber,” Opt. Express, Vol. 21, 22834-22846, 2013. 3. Vacondio, F., O. Rival, C. Simonneau, E. Grellier, A. Bononi, L. Lorcy, J. Antona, and S. Bigo, “On nonlinear distortions of highly dispersive optical coherent systems,” Opt. Express, Vol. 20, 1022-1032, 2012.
Abstract:
In this paper, the problem of semantic place categorization in mobile robotics is addressed by considering a time-based probabilistic approach called the dynamic Bayesian mixture model (DBMM), which is an improved variation of the dynamic Bayesian network. More specifically, multi-class semantic classification is performed by a DBMM composed of a mixture of heterogeneous base classifiers, using geometrical features computed from 2D laser scanner data, where the sensor is mounted on board a moving robot operating indoors. Besides its capability to combine different probabilistic classifiers, the DBMM approach also incorporates time-based (dynamic) inferences in the form of previous class-conditional probabilities and priors. Extensive experiments were carried out on publicly available benchmark datasets, highlighting the influence of the number of time-slices and the effect of additive smoothing on the classification performance of the proposed approach. Reported results, under different scenarios and conditions, show the effectiveness and competitive performance of the DBMM.
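A simplified sketch of this kind of time-based probabilistic fusion: a weighted mixture of base-classifier class probabilities is combined with the previous time-slice posterior as a prior, with additive smoothing. The equal weights, single previous time-slice, and smoothing constant are illustrative simplifications, not the DBMM formulation or the benchmark results reported above.

import numpy as np

def dbmm_step(base_probs, weights, prev_posterior, alpha=0.1):
    """base_probs: (n_classifiers, n_classes) class probabilities from the heterogeneous base classifiers."""
    mixture = weights @ base_probs                               # weighted mixture of base classifiers
    posterior = (mixture + alpha) * (prev_posterior + alpha)     # temporal prior with additive smoothing
    return posterior / posterior.sum()

n_classes = 3                                    # e.g. corridor / office / doorway (placeholder labels)
posterior = np.full(n_classes, 1.0 / n_classes)  # uniform prior at the first time-slice
for base_probs in [np.array([[0.7, 0.2, 0.1], [0.6, 0.3, 0.1]]),
                   np.array([[0.5, 0.4, 0.1], [0.8, 0.1, 0.1]])]:
    posterior = dbmm_step(base_probs, weights=np.array([0.5, 0.5]), prev_posterior=posterior)
print(posterior.argmax(), posterior)             # predicted class and its posterior at the last time-slice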
Abstract:
In this work, the liquid-liquid and solid-liquid phase behaviour of ten aqueous pseudo-binary and three binary systems containing polyethylene glycol (PEG) 2050, polyethylene glycol 35000, aniline, N,N-dimethylaniline and water, in the temperature range 298.15-350.15 K and at ambient pressure of 0.1 MPa, was studied. The obtained temperature-composition phase diagrams showed that the only functional co-solvent was PEG2050 for aniline in water, while PEG35000 even showed a clear anti-solvent effect in the N,N-dimethylaniline aqueous system. The experimental solid-liquid equilibria (SLE) data have been correlated by the non-random two-liquid (NRTL) model, and the correlation results are in accordance with the experimental results.
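For reference, a sketch of the binary NRTL activity-coefficient equations of the kind used in such correlations, together with the usual simplified solid-liquid equilibrium relation ln(x1·γ1) = (ΔH_fus/R)(1/T_m - 1/T). The interaction parameters, alpha value, and solute properties below are illustrative placeholders, not the fitted values or systems from this work.

import math

def nrtl_gammas(x1, tau12, tau21, alpha=0.3):
    """Return (gamma1, gamma2) for a binary mixture with mole fraction x1 of component 1."""
    x2 = 1.0 - x1
    g12, g21 = math.exp(-alpha * tau12), math.exp(-alpha * tau21)
    ln_g1 = x2 ** 2 * (tau21 * (g21 / (x1 + x2 * g21)) ** 2 + tau12 * g12 / (x2 + x1 * g12) ** 2)
    ln_g2 = x1 ** 2 * (tau12 * (g12 / (x2 + x1 * g12)) ** 2 + tau21 * g21 / (x1 + x2 * g21) ** 2)
    return math.exp(ln_g1), math.exp(ln_g2)

# Compare the two sides of the SLE correlating equation; they coincide at the equilibrium composition.
x1, T = 0.15, 310.0                    # solute mole fraction and temperature (K), illustrative
dh_fus, t_m = 30000.0, 330.0           # enthalpy of fusion (J/mol) and melting point (K) of a hypothetical solute
gamma1, _ = nrtl_gammas(x1, tau12=1.2, tau21=0.4)
print(math.log(x1 * gamma1), dh_fus / 8.314 * (1.0 / t_m - 1.0 / T))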