46 results for scoring weights


Relevance: 10.00%

Abstract:

AIMS: To compare the performance of ultrasound elastography with conventional ultrasound in the assessment of axillary lymph nodes in suspected breast cancer, and to determine whether ultrasound elastography as an adjunct to conventional ultrasound can increase the sensitivity of conventional ultrasound used alone. MATERIALS AND METHODS: Fifty symptomatic women with a sonographic suspicion of breast cancer underwent ultrasound elastography of the ipsilateral axilla concurrent with conventional ultrasound performed as part of triple assessment. Elastograms were visually scored, strain measurements calculated, and node area and perimeter measurements taken. Theoretical biopsy cut points were selected. The sensitivity, specificity, positive predictive value (PPV), and negative predictive value (NPV) were calculated, and receiver operating characteristic (ROC) analysis was performed and compared for elastograms and conventional ultrasound images, with surgical histology as the reference standard. RESULTS: The mean age of the women was 57 years. Twenty-nine of the 50 nodes were negative on surgical histology and 21 were positive. The sensitivity, specificity, PPV, and NPV were 76, 78, 70, and 81%, respectively, for conventional ultrasound; 90, 86, 83, and 93%, respectively, for visual ultrasound elastography; and 100, 48, 58, and 100%, respectively, for strain scoring. There was no significant difference between any of the node measurements. CONCLUSIONS: Initial experience with ultrasound elastography of axillary lymph nodes showed that it is more sensitive than conventional ultrasound in detecting abnormal axillary nodes in cases of suspected breast cancer. The specificity remained acceptable, and ultrasound elastography used as an adjunct has the potential to improve the performance of conventional ultrasound alone.
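The four diagnostic metrics reported above follow directly from confusion counts. A minimal sketch (the counts below are illustrative, not the study's raw data):

```python
def diagnostic_metrics(tp, fp, tn, fn):
    """Return (sensitivity, specificity, PPV, NPV) as percentages."""
    sens = 100 * tp / (tp + fn)   # true positives among all diseased
    spec = 100 * tn / (tn + fp)   # true negatives among all healthy
    ppv = 100 * tp / (tp + fp)    # positives that are truly diseased
    npv = 100 * tn / (tn + fn)    # negatives that are truly healthy
    return sens, spec, ppv, npv

# Illustrative counts: 50 diseased, 50 healthy
print(diagnostic_metrics(tp=45, fp=10, tn=40, fn=5))  # sensitivity 90.0, specificity 80.0
```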

Relevance: 10.00%

Abstract:

In this paper, we derive an EM algorithm for nonlinear state space models. We use it to estimate jointly the neural network weights, the model uncertainty and the noise in the data. In the E-step we apply a forward-backward Rauch-Tung-Striebel smoother to compute the network weights. For the M-step, we derive expressions to compute the model uncertainty and the measurement noise. We find that the method is intrinsically very powerful, simple and stable.
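The E-step/M-step structure can be illustrated on a deliberately simplified scalar linear-Gaussian model (not the paper's neural-network state space): a Kalman filter followed by a Rauch-Tung-Striebel smoother forms the E-step, and the M-step re-estimates the measurement noise variance.

```python
def rts_em(ys, a=0.9, q=0.1, r=1.0, iters=5):
    """EM sketch for the scalar model x_t = a*x_{t-1} + w, y_t = x_t + v."""
    n = len(ys)
    for _ in range(iters):
        # E-step, forward pass: Kalman filter
        mf, pf, mp, pp = [], [], [], []
        m, p = 0.0, 1.0
        for y in ys:
            m_pred, p_pred = a * m, a * a * p + q
            k = p_pred / (p_pred + r)          # Kalman gain
            m = m_pred + k * (y - m_pred)
            p = (1 - k) * p_pred
            mf.append(m); pf.append(p); mp.append(m_pred); pp.append(p_pred)
        # E-step, backward pass: RTS smoother
        ms, ps = [0.0] * n, [0.0] * n
        ms[-1], ps[-1] = mf[-1], pf[-1]
        for t in range(n - 2, -1, -1):
            g = a * pf[t] / pp[t + 1]          # smoother gain
            ms[t] = mf[t] + g * (ms[t + 1] - mp[t + 1])
            ps[t] = pf[t] + g * g * (ps[t + 1] - pp[t + 1])
        # M-step: re-estimate the measurement noise variance
        r = sum((y - m) ** 2 + p for y, m, p in zip(ys, ms, ps)) / n
    return ms, r
```

In the paper's nonlinear setting the same loop runs over linearized (extended) filter/smoother updates, with the network weights in the state vector.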

Relevance: 10.00%

Abstract:

Structured precision modelling is an important approach to improving the intra-frame correlation modelling of the standard HMM, where Gaussian mixture models with diagonal covariances are used. Previous work has focused on direct structured representation of the precision matrices. In this paper, a new framework is proposed in which the structure of the Cholesky square root of the precision matrix is investigated, referred to as Cholesky Basis Superposition (CBS). The Cholesky matrix associated with each Gaussian distribution is represented as a linear combination of a set of Gaussian-independent basis upper-triangular matrices. Efficient optimization methods are derived for both combination weights and basis matrices. Experiments on a Chinese dictation task showed that the proposed approach significantly outperformed both direct structured precision modelling with a similar number of parameters and full covariance modelling. © 2011 IEEE.
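The core construction can be sketched in a 2x2 toy example: the upper-triangular Cholesky factor is a weighted sum of shared basis matrices, and the precision matrix is recovered as C-transpose times C (the basis set and weights below are invented for illustration).

```python
def cbs_precision(weights, bases):
    """Precision matrix from per-Gaussian weights and shared triangular bases."""
    n = len(bases[0])
    # C = sum_k w_k * B_k, each B_k upper triangular, so C is upper triangular
    C = [[sum(w * B[i][j] for w, B in zip(weights, bases)) for j in range(n)]
         for i in range(n)]
    # P = C^T C is symmetric positive semi-definite by construction
    return [[sum(C[k][i] * C[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

bases = [[[1.0, 0.0], [0.0, 1.0]],   # identity basis
         [[0.0, 1.0], [0.0, 0.0]]]   # off-diagonal basis
P = cbs_precision([1.0, 0.5], bases)  # → [[1.0, 0.5], [0.5, 1.25]]
```

Because every Gaussian only stores its weight vector, the parameter count grows with the number of bases rather than with the full covariance size.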

Relevance: 10.00%

Abstract:

This paper studies on-chip communication with non-ideal heat sinks. A channel model is proposed where the variance of the additive noise depends on the weighted sum of the past channel input powers. It is shown that, depending on the weights, the capacity can be either bounded or unbounded in the input power. A necessary condition and a sufficient condition for the capacity to be bounded are presented. © 2007 IEEE.
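The channel model's noise variance can be written as a base level plus a weighted sum over past input powers; a minimal sketch (the weight sequence and base variance are assumed, not taken from the paper):

```python
def noise_variance(inputs, weights, sigma0=1.0):
    """sigma^2_t = sigma0 + sum_k a_k * x_{t-k}^2 over the available history.

    `inputs` are past channel inputs x_1..x_t (oldest first); `weights` a_1, a_2,
    ... apply to the most recent inputs first. Whether capacity stays bounded
    depends on how fast the a_k decay.
    """
    return sigma0 + sum(a * x * x for a, x in zip(weights, reversed(inputs)))

print(noise_variance([1.0, 2.0], [0.5, 0.25]))  # 1.0 + 0.5*4 + 0.25*1 = 3.25
```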

Relevance: 10.00%

Abstract:

Image convolution is conventionally approximated by the LTI discrete model. It is well recognized that the higher the sampling rate, the better the approximation. However, images or 3D data are sometimes only available at a lower sampling rate due to physical constraints of the imaging system. In this paper, we model the under-sampled observation as the result of combining convolution and subsampling. Because the wavelet coefficients of piecewise smooth images tend to be sparse and well modelled by tree-like structures, we propose the L0 reweighted-L2 minimization (L0RL2) algorithm to solve this problem. This promotes model-based sparsity by minimizing the reweighted L2 norm, which approximates the L0 norm, and by enforcing a tree model over the weights. We test the algorithm on three examples: a simple ring, the cameraman image and a 3D microscope dataset, and show that good results can be obtained. © 2010 IEEE.

Relevance: 10.00%

Abstract:

The jetting of dilute polymer solutions in drop-on-demand printing is investigated. A quantitative model is presented which predicts three different regimes of behaviour depending upon the jet Weissenberg number Wi and the extensibility of the polymer molecules. In regime I (Wi < ½) the polymer chains are relaxed and the fluid behaves in a Newtonian manner. In regime II (½ < Wi < L), where L is the extensibility of the polymer chain, the fluid is viscoelastic, but the polymer chains do not reach their extensibility limit. In regime III (Wi > L) the chains remain fully extended in the thinning ligament. The maximum polymer concentration at which a jet of a certain speed can be formed scales with molecular weight to the power of (1-3ν), (1-6ν) and -2ν in the three regimes respectively, where ν is the solvent quality coefficient. Experimental data obtained with solutions of mono-disperse polystyrene in diethyl phthalate with molecular weights between 24 and 488 kDa, previous numerical simulations of this system, and previously published data for this and another linear polymer in a variety of “good” solvents all show good agreement with the scaling predictions of the model.
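The regime boundaries stated above translate into a simple classifier:

```python
def jetting_regime(Wi, L):
    """Classify drop-on-demand jetting by Weissenberg number Wi and chain extensibility L."""
    if Wi < 0.5:
        return "I: Newtonian (chains relaxed)"
    elif Wi < L:
        return "II: viscoelastic (chains below extensibility limit)"
    return "III: chains fully extended in the thinning ligament"

print(jetting_regime(Wi=0.1, L=10))   # regime I
print(jetting_regime(Wi=2.0, L=10))   # regime II
print(jetting_regime(Wi=20.0, L=10))  # regime III
```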

Relevance: 10.00%

Abstract:

A simple and general design procedure is presented for the polarisation diversity of arbitrary conformal arrays; this procedure is based on the mathematical framework of geometric algebra and can be solved optimally using convex optimisation. Aside from being simpler and more direct than other derivations in the literature, this derivation is also entirely general in that it expresses the transformations in terms of rotors in geometric algebra, which can easily be formulated for any arbitrary conformal array geometry. Convex optimisation has a number of advantages: solvers are widespread and freely available, the process generally requires a small number of iterations, and a wide variety of constraints can be readily incorporated. The study outlines a two-step approach for addressing polarisation diversity in arbitrary conformal arrays: first, the authors obtain the array polarisation patterns using geometric algebra; second, they use a convex optimisation approach to find the optimal weights for the polarisation diversity problem. The versatility of this approach is illustrated via simulations of a 7×10 cylindrical conformal array. © 2012 The Institution of Engineering and Technology.

Relevance: 10.00%

Abstract:

Language models (LMs) are often constructed by building multiple individual component models that are combined using context independent interpolation weights. By tuning these weights, using either perplexity or discriminative approaches, it is possible to adapt LMs to a particular task. This paper investigates the use of context dependent weighting in both interpolation and test-time adaptation of language models. Depending on the previous word contexts, a discrete history weighting function is used to adjust the contribution from each component model. As this dramatically increases the number of parameters to estimate, robust weight estimation schemes are required. Several approaches are described in this paper. The first approach is based on MAP estimation, where interpolation weights of lower order contexts are used as smoothing priors. The second approach uses training data to ensure robust estimation of LM interpolation weights; this can also serve as a smoothing prior for MAP adaptation. A normalized perplexity metric is proposed to handle the bias of the standard perplexity criterion towards corpus size. A range of schemes to combine weight information obtained from training data and test data hypotheses are also proposed to improve robustness during context dependent LM adaptation. In addition, a minimum Bayes' risk (MBR) based discriminative training scheme is proposed, along with an efficient weighted finite state transducer (WFST) decoding algorithm for context dependent interpolation. The proposed technique was evaluated using a state-of-the-art Mandarin Chinese broadcast speech transcription task. Character error rate (CER) reductions of up to 7.3% relative were obtained, as well as consistent perplexity improvements. © 2012 Elsevier Ltd. All rights reserved.
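Context-dependent interpolation replaces a single global weight vector with a per-context lookup that backs off to context-independent weights. A minimal sketch (the component probabilities and weight table are invented for illustration):

```python
def interp_prob(word, context, components, weights, default=(0.5, 0.5)):
    """P(word|context) = sum_i w_i(context) * P_i(word|context)."""
    ws = weights.get(context, default)  # back off to context-independent weights
    return sum(w * lm.get((context, word), 0.0)
               for w, lm in zip(ws, components))

# Two toy component LMs as (context, word) -> probability tables
lm1 = {("the", "cat"): 0.2, ("the", "dog"): 0.1}
lm2 = {("the", "cat"): 0.4, ("the", "dog"): 0.3}
ctx_weights = {"the": (0.25, 0.75)}   # context-dependent weights for history "the"
p = interp_prob("cat", "the", [lm1, lm2], ctx_weights)  # 0.25*0.2 + 0.75*0.4 = 0.35
```

The back-off to `default` mirrors the paper's idea of using lower-order (context-independent) weights as a smoothing prior for unseen histories.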

Relevance: 10.00%

Abstract:

This paper is concerned with the modelling of strategic interactions between the human driver and the vehicle active front steering (AFS) controller in a path-following task where the two controllers hold different target paths. The work is aimed at extending the use of mathematical models in representing driver steering behaviour in complicated driving situations. Two game theoretic approaches, namely linear quadratic game and non-cooperative model predictive control (non-cooperative MPC), are used for developing the driver-AFS interactive steering control model. For each approach, the open-loop Nash steering control solution is derived; the influences of the path-following weights, preview and control horizons, driver time delay and arm neuromuscular system (NMS) dynamics are investigated, and the CPU time consumed is recorded. It is found that the two approaches give identical time histories as well as control gains, while the non-cooperative MPC method uses much less CPU time. Specifically, it is observed that the introduction of weight on the integral of vehicle lateral displacement error helps to eliminate the steady-state path-following error; the increase in preview horizon and NMS natural frequency and the decline in time delay and NMS damping ratio improve the path-following accuracy. © 2013 Copyright Taylor and Francis Group, LLC.

Relevance: 10.00%

Abstract:

This paper presents a complete system for expressive visual text-to-speech (VTTS), which is capable of producing expressive output, in the form of a 'talking head', given an input text and a set of continuous expression weights. The face is modeled using an active appearance model (AAM), and several extensions are proposed which make it more applicable to the task of VTTS. The model allows for normalization with respect to both pose and blink state which significantly reduces artifacts in the resulting synthesized sequences. We demonstrate quantitative improvements in terms of reconstruction error over a million frames, as well as in large-scale user studies, comparing the output of different systems. © 2013 IEEE.

Relevance: 10.00%

Abstract:

In recent years, the healthcare sector has adopted the use of operational risk assessment tools to help understand the systems issues that lead to patient safety incidents. But although these problem-focused tools have improved the ability of healthcare organizations to identify hazards, they have not translated into measurable improvements in patient safety. One possible reason for this is a lack of support for the solution-focused process of risk control. This article describes a content analysis of the risk management strategies, policies, and procedures at all acute (i.e., hospital), mental health, and ambulance trusts (health service organizations) in the East of England area of the British National Health Service. The primary goal was to determine what organizational-level guidance exists to support risk control practice. A secondary goal was to examine the risk evaluation guidance provided by these trusts. With regard to risk control, we found an almost complete lack of useful guidance to promote good practice. With regard to risk evaluation, the trusts relied exclusively on risk matrices. A number of weaknesses were found in the use of this tool, especially related to the guidance for scoring an event's likelihood. We make a number of recommendations to address these concerns. The guidance assessed provides insufficient support for risk control and risk evaluation. This may present a significant barrier to the success of risk management approaches in improving patient safety. © 2013 Society for Risk Analysis.
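A typical risk matrix of the kind the trusts relied on multiplies a likelihood rating by a severity rating and maps the product to a band. The bands and thresholds below are a common 5×5 convention, assumed for illustration rather than taken from any trust's guidance:

```python
def risk_score(likelihood, severity):
    """Return (score, band) for 1-5 likelihood and severity ratings."""
    score = likelihood * severity
    if score >= 15:
        band = "extreme"
    elif score >= 8:
        band = "high"
    elif score >= 4:
        band = "moderate"
    else:
        band = "low"
    return score, band

print(risk_score(5, 5))  # (25, 'extreme')
print(risk_score(2, 4))  # (8, 'high')
```

The article's concern is precisely that guidance for choosing the `likelihood` rating is weak, so identical events can land in different bands.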

Relevance: 10.00%

Abstract:

We generalize the standard many-body expansion technique that is used to approximate the total energy of a molecular system to enable the treatment of chemical reactions by quantum chemical techniques. By considering all possible assignments of atoms to monomer units of the many-body expansion and associating suitable weights with each, we construct a potential energy surface that is a smooth function of the nuclear positions. We derive expressions for this reactive many-body expansion energy and describe an algorithm for its evaluation, which scales polynomially with system size, and therefore will make the method feasible for future condensed phase simulations. We demonstrate the accuracy and smoothness of the resulting potential energy surface on a molecular dynamics trajectory of the protonated water hexamer, using the Hartree-Fock method for the many-body term and Møller-Plesset theory for the low order terms of the many-body expansion.
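The underlying (non-reactive) many-body expansion truncated at two-body terms can be sketched as follows; the fragment energies use a hypothetical callable, and the paper's smooth weighting over atom-to-monomer assignments is not reproduced:

```python
from itertools import combinations

def many_body_energy(monomers, energy):
    """E ≈ sum_i E(i) + sum_{i<j} [E(i∪j) - E(i) - E(j)]."""
    e1 = {i: energy(m) for i, m in enumerate(monomers)}
    total = sum(e1.values())
    for i, j in combinations(range(len(monomers)), 2):
        # two-body correction: dimer energy minus its monomer energies
        total += energy(monomers[i] + monomers[j]) - e1[i] - e1[j]
    return total

toy = lambda frag: sum(frag) ** 2   # quadratic toy energy: 2-body expansion is exact
print(many_body_energy([(1,), (2,), (3,)], toy))  # → 36, equals toy((1, 2, 3))
```

The paper's reactive extension makes `monomers` itself a smooth, weighted superposition of all atom assignments, so the surface stays continuous as bonds break and form.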

Relevance: 10.00%

Abstract:

Fluid assessment methods, requiring small volumes and avoiding the need for jetting, are particularly useful in the design of functional fluids for inkjet printing applications. With the increasing use of complex (rather than Newtonian) fluids for manufacturing, single frequency fluid characterisation cannot reliably predict good jetting behaviour, owing to the range of shearing and extensional flow rates involved. However, the scope of inkjet fluid assessments (beyond achievement of a nominal viscosity within the print head design specification) is usually focused on the final application rather than the jetting processes. The experimental demonstration of the clear insufficiency of such approaches shows that fluid jetting can readily discriminate between fluids assessed as having similar LVE characterisation (within a factor of 2) for typical commercial rheometer measurements at shearing rates reaching 10⁴ rad s⁻¹. Jetting behaviour of weakly elastic dilute linear polystyrene solutions, for molecular weights of 110-488 kDa, recorded using high speed video was compared with recent results from numerical modelling and capillary thinning studies of the same solutions. The jetting images show behaviour ranging from near-Newtonian to "beads-on-a-string". The inkjet printing behaviour does not correlate simply with the measured extensional relaxation times or Zimm times, but may be consistent with non-linear extensibility L and the production of fully extended polymer molecules in the thinning jet ligament. Fluid test methods allowing a more complete characterisation of NLVE parameters are needed to assess inkjet printing feasibility prior to directly jetting complex fluids. At the present time, directly jetting such fluids may prove to be the only alternative. © 2014 The Authors.

Relevance: 10.00%

Abstract:

We present a method for producing dense Active Appearance Models (AAMs), suitable for video-realistic synthesis. To this end we estimate a joint alignment of all training images using a set of pairwise registrations and ensure that these pairwise registrations are only calculated between similar images. This is achieved by defining a graph on the image set whose edge weights correspond to registration errors and computing a bounded diameter minimum spanning tree (BDMST). Dense optical flow is used to compute pairwise registration and we introduce a flow refinement method to align small scale texture. Once registration between training images has been established we propose a method to add vertices to the AAM in a way that minimises error between the observed flow fields and a flow field interpolated between the AAM mesh points. We demonstrate a significant improvement in model compactness using the proposed method and show it dealing with cases that are problematic for current state-of-the-art approaches.
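The registration graph can be illustrated with a plain minimum spanning tree over pairwise registration errors (Kruskal's algorithm; the paper's bounded-diameter variant adds a diameter constraint not implemented in this sketch):

```python
def mst(n, edges):
    """Kruskal's MST. edges: list of (error, i, j); returns the tree edges (i, j)."""
    parent = list(range(n))

    def find(x):
        # union-find with path halving
        while parent[x] != x:
            parent[x] = parent[parent[x]]
            x = parent[x]
        return x

    tree = []
    for err, i, j in sorted(edges):   # cheapest registration errors first
        ri, rj = find(i), find(j)
        if ri != rj:                   # only join distinct components
            parent[ri] = rj
            tree.append((i, j))
    return tree

# Three images; edge weights are pairwise registration errors
print(mst(3, [(1.0, 0, 1), (2.0, 1, 2), (3.0, 0, 2)]))  # [(0, 1), (1, 2)]
```

Bounding the tree diameter matters here because long chains of pairwise registrations accumulate alignment error between distant images.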