61 results for Multilevel Coding
in Aston University Research Archive
Abstract:
Using analytical methods of statistical mechanics, we analyse the typical behaviour of a multiple-input multiple-output (MIMO) Gaussian channel with binary inputs under low-density parity-check (LDPC) network coding and joint decoding. The saddle point equations for the replica symmetric solution are found in particular realizations of this channel, including a small and large number of transmitters and receivers. In particular, we examine the cases of a single transmitter, a single receiver and symmetric and asymmetric interference. Both dynamical and thermodynamical transitions from the ferromagnetic solution of perfect decoding to a non-ferromagnetic solution are identified for the cases considered, marking the practical and theoretical limits of the system under the current coding scheme. Numerical results are provided, showing the typical level of improvement/deterioration achieved with respect to the single transmitter/receiver result, for the various cases. © 2007 IOP Publishing Ltd.
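A toy version of the channel in this abstract can be simulated directly. The sketch below runs an uncoded 2×2 binary-input Gaussian channel with symmetric interference and exhaustive joint maximum-likelihood detection; the interference matrix H, the noise level, and the omission of any LDPC code are illustrative assumptions, not the paper's setup.

```python
import random

random.seed(3)
H = [[1.0, 0.6], [0.6, 1.0]]     # 2x2 channel with symmetric interference
sigma = 0.8                      # Gaussian noise standard deviation
signals = [(a, b) for a in (-1, 1) for b in (-1, 1)]   # binary inputs

def transmit(s):
    """Received vector y = H s + n for one channel use."""
    return [sum(row[k] * s[k] for k in range(2)) + random.gauss(0, sigma)
            for row in H]

def ml_decode(y):
    """Joint ML detection: minimise ||y - H s||^2 over all input pairs."""
    return min(signals, key=lambda s: sum(
        (y[i] - sum(H[i][k] * s[k] for k in range(2))) ** 2 for i in range(2)))

errors = trials = 0
for _ in range(2000):
    s = random.choice(signals)
    s_hat = ml_decode(transmit(s))
    errors += sum(a != b for a, b in zip(s, s_hat))
    trials += 2
ber = errors / trials
print(ber)   # bit error rate of uncoded joint detection
```

With coding (as in the paper) the error rate would drop sharply below the relevant threshold; this sketch only shows the joint-detection geometry that the decoding builds on.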
Abstract:
In multilevel analyses, problems may arise when using Likert-type scales at the lowest level of analysis. Specifically, increases in variance should lead to greater censoring for the groups whose true scores fall at either end of the distribution. The current study used simulation methods to examine the influence of single-item Likert-type scale usage on ICC(1), ICC(2), and group-level correlations. Results revealed substantial underestimation of ICC(1) when using Likert-type scales with common response formats (e.g., 5 points). ICC(2) and group-level correlations were also underestimated, but to a lesser extent. Finally, the magnitude of underestimation was driven in large part by an interaction between Likert-type scale usage and the amounts of within- and between-group variance. © Sage Publications.
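The censoring mechanism can be reproduced in a short simulation. The sketch below (with invented parameters, not the study's design) computes one-way ANOVA ICC(1) for continuous group ratings and for the same ratings forced onto a 5-point Likert scale; the Likert version comes out lower, in line with the underestimation reported above.

```python
import random
import statistics

def icc1(groups):
    """One-way ANOVA ICC(1) = (MSB - MSW) / (MSB + (k - 1) * MSW)."""
    k = len(groups[0])                     # raters per group (balanced)
    n = len(groups)                        # number of groups
    grand = statistics.mean(x for g in groups for x in g)
    msb = k * sum((statistics.mean(g) - grand) ** 2 for g in groups) / (n - 1)
    msw = sum(sum((x - statistics.mean(g)) ** 2 for x in g)
              for g in groups) / (n * (k - 1))
    return (msb - msw) / (msb + (k - 1) * msw)

def to_likert(x, points=5):
    """Force a continuous rating onto a 1..points scale (rounding + censoring)."""
    return min(points, max(1, round(x)))

random.seed(1)
groups_cont, groups_lik = [], []
for _ in range(200):                       # 200 groups of 10 raters
    mu = random.gauss(3.0, 0.8)            # group true score on the scale metric
    ratings = [random.gauss(mu, 1.2) for _ in range(10)]
    groups_cont.append(ratings)
    groups_lik.append([to_likert(r) for r in ratings])

print(round(icc1(groups_cont), 3), round(icc1(groups_lik), 3))
```

Groups whose true scores sit near 1 or 5 have their responses piled onto the endpoints, which shrinks the between-group variance that ICC(1) depends on.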
Abstract:
Marketing scholars are increasingly recognizing the importance of investigating phenomena at multiple levels. However, the analysis methods that are currently dominant within marketing may not be appropriate for dealing with multilevel or nested data structures. We identify the state of contemporary multilevel marketing research, finding that typical empirical approaches within marketing research may be less effective at explicitly taking account of multilevel data structures than those in other organizational disciplines. A Monte Carlo simulation, based on results from a previously published marketing study, demonstrates that different approaches to analyzing the same data can lead to very different results (both in terms of power and effect size). The implication is that marketing scholars should be cautious when analyzing multilevel or other grouped data, and we provide a discussion and introduction to the use of hierarchical linear modeling for this purpose.
Abstract:
So far there has been scant empirical attention paid to the role of the sales force in the adoption of new brands in the early implementation stages. We test a framework of internal (sales manager and salespeople) brand adoption using an empirical multilevel study. Our findings suggest that the construct of expected customer demand (ECD) plays an important role in sales force brand adoption. First, ECD directly influences salespeople’s and sales managers’ brand adoption. Second, ECD serves as a cross-level moderator of new brand adoption transmission. We find the influence of sales managers’ brand adoption on salespeople’s brand adoption to be stronger when salespeople’s ECD is lower.
Abstract:
While the retrieval of existing designs to prevent unnecessary duplication of parts is a recognised strategy in the control of design costs, the available techniques to achieve this, even in product data management systems, are limited in performance or require large resources. A novel system has been developed based on a new version of an existing coding system (CAMAC) that allows automatic coding of engineering drawings and their subsequent retrieval using a drawing of the desired component as the input. The ability to find designs using a detail drawing rather than textual descriptions is a significant achievement in itself. Previous testing of the system has demonstrated this capability, but if parts could be found from a simple sketch then the system's practical application would be much more effective. This paper describes the development and testing of such a search capability using a database of over 3000 engineering components.
Abstract:
In designing new products, the ability to retrieve drawings of existing components is important if costs are to be controlled by preventing unnecessary duplication of parts. Component coding and classification systems have been used successfully for these purposes but suffer from high operational costs and poor usability arising directly from the manual nature of the coding process itself. A new version of an existing coding system (CAMAC) has been developed to reduce costs by automatically coding engineering drawings. Usability is improved by supporting searches based on a drawing or sketch of the desired component. Test results from a database of several thousand drawings are presented.
Abstract:
The adoption of DRG coding may be seen as a central feature of the mechanisms of the health reforms in New Zealand. This paper presents a story of the use of DRG coding by describing the experience of one major health provider. The conventional literature portrays casemix accounting and medical coding systems as rational techniques for the collection and provision of information for management and contracting decisions/negotiations. This paper presents a different perspective on the implications and effects of the adoption of DRG technology; in particular, the part played by DRG coding technology as a part of a casemix system is explicated from an actor-network theory perspective. Medical coding and the DRG methodology are argued to represent "black boxes". Such technological "knowledge objects" provide strong points in the networks which are so important to the processes of change in contemporary organisations.
Abstract:
Background Autologous chondrocyte implantation is a cell therapeutic approach for the treatment of chondral and osteochondral defects in the knee joint. The authors previously reported on the histologic and radiologic outcome of autologous chondrocyte implantation in the short- to midterm, which yielded mixed results. Purpose The objective is to report on the clinical outcome of autologous chondrocyte implantation for the knee in the midterm to long term. Study Design Cohort study; Level of evidence, 3. Methods Eighty patients who had undergone autologous chondrocyte implantation of the knee with mid- to long-term follow-up were analyzed. The mean patient age was 34.6 years (standard deviation, 9.1 years), with 63 men and 17 women. Seventy-one patients presented with a focal chondral defect, with a median defect area of 4.1 cm² and a maximum defect area of 20 cm². The modified Lysholm score was used as a self-reporting clinical outcome measure to determine the following: (1) what is the typical pattern of clinical outcome over time after autologous chondrocyte implantation; and (2) which patient-related predictors for the clinical outcome pattern can be used to improve patient selection for autologous chondrocyte implantation? Results The average follow-up time was 5 years (range, 2.7–9.3). Improvement in clinical outcome was found in 65 patients (81%), while 15 patients (19%) showed a decline in outcome. The median preoperative Lysholm score of 54 increased to a median of 78 points. The most rapid improvement in Lysholm score was over the 15-month period after operation, after which the Lysholm score remained constant for up to 9 years. The authors were unable to identify any patient-specific factors (ie, age, gender, defect size, defect location, number of previous operations, preoperative Lysholm score) that could predict the change in clinical outcome in the first 15 months.
Conclusion Autologous chondrocyte implantation seems to provide a durable clinical outcome in those patients demonstrating success at 15 months after operation. Comparisons between other outcome measures of autologous chondrocyte implantation should be focused on the clinical status at 15 months after surgery. The patient-reported clinical outcome at 15 months is a major predictor of the mid- to long-term success of autologous chondrocyte implantation.
Abstract:
To make vision possible, the visual nervous system must represent the most informative features in the light pattern captured by the eye. Here we use Gaussian scale-space theory to derive a multiscale model for edge analysis and we test it in perceptual experiments. At all scales there are two stages of spatial filtering. An odd-symmetric, Gaussian first derivative filter provides the input to a Gaussian second derivative filter. Crucially, the output at each stage is half-wave rectified before feeding forward to the next. This creates nonlinear channels selectively responsive to one edge polarity while suppressing spurious or "phantom" edges. The two stages have properties analogous to simple and complex cells in the visual cortex. Edges are found as peaks in a scale-space response map that is the output of the second stage. The position and scale of the peak response identify the location and blur of the edge. The model predicts remarkably accurately our results on human perception of edge location and blur for a wide range of luminance profiles, including the surprising finding that blurred edges look sharper when their length is made shorter. The model enhances our understanding of early vision by integrating computational, physiological, and psychophysical approaches. © ARVO.
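The two-stage pipeline can be sketched numerically. The code below builds a unit-contrast Gaussian-blurred edge, half-wave rectifies its first derivative, applies second-derivative Gaussian filters over a range of scales, and reads the edge's location and blur off the scale-space peak. The sigma^1.5 scale normalisation is an illustrative choice made so that the peak scale lands at the edge blur; it is not necessarily the normalisation used in the published model.

```python
import numpy as np

def gauss(x, s):
    return np.exp(-x**2 / (2 * s**2)) / (np.sqrt(2 * np.pi) * s)

def d2_kernel(s, dx):
    """Sampled 2nd-derivative-of-Gaussian filter at scale s."""
    h = int(round(5 * s / dx))
    xk = np.arange(-h, h + 1) * dx
    return (xk**2 / s**4 - 1 / s**2) * gauss(xk, s) * dx

dx = 0.05
x = np.arange(-40, 40, dx)
b = 3.0                                   # true blur of the test edge
lum = np.cumsum(gauss(x, b)) * dx         # unit-contrast Gaussian-integral edge

# Stage 1: 1st derivative, then half-wave rectification (keeps one polarity,
# suppressing responses to edges of the opposite sign).
grad = np.maximum(np.gradient(lum, dx), 0.0)

# Stage 2: even-symmetric 2nd-derivative filters over many scales; responses
# are inverted and sigma-normalised to form the scale-space map.
sigmas = np.arange(0.5, 6.0, 0.05)
resp = np.array([-s**1.5 * np.convolve(grad, d2_kernel(s, dx), mode='same')
                 for s in sigmas])

i, j = np.unravel_index(np.argmax(resp), resp.shape)
print(round(sigmas[i], 2), round(x[j], 2))  # peak scale tracks b; position ~0
```

For this edge the peak of the map sits at the edge's centre, and the scale of the peak grows with the edge's blur, which is how the model turns filter outputs into explicit location and blur estimates.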
Abstract:
A multi-scale model of edge coding based on normalized Gaussian derivative filters successfully predicts perceived scale (blur) for a wide variety of edge profiles [Georgeson, M. A., May, K. A., Freeman, T. C. A., & Hesse, G. S. (in press). From filters to features: Scale-space analysis of edge and blur coding in human vision. Journal of Vision]. Our model spatially differentiates the luminance profile, half-wave rectifies the 1st derivative, and then differentiates twice more, to give the 3rd derivative of all regions with a positive gradient. This process is implemented by a set of Gaussian derivative filters with a range of scales. Peaks in the inverted normalized 3rd derivative across space and scale indicate the positions and scales of the edges. The edge contrast can be estimated from the height of the peak. The model provides a veridical estimate of the scale and contrast of edges that have a Gaussian integral profile. Therefore, since scale and contrast are independent stimulus parameters, the model predicts that the perceived value of either of these parameters should be unaffected by changes in the other. This prediction was found to be incorrect: reducing the contrast of an edge made it look sharper, and increasing its scale led to a decrease in the perceived contrast. Our model can account for these effects when the simple half-wave rectifier after the 1st derivative is replaced by a smoothed threshold function described by two parameters. For each subject, one pair of parameters provided a satisfactory fit to the data from all the experiments presented here and in the accompanying paper [May, K. A. & Georgeson, M. A. (2007). Added luminance ramp alters perceived edge blur and contrast: A critical test for derivative-based models of edge coding. Vision Research, 47, 1721-1731]. 
Thus, when we allow for the visual system's insensitivity to very shallow luminance gradients, our multi-scale model can be extended to edge coding over a wide range of contrasts and blurs. © 2007 Elsevier Ltd. All rights reserved.
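The central mechanism here, a smoothed gradient threshold that makes low-contrast edges look sharper, can be demonstrated in a few lines. The logistic-smoothed threshold and its parameter values below are invented for illustration (the paper's two-parameter function is not reproduced); lowering the edge contrast narrows the transmitted gradient profile, which in this class of models maps onto a smaller perceived blur.

```python
import math

def soft_threshold(g, t=0.05, k=0.02):
    """Smoothed threshold transducer: suppresses shallow gradients,
    approximately linear well above t (illustrative stand-in)."""
    return 0.0 if g <= 0 else g / (1.0 + math.exp(-(g - t) / k))

def fwhm(contrast, b=1.0, dx=0.001):
    """Full width at half maximum of the transduced gradient profile
    of a Gaussian-blurred edge with the given contrast."""
    xs = [i * dx for i in range(-5000, 5001)]
    out = [soft_threshold(contrast * math.exp(-x * x / (2 * b * b))
                          / (math.sqrt(2 * math.pi) * b)) for x in xs]
    half = max(out) / 2
    above = [x for x, o in zip(xs, out) if o >= half]
    return max(above) - min(above)

print(fwhm(1.0), fwhm(0.3))  # lower contrast -> narrower internal profile
```

At low contrast a larger fraction of the gradient profile falls into the suppressed region near threshold, so the internal representation of the edge is narrower and the edge is coded as sharper.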
Abstract:
In many models of edge analysis in biological vision, the initial stage is a linear 2nd derivative operation. Such models predict that adding a linear luminance ramp to an edge will have no effect on the edge's appearance, since the ramp has no effect on the 2nd derivative. Our experiments did not support this prediction: adding a negative-going ramp to a positive-going edge (or vice-versa) greatly reduced the perceived blur and contrast of the edge. The effects on a fairly sharp edge were accurately predicted by a nonlinear multi-scale model of edge processing [Georgeson, M. A., May, K. A., Freeman, T. C. A., & Hesse, G. S. (in press). From filters to features: Scale-space analysis of edge and blur coding in human vision. Journal of Vision], in which a half-wave rectifier comes after the 1st derivative filter. But we also found that the ramp affected perceived blur more profoundly when the edge blur was large, and this greater effect was not predicted by the existing model. The model's fit to these data was much improved when the simple half-wave rectifier was replaced by a threshold-like transducer [May, K. A. & Georgeson, M. A. (2007). Blurred edges look faint, and faint edges look sharp: The effect of a gradient threshold in a multi-scale edge coding model. Vision Research, 47, 1705-1720.]. This modified model correctly predicted that the interaction between ramp gradient and edge scale would be much larger for blur perception than for contrast perception. In our model, the ramp narrows an internal representation of the gradient profile, leading to a reduction in perceived blur. This in turn reduces perceived contrast because estimated blur plays a role in the model's estimation of contrast. Interestingly, the model predicts that analogous effects should occur when the width of the window containing the edge is made narrower. 
This has already been confirmed for blur perception; here, we further support the model by showing a similar effect for contrast perception. © 2007 Elsevier Ltd. All rights reserved.
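The ramp effect can be sketched in the same spirit: adding an opposite-going linear ramp subtracts a constant from the edge's gradient profile, so less of the profile survives a gradient threshold and the encoded blur shrinks. The threshold, blur, and ramp values below are illustrative, not fitted.

```python
import math

def suprathreshold_width(ramp_gradient, b=2.0, t=0.01, dx=0.01):
    """Width of the region where the gradient of (blurred edge + opposing
    ramp) stays above a detection threshold t; a proxy for encoded blur."""
    xs = [i * dx for i in range(-2000, 2001)]
    grad = [math.exp(-x * x / (2 * b * b)) / (math.sqrt(2 * math.pi) * b)
            - ramp_gradient for x in xs]
    above = [x for x, g in zip(xs, grad) if g > t]
    return max(above) - min(above)

print(suprathreshold_width(0.0), suprathreshold_width(0.05))
```

The steeper the opposing ramp, the narrower the surviving gradient region, matching the reported reduction in perceived blur, and, since the model derives contrast partly from blur, the accompanying reduction in perceived contrast.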