946 results for Fundamentals in linear algebra
Abstract:
An efficient new Bayesian inference technique is employed for studying critical properties of the Ising linear perceptron and for signal detection in code division multiple access (CDMA). The approach is based on a recently introduced message-passing technique for densely connected systems. Here we study both critical and non-critical regimes. Results obtained in the non-critical regime give rise to a highly efficient signal detection algorithm in the context of CDMA, while in the critical regime one observes a first-order transition line that ends in a continuous phase transition point. Finite-size effects are also studied. © 2006 Elsevier B.V. All rights reserved.
Abstract:
It has been argued that a single two-dimensional visualization plot may not be sufficient to capture all of the interesting aspects of complex data sets, and therefore a hierarchical visualization system is desirable. In this paper we extend an existing locally linear hierarchical visualization system, PhiVis (Bishop98a), in several directions: 1. We allow for non-linear projection manifolds; the basic building block is the Generative Topographic Mapping. 2. We introduce a general formulation of hierarchical probabilistic models consisting of local probabilistic models organized in a hierarchical tree. General training equations are derived, regardless of the position of the model in the tree. 3. Using tools from differential geometry, we derive expressions for local directional curvatures of the projection manifold. Like PhiVis, our system is statistically principled and is built interactively in a top-down fashion using the EM algorithm. It enables the user to interactively highlight those data in the parent visualization plot which are captured by a child model. We also incorporate into our system a hierarchical, locally selective representation of magnification factors and directional curvatures of the projection manifolds. Such information is important for further refinement of the hierarchical visualization plot, as well as for controlling the amount of regularization imposed on the local models. We demonstrate the principle of the approach on a toy data set and apply our system to two more complex 12- and 19-dimensional data sets.
Abstract:
In human vision, the response to luminance contrast at each small region in the image is controlled by a more global process where suppressive signals are pooled over spatial frequency and orientation bands. But what rules govern summation among stimulus components within the suppressive pool? We addressed this question by extending a pedestal plus pattern mask paradigm to use a stimulus with up to three mask components: a vertical 1 c/deg pedestal, plus pattern masks made from either a grating (orientation = -45°) or a plaid (orientation = ±45°), with component spatial frequency of 3 c/deg. The overall contrast of both types of pattern mask was fixed at 20% (i.e., plaid component contrasts were 10%). We found that both of these masks transformed conventional dipper functions (threshold vs. pedestal contrast with no pattern mask) in exactly the same way: The dipper region was raised and shifted to the right, but the dipper handles superimposed. This equivalence of the two pattern masks indicates that contrast summation between the plaid components was perfectly linear prior to the masking stage. Furthermore, the pattern masks did not drive the detecting mechanism above its detection threshold because they did not abolish facilitation by the pedestal (Foley, 1994). Therefore, the pattern masking could not be attributed to within-channel masking, suggesting that linear summation of contrast signals takes place within a suppressive contrast gain pool. We present a quantitative model of the effects and discuss the implications for neurophysiological models of the process. © 2004 ARVO.
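The distinction the abstract draws between linear summation and energy summation within the suppressive pool can be sketched numerically. The toy model below is written in the spirit of the divisive gain-control framework the abstract cites (Foley, 1994); all parameter values and function names are illustrative assumptions, not values from the paper. It shows why a 20% grating mask and a plaid with two 10% components are equivalent only if component contrasts sum linearly before the pool's nonlinearity:

```python
# Toy divisive contrast-gain-control model; parameter values are
# illustrative assumptions, not fitted to the study's data.
def pool_energy(components, q=2.0):
    """Each mask component contributes its own energy to the suppressive pool."""
    return sum(c ** q for c in components)

def pool_linear(components, q=2.0):
    """Component contrasts sum linearly BEFORE the pool's nonlinearity."""
    return sum(components) ** q

def response(c_target, pool, p=2.4, z=0.1, w=0.05):
    """Target response: excitatory drive divided by the suppressive pool."""
    return c_target ** p / (z + w * pool)

grating = [0.20]        # single oblique grating mask at 20% contrast
plaid = [0.10, 0.10]    # plaid mask: two components at 10% contrast each

# Linear summation makes the two masks suppress the target identically
# (consistent with the superimposed dipper handles); energy summation
# would instead predict a weaker plaid mask.
print(pool_linear(grating) == pool_linear(plaid))   # True
print(pool_energy(plaid) < pool_energy(grating))    # True
```

Under energy summation the plaid's pool contribution would be half the grating's, so the observed equivalence of the two masks is what singles out linear summation.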
Spatial pattern analysis of beta-amyloid (Aβ) deposits in Alzheimer disease by linear regression
Abstract:
The spatial patterns of discrete beta-amyloid (Aβ) deposits in brain tissue from patients with Alzheimer disease (AD) were studied using a statistical method based on linear regression, the results being compared with the more conventional variance/mean (V/M) method. Both methods suggested that Aβ deposits occurred in clusters (400 to <12,800 µm in diameter) in all but 1 of the 42 tissues examined. In many tissues, a regular periodicity of the Aβ deposit clusters parallel to the tissue boundary was observed. In 23 of 42 (55%) tissues, the two methods revealed essentially the same spatial patterns of Aβ deposits; in 15 of 42 (36%), the regression method indicated the presence of clusters at a scale not revealed by the V/M method; and in 4 of 42 (9%), there was no agreement between the two methods. Perceived advantages of the regression method are that there is a greater probability of detecting clustering at multiple scales, the dimension of larger Aβ clusters can be estimated more accurately, and the spacing between the clusters may be estimated. However, both methods may be useful, with the regression method providing greater resolution and the V/M method providing greater simplicity and ease of interpretation. Estimates of the distance between regularly spaced Aβ clusters were in the range 2,200-11,800 µm, depending on tissue and cluster size. The regular periodicity of Aβ deposit clusters in many tissues would be consistent with their development in relation to clusters of neurons that give rise to specific neuronal projections.
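As a rough illustration of the variance/mean method discussed above, the sketch below pools deposit counts from contiguous sample fields into blocks of increasing size and computes the V/M ratio at each scale: a ratio near 1 indicates a random (Poisson) pattern, while a ratio well above 1 indicates clustering at that scale. The transect data, field sizes, and cluster spacing are invented for illustration and are not taken from the study:

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical 1-D transect of deposit counts in 128 contiguous sample
# fields, with deposits concentrated in regularly spaced clusters.
counts = np.zeros(128, dtype=int)
for centre in (10, 40, 70, 100):                  # regularly spaced clusters
    counts[centre - 3:centre + 4] += rng.poisson(5, size=7)

def variance_mean_ratio(counts, block):
    """V/M ratio with fields pooled into blocks of `block` contiguous units.
    V/M ~ 1 suggests randomness; V/M >> 1 suggests clustering at this scale."""
    n = len(counts) // block
    pooled = counts[: n * block].reshape(n, block).sum(axis=1)
    return pooled.var(ddof=1) / pooled.mean()

for block in (1, 2, 4, 8, 16):
    print(block, round(variance_mean_ratio(counts, block), 2))
```

Scanning the ratio across block sizes is what lets the method identify the scale(s) at which clustering occurs, which is also where the regression-based alternative claims finer resolution.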
Abstract:
Multiple regression analysis is a complex statistical method with many potential uses. It has also become one of the most abused of all statistical procedures, since anyone with a database and suitable software can carry it out. An investigator should always have a clear hypothesis in mind before carrying out such a procedure, together with knowledge of the limitations of each aspect of the analysis. In addition, multiple regression is probably best used in an exploratory context, identifying variables that might profitably be examined by more detailed studies. Where there are many variables potentially influencing Y, they are likely to be intercorrelated and to account for relatively small amounts of the variance. Any analysis in which R squared is less than 50% should be suspect as probably not indicating the presence of significant variables. A further problem relates to sample size. It is often stated that the number of subjects or patients must be at least 5-10 times the number of variables included in the study [5]. This advice should be taken only as a rough guide, but it does indicate that the variables included should be selected with great care, as inclusion of an obviously unimportant variable may have a significant impact on the sample size required.
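The two rules of thumb above (an R-squared below 50% being suspect, and 5-10 subjects per variable) are easy to check in practice. A minimal sketch with invented data, fitting ordinary least squares via the normal equations and computing R-squared:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical data: n subjects, p candidate predictor variables.
n, p = 60, 4            # n is 15x p, comfortably within the 5-10x guideline
X = rng.normal(size=(n, p))
y = 2.0 * X[:, 0] - 1.5 * X[:, 1] + rng.normal(size=n)

# Ordinary least-squares fit (with an intercept column).
A = np.column_stack([np.ones(n), X])
beta, *_ = np.linalg.lstsq(A, y, rcond=None)
y_hat = A @ beta

# R-squared: proportion of the variance in y explained by the predictors.
ss_res = np.sum((y - y_hat) ** 2)
ss_tot = np.sum((y - y.mean()) ** 2)
r_squared = 1.0 - ss_res / ss_tot

print(f"n/p ratio: {n / p:.0f}, R^2 = {r_squared:.3f}")
```

With only weakly influential, intercorrelated predictors the same computation would return a small R-squared, which is exactly the situation the abstract warns against over-interpreting.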
Abstract:
1. The techniques associated with regression, whether linear or non-linear, are some of the most useful statistical procedures that can be applied in clinical studies in optometry. 2. In some cases, there may be no scientific model of the relationship between X and Y that can be specified in advance and the objective may be to provide a ‘curve of best fit’ for predictive purposes. In such cases, the fitting of a general polynomial type curve may be the best approach. 3. An investigator may have a specific model in mind that relates Y to X and the data may provide a test of this hypothesis. Some of these curves can be reduced to a linear regression by transformation, e.g., the exponential and negative exponential decay curves. 4. In some circumstances, e.g., the asymptotic curve or logistic growth law, a more complex process of curve fitting involving non-linear estimation will be required.
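Point 3 above, reducing an exponential decay curve to a linear regression by transformation, can be sketched as follows; the data and parameter values are invented for illustration:

```python
import numpy as np

rng = np.random.default_rng(2)

# Hypothetical exponential decay y = a * exp(b * x) with multiplicative noise.
a_true, b_true = 5.0, -0.7
x = np.linspace(0.0, 5.0, 40)
y = a_true * np.exp(b_true * x) * np.exp(rng.normal(scale=0.05, size=x.size))

# Taking logs reduces the model to a straight line, ln y = ln a + b * x,
# so ordinary linear regression recovers both parameters.
slope, intercept = np.polyfit(x, np.log(y), 1)
a_hat, b_hat = np.exp(intercept), slope

print(f"a ~ {a_hat:.2f}, b ~ {b_hat:.2f}")
```

Asymptotic or logistic curves admit no such linearizing transformation, which is why point 4 calls for iterative non-linear estimation instead.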
Abstract:
In this paper we examine the equilibrium states of finite amplitude flow in a horizontal fluid layer with differential heating between the two rigid boundaries. The solutions to the Navier-Stokes equations are obtained by means of a perturbation method for evaluating the Landau constants and through a Newton-Raphson iterative method that results from the Fourier expansion of the solutions that bifurcate above the linear stability threshold of infinitesimal disturbances. The results obtained from these two different methods of evaluating the convective flow are compared in the neighborhood of the critical Rayleigh number. We find that for small Prandtl numbers the discrepancy of the two methods is noticeable. © 2009 The Physical Society of Japan.
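A Newton-Raphson iteration of the kind used above to solve for the bifurcating solutions can be sketched generically. The two-equation toy system below merely stands in for the truncated Fourier-amplitude equations; its coefficients are invented and have no connection to the actual convection problem:

```python
import numpy as np

def newton_raphson(f, jac, x0, tol=1e-10, max_iter=50):
    """Solve the non-linear system f(x) = 0 by Newton-Raphson iteration:
    x_{k+1} = x_k - J(x_k)^{-1} f(x_k)."""
    x = np.asarray(x0, dtype=float)
    for _ in range(max_iter):
        step = np.linalg.solve(jac(x), f(x))
        x = x - step
        if np.linalg.norm(step) < tol:
            return x
    raise RuntimeError("Newton-Raphson did not converge")

# Toy stand-in: a cubic amplitude equation coupled with a linear one.
f = lambda v: np.array([0.25 * v[0] - v[0] ** 3 + 0.1 * v[1],
                        v[1] - 0.5 * v[0]])
jac = lambda v: np.array([[0.25 - 3.0 * v[0] ** 2, 0.1],
                          [-0.5, 1.0]])

root = newton_raphson(f, jac, x0=[1.0, 1.0])
print(root)
```

The perturbation (Landau-constant) route instead expands the amplitude analytically near the critical Rayleigh number, which is why the two methods agree there but can drift apart away from onset, as the comparison in the paper illustrates.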
Abstract:
The work described in this thesis is directed towards the reduction of noise levels in the Hoover Turbopower upright vacuum cleaner. The experimental work embodies a study of such factors as the application of noise source identification techniques, investigation of the noise generating principles for each major source and evaluation of the noise reducing treatments. It was found that the design of the vacuum cleaner had not been optimised from the standpoint of noise emission. Important factors such as noise `windows', isolation of vibration at the source, panel rattle, resonances and critical speeds had not been considered. Therefore, a number of experimentally validated treatments are proposed. Their noise reduction benefit together with material and tooling costs are presented. The solutions to the noise problems were evaluated on a standard Turbopower, and the sound power level of the cleaner was reduced from 87.5 dB(A) to 80.4 dB(A) at a cost of 93.6 pence per cleaner. The designers' lack of experience in noise reduction was identified as one of the factors for the low priority given to noise during design of the cleaner. Consequently, the fundamentals of acoustics, principles of noise prediction and absorption, and guidelines for good acoustical design were collated into a Handbook and circulated at Hoover plc. Mechanical variations during production of the motor and the cleaner were found to be important. These caused a wide spread in the noise levels of the cleaners. Subsequently, the manufacturing processes were briefly studied to identify their source, and recommendations for improvement are made. Noise of a product is quality related, and a high level of noise is considered to be a bad feature.
This project suggested that the noise level be used constructively, both as a test on the production line to identify cleaners above a certain noise level and as a way to promote the product by `designing' the characteristics of the sound so that the appliance is pleasant to the user. This project showed that good noise control principles should be implemented early in the design stage. As yet there are no mandatory noise limits or noise-labelling requirements for household appliances. However, the literature suggests that noise-labelling is likely in the near future and that the requirement will be to display the A-weighted sound power level. The `noys' scale of perceived noisiness was nevertheless found to be more appropriate for rating appliance noise: it is linear, so a sound that seems twice as noisy has twice the value in noys, and it also takes into account the presence of pure tones, which can lead to annoyance even in the absence of a high overall noise level.
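The linearity of the noys scale described above can be made concrete. Assuming the standard relation between perceived noise level L_PN (in PNdB) and perceived noisiness N (in noys), L_PN = 40 + 10·log2(N), a doubling of noisiness corresponds to a fixed 10 PNdB increase:

```python
import math

def noys_to_pndb(n):
    """Perceived noise level (PNdB) from perceived noisiness (noys),
    assuming the standard relation L_PN = 40 + 10*log2(N)."""
    return 40.0 + 10.0 * math.log2(n)

def pndb_to_noys(level):
    """Inverse conversion: perceived noisiness in noys from PNdB."""
    return 2.0 ** ((level - 40.0) / 10.0)

# Doubling the noys value (a sound judged twice as noisy) adds 10 PNdB:
print(noys_to_pndb(32.0))   # 90.0 PNdB
print(noys_to_pndb(64.0))   # 100.0 PNdB
```

By contrast, dB(A) figures compress perceived differences logarithmically, which is the thesis's argument for preferring noys when rating appliance noise.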
Abstract:
In this paper we examine the equilibrium states of periodic finite amplitude flow in a horizontal channel with differential heating between the two rigid boundaries. The solutions to the Navier-Stokes equations are obtained by means of a perturbation method for evaluating the Landau coefficients and through a Newton-Raphson iterative method that results from the Fourier expansion of the solutions that bifurcate above the linear stability threshold of infinitesimal disturbances. The results obtained from these two different methods of evaluating the convective flow are compared in the neighbourhood of the critical Rayleigh number. We find that for small Prandtl numbers the discrepancy of the two methods is noticeable.
Abstract:
Exploratory analysis of data seeks to find common patterns to gain insights into the structure and distribution of the data. In geochemistry it is a valuable means of gaining insight into the complicated processes making up a petroleum system. Typically, linear visualisation methods like principal components analysis, linked plots, or brushing are used. These methods cannot be employed directly when dealing with missing data, and although they struggle to capture global non-linear structures in the data, they can do so locally. This thesis discusses a complementary approach based on a non-linear probabilistic model. The generative topographic mapping (GTM) enables the visualisation of the effects of very many variables on a single plot, which is able to incorporate more structure than a two-dimensional principal components plot. The model can deal with uncertainty and missing data, and allows for the exploration of the non-linear structure in the data. In this thesis a novel approach to initialise the GTM with arbitrary projections is developed. This makes it possible to combine GTM with algorithms like Isomap and fit complex non-linear structures like the Swiss roll. Another novel extension is the incorporation of prior knowledge about the structure of the covariance matrix. This extension greatly enhances the modelling capabilities of the algorithm, resulting in a better fit to the data and better imputation capabilities for missing data. Additionally, an extensive benchmark study of the missing-data imputation capabilities of GTM is performed. Further, a novel approach, based on missing data, is introduced to benchmark the fit of probabilistic visualisation algorithms on unlabelled data. Finally, the work is complemented by evaluating the algorithms on real-life datasets from geochemical projects.
Abstract:
Exploratory analysis of petroleum geochemical data seeks to find common patterns to help distinguish between different source rocks, oils and gases, and to explain their source, maturity and any intra-reservoir alteration. However, at the outset, one is typically faced with (a) a large matrix of samples, each with a range of molecular and isotopic properties, (b) a spatially and temporally unrepresentative sampling pattern, (c) noisy data and (d) often, a large number of missing values. This inhibits analysis using conventional statistical methods. Typically, visualisation methods like principal components analysis are used, but these methods are not easily able to deal with missing data nor can they capture non-linear structure in the data. One approach to discovering complex, non-linear structure in the data is through the use of linked plots, or brushing, while ignoring the missing data. In this paper we introduce a complementary approach based on a non-linear probabilistic model. Generative topographic mapping enables the visualisation of the effects of very many variables on a single plot, while also dealing with missing data. We show how using generative topographic mapping also provides an optimal method with which to replace missing values in two geochemical datasets, particularly where a large proportion of the data is missing.
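To make the limitation concrete: principal components analysis cannot be applied at all to a matrix containing missing values, so some imputation must happen first. The sketch below uses the crudest baseline, column-mean imputation, before a PCA projection onto two dimensions; the data are invented, and this is precisely the kind of naive replacement that a model-based method such as generative topographic mapping is intended to improve upon:

```python
import numpy as np

rng = np.random.default_rng(3)

# Hypothetical geochemical-style matrix: samples x measured properties,
# with ~15% of entries missing (NaN), as is common in such datasets.
X = rng.normal(size=(100, 6)) @ rng.normal(size=(6, 6))
mask = rng.random(X.shape) < 0.15
X[mask] = np.nan

# Simplest workaround: replace each missing value with its column mean.
col_means = np.nanmean(X, axis=0)
X_filled = np.where(np.isnan(X), col_means, X)

# Project onto the first two principal components via the SVD.
Xc = X_filled - X_filled.mean(axis=0)
U, s, Vt = np.linalg.svd(Xc, full_matrices=False)
scores = Xc @ Vt[:2].T          # 2-D coordinates for a visualisation plot
print(scores.shape)
```

Mean imputation flattens structure exactly where data are missing, whereas a probabilistic model can exploit the correlations between variables when filling the gaps, which is the basis of the comparison reported in the paper.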
Abstract:
We present measurements on the non-linear temperature response of fibre Bragg gratings recorded in pure and trans-4-stilbenemethanol-doped polymethyl methacrylate (PMMA) holey fibres.
In-fiber linear polarizer based on UV-inscribed 45° tilted grating in polarization maintaining fiber
Abstract:
We report an in-fiber linear polarizer structured by UV-inscribing a 45° tilted fiber grating (TFG) into polarization maintaining (PM) fiber along its principal axis. The polarization extinction ratio (PER) achieved by a 48 mm long 45° TFG has reached 46 dB at 1550 nm, and the overall PER is >40 dB over a 50 nm wavelength range. Such 45° TFG-based polarizers have many advantages over conventional products, including low loss, low cost, a simple fabrication process, and no physical modification to the fiber, thus offering high stability and the capability of handling high power.
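For context on the figures quoted above, the polarization extinction ratio expresses, in decibels, the ratio of transmitted power in the passed polarization state to that in the blocked state. A minimal sketch (the function name is assumed for illustration):

```python
import math

def extinction_ratio_db(p_passed, p_blocked):
    """Polarization extinction ratio (PER) in dB from the transmitted
    powers of the passed and blocked polarization states."""
    return 10.0 * math.log10(p_passed / p_blocked)

# A PER of 46 dB means the blocked polarization is suppressed by a
# factor of roughly 40,000 in power relative to the passed state.
suppression = 10.0 ** (46.0 / 10.0)
print(round(suppression))
```

Even the >40 dB floor across the 50 nm band corresponds to a power suppression factor above 10,000, which is why such gratings are viable as stand-alone polarizers.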