310 results for Poodle Toy


Relevance:

10.00%

Publisher:

Abstract:

Cerebral palsy, a non-progressive disorder, impairs movement and posture. Current physiotherapy is oriented towards holistic treatment, and play promotes neuropsychomotor development. The present study aims to investigate the opinion of physiotherapists working in paediatric neurology on the use of toys in their clinical practice, and to verify their possible use in interventions with children with cerebral palsy. An opinion questionnaire was first administered to 50 physiotherapists from the various clinics of the Associação de Apoio à Criança com Deficiência, AACD - SP, assessing the use of toys for the various physiotherapy goals; 60 treatment sessions of children with cerebral palsy, in aquatic and land-based physiotherapy, were then observed, identifying the use of each category of toy in relation to the therapeutic goal. The questionnaire data revealed, in decreasing order, the use of sensorimotor toys 57.4% for gaining balance (B), 22.2% for motor coordination (MC), 18.5% for postural acquisitions (PA) and 2% for muscle relaxation (MR). For make-believe games: 37% (B), 39% (PA) and 24% (MC). For rule-based games: 54% (B), 35% (MC) and 11% (PA). For construction games: 52% (MC), 24% (B) and 24% (PA). The observation data revealed that the main therapeutic goals pursued with the use of toys were stretching, in the first 10; and muscle strengthening, balance and gait training, from 10 to 40. Regarding the type of toy observed, make-believe play predominated at the beginning and end of the session, with the other categories alternating in between. The observation data agreed with the questionnaire data, revealing systematic use of toys for physiotherapy goals. (AU)

Relevance:

10.00%

Publisher:

Abstract:

Minimization of a sum-of-squares or cross-entropy error function leads to network outputs which approximate the conditional averages of the target data, conditioned on the input vector. For classification problems, with a suitably chosen target coding scheme, these averages represent the posterior probabilities of class membership, and so can be regarded as optimal. For problems involving the prediction of continuous variables, however, the conditional averages provide only a very limited description of the properties of the target variables. This is particularly true for problems in which the mapping to be learned is multi-valued, as often arises in the solution of inverse problems, since the average of several correct target values is not necessarily itself a correct value. In order to obtain a complete description of the data, for the purposes of predicting the outputs corresponding to new input vectors, we must model the conditional probability distribution of the target data, again conditioned on the input vector. In this paper we introduce a new class of network models obtained by combining a conventional neural network with a mixture density model. The complete system is called a Mixture Density Network, and can in principle represent arbitrary conditional probability distributions in the same way that a conventional neural network can represent arbitrary functions. We demonstrate the effectiveness of Mixture Density Networks using both a toy problem and a problem involving robot inverse kinematics.
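
A minimal numpy sketch of the mixture-density idea described above, under assumed shapes (a 1-D target, and raw network outputs holding one mixing coefficient, one mean and one log-width per component); it illustrates the error function only and is not the paper's implementation:

```python
import numpy as np

def mdn_params(raw, n_components):
    """Split raw network outputs into mixture parameters for a 1-D target.

    raw: (n_samples, 3 * n_components) -- one mixing coefficient, one mean
    and one log-width per component.
    """
    assert raw.shape[1] == 3 * n_components
    alpha_logits, mu, log_sigma = np.split(raw, 3, axis=1)
    # Softmax keeps the mixing coefficients positive and summing to one.
    alpha = np.exp(alpha_logits - alpha_logits.max(axis=1, keepdims=True))
    alpha /= alpha.sum(axis=1, keepdims=True)
    sigma = np.exp(log_sigma)          # exponential keeps the widths positive
    return alpha, mu, sigma

def mdn_error(raw, t, n_components):
    """Error function: negative log-likelihood of targets t under the mixture."""
    alpha, mu, sigma = mdn_params(raw, n_components)
    t = t.reshape(-1, 1)
    norm = np.exp(-0.5 * ((t - mu) / sigma) ** 2) / (np.sqrt(2 * np.pi) * sigma)
    return -np.log((alpha * norm).sum(axis=1) + 1e-12).mean()

# Toy check: 2 components, 5 samples of raw network output.
rng = np.random.default_rng(0)
raw = rng.normal(size=(5, 6))
t = rng.normal(size=5)
print(mdn_error(raw, t, n_components=2))
```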

Relevance:

10.00%

Publisher:

Abstract:

There is currently considerable interest in developing general non-linear density models based on latent, or hidden, variables. Such models have the ability to discover the presence of a relatively small number of underlying 'causes' which, acting in combination, give rise to the apparent complexity of the observed data set. Unfortunately, to train such models generally requires large computational effort. In this paper we introduce a novel latent variable algorithm which retains the general non-linear capabilities of previous models but which uses a training procedure based on the EM algorithm. We demonstrate the performance of the model on a toy problem and on data from flow diagnostics for a multi-phase oil pipeline.
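
A sketch of the E-step on which such an EM-based training procedure rests: the non-linear mapping places a grid of latent points into data space, each acting as the centre of an isotropic Gaussian, and responsibilities of the grid points for each observation are computed. The mapping and grid below are illustrative stand-ins, not the paper's model:

```python
import numpy as np

def responsibilities(X, centres, beta):
    """E-step: posterior probability of each latent grid point for each data point.

    X: (N, D) data; centres: (K, D) images of the latent grid under the current
    non-linear mapping; beta: inverse noise variance of the isotropic Gaussians.
    """
    d2 = ((X[:, None, :] - centres[None, :, :]) ** 2).sum(-1)   # (N, K) squared distances
    log_p = -0.5 * beta * d2
    log_p -= log_p.max(axis=1, keepdims=True)                   # numerical stability
    R = np.exp(log_p)
    return R / R.sum(axis=1, keepdims=True)

# Illustrative mapping of a 1-D latent grid into 2-D data space (a curved manifold).
latent_grid = np.linspace(-1, 1, 20)
centres = np.stack([latent_grid, latent_grid ** 2], axis=1)
X = centres + np.random.default_rng(1).normal(scale=0.05, size=centres.shape)
R = responsibilities(X, centres, beta=100.0)
print(R.shape, R.sum(axis=1)[:3])   # each row sums to one
```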

Relevance:

10.00%

Publisher:

Abstract:

Latent variable models represent the probability density of data in a space of several dimensions in terms of a smaller number of latent, or hidden, variables. A familiar example is factor analysis, which is based on a linear transformation between the latent space and the data space. In this paper we introduce a form of non-linear latent variable model called the Generative Topographic Mapping, for which the parameters of the model can be determined using the EM algorithm. GTM provides a principled alternative to the widely used Self-Organizing Map (SOM) of Kohonen (1982), and overcomes most of the significant limitations of the SOM. We demonstrate the performance of the GTM algorithm on a toy problem and on simulated data from flow diagnostics for a multi-phase oil pipeline.
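
A compact sketch of the GTM's structure under illustrative settings: a regular grid of latent points is mapped through radial basis functions into data space, y(x) = W phi(x), and a trained model visualises each data point at its responsibility-weighted (posterior mean) latent position. EM training of the weights and noise parameter is omitted here; the weights below are random placeholders:

```python
import numpy as np

# Regular 10x10 grid of latent points in 2-D and a 3x3 grid of RBF centres.
lin = np.linspace(-1, 1, 10)
Z = np.array([[a, b] for a in lin for b in lin])                          # (100, 2) latent grid
rbf_centres = np.array([[a, b] for a in (-1, 0, 1) for b in (-1, 0, 1)])  # (9, 2)

def phi(Z, centres, width=0.5):
    """Radial basis function design matrix for the latent points."""
    d2 = ((Z[:, None, :] - centres[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2 * width ** 2))

Phi = phi(Z, rbf_centres)                    # (100, 9)
rng = np.random.default_rng(0)
W = rng.normal(size=(Phi.shape[1], 3))       # placeholder weights (EM training omitted)
Y = Phi @ W                                  # images y(x) = W phi(x) of the grid in 3-D

def posterior_mean_projection(X, Y, Z, beta=10.0):
    """Visualise each data point at its responsibility-weighted mean latent position."""
    d2 = ((X[:, None, :] - Y[None, :, :]) ** 2).sum(-1)
    R = np.exp(-0.5 * beta * (d2 - d2.min(axis=1, keepdims=True)))
    R /= R.sum(axis=1, keepdims=True)
    return R @ Z                             # (N, 2) latent coordinates for plotting

X = Y[:5] + rng.normal(scale=0.1, size=(5, 3))
print(posterior_mean_projection(X, Y, Z))
```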

Relevance:

10.00%

Publisher:

Abstract:

Visualization has proven to be a powerful and widely applicable tool for the analysis and interpretation of data. Most visualization algorithms aim to find a projection from the data space down to a two-dimensional visualization space. However, for complex data sets living in a high-dimensional space it is unlikely that a single two-dimensional projection can reveal all of the interesting structure. We therefore introduce a hierarchical visualization algorithm which allows the complete data set to be visualized at the top level, with clusters and sub-clusters of data points visualized at deeper levels. The algorithm is based on a hierarchical mixture of latent variable models, whose parameters are estimated using the expectation-maximization algorithm. We demonstrate the principle of the approach first on a toy data set, and then apply the algorithm to the visualization of a synthetic data set in 12 dimensions obtained from a simulation of multi-phase flows in oil pipelines and to data in 36 dimensions derived from satellite images.
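
An illustrative two-level stand-in for the hierarchical idea (not the authors' EM-trained hierarchical mixture of latent variable models): the full data set is shown in one top-level projection, and each cluster found by a crude split is given its own deeper-level projection:

```python
import numpy as np

def pca_2d(X):
    """Project data onto its first two principal components."""
    Xc = X - X.mean(axis=0)
    _, _, Vt = np.linalg.svd(Xc, full_matrices=False)
    return Xc @ Vt[:2].T

def two_means(X, n_iter=20, seed=0):
    """Crude two-cluster split (a stand-in for the soft, EM-trained mixture)."""
    rng = np.random.default_rng(seed)
    centres = X[rng.choice(len(X), 2, replace=False)]
    for _ in range(n_iter):
        labels = np.argmin(((X[:, None] - centres[None]) ** 2).sum(-1), axis=1)
        centres = np.array([X[labels == k].mean(axis=0) if np.any(labels == k)
                            else centres[k] for k in (0, 1)])
    return labels

rng = np.random.default_rng(1)
X = np.vstack([rng.normal(0, 1, (100, 12)), rng.normal(4, 1, (100, 12))])

top_plot = pca_2d(X)                                       # top-level view of all data
labels = two_means(X)
child_plots = [pca_2d(X[labels == k]) for k in (0, 1)]     # deeper-level views per cluster
print(top_plot.shape, [p.shape for p in child_plots])
```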

Relevance:

10.00%

Publisher:

Abstract:

Latent variable models represent the probability density of data in a space of several dimensions in terms of a smaller number of latent, or hidden, variables. A familiar example is factor analysis, which is based on a linear transformation between the latent space and the data space. In this paper we introduce a form of non-linear latent variable model called the Generative Topographic Mapping, for which the parameters of the model can be determined using the EM algorithm. GTM provides a principled alternative to the widely used Self-Organizing Map (SOM) of Kohonen (1982), and overcomes most of the significant limitations of the SOM. We demonstrate the performance of the GTM algorithm on a toy problem and on simulated data from flow diagnostics for a multi-phase oil pipeline.

Relevance:

10.00%

Publisher:

Abstract:

The Generative Topographic Mapping (GTM) algorithm of Bishop et al. (1997) has been introduced as a principled alternative to the Self-Organizing Map (SOM). As well as avoiding a number of deficiencies in the SOM, the GTM algorithm has the key property that the smoothness properties of the model are decoupled from the reference vectors, and are described by a continuous mapping from a lower-dimensional latent space into the data space. Magnification factors, which are approximated by the difference between code-book vectors in SOMs, can therefore be evaluated for the GTM model as continuous functions of the latent variables using the techniques of differential geometry. They play an important role in data visualization by highlighting the boundaries between data clusters, and are illustrated here for both a toy data set, and a problem involving the identification of crab species from morphological data.
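
A sketch of the magnification-factor computation as a continuous function of the latent variables: the local area magnification of a smooth mapping from 2-D latent space into data space is sqrt(det(J^T J)), with the Jacobian J estimated here by finite differences. The mapping below is an arbitrary illustrative one; in the GTM it would be the trained mapping y(x; W):

```python
import numpy as np

def magnification_factor(f, x, eps=1e-5):
    """Local area magnification sqrt(det(J^T J)) of a mapping f: R^2 -> R^D,
    with the Jacobian J estimated by central finite differences at latent point x."""
    x = np.asarray(x, dtype=float)
    J = np.stack([(f(x + eps * e) - f(x - eps * e)) / (2 * eps)
                  for e in np.eye(2)], axis=1)             # (D, 2) Jacobian
    return np.sqrt(np.linalg.det(J.T @ J))

# Arbitrary smooth illustrative mapping from 2-D latent space into 3-D data space
# (in the GTM this would be the trained mapping y(x; W)).
f = lambda x: np.array([x[0], x[1], np.sin(3 * x[0]) * np.cos(3 * x[1])])

for point in [(-0.5, -0.5), (0.0, 0.0), (0.5, 0.5)]:
    print(point, magnification_factor(f, point))
```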

Relevance:

10.00%

Publisher:

Abstract:

We develop an approach for a sparse representation for Gaussian Process (GP) models in order to overcome the limitations of GPs caused by large data sets. The method is based on a combination of a Bayesian online algorithm together with a sequential construction of a relevant subsample of the data which fully specifies the prediction of the model. Experimental results on toy examples and large real-world datasets indicate the efficiency of the approach.
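
A heavily simplified sketch of the sparse idea (not the paper's Bayesian online algorithm): prediction is specified by a small subset of the training data, here chosen at random rather than constructed sequentially, using a standard RBF kernel:

```python
import numpy as np

def rbf_kernel(A, B, lengthscale=1.0):
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-0.5 * d2 / lengthscale ** 2)

def sparse_gp_predict(X_train, y_train, X_test, n_basis=50, noise=0.1, seed=0):
    """GP regression whose prediction is specified by a small subset of the data
    (here chosen at random, as a stand-in for the sequentially built subset)."""
    rng = np.random.default_rng(seed)
    idx = rng.choice(len(X_train), size=min(n_basis, len(X_train)), replace=False)
    Xb, yb = X_train[idx], y_train[idx]
    K = rbf_kernel(Xb, Xb) + noise ** 2 * np.eye(len(Xb))   # regularised Gram matrix
    alpha = np.linalg.solve(K, yb)
    return rbf_kernel(X_test, Xb) @ alpha                   # predictive mean

rng = np.random.default_rng(1)
X = rng.uniform(-3, 3, size=(2000, 1))
y = np.sin(X[:, 0]) + rng.normal(scale=0.1, size=2000)
X_test = np.linspace(-3, 3, 5).reshape(-1, 1)
print(sparse_gp_predict(X, y, X_test))
```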

Relevance:

10.00%

Publisher:

Abstract:

It has been argued that a single two-dimensional visualization plot may not be sufficient to capture all of the interesting aspects of complex data sets, and therefore a hierarchical visualization system is desirable. In this paper we extend an existing locally linear hierarchical visualization system PhiVis (Bishop98a) in several directions: (1) We allow for non-linear projection manifolds. The basic building block is the Generative Topographic Mapping. (2) We introduce a general formulation of hierarchical probabilistic models consisting of local probabilistic models organized in a hierarchical tree. General training equations are derived, regardless of the position of the model in the tree. (3) Using tools from differential geometry we derive expressions for local directional curvatures of the projection manifold. Like PhiVis, our system is statistically principled and is built interactively in a top-down fashion using the EM algorithm. It enables the user to interactively highlight those data in the parent visualization plot which are captured by a child model. We also incorporate into our system a hierarchical, locally selective representation of magnification factors and directional curvatures of the projection manifolds. Such information is important for further refinement of the hierarchical visualization plot, as well as for controlling the amount of regularization imposed on the local models. We demonstrate the principle of the approach on a toy data set and apply our system to two more complex 12- and 19-dimensional data sets.

Relevance:

10.00%

Publisher:

Abstract:

We analyse how the Generative Topographic Mapping (GTM) can be modified to cope with missing values in the training data. Our approach is based on an Expectation-Maximisation (EM) method which estimates the parameters of the mixture components and at the same time deals with the missing values. We incorporate this algorithm into a hierarchical GTM. We verify the method on a toy data set (using a single GTM) and a realistic data set (using a hierarchical GTM). The results show our algorithm can help to construct informative visualisation plots, even when some of the training points are corrupted with missing values.
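
A sketch of the key step under the approach described, assuming isotropic Gaussian mixture components (as in the GTM): when computing responsibilities, each data point contributes only through its observed dimensions, with missing entries marked as NaN. This is an illustration, not the paper's full EM update:

```python
import numpy as np

def responsibilities_missing(X, centres, beta):
    """Responsibilities of isotropic Gaussian components for data with missing
    values (NaN): each point's likelihood uses only its observed dimensions."""
    N, K = len(X), len(centres)
    R = np.empty((N, K))
    for n in range(N):
        obs = ~np.isnan(X[n])                                # observed dimensions
        d2 = ((X[n, obs] - centres[:, obs]) ** 2).sum(axis=1)
        log_p = -0.5 * beta * d2
        log_p -= log_p.max()                                 # numerical stability
        R[n] = np.exp(log_p)
        R[n] /= R[n].sum()
    return R

centres = np.array([[0.0, 0.0, 0.0], [2.0, 2.0, 2.0]])
X = np.array([[0.1, np.nan, -0.2],       # second dimension missing
              [np.nan, 1.9, 2.1]])       # first dimension missing
print(responsibilities_missing(X, centres, beta=4.0))
```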

Relevance:

10.00%

Publisher:

Abstract:

It has been argued that a single two-dimensional visualization plot may not be sufficient to capture all of the interesting aspects of complex data sets, and therefore a hierarchical visualization system is desirable. In this paper we extend an existing locally linear hierarchical visualization system PhiVis (Bishop98a) in several directions: (1) We allow for non-linear projection manifolds. The basic building block is the Generative Topographic Mapping (GTM). (2) We introduce a general formulation of hierarchical probabilistic models consisting of local probabilistic models organized in a hierarchical tree. General training equations are derived, regardless of the position of the model in the tree. (3) Using tools from differential geometry we derive expressions for local directional curvatures of the projection manifold. Like PhiVis, our system is statistically principled and is built interactively in a top-down fashion using the EM algorithm. It enables the user to interactively highlight those data in the ancestor visualization plots which are captured by a child model. We also incorporate into our system a hierarchical, locally selective representation of magnification factors and directional curvatures of the projection manifolds. Such information is important for further refinement of the hierarchical visualization plot, as well as for controlling the amount of regularization imposed on the local models. We demonstrate the principle of the approach on a toy data set and apply our system to two more complex 12- and 18-dimensional data sets.

Relevance:

10.00%

Publisher:

Abstract:

An interactive hierarchical Generative Topographic Mapping (HGTM) has been developed to visualise complex data sets. In this paper, we build a more general visualisation system by extending the HGTM visualisation system in 3 directions: (1) We generalize HGTM to noise models from the exponential family of distributions. The basic building block is the Latent Trait Model (LTM) developed in (Kabanpami). (2) We give the user a choice of initializing the child plots of the current plot in either interactive or automatic mode. In the interactive mode the user interactively selects "regions of interest" as in HGTM, whereas in the automatic mode an unsupervised minimum message length (MML)-driven construction of a mixture of LTMs is employed. (3) We derive general formulas for magnification factors in latent trait models. Magnification factors are a useful tool to improve our understanding of the visualisation plots, since they can highlight the boundaries between data clusters. The unsupervised construction is particularly useful when high-level plots are covered with dense clusters of highly overlapping data projections, making it difficult to use the interactive mode. Such a situation often arises when visualizing large data sets. We illustrate our approach on a toy example and apply our system to three more complex real data sets.

Relevance:

10.00%

Publisher:

Abstract:

As a part of the Managing Uncertainty in Complex Models (MUCM) project, research at Aston University will develop methods for dimensionality reduction of the input and/or output spaces of models, as seen within the emulator framework. Towards this end, this report describes a framework for generating toy datasets, whose underlying structure is understood, to facilitate early investigations of dimensionality reduction methods and to gain a deeper understanding of the algorithms employed, both in terms of how effective they are for given types of models and situations, and in terms of their speed in applications and how this scales with various factors. The framework, which allows the evaluation of both screening and projection approaches to dimensionality reduction, is described. We also describe the screening and projection methods currently under consideration and present some preliminary results. The aim of this draft of the report is to solicit feedback from the project team on the dataset generation framework, the methods we propose to use, and suggestions for extensions that should be considered.
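
A minimal illustrative generator in the spirit described (the function, dimensions and names are assumptions, not the project's actual framework): the output depends only on a known low-dimensional projection of the inputs, so both screening and projection methods can be judged against the true structure:

```python
import numpy as np

def make_toy_dataset(n_samples=500, n_inputs=10, n_active=2, noise=0.01, seed=0):
    """Generate (X, y) whose output depends only on a known low-dimensional
    projection of the inputs, so dimensionality-reduction methods can be
    evaluated against the true underlying structure."""
    rng = np.random.default_rng(seed)
    X = rng.uniform(-1, 1, size=(n_samples, n_inputs))
    A = rng.normal(size=(n_inputs, n_active))      # true projection, kept for evaluation
    Z = X @ A                                      # the hidden low-dimensional 'causes'
    y = np.sin(Z[:, 0]) + Z[:, 1] ** 2 + rng.normal(scale=noise, size=n_samples)
    return X, y, A

X, y, A_true = make_toy_dataset()
print(X.shape, y.shape, A_true.shape)
```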

Relevance:

10.00%

Publisher:

Abstract:

It has been argued that a single two-dimensional visualization plot may not be sufficient to capture all of the interesting aspects of complex data sets, and therefore a hierarchical visualization system is desirable. In this paper we extend an existing locally linear hierarchical visualization system PhiVis (Bishop98a) in several directions: 1. We allow for non-linear projection manifolds. The basic building block is the Generative Topographic Mapping. 2. We introduce a general formulation of hierarchical probabilistic models consisting of local probabilistic models organized in a hierarchical tree. General training equations are derived, regardless of the position of the model in the tree. 3. Using tools from differential geometry we derive expressions for local directional curvatures of the projection manifold. Like PhiVis, our system is statistically principled and is built interactively in a top-down fashion using the EM algorithm. It enables the user to interactively highlight those data in the parent visualization plot which are captured by a child model. We also incorporate into our system a hierarchical, locally selective representation of magnification factors and directional curvatures of the projection manifolds. Such information is important for further refinement of the hierarchical visualization plot, as well as for controlling the amount of regularization imposed on the local models. We demonstrate the principle of the approach on a toy data set and apply our system to two more complex 12- and 19-dimensional data sets.

Relevance:

10.00%

Publisher:

Abstract:

In the bulge test, a sheet metal specimen is clamped over a circular hole in a die and formed into a bulge by the hydraulic pressure on one side of the specimen. As the unsupported part of the specimen is deformed in this way, its area is increased; in other words, the material is generally stretched and its thickness generally decreased. The stresses causing this stretching action are the membrane stresses in the shell generated by the hydraulic pressure, in the same way as the rubber in a toy balloon is stretched by the membrane stresses caused by the air inside it. The bulge test is a widely used sheet metal test to determine the "formability" of sheet materials.

Research on this forming process (2)-(15) has hitherto been almost exclusively confined to predicting the behaviour of the bulged specimen through the constitutive equations (stresses and strains in relation to displacements and shapes) and the empirical work-hardening characteristics of the material as determined in the tension test. In the present study the approach is reversed; the stresses and strains in the specimen are measured and determined from the geometry of the deformed shell. Thus, the bulge test can be used for determining the stress-strain relationship in the material under actual conditions in sheet metal forming processes.

When sheet materials are formed by fluid pressure, the workpiece assumes an approximately spherical shape. The exact nature and magnitude of the deviation from the perfect sphere can be defined and measured by an index called prolateness. The distribution of prolateness throughout the workpiece at any particular stage of the forming process is of fundamental significance, because it determines the variation of the stress ratio on which the mode of deformation depends. It is found that, before the process becomes unstable in sheet metal, the workpiece is exactly spherical only at the pole and at an annular ring. Between the pole and this annular ring the workpiece is more pointed than a sphere, and outside this ring it is flatter than a sphere.

In the forming of sheet materials, the stresses, and hence the incremental strains, are closely related to the curvatures of the workpiece. This relationship between geometry and state of stress can be formulated quantitatively through prolateness. The determination of the magnitudes of prolateness, however, requires special techniques. The success of the experimental work is due to the technique of measuring the profile inclination of the meridional section very accurately. A travelling microscope, workshop protractor and surface plate are used for measurements of circumferential and meridional tangential strains. The curvatures can be calculated from geometry. If, however, the shape of the workpiece is expressed in terms of the current radial (r) and axial (L) coordinates, it is very difficult to calculate the curvatures to an adequate degree of accuracy, owing to the double differentiation involved. In this project, a first differentiation is, in effect, by-passed by measuring the profile inclination directly, and the second differentiation is performed in a round-about way, as explained in later chapters. The variations of the stresses in the workpiece thus observed have not, to the knowledge of the author, been reported experimentally.

The static strength of shells to withstand fluid pressure and their buckling strength under concentrated loads both depend on the distribution of the thickness. Thickness distribution can be controlled to a limited extent by changing the work-hardening characteristics of the work material and by imposing constraints. A technique is provided in this thesis for determining accurately the stress distribution, on which the strains associated with thinning depend. Whether a problem of controlled thickness distribution is tackled by theory, by experiments, or by both combined, the analysis in this thesis supplies the theoretical framework and some useful experimental techniques for the research applied to particular problems. The improvement of formability by allowing draw-in can also be analysed with the same theoretical and experimental techniques.

Results on stress-strain relationships are usually represented by single stress-strain curves plotted either between one stress and one strain (as in the tension or compression tests) or between the effective stress and effective strain, as in tests on tubular specimens under combined tension, torsion and internal pressure. In this study, the triaxial stresses and strains are plotted simultaneously in triangular coordinates. Thus, both stress and strain are represented by vectors, and the relationship between them by the relationship between two vector functions. From the results so obtained, conclusions are drawn on both the behaviour and the properties of the material in the bulge test. The stress ratios are generally equal to the strain-rate ratios (stress vectors collinear with incremental strain vectors), and the work-hardening characteristics, which apply only to the particular strain paths, are deduced.

Plastic instability of the material is generally considered to have been reached when the oil pressure has attained its maximum value, so that further deformation occurs under a constant or lower pressure. It is found that the instability regime of deformation has already occurred long before the maximum pressure is attained. Thus, a new concept of instability is proposed; for this criterion, instability can occur for any type of pressure growth curve.
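
A small numerical sketch of the geometric step described above, assuming the standard axisymmetric-shell relations rather than the thesis's own measurement procedure: given the directly measured profile inclination at a set of radii, the meridional curvature follows from a single differentiation of the inclination along the meridian, and the circumferential curvature from sin(phi)/r. A perfect sphere is used as a check, since both curvatures should then equal 1/R:

```python
import numpy as np

def curvatures_from_inclination(r, phi):
    """Meridional and circumferential curvatures of an axisymmetric bulged surface
    from profile inclination phi(r) (radians) measured at radii r.

    kappa_m = d(phi)/ds = cos(phi) * d(phi)/dr   (s = meridional arc length)
    kappa_c = sin(phi) / r                       (hoop curvature; equals kappa_m at the pole)
    """
    dphi_dr = np.gradient(phi, r)
    kappa_m = np.cos(phi) * dphi_dr
    kappa_c = np.where(r > 0, np.sin(phi) / np.maximum(r, 1e-12), kappa_m)
    return kappa_m, kappa_c

# Check on a perfect sphere of radius R, for which both curvatures are 1/R everywhere.
R = 50.0                                  # sphere radius (e.g. mm)
r = np.linspace(0.0, 30.0, 16)            # radial measurement stations
phi = np.arcsin(r / R)                    # profile inclination of a spherical cap
kappa_m, kappa_c = curvatures_from_inclination(r, phi)
print(kappa_m[:3], kappa_c[:3])           # both approximately 1/R = 0.02
```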