935 results for Analysis Tools


Relevance:

70.00%

Publisher:

Abstract:

Copyright © 2014 by the International Machine Learning Society (IMLS). All rights reserved. Classical methods such as Principal Component Analysis (PCA) and Canonical Correlation Analysis (CCA) are ubiquitous in statistics. However, these techniques can only reveal linear relationships in data. Although nonlinear variants of PCA and CCA have been proposed, they are computationally prohibitive at large scale. In a separate strand of recent research, randomized methods have been proposed to construct features that help reveal nonlinear patterns in data. For basic tasks such as regression or classification, random features exhibit little or no loss in performance while achieving drastic savings in computational requirements. In this paper we leverage randomness to design scalable new variants of nonlinear PCA and CCA; our ideas extend to key multivariate analysis tools such as spectral clustering and LDA. We demonstrate our algorithms through experiments on real-world data, comparing against the state of the art. A simple R implementation of the presented algorithms is provided.
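The authors provide an R implementation; the core recipe — replace the kernel map with random features, then run ordinary linear PCA — can be sketched in a few lines of Python. Everything below (function names, the RBF-style feature map, parameter defaults) is illustrative, not the paper's actual code:

```python
import numpy as np

def random_fourier_features(X, n_features=200, gamma=1.0, seed=0):
    """Map X to a randomized feature space approximating an RBF kernel
    (Rahimi-Recht-style random Fourier features)."""
    rng = np.random.default_rng(seed)
    d = X.shape[1]
    W = rng.normal(scale=np.sqrt(2 * gamma), size=(d, n_features))
    b = rng.uniform(0, 2 * np.pi, size=n_features)
    return np.sqrt(2.0 / n_features) * np.cos(X @ W + b)

def randomized_nonlinear_pca(X, n_components=2, n_features=200):
    """Linear PCA applied to random features: a scalable stand-in
    for nonlinear (kernel) PCA."""
    Z = random_fourier_features(X, n_features)
    Z -= Z.mean(axis=0)                     # center the features
    _, _, Vt = np.linalg.svd(Z, full_matrices=False)
    return Z @ Vt[:n_components].T          # projected scores
```

Because the feature map has a fixed width `n_features`, the SVD costs O(n·D²) rather than the O(n³) of exact kernel PCA, which is the source of the claimed scalability.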


Better integration between design and analysis tools is required, but is difficult to achieve because of their different objectives, separate data representations, and workflows. Currently, substantial effort is required to produce a suitable analysis model from design geometry. Robust links are needed between these different representations so that analysis attributes can be transferred between design and analysis packages for models at various levels of fidelity.

This paper describes a novel approach for integrating design and analysis models by identifying and managing the relationships between the different representations. Three key technologies, Cellular Modeling, Virtual Topology and Equivalencing, have been employed to achieve effective simulation model management. These technologies and their implementation are discussed in detail. Prototype automated tools are introduced demonstrating how multiple simulation models can be linked and maintained to facilitate seamless integration throughout the design cycle.


The Groove Gap Waveguide (GGW) behaves similarly to the classical rectangular waveguide (RWG), but it is formed by two pieces that do not require metal contact. This feature makes the GGW a suitable alternative to the RWG at mm-wave frequencies, where ensuring proper metal contact relative to the wavelength is challenging. Nevertheless, effective analysis tools for the complex GGW topology are lacking, and assuming a direct equivalence between the RWG and the GGW is too rough an approximation, so that time-consuming full-wave simulations are required. This work presents a fast analysis method based on transmission line theory, which establishes the proper correspondence between the GGW and the RWG. In addition, the below-cutoff behavior of the GGW is studied for the first time. Several numerical tests and two manufactured prototypes validate the proposed method, which appears well suited to the optimization of future GGW structures.
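The correspondence itself is the paper's contribution and is not reproduced here; what it maps onto are the textbook RWG relations. A minimal sketch of those relations (the TE10 cutoff and guided wavelength of the equivalent rectangular waveguide; dimensions and names are illustrative):

```python
import math

C0 = 299_792_458.0  # speed of light in vacuum (m/s)

def te10_cutoff_hz(a_m):
    """TE10 cutoff frequency of a rectangular waveguide of width a (m)."""
    return C0 / (2.0 * a_m)

def guide_wavelength_m(f_hz, a_m):
    """Guided wavelength above cutoff; returns None below cutoff,
    where the mode is evanescent."""
    fc = te10_cutoff_hz(a_m)
    if f_hz <= fc:
        return None
    lam0 = C0 / f_hz
    return lam0 / math.sqrt(1.0 - (fc / f_hz) ** 2)
```

For a WR-90 guide (a = 22.86 mm) this gives a cutoff near 6.56 GHz; below cutoff the function returns None, reflecting the evanescent regime whose GGW counterpart the paper studies.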


Introduction: Coordination is a strategy chosen by the central nervous system to control movement and maintain stability during gait. Coordinated multi-joint movements require a complex interaction between nervous outputs, biomechanical constraints, and proprioception. Quantitatively understanding and modeling gait coordination remains a challenge, and surgeons lack a way to model and appreciate the coordination of patients before and after lower limb surgery. Patients alter their gait patterns and kinematic synergies when they walk faster or slower than normal speed in order to maintain stability and minimize the energy cost of locomotion. The goal of this study was to provide a dynamical-system approach for quantitatively describing human gait coordination and to apply it to patients before and after total knee arthroplasty. Methods: A new method for quantitative analysis of interjoint coordination during gait was designed, providing a general model that captures the whole dynamics and shows the kinematic synergies at various walking speeds. The proposed model imposes a relationship among the lower limb joint angles (hips and knees) to parameterize the dynamics of locomotion of each individual. An integration of different analysis tools, namely harmonic analysis, principal component analysis, and an artificial neural network, helped overcome the high dimensionality, temporal dependence, and nonlinear relationships of the gait patterns. Ten participants were studied using an ambulatory gait device (Physilog®). Each participant was asked to perform two 30 m walking trials at three different speeds and to complete an EQ-5D questionnaire, a WOMAC, and the Knee Society Score. Lower limb rotations were measured by four miniature angular rate sensors, one mounted on each shank and thigh.
The outcomes of the eight patients undergoing total knee arthroplasty, recorded pre-operatively and post-operatively at 6 weeks, 3 months, 6 months, and 1 year, were compared to those of two age-matched healthy subjects. Results: The new method provided coordination scores at various walking speeds, ranging from 0 to 10. It determined the overall coordination of the lower limbs as well as the contribution of each joint to the total coordination. The differences between pre-operative and post-operative coordination values correlated with improvements in the subjective outcome scores. Although the study group was small, the results show a new way to objectively quantify the gait coordination of patients undergoing total knee arthroplasty using only portable body-fixed sensors. Conclusion: A new method for objective gait coordination analysis has been developed, with very encouraging results regarding the objective assessment of lower limb surgery outcomes.
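The study's pipeline combines harmonic analysis, PCA, and an artificial neural network; only the PCA step, which extracts kinematic synergies from joint-angle data, is sketched below. This is a simplified stand-in with illustrative names, not the study's actual model:

```python
import numpy as np

def kinematic_synergies(angles, n_synergies=2):
    """Extract kinematic synergies from gait data via PCA.

    angles: array of shape (n_samples, n_joints), e.g. time-normalized
    hip and knee angles over many gait cycles.
    Returns (synergy vectors, activation scores, explained variance ratio).
    """
    X = angles - angles.mean(axis=0)        # center each joint angle
    _, S, Vt = np.linalg.svd(X, full_matrices=False)
    var = S ** 2 / np.sum(S ** 2)           # variance per component
    W = Vt[:n_synergies]                    # synergy vectors (joint weights)
    scores = X @ W.T                        # activation of each synergy
    return W, scores, var[:n_synergies].sum()
```

A high explained-variance ratio for few synergies is the usual signature of coordinated multi-joint movement: most of the joint-angle variation is captured by a low-dimensional subspace.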


The purpose of this study was to examine the disability discourses present in the Ontario elementary school curriculum. The study used a critical social analysis perspective to conduct a textual discourse analysis of the Planning [title of subject] Programs for Students with Special Education Needs (PPSSEN) section of the curriculum. Using Parker's (1992) seven criteria for distinguishing discourses, five main discourses were identified: independent, dependent, legal, scientific, and agency discourses. The second step of this research was to situate and discuss these five discourses in relation to three diverse texts: Paulo Freire's (2008) Pedagogy of the Oppressed; Psychiatry Inside Out: Selected Writings of Franco Basaglia, edited by Scheper-Hughes and Lovell (1987); and Aronowitz and Giroux's (1985) Education Under Siege: The Conservative, Liberal and Radical Debate over Schooling. These perspectives were used as analytical lenses to further examine the dominant disability discourses. The texts provided support in three major areas: dialectics, critical education and structural conditions of power, and the language of traditional roles and responsibilities. The findings and discussions presented in this project contain significant implications for anyone involved with students with disabilities in any education system.


This research on barriers to access for the poor with chronic diseases in India has three objectives: 1) to assess whether the goals, objectives, instruments, and target population, as formulated in India's current national health policies, address the main barriers to access for the poor with chronic diseases; 2) to assess the types of levers and instruments identified by India's national health policies to eliminate these barriers to access; and 3) to assess whether these policies have improved over time with respect to the provision of care for chronic diseases, particularly among the poor. Using Ritchie and Spencer's (1993) Framework Approach, a qualitative content analysis of Indian national health policies was carried out. First, a conceptual framework on barriers to access to care for the poor with chronic diseases in India was created from a review of the scientific literature. The policies were then sampled in India in 2009. A thematic framework and an index were generated to build the analysis tools and code the content. Finally, the analyses were performed using this index, together with charts, maps, a question grid, and case studies. The analysis compared the barriers to access originally identified in the thematic framework with those identified by the content analysis of each policy. This research shows that Indian national health policies tackle a number of barriers to access for the poor, notably by improving health services in the public sector, improving the population's knowledge, and increasing certain chronic disease interventions.
On the other hand, among the barriers identified in the thematic framework, those related to the cost of treating chronic diseases, the unaffordability of primary health care for many individuals, and people's ability to pay received the least attention. Moreover, when the time of formulation of each policy is considered, efforts to increase interventions and the provision of care for physical chronic diseases appear to be more recent. In addition, the poor are not targeted by the actions related to chronic diseases. The risk of marginalizing them further is significant given the economic, demographic, and epidemiological transition currently transforming the country and the demand for health services.


The thesis covers various aspects of the modeling and analysis of finite-mean time series with symmetric stable distributed innovations. Time series analysis based on Box–Jenkins methods is the most popular approach, where the models are linear and the errors are Gaussian. We highlight the limitations of classical time series analysis tools, explore some generalized tools, and organize the approach parallel to the classical setup. In the present thesis we mainly study the estimation and prediction of a signal-plus-noise model, where the signal and noise are assumed to follow models with symmetric stable innovations.

The thesis starts with some motivating examples and application areas of alpha-stable time series models. Classical time series analysis and the corresponding theory based on finite-variance models are extensively discussed in the second chapter, which also surveys the existing theories and methods for infinite-variance models. The third chapter presents a linear filtering method for computing the filter weights assigned to the observations when estimating an unobserved signal in a general noisy environment. Here both the signal and the noise are considered stationary processes with infinite-variance innovations. Semi-infinite, doubly infinite, and asymmetric signal extraction filters are derived based on a minimum dispersion criterion. Finite-length filters based on Kalman–Lévy filters are developed and the pattern of the filter weights is identified. Simulation studies show that the proposed methods are competent in signal extraction for processes with infinite variance.

Parameter estimation of autoregressive signals observed in a symmetric stable noise environment is discussed in the fourth chapter. Higher-order Yule–Walker type estimation using the auto-covariation function is employed, and the methods are illustrated by simulation and by application to sea surface temperature data. The number of Yule–Walker equations is increased and an ordinary least squares estimate of the autoregressive parameters is proposed. The singularity problem of the auto-covariation matrix is addressed, and a modified version of the generalized Yule–Walker method is derived using singular value decomposition.

The fifth chapter introduces the partial covariation function as a tool for stable time series analysis where the covariance or partial covariance is ill defined. Asymptotic results for the partial auto-covariation are studied, and its application to model identification of stable autoregressive models is discussed. The Durbin–Levinson algorithm is generalized to infinite-variance models in terms of the partial auto-covariation function, and a new information criterion for consistent order estimation of stable autoregressive models is introduced.

Chapter six explores the application of these techniques in signal processing. Frequency estimation of a sinusoidal signal observed in a symmetric stable noisy environment is discussed in this context. A parametric spectrum analysis and a frequency estimate using the power transfer function are introduced; the estimate of the power transfer function is obtained using the modified generalized Yule–Walker approach. Another important problem in statistical signal processing is identifying the number of sinusoidal components in an observed signal; a modified version of the proposed information criterion is used for this purpose.
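For readers unfamiliar with the Yule–Walker mechanics being generalized, here is the classical finite-variance version. The thesis replaces the sample autocovariances below with auto-covariations to handle infinite-variance (stable) innovations; that substitution is not shown here:

```python
import numpy as np

def yule_walker_ar(x, order):
    """Estimate AR(p) coefficients by solving the Yule-Walker equations
    built from sample autocovariances (finite-variance stand-in for the
    thesis's auto-covariation-based estimators)."""
    x = np.asarray(x, dtype=float) - np.mean(x)
    n = len(x)
    # sample autocovariances gamma(0), ..., gamma(p)
    gamma = np.array([x[:n - k] @ x[k:] / n for k in range(order + 1)])
    # Toeplitz autocovariance matrix R[i, j] = gamma(|i - j|)
    R = np.array([[gamma[abs(i - j)] for j in range(order)]
                  for i in range(order)])
    return np.linalg.solve(R, gamma[1:])   # AR coefficients
```

Increasing the number of equations beyond `order` and solving by least squares, as the thesis does, trades exactness of the square system for robustness.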


Successful classification, information retrieval, and image analysis tools are intimately related to the quality of the features employed in the process. Pixel intensities, color, texture, and shape are generally the basis from which most features are computed and used in these fields. This paper presents a novel shape-based feature extraction approach in which an image is decomposed into multiple contours that are then characterized by Fourier descriptors. Unlike traditional approaches, we make use of topological knowledge to generate well-defined closed contours, which are efficient signatures for image retrieval. The method has been evaluated in the context of CBIR and image analysis. The results show that the multi-contour decomposition, as opposed to a single-shape representation, introduces a significant improvement in discrimination power. (c) 2008 Elsevier B.V. All rights reserved.
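The paper's contour-extraction step relies on topological knowledge not reproduced here, but once a closed contour is available, Fourier descriptors are standard. A minimal sketch (the normalization choices are illustrative):

```python
import numpy as np

def fourier_descriptors(contour, n_desc=16):
    """Compute translation/rotation/scale-invariant Fourier descriptors
    of a closed contour given as an (N, 2) array of (x, y) points."""
    z = contour[:, 0] + 1j * contour[:, 1]   # complex boundary signal
    F = np.fft.fft(z)
    F[0] = 0                                  # drop DC -> translation invariance
    mags = np.abs(F)                          # magnitudes -> rotation invariance
    mags = mags / mags[1]                     # normalize -> scale invariance
    return mags[1:n_desc + 1]
```

Dropping the DC term removes translation, taking magnitudes removes rotation and the starting point, and dividing by the first harmonic removes scale, so the descriptors of a translated or scaled copy of a contour match the original's.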


When building a cost-effective high-performance parallel processing system, a performance model is a useful tool for exploring the design space and examining various parameters. However, performance analysis in such systems has proven to be a challenging task that requires innovative performance analysis tools and methods to keep up with their rapid evolution and ever-increasing complexity. To this end, we propose an analytical model for heterogeneous multi-cluster systems. The model takes into account stochastic quantities as well as network heterogeneity in bandwidth and latency in each cluster. Models for both blocking and non-blocking network architectures are also proposed and used in the performance analysis of the system. Message latency is used as the primary performance metric. The model is validated by constructing a set of simulators for different types of clusters and by comparing the modeled results with the simulated ones.
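The paper's analytical model incorporates stochastic quantities and per-cluster heterogeneity; the deterministic backbone that such models refine is simply per-hop startup cost plus serialization time. A sketch (parameter names are illustrative):

```python
def message_latency(size_bytes, startup_s, bandwidth_bps, hops=1):
    """First-order message latency model: for each hop, a fixed startup
    cost plus the time to serialize the message onto the link
    (size / bandwidth). Real multi-cluster models add stochastic
    queueing and contention terms on top of this."""
    return hops * (startup_s + 8 * size_bytes / bandwidth_bps)
```

Heterogeneity enters by giving each cluster (and the inter-cluster network) its own `startup_s` and `bandwidth_bps`, then summing the per-segment latencies along a message's route.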


Recent advances in CMOS technology have allowed the fabrication of transistors with submicron dimensions, making possible the integration of tens of millions of devices in a single chip that can be used to build very complex electronic systems. This increase in design complexity has created a need for more efficient verification tools that incorporate more appropriate physical and computational models. Timing verification aims to determine whether the timing constraints imposed on the design can be satisfied. It can be performed by circuit simulation or by timing analysis. Although simulation tends to furnish the most accurate estimates, it has the drawback of being stimuli-dependent: to ensure that the critical situation is taken into account, one would have to exercise all possible input patterns, which is infeasible given the complexity of current designs. To circumvent this problem, designers rely on timing analysis, an input-independent verification approach that models each combinational block of a circuit as a directed acyclic graph, which is used to estimate the critical delay. The first timing analysis tools used only circuit topology information to estimate circuit delay, and are thus referred to as topological timing analyzers. However, this method may produce overly pessimistic delay estimates, since the longest paths in the graph may not be able to propagate a transition, that is, they may be false. Functional timing analysis, in turn, considers not only circuit topology but also the temporal and functional relations between circuit elements.
Functional timing analysis tools may differ in three aspects: the set of sensitization conditions necessary to declare a path sensitizable (the so-called path sensitization criterion), the number of paths handled simultaneously, and the method used to determine whether the sensitization conditions are satisfiable. Currently, the two most efficient approaches test the sensitizability of entire sets of paths at a time: one is based on automatic test pattern generation (ATPG) techniques, and the other translates the timing analysis problem into a satisfiability (SAT) problem. Although timing analysis has been exhaustively studied over the last fifteen years, some specific topics have not yet received the required attention. One such topic, and the basic concern of this thesis, is the applicability of functional timing analysis to circuits containing complex gates. In addition, as a necessary step to set the scenario, a detailed and systematic study of functional timing analysis is also presented.
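The topological analysis described above — which functional timing analysis refines by discarding false paths — amounts to a longest-path computation on the circuit DAG. A minimal sketch (gate and delay names are illustrative):

```python
from collections import defaultdict

def critical_delay(gate_delay, edges):
    """Topological timing analysis: longest-path delay through a
    combinational circuit modeled as a directed acyclic graph.
    gate_delay: {gate: delay}; edges: (driver, load) pairs.
    This is the purely topological estimate, which may be pessimistic
    because it ignores false (unsensitizable) paths."""
    preds, succs = defaultdict(list), defaultdict(list)
    indeg = {g: 0 for g in gate_delay}
    for u, v in edges:
        preds[v].append(u)
        succs[u].append(v)
        indeg[v] += 1
    ready = [g for g in gate_delay if indeg[g] == 0]
    arrival = {}
    while ready:                              # Kahn topological order
        g = ready.pop()
        # output arrival = gate delay + latest input arrival
        arrival[g] = gate_delay[g] + max(
            (arrival[p] for p in preds[g]), default=0.0)
        for s in succs[g]:
            indeg[s] -= 1
            if indeg[s] == 0:
                ready.append(s)
    return max(arrival.values())
```

For example, `critical_delay({'a': 1.0, 'b': 2.0, 'c': 3.0}, [('a', 'c'), ('b', 'c')])` reports 5.0: the b→c path dominates, whether or not it can actually propagate a transition.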


Static analysis tools report software defects that may or may not be detected by other verification methods. Two challenges complicating the adoption of these tools are spurious false positive warnings and legitimate warnings that are not acted on. This paper reports automated support to help address these challenges using logistic regression models that predict the foregoing types of warnings from signals in the warnings and implicated code. Because examining many potential signaling factors in large software development settings can be expensive, we use a screening methodology to quickly discard factors with low predictive power and cost-effectively build predictive models. Our empirical evaluation indicates that these models can achieve high accuracy in predicting accurate and actionable static analysis warnings, and suggests that the models are competitive with alternative models built without screening.
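The paper's screening methodology and factor set are not reproduced here; the general shape of the approach — cheaply discard weak factors, then fit a logistic model on the survivors — might look like the following (the scoring rule, names, and hyperparameters are all illustrative):

```python
import numpy as np

def screen_factors(X, y, keep=5):
    """Screening step: rank candidate factors by the absolute correlation
    of each column of X with the binary label y, and keep the strongest.
    (A cheap discard-weak-factors filter, not the paper's methodology.)"""
    Xc = X - X.mean(axis=0)
    yc = y - y.mean()
    denom = np.linalg.norm(Xc, axis=0) * np.linalg.norm(yc) + 1e-12
    score = np.abs(Xc.T @ yc) / denom
    return np.argsort(score)[::-1][:keep]

def fit_logistic(X, y, lr=0.1, steps=2000):
    """Plain gradient-descent logistic regression on the screened factors."""
    Xb = np.hstack([np.ones((len(X), 1)), X])   # intercept column
    w = np.zeros(Xb.shape[1])
    for _ in range(steps):
        p = 1.0 / (1.0 + np.exp(-Xb @ w))       # predicted probabilities
        w -= lr * Xb.T @ (p - y) / len(y)       # average gradient step
    return w
```

Screening keeps the expensive model-fitting step focused on the few factors with real predictive power, which is the cost argument the paper makes for large development settings.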


Background: Although the molecular pathogenesis of pituitary adenomas has been assessed by several different techniques, it remains partially unclear. Ribosomal proteins (RPs) have recently been related to human tumorigenesis, but they have not yet been evaluated in pituitary tumorigenesis. Objective: The aim of this study was to introduce serial analysis of gene expression (SAGE), a high-throughput method, into pituitary research in order to compare differential gene expression. Methods: Two SAGE cDNA libraries were constructed, one using a pool of mRNA obtained from five GH-secreting pituitary tumors and another from three normal pituitaries. Genes differentially expressed between the libraries were further validated by real-time PCR in 22 GH-secreting pituitary tumors and in 15 normal pituitaries. Results: Computer-generated genomic analysis tools identified 13 722 and 14 993 exclusive genes in the normal and adenoma libraries, respectively. The libraries shared 6497 genes, of which 2188 were underexpressed and 4309 overexpressed in the tumor library. In the adenoma library, 33 genes encoding RPs were underexpressed; among these, RPSA, RPS3, RPS14, and RPS29 were validated by real-time PCR. Conclusion: We report the first SAGE libraries from normal pituitary tissue and a GH-secreting pituitary tumor, which provide a quantitative assessment of the cellular transcriptome. We also validated some downregulated genes encoding RPs. Altogether, the present data suggest that the underexpression of the studied RP genes may collaborate, directly or indirectly, with other genes to modify cell cycle arrest, DNA repair, and apoptosis, creating an environment that might play a role in tumorigenesis and introducing new perspectives for further studies on the molecular genesis of somatotrophinomas.
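SAGE expression comparisons are normally backed by statistical tests on tag counts; purely to illustrate how exclusive, shared, under- and overexpressed gene sets of the kind quoted above are derived from two libraries, here is a fold-change-only sketch (the threshold and names are illustrative):

```python
def compare_libraries(normal, tumor, fold=2.0):
    """Classify genes by comparing expression counts in two SAGE
    libraries. normal/tumor: {gene: tag count}. Returns exclusive,
    underexpressed, and overexpressed gene sets (tumor vs. normal)."""
    shared = normal.keys() & tumor.keys()
    return {
        "normal_only": set(normal) - set(tumor),
        "tumor_only": set(tumor) - set(normal),
        "under": {g for g in shared if tumor[g] * fold <= normal[g]},
        "over": {g for g in shared if normal[g] * fold <= tumor[g]},
    }
```

Real SAGE pipelines normalize counts by library size and attach a significance test before calling a gene differentially expressed; this sketch omits both.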


The presented study analyses rural landscape changes, focusing on the understanding of the driving forces acting on the rural built environment by means of a statistical spatial model implemented through GIS techniques. It is well known that the study of landscape changes is essential for conscious decision making in land planning. A bibliographic review reveals a general lack of studies dealing with the modeling of the rural built environment, so a theoretical modelling approach for this purpose is needed. Advances in technology and modernity in building construction and agriculture have gradually changed the rural built environment. In addition, urbanization has determined the construction of new volumes beside abandoned or derelict rural buildings. Consequently, two main transformation dynamics affecting the rural built environment can be observed: the conversion of rural buildings and the increase in building numbers. The specific aim of the study is to propose a methodology for developing a spatial model that allows the identification of the driving forces acting on building allocation; indeed, one of the most concerning dynamics today is the irrational expansion of building sprawl across the landscape. The proposed methodology comprises conceptual steps covering the different aspects of developing a spatial model: the selection of a response variable that best describes the phenomenon under study, the identification of possible driving forces, the sampling methodology for data collection, the most suitable algorithm in relation to the statistical theory and methods used, and the calibration and evaluation of the model.

A different combination of factors in various parts of the territory generates conditions that are more or less favourable for building allocation, and the existence of buildings represents the evidence of such an optimum; conversely, the absence of buildings expresses a combination of agents unsuitable for building allocation. The presence or absence of buildings can therefore be adopted as an indicator of these driving conditions, since it expresses the action of the driving forces in the land suitability sorting process. The existence of a correlation between site selection and hypothetical driving forces, evaluated by means of modeling techniques, provides evidence of which driving forces are involved in the allocation dynamic and insight into their level of influence on the process. GIS spatial analysis tools allow the concepts of presence and absence to be associated with point features, generating a point process. Presences are the locations of real existing buildings; absences are locations where buildings do not exist, generated by a stochastic mechanism. Possible driving forces are selected, and the existence of a causal relationship with building allocation is assessed through a spatial model. The adoption of empirical statistical models provides a mechanism for explanatory variable analysis and for the identification of the key driving variables behind the site selection process for new building allocation. The model developed by following this methodology is applied to a case study to test the validity of the methodology: the New District of Imola, characterized by a prevailing agricultural vocation and where transformation dynamics have occurred intensively.

The development of the model involved the identification of predictive variables (related to the geomorphological, socio-economic, structural, and infrastructural systems of the landscape) capable of representing the driving forces responsible for landscape changes. The model is calibrated on spatial data for the periurban and rural areas of the study area over the 1975-2005 period by means of a generalized linear model. The resulting output is a continuous grid surface whose cells assume probability values, ranging from 0 to 1, of building occurrence across the rural and periurban areas. The response variable thus assesses the changes in the rural built environment that occurred in this time interval and is correlated with the selected explanatory variables through a generalized linear model using logistic regression. Comparing the probability map obtained from the model with the actual rural building distribution in 2005, the interpretive capability of the model can be evaluated. The proposed model can also be applied to interpret trends in other study areas and over different time intervals, depending on data availability. The use of suitable data in terms of time, information, and spatial resolution, together with the costs of data acquisition, pre-processing, and survey, are among the most critical aspects of model implementation. Future in-depth studies could use the proposed model to predict short- to medium-range future scenarios for the distribution of the rural built environment in the study area. Predicting future scenarios requires assuming that the driving forces do not change and that their levels of influence within the model remain close to those assessed for the calibration time interval.


The present article describes and analyses youth criminality in the city of Rosario, Argentina, between 2003 and 2006. Key actors' understandings of and responses to the conflict were investigated by means of semi-structured interviews, observations, discourse analysis of policy documents, and analysis of secondary data, drawing heavily on the experience of the author, a citizen and youth worker of Rosario. The actors examined were the police, the local government, young delinquents, and youth organisations. Youth criminality is analysed from a conflict transformation approach using conflict analysis tools. Whereas the provincial police understand the issue as a delinquency problem, other actors perceive it as an expression of a wider urban social conflict between those who are "included" and those who are "excluded", and as one of the negative effects of globalisation processes. The results suggest that police responses addressing only direct violence are ineffective, even contributing to increased tension and polarisation, whereas strategies addressing cultural and structural violence are better suited to this type of urban social conflict. Finally, recommendations for local youth policy are proposed to facilitate the participation and inclusion of youth and to serve as a tool for peaceful conflict transformation.


This paper discusses some issues that arise in the dataflow analysis of constraint logic programming (CLP) languages. The basic technique applied is abstract interpretation. First, some types of optimizations possible in a number of CLP systems (including efficient parallelization) are presented, and the information that has to be obtained at compile time in order to implement such optimizations is considered. Two approaches for obtaining this information for a CLP program are then proposed and discussed: one based on the analysis of a CLP metainterpreter using standard Prolog analysis tools, and a second based on direct analysis of the CLP program. For the second approach, an abstract domain which approximates groundness (also referred to as "definiteness") information (i.e., the constraint of a variable to a single value) and the related abstraction functions are presented.
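The groundness domain mentioned above can be illustrated with a drastically simplified two-point version (ground vs. unknown) and a fixpoint propagation over constraints; the paper's actual domain and abstraction functions are richer than this sketch, and all names here are illustrative:

```python
# Two-point groundness domain: G (definitely bound to a single value)
# and ANY (possibly free).
G, ANY = "ground", "any"

def propagate(constraints, env):
    """Fixpoint propagation of groundness information.

    constraints: list of (lhs, rhs_vars) pairs meaning "lhs becomes
    ground once every variable in rhs_vars is ground" (e.g. the
    arithmetic constraint X = Y + Z grounds X when Y and Z are ground).
    env: {var: G or ANY}; missing variables default to ANY.
    """
    changed = True
    while changed:
        changed = False
        for lhs, rhs in constraints:
            if env.get(lhs, ANY) != G and all(env.get(v, ANY) == G for v in rhs):
                env[lhs] = G          # monotone step up the domain
                changed = True
    return env
```

Because each step only moves a variable up the finite domain, the iteration is monotone and terminates; this is the basic mechanism an abstract interpreter runs over the (abstracted) constraint store.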