843 results for Feature grouping
Abstract:
This paper presents a new, dynamic feature representation method for high-value parts consisting of complex and intersecting features. The method first extracts features from the CAD model of a complex part. The dynamic status of each feature is then established across the various operations to be carried out during the whole manufacturing process. Each manufacturing and verification operation can be planned and optimized using the real conditions of a feature, thus enhancing accuracy, traceability and process control. The dynamic feature representation is complementary to the design models used as the underlying basis in current CAD/CAM and decision support systems. © 2012 CIRP.
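The abstract gives no implementation, but the core idea of a feature whose status evolves between operations can be illustrated with a small data structure. The sketch below is a minimal illustration; the class and field names (DynamicFeature, FeatureState, the milling operations) are our own assumptions, not the paper's.

```python
from dataclasses import dataclass, field

@dataclass
class FeatureState:
    """State of one feature as left by a given operation (names are ours)."""
    operation: str            # e.g. "rough_milling", "inspection"
    depth_mm: float           # measured or nominal depth at this stage
    tolerance_mm: float       # tolerance band applicable at this stage
    verified: bool = False    # True once a verification operation confirms it

@dataclass
class DynamicFeature:
    """A CAD feature plus its per-operation status history."""
    name: str
    history: list = field(default_factory=list)

    def record(self, state: FeatureState) -> None:
        self.history.append(state)

    def current(self) -> FeatureState:
        return self.history[-1]

# Example: a pocket tracked from roughing to finishing, so the finishing
# and inspection operations can be planned from its real current state.
pocket = DynamicFeature("pocket_17")
pocket.record(FeatureState("rough_milling", depth_mm=11.8, tolerance_mm=0.5))
pocket.record(FeatureState("finish_milling", depth_mm=12.0, tolerance_mm=0.05))
print(pocket.current().operation)   # -> finish_milling
```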
Abstract:
Most machine-learning algorithms are designed for datasets with features of a single type, whereas very little attention has been given to datasets with mixed-type features. We recently proposed a model to handle mixed types with a probabilistic latent variable formalism. The proposed model, called the generalised generative topographic mapping (GGTM), describes the data by type-specific distributions that are conditionally independent given the latent space. It has often been observed that visualisations of high-dimensional datasets can be poor in the presence of noisy features. In this paper we therefore propose to extend the GGTM to estimate feature saliency values (GGTMFS) as an integrated part of the parameter learning process with an expectation-maximisation (EM) algorithm. The efficacy of the proposed GGTMFS model is demonstrated on both synthetic and real datasets.
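GGTMFS itself is not specified in the abstract. As a loose illustration of estimating feature saliencies with EM, the sketch below implements the simpler Gaussian-mixture feature-saliency model in the spirit of Law et al. (2004): each feature j carries a saliency rho[j] that mixes a cluster-specific Gaussian with a shared background Gaussian. The update formulas as simplified here, and the toy data, are our own.

```python
import numpy as np

def feature_saliency_em(X, K=2, iters=100, seed=0, eps=1e-9):
    """Simplified EM for per-feature saliency in a diagonal Gaussian mixture.

    Feature j is 'relevant' with probability rho[j] (cluster-specific
    Gaussian) and otherwise follows one shared background Gaussian.
    """
    rng = np.random.default_rng(seed)
    N, D = X.shape
    pi = np.full(K, 1.0 / K)                          # mixing weights
    mu = X[rng.choice(N, K, replace=False)]           # (K, D) cluster means
    var = np.tile(X.var(axis=0) + eps, (K, 1))        # (K, D) cluster variances
    lam, tau = X.mean(axis=0), X.var(axis=0) + eps    # background mean/variance
    rho = np.full(D, 0.5)                             # initial saliencies

    def normpdf(x, m, v):
        return np.exp(-0.5 * (x - m) ** 2 / v) / np.sqrt(2 * np.pi * v)

    for _ in range(iters):
        # E-step: relevant vs. irrelevant evidence per point/cluster/feature.
        a = rho * normpdf(X[:, None, :], mu, var)         # (N, K, D)
        b = (1 - rho) * normpdf(X, lam, tau)[:, None, :]  # (N, 1, D)
        c = a + b + eps
        w = pi * np.prod(c, axis=2)                       # (N, K)
        w /= w.sum(axis=1, keepdims=True) + eps           # responsibilities
        u = w[:, :, None] * a / c                         # 'relevant' weight
        v = w[:, :, None] - u                             # 'irrelevant' weight

        # M-step: closed-form updates of all parameters.
        pi = w.mean(axis=0)
        su = u.sum(axis=0) + eps
        mu = (u * X[:, None, :]).sum(axis=0) / su
        var = (u * (X[:, None, :] - mu) ** 2).sum(axis=0) / su + eps
        sv = v.sum(axis=(0, 1)) + eps
        lam = (v * X[:, None, :]).sum(axis=(0, 1)) / sv
        tau = (v * (X[:, None, :] - lam) ** 2).sum(axis=(0, 1)) / sv + eps
        rho = u.sum(axis=(0, 1)) / N                      # saliency update
    return rho

# Two well-separated clusters in features 0 and 1; feature 2 is pure noise
# and should receive a visibly lower saliency.
rng = np.random.default_rng(1)
X = np.vstack([rng.normal(0, 1, (200, 3)), rng.normal(4, 1, (200, 3))])
X[:, 2] = rng.normal(0, 1, 400)
print(feature_saliency_em(X).round(2))
```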
Abstract:
Principal component analysis (PCA) is well recognized for dimensionality reduction, and kernel PCA (KPCA) has also been proposed for statistical data analysis. However, KPCA fails to detect the nonlinear structure of data well when outliers exist. To alleviate this problem, this paper presents a novel algorithm, named iterative robust KPCA (IRKPCA). IRKPCA deals well with outliers and can be carried out in an iterative manner, which makes it suitable for processing incremental input data. As in traditional robust PCA (RPCA), a binary field is employed to characterize the outlier process, and the optimization problem is formulated as maximizing the marginal distribution of a Gibbs distribution. In this paper, this optimization problem is solved by stochastic gradient descent techniques. In IRKPCA, the outlier process lives in a high-dimensional feature space, and therefore the kernel trick is used. IRKPCA can be regarded as a kernelized version of RPCA and a robust form of the kernel Hebbian algorithm. Experimental results on synthetic data demonstrate the effectiveness of IRKPCA. © 2010 Taylor & Francis.
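The paper's exact formulation (a binary outlier field inferred by maximizing a Gibbs marginal with stochastic gradients) is not reproduced here. The sketch below conveys only the general flavour of robustifying KPCA iteratively, by re-fitting after discarding the worst-reconstructed points; the trimming rule and all parameter values are our own simplifications.

```python
import numpy as np

def rbf_kernel(X, Y, gamma=0.5):
    d2 = ((X[:, None, :] - Y[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

def kpca(K, q):
    """Top-q eigenpairs of the double-centred Gram matrix."""
    n = K.shape[0]
    J = np.eye(n) - np.ones((n, n)) / n
    Kc = J @ K @ J
    lam, A = np.linalg.eigh(Kc)
    return Kc, lam[::-1][:q], A[:, ::-1][:, :q]

def robust_kpca(X, q=2, gamma=0.5, iters=3, trim=0.05):
    """Iterative outlier trimming around KPCA (a simplification of IRKPCA:
    the paper instead infers a binary outlier field by maximising a Gibbs
    marginal with stochastic gradient techniques)."""
    keep = np.arange(len(X))
    for _ in range(iters):
        K = rbf_kernel(X[keep], X[keep], gamma)
        Kc, lam, A = kpca(K, q)
        # Feature-space reconstruction error: ||phi_c(x_i)||^2 minus the
        # energy captured by the top-q kernel principal components.
        err = np.diag(Kc) - (lam * A ** 2).sum(axis=1)
        cut = np.quantile(err, 1 - trim)       # flag the worst `trim` share
        keep = keep[err <= cut]
    return keep, lam, A

# Toy data: a noisy ring plus a few gross outliers.
rng = np.random.default_rng(0)
t = rng.uniform(0, 2 * np.pi, 150)
X = np.c_[np.cos(t), np.sin(t)] + rng.normal(0, 0.05, (150, 2))
X = np.vstack([X, rng.uniform(-4, 4, (8, 2))])
inliers, lam, A = robust_kpca(X)
print(len(X) - len(inliers), "points flagged as outliers")
```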
Abstract:
Recent advances in airborne Light Detection and Ranging (LIDAR) technology allow rapid and inexpensive measurements of topography over large areas. Airborne LIDAR systems usually return a 3-dimensional cloud of point measurements from reflective objects scanned by the laser beneath the flight path. This technology is becoming a primary method for extracting information about different kinds of geometric objects, such as high-resolution digital terrain models (DTMs), buildings and trees. In the past decade, LIDAR has attracted more and more interest from researchers in the fields of remote sensing and GIS. Compared to traditional data sources, such as aerial photography and satellite images, LIDAR measurements are not influenced by sun shadow and relief displacement. However, the voluminous data pose a new challenge for automated extraction of geometric information from LIDAR measurements, because many raster image processing techniques cannot be directly applied to irregularly spaced LIDAR points.

In this dissertation, a framework is proposed to automatically extract information about different kinds of geometric objects, such as terrain and buildings, from LIDAR data. These objects are essential to numerous applications such as flood modeling, landslide prediction and hurricane animation. The framework consists of several intuitive algorithms. First, a progressive morphological filter was developed to detect non-ground LIDAR measurements. By gradually increasing the window size and elevation difference threshold of the filter, the measurements of vehicles, vegetation, and buildings are removed, while ground data are preserved. Then, building measurements are identified from the non-ground measurements using a region-growing algorithm based on a plane-fitting technique. Raw footprints for segmented building measurements are derived by connecting boundary points and are further simplified and adjusted by several proposed operations to remove noise caused by irregularly spaced LIDAR measurements. To reconstruct 3D building models, the raw 2D topology of each building is first extracted and then further adjusted. Since the adjusting operations designed for simple building models do not work well on 2D topology, a 2D snake algorithm is proposed to adjust it. The 2D snake algorithm consists of newly defined energy functions for topology adjustment and a linear algorithm for finding the minimal energy value of 2D snake problems. Datasets from urbanized areas including large institutional, commercial, and small residential buildings were employed to test the proposed framework. The results demonstrate that the proposed framework achieves very good performance.
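The progressive morphological filter is described concretely enough to sketch in gridded form. Below is a minimal, simplified version assuming the point cloud has already been rasterized to a minimum-elevation grid; the threshold schedule (dh0 + slope * window) is a common simplification of the published rule, and all parameter values are illustrative only.

```python
import numpy as np
from scipy import ndimage

def progressive_morphological_filter(z, cell=1.0, slope=0.3,
                                     dh0=0.2, dh_max=2.5, max_win=16):
    """Classify a min-elevation grid into ground / non-ground cells.

    Simplified gridded progressive morphological filter: the window doubles
    each pass and the elevation-difference threshold grows with it, so small
    objects (cars, trees) are removed first and larger buildings later,
    while sloping terrain is preserved.
    """
    nonground = np.zeros_like(z, dtype=bool)
    surface = z.copy()
    win = 1
    while win <= max_win:
        size = 2 * win + 1
        opened = ndimage.grey_opening(surface, size=(size, size))
        # Threshold scales with window size and an assumed terrain slope.
        dh = min(dh0 + slope * win * cell, dh_max)
        nonground |= (surface - opened) > dh
        surface = opened
        win *= 2
    return nonground

# Toy terrain: a gentle slope with a 3 m "building" block on top.
x, y = np.meshgrid(np.arange(64), np.arange(64))
z = 0.02 * x + 0.01 * y
z[20:30, 20:35] += 3.0
mask = progressive_morphological_filter(z)
print(mask[22:28, 22:33].all(), mask[:10, :10].any())  # -> True False
```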
Abstract:
The relationship between noun incorporation (NI) and the agreement alternations that occur in such contexts (NI Transitivity Alternations) remains inadequately understood. Three interpretations of these alternations (Baker, Aranovich & Golluscio 2005; Mithun 1984; Rosen 1989) are shown to be undermined by foundational or mechanical issues. I propose a syntactic model, adopting Branigan's (2011) interpretation of NI as the result of “provocative” feature valuation, which triggers generation of a copy of the object that subsequently merges inside the verb. Provocation triggers a reflexive Refine operation that deletes duplicate features from chains, making them interpretable for Transfer. NI Transitivity Alternations result from variant deletion preferences exhibited during Refine. I argue that the NI contexts discussed (Generic NI, Partial NI and Double Object NI) result from different restrictions on phonetic and semantic identity in chain formation. This provides us with a consistent definition of NI Transitivity Alternations across contexts, as well as a new typology that distinguishes NI contexts, rather than incorporating languages.
Abstract:
This thesis work concerns the design of an aseptic container for liquids. Specifically, it involves the creation of transparent areas/windows on the surface of the container that serve as an indicator of the liquid level. The steps that shaped the work comprise a patent analysis, a study of production materials, technical and structural verification, graphic design, and validation testing of the concept.
Abstract:
Aircraft manufacturing industries are looking for solutions to increase their productivity. One such solution is to apply metrology systems during the production and assembly processes. The Metrology Process Model (MPM) (Maropoulos et al., 2007) has been introduced, which links metrology applications with assembly planning, manufacturing processes and product design. Measurability analysis is part of the MPM, and its aim is to check the feasibility of measuring the designed large-scale components. Measurability analysis has been integrated in order to provide an efficient matching system. The metrology database is structured by developing the Metrology Classification Model. Furthermore, the feature-based selection model is also explained. By combining the two classification models, a novel approach and selection processes for an integrated measurability analysis system (MAS) are introduced; such an integrated MAS can provide much more meaningful matching results for the operators. © Springer-Verlag Berlin Heidelberg 2010.
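The matching logic of the MAS is not detailed in the abstract. As a purely hypothetical illustration of feature-to-instrument matching, the toy sketch below applies the common 10:1 test-uncertainty-ratio rule of thumb; the instrument names, uncertainties, and ranges are invented and not from the paper.

```python
# Hypothetical feature/instrument matching on a 10:1 uncertainty-ratio rule.
# Instrument data below are invented for illustration only.
INSTRUMENTS = [
    {"name": "laser_tracker",  "uncertainty_mm": 0.015, "range_m": 40.0},
    {"name": "photogrammetry", "uncertainty_mm": 0.050, "range_m": 10.0},
    {"name": "laser_radar",    "uncertainty_mm": 0.010, "range_m": 30.0},
]

def select_instruments(feature_tolerance_mm, feature_size_m, ratio=10.0):
    """Return instruments whose uncertainty fits tolerance/ratio and whose
    working range covers the feature size."""
    budget = feature_tolerance_mm / ratio
    return [i["name"] for i in INSTRUMENTS
            if i["uncertainty_mm"] <= budget and i["range_m"] >= feature_size_m]

# A 0.2 mm profile tolerance on a 12 m wing component:
print(select_instruments(0.2, 12.0))  # -> ['laser_tracker', 'laser_radar']
```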
Abstract:
The nanometer-range structures produced by thin films of diblock copolymers make them of great interest as templates for the microelectronics industry. We investigated the effect of annealing solvents and/or solvent mixtures on a symmetric poly(styrene-block-4-vinylpyridine) (PS-b-P4VP) diblock copolymer to obtain the desired line patterns. In this paper, we used PS-b-P4VP of different molecular weights to demonstrate the scalability of such a high-χ BCP system, which requires precise fine-tuning of interfacial energies, achieved by a surface treatment that improves the wetting property and ordering and minimizes defect densities. Bare silicon substrates were also modified with a polystyrene brush and an ethylene glycol self-assembled monolayer in a simple, quick and reproducible way. In addition, a novel and simple in situ hard mask technique was used to generate sub-7 nm iron oxide nanowires with a high aspect ratio on a silicon substrate, which can be used to develop silicon nanowires after pattern transfer.
Abstract:
Fitting statistical models is computationally challenging when the sample size or the dimension of the dataset is huge. An attractive approach for down-scaling the problem size is to first partition the dataset into subsets and then fit using distributed algorithms. The dataset can be partitioned either horizontally (in the sample space) or vertically (in the feature space), and the challenge arises in defining an algorithm with low communication, theoretical guarantees and excellent practical performance in general settings. For sample space partitioning, I propose a MEdian Selection Subset AGgregation Estimator (message) algorithm to address these issues. The algorithm applies feature selection in parallel for each subset using a regularized regression or Bayesian variable selection method, calculates the 'median' feature inclusion index, estimates coefficients for the selected features in parallel for each subset, and then averages these estimates. The algorithm is simple, involves very minimal communication, scales efficiently in sample size, and has theoretical guarantees. I provide extensive experiments showing excellent performance in feature selection, estimation, prediction, and computation time relative to the usual competitors.
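The abstract lists the steps of message explicitly, which makes a compact sketch possible. The version below uses a lasso for the per-subset selection stage and treats a median inclusion frequency of at least 0.5 as selected; both choices, and all parameter values, are our own stand-ins for the paper's regularized regression or Bayesian selectors.

```python
import numpy as np
from sklearn.linear_model import Lasso

def message(X, y, m=4, alpha=0.1):
    """Sketch of the message estimator: median selection + subset aggregation.

    Following the abstract: split rows into m subsets, run a sparse selector
    on each, take the median of the feature-inclusion indicators, then refit
    OLS on the selected features per subset and average the estimates.
    """
    n, p = X.shape
    idx = np.array_split(np.random.default_rng(0).permutation(n), m)
    include = np.zeros((m, p))
    for k, rows in enumerate(idx):
        coef = Lasso(alpha=alpha).fit(X[rows], y[rows]).coef_
        include[k] = coef != 0
    # 'Median' feature inclusion index (>= 0.5 is our tie-handling choice).
    selected = np.where(np.median(include, axis=0) >= 0.5)[0]
    betas = []
    for rows in idx:
        b, *_ = np.linalg.lstsq(X[rows][:, selected], y[rows], rcond=None)
        betas.append(b)
    return selected, np.mean(betas, axis=0)   # aggregate by averaging

# Toy problem: 3 true signals among 20 features.
rng = np.random.default_rng(2)
X = rng.normal(size=(800, 20))
y = X[:, [0, 3, 7]] @ np.array([2.0, -1.5, 1.0]) + rng.normal(0, 0.5, 800)
sel, beta = message(X, y)
print(sel, beta.round(2))
```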
While sample space partitioning is useful in handling datasets with large sample sizes, feature space partitioning is more effective when the data dimension is high. Existing methods for partitioning features, however, are either vulnerable to high correlations or inefficient in reducing the model dimension. In this thesis, I propose a new embarrassingly parallel framework named DECO for distributed variable selection and parameter estimation. In DECO, variables are first partitioned and allocated to m distributed workers. The decorrelated subset data within each worker are then fitted via any algorithm designed for high-dimensional problems. We show that by incorporating the decorrelation step, DECO can achieve consistent variable selection and parameter estimation on each subset with (almost) no assumptions. In addition, the convergence rate is nearly minimax optimal for both sparse and weakly sparse models and does not depend on the partition number m. Extensive numerical experiments are provided to illustrate the performance of the new framework.
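As with message, the decorrelation idea can be sketched compactly. The version below whitens the row space with F = sqrt(p) * (X X^T / p + r I)^{-1/2} before splitting columns across workers; this formula is our reading of the approach rather than a verified reproduction, and refinement stages of the full DECO algorithm are omitted.

```python
import numpy as np
from sklearn.linear_model import Lasso

def deco(X, y, m=2, alpha=0.05, ridge=1.0):
    """Sketch of DECO-style decorrelation + parallel variable selection."""
    n, p = X.shape
    G = X @ X.T / p + ridge * np.eye(n)
    w, U = np.linalg.eigh(G)                  # G^{-1/2} via eigendecomposition
    F = np.sqrt(p) * (U / np.sqrt(w)) @ U.T
    Xt, yt = F @ X, F @ y                     # decorrelated data
    blocks = np.array_split(np.arange(p), m)  # vertical (feature) partition
    coef = np.zeros(p)
    for cols in blocks:                       # embarrassingly parallel in m
        coef[cols] = Lasso(alpha=alpha).fit(Xt[:, cols], yt).coef_
    return coef

# Correlated pair (features 0 and 1) plus a signal in a different block:
# whitening keeps per-block fits consistent despite the correlation.
rng = np.random.default_rng(3)
X = rng.normal(size=(300, 40))
X[:, 1] = X[:, 0] + 0.1 * rng.normal(size=300)
y = 2 * X[:, 0] - X[:, 25] + rng.normal(0, 0.5, 300)
print(np.where(deco(X, y) != 0)[0])
```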
For datasets with both large sample sizes and high dimensionality, I propose a new divide-and-conquer framework, DEME (DECO-message), by leveraging both the DECO and the message algorithms. The new framework first partitions the dataset in the sample space into row cubes using message and then partitions the feature space of the cubes using DECO. This procedure is equivalent to partitioning the original data matrix into multiple small blocks, each with a feasible size that can be stored and fitted on a single computer in parallel. The results are then synthesized via the DECO and message algorithms in reverse order to produce the final output. The whole framework is extremely scalable.
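The block partitioning that DEME performs before dispatching message and DECO amounts to cutting the data matrix into row-by-column tiles, as in this small sketch (the function name is ours):

```python
import numpy as np

def partition_blocks(X, row_parts=2, col_parts=2):
    """Split a data matrix into row x column blocks, as DEME does before
    running message across row cubes and DECO across feature blocks."""
    row_idx = np.array_split(np.arange(X.shape[0]), row_parts)
    col_idx = np.array_split(np.arange(X.shape[1]), col_parts)
    return [[X[np.ix_(r, c)] for c in col_idx] for r in row_idx]

blocks = partition_blocks(np.arange(24).reshape(6, 4), 2, 2)
print([[b.shape for b in row] for row in blocks])
# -> [[(3, 2), (3, 2)], [(3, 2), (3, 2)]]
```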
Abstract:
This paper explores city dweller aspirations for cities of the future in the context of global commitments to radically reduce carbon emissions by 2050; cities contribute the vast majority of these emissions, and a growing bulk of the world's population lives in cities. The particular challenge of creating a carbon-reduced future in democratic countries is that the measures proposed must be acceptable to the electorate. Such acceptability is fostered if carbon-reduced ways of living are also felt to be wellbeing-maximising. Thus the objective of the paper is to explore what kinds of cities people aspire to live in, and to ascertain whether these aspirations align with or undermine carbon-reduced ways of living, as well as personal wellbeing. Using a novel free-associative technique, city aspirations are found to cluster around seven themes, encompassing physical and social aspects. Physically, people aspire to a city with a range of services and facilities, green and blue spaces, efficient transport, beauty and good design. Socially, people aspire to a sense of community and a safe environment. An exploration of these themes reveals that only a minority of the participants' aspirations for cities relate to lowering carbon or environmental wellbeing. Far more consensual is an emphasis on, and a particular vision of, aspirations that will bring personal wellbeing. Furthermore, city dweller aspirations align with evidence concerning factors that maximise personal wellbeing but far less with those that produce low-carbon ways of living. In order to shape a lower-carbon future that city dwellers accept, the potential convergence between environmental and personal wellbeing will need to be capitalised on: primarily aversion to pollution and enjoyment of communal green space.
Abstract:
This research paper presents a five-step algorithm to generate tool paths for machining Free form / Irregular Contoured Surface(s) (FICS) by adopting the STEP-NC (AP-238) format. In the first step, a parametrized CAD model with FICS is created or imported into the UG-NX 6.0 CAD package. The second step recognizes the features and calculates a Closeness Index (CI) by comparing them with B-spline / Bezier surfaces. The third step utilizes the CI and extracts the necessary data to formulate the blending functions for the identified features. In the fourth step, Z-level 5-axis tool paths are generated using flat and ball end mill cutters. Finally, in the fifth step, the tool paths are integrated with the STEP-NC format and validated. All these steps are discussed and explained through a validated industrial component.
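The paper's CI formula is not given in the abstract. As a hypothetical stand-in, the sketch below fits a tensor-product Bezier patch to sampled surface points by least squares and reports a normalised-residual score, where 1 means the region is exactly expressible as a Bezier surface; the definition and all names are our own.

```python
import numpy as np
from math import comb

def bernstein(n, i, t):
    """Bernstein basis polynomial B_{i,n}(t)."""
    return comb(n, i) * t**i * (1 - t)**(n - i)

def closeness_index(points, uv, degree=3):
    """Toy 'Closeness Index': how well a Bezier patch of the given degree
    explains a sampled surface region (1 = exact fit). This is our own
    normalised RMS-residual definition, not the paper's CI formula."""
    n = degree
    # Design matrix of tensor-product Bernstein basis values at (u, v).
    B = np.array([[bernstein(n, i, u) * bernstein(n, j, v)
                   for i in range(n + 1) for j in range(n + 1)]
                  for u, v in uv])
    ctrl, *_ = np.linalg.lstsq(B, points, rcond=None)   # fit control points
    resid = np.linalg.norm(points - B @ ctrl, axis=1)
    span = np.ptp(points, axis=0).max() + 1e-12         # scale normaliser
    return 1.0 - resid.mean() / span

# A saddle z = u^2 - v^2 is exactly expressible by a bicubic patch,
# so the index should be very close to 1.
uv = np.array([(u, v) for u in np.linspace(0, 1, 10)
               for v in np.linspace(0, 1, 10)])
pts = np.c_[uv[:, 0], uv[:, 1], uv[:, 0] ** 2 - uv[:, 1] ** 2]
print(round(closeness_index(pts, uv), 3))   # -> 1.0
```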