45 results for interval-valued fuzzy sets
Abstract:
We propose a computational method for the coupled simulation of a compressible flow interacting with a thin-shell structure undergoing large deformations. An Eulerian finite volume formulation is adopted for the fluid and a Lagrangian formulation based on subdivision finite elements is adopted for the shell response. The coupling between the fluid and the solid response is achieved via a novel approach based on level sets. The basic approach furnishes a general algorithm for coupling Lagrangian shell solvers with Cartesian grid based Eulerian fluid solvers. The efficiency and robustness of the proposed approach are demonstrated with an airbag deployment simulation. It bears emphasis that in the proposed approach the solid and the fluid components, as well as their coupled interaction, are considered in full detail and modeled with an equivalent level of fidelity, without any oversimplifying assumptions or bias towards a particular physical aspect of the problem.
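The level-set idea behind the coupling can be illustrated in a few lines: a signed distance function phi, sampled on the Eulerian Cartesian grid, implicitly represents the solid surface as its zero set, and cells are classified as fluid, covered, or interface from the sign pattern of phi. This is a minimal sketch only, with a circle standing in for the shell; it is not the paper's algorithm.

```python
import numpy as np

# Signed distance to a circle stands in for the shell surface here.
n = 64
y, x = np.mgrid[0:n, 0:n] / (n - 1)            # unit-square Cartesian grid
phi = np.hypot(x - 0.5, y - 0.5) - 0.3         # signed distance: <0 inside

fluid = phi > 0.0                              # cells outside the body
# Interface cells: any cell whose neighbour lies on the other side of phi = 0.
interface = np.zeros_like(fluid)
interface[:-1, :] |= fluid[:-1, :] != fluid[1:, :]
interface[:, :-1] |= fluid[:, :-1] != fluid[:, 1:]
```

The fluid solver would then apply boundary conditions only on the interface cells, leaving the Lagrangian shell solver free to move the zero set between time steps.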
Abstract:
The long-term goal of our work is to enable rapid-prototyping design optimization to take place on geometries of arbitrary size in the spirit of a real-time computer game. In recent papers we have reported the integration of a Level Set based geometry kernel with an octree-based cut-Cartesian mesh generator, RANS flow solver and post-processing all within a single piece of software - and all implemented in parallel with commodity PC clusters as the target. This work has shown that it is possible to eliminate all serial bottlenecks from the CFD process. This paper reports further progress towards our goal; in particular we report on the generation of viscous layer meshes to bridge the body to the flow across the cut-cells. The Level Set formulation, which underpins the geometry representation, is used as a natural mechanism to allow rapid construction of conformal layer meshes. The guiding principle is to construct the mesh which most closely approximates the body but remains solvable. This apparently novel approach is described and examples are given.
Abstract:
This paper presents an incremental learning solution for Linear Discriminant Analysis (LDA) and its applications to object recognition problems. We apply the sufficient spanning set approximation in three steps, i.e. updates of the total scatter matrix, the between-class scatter matrix and the projected data matrix, which leads to an online solution that closely agrees with the batch solution in accuracy while significantly reducing the computational complexity. The algorithm yields an efficient solution to incremental LDA even when the number of classes as well as the set size is large. The incremental LDA method has also been shown useful for semi-supervised online learning. Label propagation is done by integrating the incremental LDA into an EM framework. The method has been demonstrated in the task of merging large datasets which were collected during MPEG standardization for face image retrieval, face authentication using the BANCA dataset, and object categorisation using the Caltech101 dataset. © 2010 Springer Science+Business Media, LLC.
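At the heart of any such merge-style update is the one-pass combination of the scatter statistics of two datasets. The sketch below shows only that generic building block (the helper name `merge_scatter` is ours; the paper's sufficient-spanning-set approximation of the resulting eigenspace is not reproduced):

```python
import numpy as np

def merge_scatter(n1, m1, S1, n2, m2, S2):
    """Combine counts, means and total scatter matrices of two datasets
    without revisiting the raw data (generic incremental-LDA building block)."""
    n = n1 + n2
    m = (n1 * m1 + n2 * m2) / n
    d = (m1 - m2).reshape(-1, 1)
    # Standard parallel-merge identity for the total scatter matrix.
    S = S1 + S2 + (n1 * n2 / n) * (d @ d.T)
    return n, m, S

# Usage: the merged scatter matches a batch computation over both sets.
def scatter(X):
    c = X - X.mean(axis=0)
    return c.T @ c

rng = np.random.default_rng(0)
X1 = rng.normal(size=(50, 3))
X2 = rng.normal(1.0, 1.0, size=(70, 3))
n, m, S = merge_scatter(len(X1), X1.mean(axis=0), scatter(X1),
                        len(X2), X2.mean(axis=0), scatter(X2))
assert np.allclose(S, scatter(np.vstack([X1, X2])))
```

The between-class scatter admits an analogous merge, which is why updating the three matrices separately keeps the whole procedure online.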
Abstract:
The background to this review paper is research we have performed over recent years aimed at developing a simulation system capable of handling large scale, real world applications implemented in an end-to-end parallel, scalable manner. The particular focus of this paper is the use of a Level Set solid modeling geometry kernel within this parallel framework to enable automated design optimization without topological restrictions and on geometries of arbitrary complexity. Also described is another interesting application of Level Sets: their use in guiding the export of a body-conformal mesh from our basic cut-Cartesian background octree mesh; this permits third party flow solvers to be deployed. As practical demonstrations, meshes of guaranteed quality are generated and flow-solved for a B747 in full landing configuration and an automated optimization is performed on a cooled turbine tip geometry. Copyright © 2009 by W.N.Dawes.
Abstract:
Product innovativeness is a primary contingent factor to be addressed in the development of flexible management for the front-end. However, due to the complexity of this early phase of the innovation process, defining which attributes to customise is critical to support a contingent approach. Therefore, this study investigates front-end attributes that need to be customised to permit effective management for different degrees of innovation. To accomplish this aim, a literature review and five case studies were performed. The findings highlighted the front-end strategic and operational levels as factors influencing the front-end attributes related to product innovativeness. In conclusion, this study suggests that two front-end attributes should be customised: development activities and the decision-making approach. Copyright © 2011 Inderscience Enterprises Ltd.
Abstract:
Choosing a project manager for a construction project—particularly, large projects—is a critical project decision. The selection process involves different criteria and should be in accordance with company policies and project specifications. Traditionally, potential candidates are interviewed and the most qualified are selected in compliance with company priorities and project conditions. Precise computing models that could take various candidates’ information into consideration and then pinpoint the most qualified person with a high degree of accuracy would be beneficial. On the basis of the opinions of experienced construction company managers, this paper, through presenting a fuzzy system, identifies the important criteria in selecting a project manager. The proposed fuzzy system is based on IF-THEN rules; a genetic algorithm improves the overall accuracy as well as the functions used by the fuzzy system, and fuzzy c-means clustering provides initial estimates of the cluster centers. Moreover, a back-propagation neural network method was used to train the system. The optimal measures of the inference parameters were identified by calculating the system’s output error and propagating this error within the system. After specifying the system parameters, the membership function parameters—which were approximated by means of clustering and projection—were tuned with the genetic algorithm. Results from this system in selecting project managers show its high capability in making high-quality personnel predictions.
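For reference, the fuzzy c-means step mentioned above can be sketched generically; the parameter names (number of clusters `c`, fuzzifier `m`) are the textbook ones, not taken from the paper, and the GA tuning and neural-network training are omitted:

```python
import numpy as np

def fuzzy_c_means(X, c, m=2.0, iters=100, seed=0):
    """Textbook fuzzy c-means: returns (cluster centers, membership matrix U),
    where U[k, j] is the degree to which point k belongs to cluster j."""
    rng = np.random.default_rng(seed)
    U = rng.random((len(X), c))
    U /= U.sum(axis=1, keepdims=True)          # memberships sum to 1 per point
    for _ in range(iters):
        W = U ** m                             # fuzzified weights
        centers = (W.T @ X) / W.sum(axis=0)[:, None]
        d = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2)
        d = np.maximum(d, 1e-12)               # guard against zero distance
        inv = d ** (-2.0 / (m - 1.0))          # standard membership update
        U = inv / inv.sum(axis=1, keepdims=True)
    return centers, U

# Usage: two well-separated blobs are recovered as two fuzzy clusters.
rng = np.random.default_rng(1)
X = np.vstack([rng.normal(0.0, 0.1, (30, 2)),
               rng.normal(5.0, 0.1, (30, 2))])
centers, U = fuzzy_c_means(X, c=2)
```

In the paper's pipeline, such cluster centers would seed the membership functions that the genetic algorithm then tunes.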
Abstract:
Humans have been shown to adapt to the temporal statistics of timing tasks so as to optimize the accuracy of their responses, in agreement with the predictions of Bayesian integration. This suggests that they build an internal representation of both the experimentally imposed distribution of time intervals (the prior) and of the error (the loss function). The responses of a Bayesian ideal observer depend crucially on these internal representations, which have only been previously studied for simple distributions. To study the nature of these representations we asked subjects to reproduce time intervals drawn from underlying temporal distributions of varying complexity, from uniform to highly skewed or bimodal, while also varying the error mapping that determined the performance feedback. Interval reproduction times were affected by both the distribution and feedback, in good agreement with a performance-optimizing Bayesian observer and actor model. Bayesian model comparison highlighted that subjects were integrating the provided feedback and represented the experimental distribution with a smoothed approximation. A nonparametric reconstruction of the subjective priors from the data shows that they are generally in agreement with the true distributions up to third-order moments, but with systematically heavier tails. In particular, higher-order statistical features (kurtosis, multimodality) seem much harder to acquire. Our findings suggest that humans have only minor constraints on learning lower-order statistical properties of unimodal (including peaked and skewed) distributions of time intervals under the guidance of corrective feedback, and that their behavior is well explained by Bayesian decision theory.
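A Bayesian observer of the kind the data are compared against can be sketched on a discretized grid: a noisy measurement of the true interval is combined with the prior, and the response minimizes the expected loss. The prior, noise level, and loss below (squared error, giving the posterior mean) are illustrative choices, not the paper's fitted values:

```python
import numpy as np

# Discretized grid of candidate intervals (seconds).
t = np.linspace(0.3, 1.5, 500)
# Illustrative peaked prior over intervals, normalized on the grid.
prior = np.exp(-0.5 * ((t - 0.8) / 0.15) ** 2)
prior /= prior.sum()
sigma = 0.1                                    # assumed measurement noise (s)

def estimate(t_m):
    """Bayes-least-squares estimate of the interval given measurement t_m."""
    like = np.exp(-0.5 * ((t_m - t) / sigma) ** 2)   # Gaussian likelihood
    post = like * prior
    post /= post.sum()
    return (post * t).sum()                          # posterior mean

# Hallmark of Bayesian integration: estimates regress toward the prior mean.
assert estimate(1.2) < 1.2
assert estimate(0.5) > 0.5
```

Fitting the shape of `prior` to subjects' responses is essentially how the subjective priors are reconstructed nonparametrically.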
Abstract:
The fundamental aim of clustering algorithms is to partition data points. We consider tasks where the discovered partition is allowed to vary with some covariate such as space or time. One approach would be to use fragmentation-coagulation processes, but these, being Markov processes, are restricted to linear or tree structured covariate spaces. We define a partition-valued process on an arbitrary covariate space using Gaussian processes. We use the process to construct a multitask clustering model which partitions data points in a similar way across multiple data sources, and a time series model of network data which allows cluster assignments to vary over time. We describe sampling algorithms for inference and apply our method to defining cancer subtypes based on different types of cellular characteristics, finding regulatory modules from gene expression data from multiple human populations, and discovering time-varying community structure in a social network.
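The flavour of a covariate-dependent partition can be sketched by drawing one smooth Gaussian-process sample path per cluster over a 1-D covariate (e.g. time) and assigning each point to the cluster whose function is largest there, so the induced partition varies smoothly with the covariate. This mimics the idea only; it is not the paper's construction:

```python
import numpy as np

rng = np.random.default_rng(1)
x = np.linspace(0.0, 1.0, 200)                        # 1-D covariate grid
# Squared-exponential kernel with lengthscale 0.1, plus jitter for Cholesky.
K = np.exp(-0.5 * (x[:, None] - x[None, :]) ** 2 / 0.1 ** 2)
K += 1e-6 * np.eye(len(x))
L = np.linalg.cholesky(K)

f = L @ rng.normal(size=(len(x), 3))                  # 3 GP cluster functions
z = f.argmax(axis=1)                                  # partition along x
```

Because each `f[:, j]` is smooth in `x`, the cluster label `z` is piecewise constant and changes only where two functions cross, which is exactly the covariate-dependence the model is after.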
Abstract:
We introduce a characterization of contraction for bounded convex sets. For discrete-time multi-agent systems we provide an explicit upper bound on the rate of convergence to a consensus under the assumptions of contractiveness and (weak) connectedness (across an interval). Convergence is shown to be exponential when either the system or the function characterizing the contraction is linear. Copyright © 2007 IFAC.
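A minimal linear instance of this setting: agents repeatedly average with their neighbours through a row-stochastic matrix, the convex hull of the states contracts, and the spread max(x) - min(x) decays exponentially. The weights below are illustrative, not from the paper:

```python
import numpy as np

# Row-stochastic averaging matrix for 3 agents on a path graph,
# with self-loops so the update is a strict contraction of the hull.
W = np.array([[0.50, 0.50, 0.00],
              [0.25, 0.50, 0.25],
              [0.00, 0.50, 0.50]])
x = np.array([0.0, 3.0, 9.0])                  # initial agent states

spreads = []
for _ in range(30):
    spreads.append(x.max() - x.min())          # diameter of the state set
    x = W @ x                                  # consensus update

# The spread shrinks geometrically toward a common consensus value.
assert spreads[-1] < spreads[0]
```

Here the contraction factor per step is the second-largest eigenvalue modulus of `W`, which is what makes the convergence rate in the linear case explicit.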