144 results for compression set


Relevance: 20.00%

Abstract:

Chapter 20 Clustering User Data for User Modelling in the GUIDE Multi-modal Set-top Box PM Langdon and P. Biswas 20.1 ... It utilises advanced user modelling and simulation in conjunction with a single layer interface that permits a ...

Relevance: 20.00%

Abstract:

The dynamic compressive response of corrugated carbon-fibre reinforced epoxy sandwich cores has been investigated using a Kolsky-bar set-up. Compression at rates ranging from quasi-static up to v0 = 200 m/s has been tested on three different slenderness ratios of strut. High-speed photography was used to capture the failure mechanisms and relate these to the measured axial compressive stress. Experiments show significant strength enhancement as the loading rate increases. Although material rate sensitivity accounts for some of this, it has been shown that the majority of the strength enhancement is due to inertial stabilisation of the core members. Inertial strength enhancement rises non-linearly with impact velocity. The largest gains are associated with a shift to buckle modes composed of 2-3 half sine waves. The loading rates tested within this study are similar to those expected when a sandwich core is compressed by a blast event. © 2012 Elsevier Ltd.

Relevance: 20.00%

Abstract:

The commercial far-range (>10 m) infrastructure spatial data collection methods are not completely automated. They need a significant amount of manual post-processing work, and in some cases the equipment costs are significant. This paper presents a method that is the first step of a stereo videogrammetric framework and holds promise to address these issues. Under this method, video streams are initially collected from a calibrated set of two video cameras. For each pair of simultaneous video frames, visual feature points are detected and their spatial coordinates are then computed. The result, in the form of a sparse 3D point cloud, is the basis for the next steps in the framework (i.e., camera motion estimation and dense 3D reconstruction). A set of data collected from an ongoing infrastructure project is used to show the merits of the method. A comparison with existing tools is also shown, to indicate the performance differences of the proposed method in the level of automation and the accuracy of results.
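The triangulation step this abstract describes, computing spatial coordinates for feature points matched across a calibrated camera pair, can be sketched with standard linear (DLT) triangulation. This is a generic NumPy illustration, not the authors' implementation; the projection matrices `P1` and `P2` are assumed to come from the prior calibration step.

```python
import numpy as np

def triangulate_point(P1, P2, x1, x2):
    """Linear (DLT) triangulation of one matched feature point.

    P1, P2 : 3x4 camera projection matrices (assumed known from calibration).
    x1, x2 : (u, v) pixel coordinates of the same feature in each frame.
    Returns the 3D point in the world frame.
    """
    # Each image measurement contributes two linear constraints on the
    # homogeneous 3D point X: x * (P[2] @ X) = P[0] @ X, and similarly for y.
    A = np.array([
        x1[0] * P1[2] - P1[0],
        x1[1] * P1[2] - P1[1],
        x2[0] * P2[2] - P2[0],
        x2[1] * P2[2] - P2[1],
    ])
    # Least-squares homogeneous solution: right singular vector with the
    # smallest singular value.
    _, _, Vt = np.linalg.svd(A)
    X = Vt[-1]
    return X[:3] / X[3]
```

In a full pipeline, a feature detector and matcher would supply the pixel correspondences, and repeating this per match yields the sparse 3D point cloud the abstract refers to.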

Relevance: 20.00%

Abstract:

When searching for characteristic subpatterns in potentially noisy graph data, it appears self-evident that having multiple observations would be better than having just one. However, it turns out that the inconsistencies introduced when different graph instances have different edge sets pose a serious challenge. In this work we address this challenge for the problem of finding maximum weighted cliques. We introduce the concept of the most persistent soft-clique. This is a subset of vertices that (1) is almost fully or at least densely connected, (2) occurs in all or almost all graph instances, and (3) has the maximum weight. We present a measure of clique-ness that essentially counts the number of edges missing to make a subset of vertices into a clique. With this measure, we show that the problem of finding the most persistent soft-clique can be cast either as (a) a max-min two-person game optimization problem, or (b) a min-min soft-margin optimization problem. Both formulations lead to the same solution when using a partial Lagrangian method to solve the optimization problems. Through experiments on synthetic data and on real social network data we show that the proposed method is able to reliably find soft cliques in graph data, even when the data are distorted by random noise or unreliable observations. Copyright 2012 by the author(s)/owner(s).
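The core of the clique-ness measure, counting the edges that are missing before a vertex subset becomes a clique, can be sketched directly. The plain sum over instances below is a simplification for illustration; the paper's actual formulation optimizes this via a soft-margin / max-min program rather than exhaustive counting.

```python
from itertools import combinations

def missing_edges(vertices, edges):
    """Count edges that would have to be added to make `vertices` a clique.

    vertices : iterable of vertex ids
    edges    : set of frozenset pairs present in one graph instance
    """
    vs = list(vertices)
    return sum(1 for u, v in combinations(vs, 2)
               if frozenset((u, v)) not in edges)

def total_missing(vertices, instances):
    """Aggregate the missing-edge count over all graph instances, so that
    subsets which are dense in every instance score lowest (most persistent)."""
    return sum(missing_edges(vertices, e) for e in instances)
```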

Relevance: 20.00%

Abstract:

In a wind-turbine gearbox, planet bearings exhibit a high failure rate and are considered one of the most critical components. Development of efficient vibration-based fault detection methods for these bearings requires a thorough understanding of their vibration signature. Much work has been done to study the vibration properties of healthy planetary gear sets and to identify fault frequencies in fixed-axis bearings. However, the vibration characteristics of planetary gear sets containing localized planet bearing defects (spalls or pits) have not been studied so far. In this paper, we propose a novel analytical model of a planetary gear set with ring gear flexibility and localized bearing defects as two key features. The model is used to simulate the vibration response of a planetary system in the presence of a defective planet bearing with faults on the inner or outer raceway. The characteristic fault signature of a planetary bearing defect is determined and sources of modulation sidebands are identified. The findings from this work will be useful to improve existing sensor placement strategies and to develop more sophisticated fault detection algorithms. Copyright © 2011 by ASME.
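For context, the fixed-axis fault frequencies mentioned above are the classical ball-pass frequencies, computable from the bearing geometry. These are the standard textbook formulas, not the paper's model; the paper's point is that in a planetary gear set such tones are further modulated (e.g. by carrier rotation), producing the sidebands it analyses.

```python
import math

def bearing_fault_frequencies(n_balls, shaft_hz, d_ball, d_pitch, contact_deg=0.0):
    """Classical ball-pass frequencies of a fixed-axis rolling-element bearing.

    n_balls     : number of rolling elements
    shaft_hz    : relative shaft rotation frequency [Hz]
    d_ball      : rolling-element diameter
    d_pitch     : pitch diameter
    contact_deg : contact angle in degrees
    Returns (BPFO, BPFI): outer- and inner-race defect frequencies [Hz].
    """
    ratio = (d_ball / d_pitch) * math.cos(math.radians(contact_deg))
    bpfo = 0.5 * n_balls * shaft_hz * (1.0 - ratio)  # outer-race defect
    bpfi = 0.5 * n_balls * shaft_hz * (1.0 + ratio)  # inner-race defect
    return bpfo, bpfi
```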

Relevance: 20.00%

Abstract:

The brain encodes visual information with limited precision. Contradictory evidence exists as to whether the precision with which an item is encoded depends on the number of stimuli in a display (set size). Some studies have found evidence that precision decreases with set size, but others have reported constant precision. These groups of studies differed in two ways. The studies that reported a decrease used displays with heterogeneous stimuli and tasks with a short-term memory component, while the ones that reported constancy used homogeneous stimuli and tasks that did not require short-term memory. To disentangle the effects of heterogeneity and short-term memory involvement, we conducted two main experiments. In Experiment 1, stimuli were heterogeneous, and we compared a condition in which target identity was revealed before the stimulus display with one in which it was revealed afterward. In Experiment 2, target identity was fixed, and we compared heterogeneous and homogeneous distractor conditions. In both experiments, we compared an optimal-observer model in which precision is constant with set size with one in which it depends on set size. We found that precision decreases with set size when the distractors are heterogeneous, regardless of whether short-term memory is involved, but not when they are homogeneous. This suggests that heterogeneity, not short-term memory, is the critical factor. In addition, we found that precision exhibits variability across items and trials, which may partly be caused by attentional fluctuations.
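A common way to parameterize the two competing models in this literature is a power law for mean precision, with across-item variability captured by a gamma distribution. The exact form below is an assumption for illustration, not necessarily the paper's parameterization.

```python
import numpy as np

def mean_precision(set_size, j1, alpha):
    """Power-law mean precision per item: J(N) = j1 * N**(-alpha).

    alpha = 0 recovers the constant-precision model; alpha > 0 gives the
    set-size-dependent model (this parameterization is an assumption here).
    """
    return j1 * set_size ** (-alpha)

def sample_item_precisions(set_size, j1, alpha, tau, rng):
    """Variable precision across items/trials: draw each item's precision
    from a gamma distribution with mean J(N) and scale parameter tau."""
    jbar = mean_precision(set_size, j1, alpha)
    # Gamma mean = shape * scale, so shape = jbar / tau gives mean jbar.
    return rng.gamma(shape=jbar / tau, scale=tau, size=set_size)
```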

Relevance: 20.00%

Abstract:

Accurate and efficient computation of the distance function d for a given domain is important for many areas of numerical modeling. Partial differential equation based (e.g. Hamilton-Jacobi type) distance function algorithms have desirable computational efficiency and accuracy. In this study, as an alternative, a Poisson equation based level set (distance function) is considered and solved using the meshless boundary element method (BEM). Its application to shape topology analysis, including the medial axis for domain decomposition, geometric de-featuring and other aspects of numerical modeling, is assessed. © 2011 Elsevier Ltd. All rights reserved.
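A minimal sketch of the Poisson-based distance idea, here in 1D with a dense finite-difference solver rather than the paper's meshless BEM: solve u'' = -1 with u = 0 on the boundary, then recover the distance from d = sqrt(|u'|^2 + 2u) - |u'|, a normalization that is exact in 1D and approximate near boundaries in higher dimensions.

```python
import numpy as np

def poisson_distance_1d(n):
    """Distance-to-boundary on [0, 1] via the Poisson approach.

    Solves u'' = -1 with u(0) = u(1) = 0 on n interior nodes (finite
    differences here, as a stand-in for the paper's BEM solver), then
    applies d = sqrt(u'**2 + 2u) - |u'|.
    Returns (x, d) at the interior nodes.
    """
    h = 1.0 / (n + 1)
    x = np.linspace(h, 1.0 - h, n)              # interior nodes
    # Standard second-difference Laplacian (Dirichlet boundaries).
    A = (np.diag(-2.0 * np.ones(n)) +
         np.diag(np.ones(n - 1), 1) +
         np.diag(np.ones(n - 1), -1)) / h**2
    u = np.linalg.solve(A, -np.ones(n))         # u'' = -1
    du = np.gradient(u, x)                      # finite-difference u'
    d = np.sqrt(du**2 + 2.0 * u) - np.abs(du)
    return x, d
```

In 1D the exact solution is u = x(1 - x)/2, for which the normalization returns d = min(x, 1 - x) exactly, i.e. the true distance to the nearer endpoint.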

Relevance: 20.00%

Abstract:

The partially observable Markov decision process (POMDP) has been proposed as a dialogue model that enables automatic improvement of the dialogue policy and robustness to speech understanding errors. It requires, however, a large number of dialogues to train the dialogue policy. Gaussian processes (GP) have recently been applied to POMDP dialogue management optimisation showing an ability to substantially increase the speed of learning. Here, we investigate this further using the Bayesian Update of Dialogue State dialogue manager. We show that it is possible to apply Gaussian processes directly to the belief state, removing the need for a parametric policy representation. In addition, the resulting policy learns significantly faster while maintaining operational performance. © 2012 IEEE.
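The core idea of applying a GP directly to the belief state, a non-parametric value function over belief vectors with no parametric policy features, can be sketched with plain GP regression. This is a minimal illustration of that idea only, not the actual GP reinforcement-learning algorithm used for dialogue policy optimisation; the kernel choice and noise level are assumptions.

```python
import numpy as np

def rbf_kernel(B1, B2, lengthscale=0.5):
    """RBF kernel between rows of two arrays of belief states
    (probability vectors over dialogue states)."""
    d2 = ((B1[:, None, :] - B2[None, :, :]) ** 2).sum(-1)
    return np.exp(-0.5 * d2 / lengthscale**2)

def gp_q_estimate(B_train, q_train, B_query, noise=1e-2):
    """GP posterior mean of the action value at new belief states,
    learned from observed (belief, return) pairs for one action."""
    K = rbf_kernel(B_train, B_train) + noise * np.eye(len(B_train))
    k_star = rbf_kernel(B_query, B_train)
    return k_star @ np.linalg.solve(K, q_train)
```

Because the GP interpolates between observed beliefs, value estimates generalise to unseen belief states immediately, which is one intuition for the faster policy learning reported above.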

Relevance: 20.00%

Abstract:

Looking for a target in a visual scene becomes more difficult as the number of stimuli increases. In a signal detection theory view, this is due to the cumulative effect of noise in the encoding of the distractors and, potentially on top of that, to an increase of the noise (i.e., a decrease of precision) per stimulus with set size, reflecting divided attention. It has long been argued that human visual search behavior can be accounted for by the first factor alone. While such an account seems adequate for search tasks in which all distractors have the same, known feature value (i.e., are maximally predictable), we recently found a clear effect of set size on encoding precision when distractors are drawn from a uniform distribution (i.e., when they are maximally unpredictable). Here we interpolate between these two extreme cases to examine which of the two conclusions holds more generally as distractor statistics are varied. In one experiment, we vary the level of distractor heterogeneity; in another, we dissociate distractor homogeneity from predictability. In all conditions in both experiments, we found a strong decrease of precision with increasing set size, suggesting that set-size-independent precision is the exception rather than the rule.

Relevance: 20.00%

Abstract:

Composite structures exhibit many different failure mechanisms, but attempts to model composite failure frequently make a priori assumptions about the mechanism by which failure will occur. Wang et al. [1] conducted compressive tests on four configurations of composite specimen manufactured with out-of-plane waviness created by ply-drop defects. There were significantly different failures for each case. Detailed finite element models of these experiments were developed which include competing failure mechanisms. The model predictions correlate well with experimental results, both qualitatively (location of failure and shape of failed specimen) and quantitatively (failure load). The models are used to identify the progression of failure during the compressive tests, determine the critical failure mechanism for each configuration, and investigate the effect of cohesive parameters upon specimen strength. This modelling approach, which includes multiple competing failure mechanisms, can be applied to predict failure in situations where the failure mechanism is not known in advance. © 2013 Elsevier Ltd. All rights reserved.

Relevance: 20.00%

Abstract:

The objective of this study was to examine the operating characteristics of a light-duty multi-cylinder compression-ignition engine with regular gasoline fuel at low engine speed and load. The effects of fuel stratification by means of multiple injections, as well as the sensitivity of auto-ignition and burn rate to intake pressure and temperature, are presented. The measurements used in this study included gaseous emissions, filter smoke opacity and in-cylinder indicated information. It was found that stable, low-emission operation was possible with raised intake manifold pressure and temperature, and that fuel stratification can lead to an increase in stability and a reduced reliance on increased temperature and pressure. It was also found that the auto-ignition delay sensitivity of gasoline to intake temperature and pressure was low within the operating window considered in this study. Nevertheless, the requirement for an increase of pressure, temperature and stratification in order to achieve auto-ignition time scales small enough for combustion in the engine was clear when using pump gasoline. Copyright © 2009 SAE International.