976 results for image set


Relevance: 20.00%

Publisher:

Abstract:

The work presented in this paper involves the stochastic finite element analysis of composite-epoxy adhesive lap joints using Monte Carlo simulation. A set of composite adhesive lap joints was prepared and loaded to failure to obtain their strength. The peel and shear strains in the bond line region at different levels of load were obtained using digital image correlation (DIC). The corresponding stresses were computed assuming a plane strain condition. The finite element model was verified by comparing the numerical and experimental stresses. The stresses exhibited similar behavior and a good correlation was obtained. Further, the finite element model was used to perform the stochastic analysis using Monte Carlo simulation. The parameters influencing the stress distribution were provided as random input variables, and the resulting probabilistic variation of the maximum peel and shear stresses was studied. It was found that the adhesive modulus and bond line thickness had a significant influence on the maximum stress variation. While the adherend thickness also had a major influence, the effect of variation in the longitudinal and shear moduli on the stresses was found to be small. (C) 2014 Elsevier Ltd. All rights reserved.
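The Monte Carlo step described above can be sketched as follows. This is a minimal illustration, not the paper's finite element model: the stress response is replaced by a hypothetical closed-form surrogate, and the parameter names, distributions and numerical values (`E_a`, `t`, 5% scatter) are invented for the example.

```python
import numpy as np

rng = np.random.default_rng(0)
N = 10_000  # number of Monte Carlo samples

# Hypothetical input scatter (illustrative values, not from the paper):
# adhesive modulus E_a [MPa] and bond line thickness t [mm], ~5% coefficient of variation.
E_a = rng.normal(2000.0, 100.0, N)
t = rng.normal(0.2, 0.01, N)

def max_peel_stress(E_a, t):
    # Stand-in response surface for the maximum peel stress; the paper
    # evaluates a finite element model of the lap joint at this step instead.
    return 30.0 * np.sqrt(E_a / 2000.0) / np.sqrt(t / 0.2)

sigma = max_peel_stress(E_a, t)
print(f"mean = {sigma.mean():.2f} MPa, cov = {sigma.std() / sigma.mean():.3f}")
```

The output distribution of `sigma` is what a stochastic analysis would then summarize (mean, scatter, sensitivity to each input).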


The tetrablock, roughly speaking, is the set of all linear fractional maps that map the open unit disc to itself. A formal definition of this inhomogeneous domain is given below. This paper considers triples of commuting bounded operators (A,B,P) that have the tetrablock as a spectral set. Such a triple is named a tetrablock contraction. The motivation comes from the success of model theory in another inhomogeneous domain, namely, the symmetrized bidisc Gamma. A pair of commuting bounded operators (S,P) with Gamma as a spectral set is called a Gamma-contraction, and always has a dilation. The two domains are related intricately, as Lemma 3.2 below shows. Given a triple (A,B,P) as above, we associate with it a pair (F-1, F-2), called its fundamental operators. We show that (A,B,P) dilates if the fundamental operators F-1 and F-2 satisfy certain commutativity conditions. Moreover, the dilation space is no bigger than the minimal isometric dilation space of the contraction P. Whether these commutativity conditions are also necessary is not known. What we have shown is that if there is a tetrablock isometric dilation on the minimal isometric dilation space of P, then those commutativity conditions are necessarily imposed on the fundamental operators. En route, we decipher the structure of a tetrablock unitary (the candidate for the dilation triple) and a tetrablock isometry (the restriction of a tetrablock unitary to a joint invariant subspace). We derive new results about Gamma-contractions and apply them to tetrablock contractions. The methods applied are motivated by [11]. Although the calculations are lengthy and more complicated, they beautifully reveal that the dilation depends on the mutual relationship of the two fundamental operators, so that certain conditions need to be satisfied. The question of whether all tetrablock contractions dilate or not is unresolved.


Results from interface shear tests on sand-geosynthetic interfaces are examined in light of the surface roughness of the interacting geosynthetic material. Three different types of interface shear tests carried out in the frame of a direct shear test setup are compared to understand the effect of parameters like box fixity and symmetry on the interface shear characteristics. The formation of shear bands close to the interface is visualized in the tests, and the bands are analyzed using image-segmentation techniques in MATLAB. A woven geotextile with moderate roughness and a geomembrane with minimal roughness are used in the tests. The effects of the surface roughness of the geosynthetic material on the formation of shear bands, the movement of sand particles, and the interface shear parameters are studied and compared through visual observations, image analyses, and image-segmentation techniques.
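The band-detection idea can be illustrated with a simple threshold-based segmentation. This is a minimal stand-in for the MATLAB image-segmentation pipeline used in the tests (where something like Otsu's method, `graythresh`/`imbinarize`, would be typical); the synthetic image and intensity values below are invented for the sketch.

```python
import numpy as np

# Synthetic grayscale "test image": a dark band (sheared zone) inside a
# brighter sand matrix. Values are hypothetical.
img = np.full((60, 100), 200, dtype=np.uint8)
img[25:35, :] = 60  # hypothetical shear band near the interface

# Simple global threshold separates the band from the matrix.
band_mask = img < 128

# Measure band thickness as the vertical extent of the segmented region.
rows = np.where(band_mask.any(axis=1))[0]
thickness = int(rows.max() - rows.min() + 1)
print("band thickness in pixels:", thickness)
```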


Detailed pedofacies characterization, along with lithofacies investigations, of the Mio-Pleistocene Siwalik sediments exposed in the Ramnagar sub-basin has been carried out to elucidate the variability in time and space of fluvial processes and the role of intra- and extra-basinal controls on fluvial sedimentation during the evolution of the Himalayan foreland basin (HFB). The dominance of multiple, moderately to strongly developed palaeosol assemblages during deposition of the Lower Siwalik (~12-10.8 Ma) sediments suggests that the HFB was marked by the Upland set-up of Thomas et al. (2002). Activity of intra-basinal faults on the uplands and deposition of terminal fans at different times caused the development of multiple soils. Further, detailed pedofacies and lithofacies studies indicate the prevalence of stable tectonic conditions and the development of meandering streams with broad floodplains. However, the Middle Siwalik (~10.8-4.92 Ma) sub-group is marked by multistoried sandstones, minor mudstone and mainly weakly developed palaeosols, indicating deposition by large braided rivers in the form of megafans in the Lowland set-up of Thomas et al. (2002). A significant change in the nature and size of rivers from the Lower to the Middle Siwalik at ~10 Ma is found almost throughout the basin, from the Kohat Plateau (Pakistan) to Nepal, because the Himalayan orogen witnessed its greatest tectonic upheaval at this time, leading to the attainment of great heights by the Himalaya, intensification of the monsoon, development of large river systems and a high rate of sedimentation, and thereby a major change from the Upland set-up to the Lowland set-up over major parts of the HFB. An interesting geomorphic set-up prevailed in the Ramnagar sub-basin during deposition of the studied Upper Siwalik (~4.92 to <1.68 Ma) sediments, as observed from the degree of pedogenesis and the type of palaeosols.
In general, the Upper Siwalik sub-group in the Ramnagar sub-basin is subdivided, from bottom to top, into the Purmandal sandstone (4.92-4.49 Ma), Nagrota (4.49-1.68 Ma) and Boulder Conglomerate (<1.68 Ma) formations on the basis of sedimentological characters and changes in dominant lithology. The presence of mudstone, a few thin gravel beds and a dominant sandstone lithology with weakly to moderately developed palaeosols in the Purmandal sandstone Fm. indicates deposition by shallow braided fluvial streams. Deposition of the mudstone-dominated Nagrota Fm., with moderately to well-developed palaeosols and a zone of gleyed palaeosols with laminated mudstones and thin sandstones, took place in an environment marked by numerous small lakes, water-logged regions and small streams just south of the Piedmont zone, perhaps similar to what is happening presently in the upland region of the Upper Gangetic plain. This area is locally called the `Trai region' (Pascoe, 1964). Deposition of the Boulder Conglomerate Fm. took place by a gravelly braided river system close to the Himalayan Ranges. Activity along the Main Boundary Fault led to distal-ward progradation of these environments and to the development of an overall coarsening-upward sequence. (C) 2014 Elsevier B.V. All rights reserved.


To perform super resolution of low-resolution images, state-of-the-art methods are based on learning a pair of low-resolution and high-resolution dictionaries from multiple images. These trained dictionaries are used to replace patches in the low-resolution image with appropriate matching patches from the high-resolution dictionary. In this paper, we propose using a single common image as the dictionary, in conjunction with approximate nearest neighbour fields (ANNF), to perform super resolution (SR). By using a common source image, we are able to bypass the learning phase and to reduce the dictionary from a collection of hundreds of images to a single image. By adapting recent developments in ANNF computation to suit super resolution, we are able to perform much faster and more accurate SR than existing techniques. To establish this claim, we compare the proposed algorithm against various state-of-the-art algorithms, and show that we are able to achieve better and faster reconstruction without any training.
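The core matching step can be sketched as follows. This is a brute-force nearest-neighbour field between two small random images, standing in for the efficient ANNF computation (e.g. PatchMatch-style algorithms) that the abstract relies on; the image sizes, patch size, and data are invented for the example.

```python
import numpy as np

rng = np.random.default_rng(1)
P = 5  # patch size

def patches(img, P):
    # All overlapping PxP patches of a 2-D image, flattened to row vectors.
    H, W = img.shape
    return np.stack([img[i:i + P, j:j + P].ravel()
                     for i in range(H - P + 1) for j in range(W - P + 1)])

source = rng.random((20, 20))                    # the single "dictionary" image
target = source + 0.01 * rng.random((20, 20))    # query image, nearly identical

src_p = patches(source, P)
tgt_p = patches(target, P)

# Brute-force nearest-neighbour field: for each target patch, the index of the
# closest source patch. ANNF methods approximate exactly this assignment
# at far lower cost, which is what makes single-image SR fast.
d = ((tgt_p[:, None, :] - src_p[None, :, :]) ** 2).sum(-1)
nnf = d.argmin(axis=1)
```

In an SR pipeline, each matched source patch (or its high-resolution counterpart) would then replace the corresponding target patch.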


Representing images and videos in the form of compact codes has emerged as an important research interest in the vision community, in the context of web-scale image/video search. The recently proposed Vector of Locally Aggregated Descriptors (VLAD) has been shown to outperform existing retrieval techniques while giving the desired compact representation. VLAD aggregates the local features of an image in the feature space. In this paper, we propose to represent the local features extracted from an image as sparse codes over an over-complete dictionary obtained by the K-SVD dictionary training algorithm. The proposed VLAD aggregates the residuals in the space of these sparse codes to obtain a compact representation for the image. Experiments are performed on the `Holidays' database using SIFT features, and the performance of the proposed method is compared with the original VLAD. A 4% increase in mean average precision (mAP) indicates the better retrieval performance of the proposed sparse-coding-based VLAD.
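For reference, the baseline aggregation that the paper builds on can be sketched as plain VLAD: assign each local descriptor to its nearest visual word and accumulate the residuals. The proposal above replaces the hard k-means assignment with sparse codes over a K-SVD dictionary; the codebook size, descriptor dimension, and data below are invented for the sketch.

```python
import numpy as np

rng = np.random.default_rng(2)
K, D = 4, 8                    # codebook size, descriptor dimension (toy values)
centers = rng.random((K, D))   # visual words (k-means centroids in original VLAD)
desc = rng.random((100, D))    # local descriptors of one image (SIFT in the paper)

# Hard-assign each descriptor to its nearest centre.
assign = ((desc[:, None, :] - centers[None, :, :]) ** 2).sum(-1).argmin(1)

# Accumulate residuals (descriptor minus its centre) per visual word.
vlad = np.zeros((K, D))
for k in range(K):
    vlad[k] = (desc[assign == k] - centers[k]).sum(axis=0)

vlad = vlad.ravel()
vlad /= np.linalg.norm(vlad)   # L2 normalisation gives the final compact code
```

The resulting K*D-dimensional vector is the compact image signature compared at retrieval time.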


An iterative image reconstruction technique employing a B-spline potential function in a Bayesian framework is proposed for fluorescence microscopy images. B-splines are piecewise polynomials with smooth transitions and compact support, and are the shortest possible polynomial splines. Incorporation of the B-spline potential function in the maximum-a-posteriori reconstruction technique resulted in improved contrast, enhanced resolution and substantial background reduction. The proposed technique is validated on simulated data as well as on images acquired from fluorescence microscopes (widefield, confocal laser scanning fluorescence and super-resolution 4Pi microscopy). A comparative study of the proposed technique against the state-of-the-art maximum likelihood (ML) and maximum-a-posteriori (MAP) techniques with a quadratic potential function shows its superiority over the others. The B-spline MAP technique can find applications in several imaging modalities of fluorescence microscopy, such as selective plane illumination microscopy, localization microscopy and STED. (C) 2015 Author(s).


We seldom mistake a closer object for a larger one, even though its retinal image is bigger. One underlying mechanism could be to calculate the size of the retinal image relative to that of another nearby object. Here we set out to investigate whether single neurons in the monkey inferotemporal cortex (IT) are sensitive to the relative size of parts in a display. Each neuron was tested on shapes containing two parts that could be conjoined or spatially separated. Each shape was presented in four versions created by combining the two parts at each of two possible sizes. In this design, neurons sensitive to the absolute size of parts would show the greatest response modulation when both parts are scaled up, whereas neurons encoding relative size would show similar responses. Our main findings are that 1) IT neurons responded similarly to all four versions of a shape, but tuning tended to be more consistent between versions with proportionately scaled parts; 2) in a subpopulation of cells, we observed interactions that resulted in similar responses to proportionately scaled parts; 3) these interactions developed together with sensitivity to absolute size for objects with conjoined parts, but developed slightly later for objects with spatially separate parts. Taken together, our results demonstrate for the first time that there is a subpopulation of neurons in IT that encodes the relative size of parts in a display, forming a potential neural substrate for size constancy.


Facial expressions are the most expressive way to display emotions. Many algorithms have been proposed which employ a particular set of people (usually a database) to both train and test their model. This paper focuses on the challenging task of database-independent emotion recognition, which is a generalized case of subject-independent emotion recognition. The emotion recognition system employed in this work is a Meta-Cognitive Neuro-Fuzzy Inference System (McFIS). McFIS has two components: a neuro-fuzzy inference system, which is the cognitive component, and a self-regulatory learning mechanism, which is the meta-cognitive component. The meta-cognitive component monitors the knowledge in the neuro-fuzzy inference system and efficiently decides what-to-learn, when-to-learn and how-to-learn from the training samples. For each sample, McFIS decides whether to delete the sample without learning it, use it to add/prune or update the network parameters, or reserve it for future use. This helps the network avoid over-training and, as a result, improves its generalization performance on untrained databases. In this study, we extract pixel-based emotion features from the well-known Japanese Female Facial Expression (JAFFE) and Taiwanese Female Expression Image (TFEID) databases. Two sets of experiments are conducted. First, we study the individual performance on both databases with McFIS, based on a 5-fold cross-validation study. Next, in order to study generalization performance, McFIS trained on the JAFFE database is tested on TFEID and vice versa. The performance comparison in both experiments against an SVM classifier gives promising results.


In this paper, we propose a super resolution (SR) method for synthetic images using FeatureMatch. Existing state-of-the-art super resolution methods are learning-based methods, in which a pair of low-resolution and high-resolution dictionaries is trained, and this trained pair is used to replace patches in the low-resolution image with appropriate matching patches from the high-resolution dictionary. In this paper, we show that by using Approximate Nearest Neighbour Fields (ANNF) and a common source image, we can bypass the learning phase and use a single image as the dictionary, thus reducing the dictionary from a collection obtained from hundreds of training images to a single image. We show that by modifying the latest developments in ANNF computation to suit super resolution, we can perform much faster and more accurate SR than existing techniques. To establish this claim, we compare our algorithm against various state-of-the-art algorithms, and show that we are able to achieve better and faster reconstruction without any training phase.


Rapid reconstruction of multidimensional images is crucial for enabling real-time 3D fluorescence imaging. This becomes a key factor for imaging rapidly occurring events in the cellular environment. To facilitate real-time imaging, we have developed a graphics processing unit (GPU) based real-time maximum a-posteriori (MAP) image reconstruction system. The parallel processing capability of the GPU device, which consists of a large number of tiny processing cores, and the adaptability of the image reconstruction algorithm to parallel processing (employing multiple independent computing modules called threads) result in high temporal resolution. Moreover, the proposed quadratic-potential-based MAP algorithm effectively deconvolves the images as well as suppresses the noise. The multi-node, multi-threaded GPU and the Compute Unified Device Architecture (CUDA) efficiently execute the iterative image reconstruction algorithm, which is ~200-fold faster (for large datasets) than existing CPU-based systems. (C) 2015 Author(s). All article content, except where otherwise noted, is licensed under a Creative Commons Attribution 3.0 Unported License.


It has been shown earlier [1] that relaxed force constants (RFCs) can be used as a measure of bond strength only when the bonds form part of a complete valence internal coordinate (VIC) basis. However, if the bond is not part of the complete VIC basis, its RFC is not necessarily a measure of bond strength. Sometimes, it is possible to have a complete VIC basis that does not contain the intramolecular hydrogen bond (IMHB) as part of the basis; in that case the RFC of the IMHB is not necessarily a measure of bond strength. However, we know that an IMHB is a weak bond, and hence its RFC ought to be a measure of bond strength. We resolve this problem of the IMHB not being part of the complete basis by postulating `equivalent' basis sets, in which the IMHB is part of the basis in at least one of the equivalent sets of VIC. As long as a given IMHB appears in one of the equivalent complete VIC basis sets, its RFC can be used as a bond strength parameter.
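For reference, the quantity under discussion can be written out. This is a sketch of the standard compliance-matrix definition of the relaxed force constant, assuming F is the Hessian of the energy E in a complete VIC basis {q_i}:

```latex
k_i^{\mathrm{relaxed}} \;=\; \frac{1}{\left(\mathbf{F}^{-1}\right)_{ii}},
\qquad
F_{jk} \;=\; \frac{\partial^2 E}{\partial q_j\,\partial q_k} .
```

That is, the RFC of coordinate q_i is the reciprocal of the corresponding diagonal element of the compliance matrix F^{-1}, which is well defined only when q_i is itself a member of the complete VIC basis -- precisely the condition at issue for an IMHB.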


Viral capsids derived from an icosahedral plant virus widely used in physical and nanotechnological investigations were fully dissociated into dimers by a rapid change of pH. The process was probed in vitro at high spatiotemporal resolution by time-resolved small-angle X-ray scattering using a high-brilliance synchrotron source. A powerful custom-made global fitting algorithm allowed us to reconstruct the most likely pathway, parametrized by a set of stoichiometric coefficients, and to determine the shapes of two successive intermediates by ab initio calculations. Neither of these two unexpected intermediates was previously identified in self-assembly experiments, which suggests that the disassembly pathway is not a mirror image of the assembly pathway. These findings shed new light on the mechanisms and the reversibility of the assembly/disassembly of natural and synthetic virus-based systems. They also demonstrate that both the structure and the dynamics of an increasing number of intermediate species are becoming accessible to experiments.


In big data image/video analytics, we encounter the problem of learning an over-complete dictionary for sparse representation from a large training dataset, which cannot be processed at once because of storage and computational constraints. To tackle dictionary learning in such scenarios, we propose an algorithm that exploits the inherent clustered structure of the training data and makes use of a divide-and-conquer approach. The fundamental idea behind the algorithm is to partition the training dataset into smaller clusters and learn a local dictionary for each cluster. Subsequently, the local dictionaries are merged to form a global dictionary. Merging is done by solving another dictionary learning problem on the atoms of the locally trained dictionaries. This algorithm is referred to as the split-and-merge algorithm. We show that the proposed algorithm is efficient in its memory usage and computational complexity, and performs on par with the standard learning strategy, which operates on the entire dataset at once. As an application, we consider the problem of image denoising. We present a comparative analysis of our algorithm against standard learning techniques that use the entire database at once, in terms of training and denoising performance. We observe that the split-and-merge algorithm results in a remarkable reduction of training time, without significantly affecting the denoising performance.
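The split-and-merge scheme can be sketched in a few lines. This is a minimal illustration, not the paper's method: the per-cluster dictionary learner is replaced by a truncated SVD (where the paper would run a full learner such as K-SVD), the clustering is a fixed partition rather than k-means, and all sizes are toy values.

```python
import numpy as np

rng = np.random.default_rng(3)
X = rng.random((500, 16))  # training signals as rows, e.g. vectorized image patches

def local_atoms(block, n_atoms):
    # Stand-in "dictionary learner": leading right singular vectors of the block.
    # The split-and-merge algorithm would invoke K-SVD (or similar) here instead.
    _, _, Vt = np.linalg.svd(block - block.mean(axis=0), full_matrices=False)
    return Vt[:n_atoms]

# SPLIT: partition the data into clusters and learn a local dictionary for each.
clusters = np.array_split(X, 5)
local = np.vstack([local_atoms(c, 4) for c in clusters])  # 5 clusters x 4 atoms

# MERGE: solve another "dictionary learning" problem on the pooled local atoms
# to obtain the global dictionary.
global_dict = local_atoms(local, 8)
```

Only one cluster's worth of data needs to be in memory at a time during the split phase, which is the point of the divide-and-conquer design.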


The 3-Hitting Set problem involves a family F of subsets of size at most three over a universe U. The goal is to find a subset of U of the smallest possible size that intersects every set in F. The version of the problem with parity constraints asks for a subset S of size at most k that, in addition to being a hitting set, also satisfies certain parity constraints on the sizes of the intersections of S with each set in the family F. In particular, an odd (even) set is a hitting set that hits every set in either one or three (two) elements, and a perfect code is a hitting set that intersects every set in exactly one element. These questions are of fundamental interest in many contexts for general set systems. Just as for Hitting Set, we find these questions interesting for the case of families consisting of sets of size at most three. In this work, we initiate an algorithmic study of these problems in this special case, focusing on a parameterized analysis. For each problem, we give efficient fixed-parameter tractable algorithms using search trees that are tailor-made to the constraints in question, as well as polynomial kernels using sunflower-like arguments in a manner that accounts for equivalence under the additional parity constraints.
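The basic bounded-search-tree idea for plain 3-Hitting Set can be sketched as follows; the tailor-made trees in the abstract refine exactly this branching to respect the additional parity constraints. The function name and instance are invented for the example.

```python
def hitting_set(family, k):
    """Bounded search tree for 3-Hitting Set: pick any set not yet hit and
    branch on its (at most 3) elements. Depth is at most k, so the tree has
    at most 3^k leaves -- a fixed-parameter tractable running time in k."""
    family = [frozenset(s) for s in family]

    def solve(chosen, budget):
        # Find a set that the current partial solution does not intersect.
        unhit = next((s for s in family if not (s & chosen)), None)
        if unhit is None:
            return chosen          # every set is hit
        if budget == 0:
            return None            # out of budget but some set is still unhit
        for e in unhit:            # branch on each way to hit the unhit set
            res = solve(chosen | {e}, budget - 1)
            if res is not None:
                return res
        return None

    return solve(frozenset(), k)

sol = hitting_set([{1, 2, 3}, {3, 4, 5}, {5, 6, 1}], 2)
```

For this instance a hitting set of size 2 exists (e.g. {3, 5}), while three pairwise-disjoint sets clearly require three elements, so the same call with parameter 2 would fail on them.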