902 results for Texture géométrique
Abstract:
A vision-based technique for non-rigid control is presented that can be used for animation and video game applications. The user grasps a soft, squishable object in front of a camera; the object can be moved and deformed to specify motion. Active Blobs, a non-rigid tracking technique, is used to recover the position, rotation, and non-rigid deformations of the object. The resulting transformations can be applied to a texture-mapped mesh, allowing the user to control it interactively. Our use of texture-mapping hardware makes the system responsive enough for interactive animation and video game character control.
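As a hedged illustration of the last step only, the sketch below applies a recovered 2D rotation and translation plus a per-vertex non-rigid displacement field to the vertices of a texture-mapped mesh each frame; the function names, array shapes, and the simple additive deformation model are assumptions for illustration, not the paper's formulation.

```python
import numpy as np

def update_mesh(vertices, angle, translation, deformation):
    """Apply a recovered 2D rotation/translation plus per-vertex non-rigid
    displacements to mesh vertices (N x 2). The additive deformation model
    is an assumption for illustration only."""
    c, s = np.cos(angle), np.sin(angle)
    R = np.array([[c, -s], [s, c]])
    return (vertices @ R.T) + translation + deformation

# Hypothetical per-frame usage: the tracker supplies angle, translation and
# a per-vertex deformation field; texture coordinates stay fixed, so the
# texture-mapped mesh follows the squishable object.
verts = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0]])
deformed = update_mesh(verts, angle=0.1, translation=np.array([0.2, -0.1]),
                       deformation=np.zeros_like(verts))
```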
Abstract:
A method for deformable shape detection and recognition is described. Deformable shape templates are used to partition the image into a globally consistent interpretation, determined in part by the minimum description length principle. Statistical shape models enforce the prior probabilities on global, parametric deformations for each object class. Once trained, the system autonomously segments deformed shapes from the background, while not merging them with adjacent objects or shadows. The formulation can be used to group image regions based on any image homogeneity predicate; e.g., texture, color, or motion. The recovered shape models can be used directly in object recognition. Experiments with color imagery are reported.
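As a worked illustration of the minimum description length criterion in this setting (a generic MDL formulation, not necessarily the exact cost used in the paper), the interpretation is chosen to minimize the cost of encoding the deformed templates plus the cost of encoding the image residual given them:

```latex
\Theta^{*} \;=\; \arg\min_{\Theta}\;
\Big[\, \underbrace{L(\Theta)}_{\text{templates and deformation parameters}}
\;+\; \underbrace{L(I \mid \Theta)}_{\text{image residual given the interpretation}} \,\Big]
```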
Abstract:
An improved technique for 3D head tracking under varying illumination conditions is proposed. The head is modeled as a texture mapped cylinder. Tracking is formulated as an image registration problem in the cylinder's texture map image. To solve the registration problem in the presence of lighting variation and head motion, the residual error of registration is modeled as a linear combination of texture warping templates and orthogonal illumination templates. Fast and stable on-line tracking is then achieved via regularized, weighted least squares minimization of the registration error. The regularization term tends to limit potential ambiguities that arise in the warping and illumination templates. It enables stable tracking over extended sequences. Tracking does not require a precise initial fit of the model; the system is initialized automatically using a simple 2-D face detector. The only assumption is that the target is facing the camera in the first frame of the sequence. The warping templates are computed at the first frame of the sequence. Illumination templates are precomputed off-line over a training set of face images collected under varying lighting conditions. Experiments in tracking are reported.
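A hedged sketch of the registration step: with the residual e modeled as a linear combination of warping templates B_w and illumination templates B_i, a regularized, weighted least-squares estimate of the coefficients q has the standard closed form below; the particular weight matrix W and scalar regularizer λ are assumptions, and the paper's exact formulation may differ.

```latex
e \;\approx\; B q, \qquad B = [\,B_w \;\; B_i\,], \qquad
\hat{q} \;=\; \arg\min_{q}\; \big\| W^{1/2}(e - Bq) \big\|^{2} + \lambda \|q\|^{2}
\;=\; \big(B^{\top} W B + \lambda I\big)^{-1} B^{\top} W e
```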
Abstract:
We propose to investigate a model-based technique for encoding non-rigid object classes in terms of object prototypes. Objects from the same class can be parameterized by identifying shape and appearance invariants of the class to devise low-level representations. The approach presented here creates a flexible model for an object class from a set of prototypes. This model is then used to estimate the parameters of the low-level representation of novel objects as combinations of the prototype parameters. Variations in object shape are modeled as non-rigid deformations. Appearance variations are modeled as intensity variations. In the training phase, the system is presented with several example prototype images. These prototype images are registered to a reference image by a finite element-based technique called Active Blobs. The deformations of the finite element model required to register a prototype image with the reference image provide the shape description, or shape vector, for the prototype. The shape vector for each prototype is then used to warp the prototype image onto the reference image and obtain the corresponding texture vector. The prototype texture vectors, being warped onto the same reference image, have a pixel-by-pixel correspondence with each other and hence are "shape normalized". Given a sufficient number of prototypes that exhibit appropriate in-class variations, the shape and texture vectors define a linear prototype subspace that spans the object class. Each prototype is a vector in this subspace. The matching phase involves the estimation of a set of combination parameters for synthesis of the novel object by combining the prototype shape and texture vectors. The strength of this technique lies in the combined estimation of both shape and appearance parameters. This is in contrast with previous approaches, in which shape and appearance parameters were estimated separately.
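A minimal sketch of the matching phase under the linear prototype-subspace assumption described above: a single set of combination coefficients is estimated jointly over the stacked prototype shape and texture vectors by ordinary least squares; the array names and the plain least-squares solver are assumptions for illustration.

```python
import numpy as np

def estimate_combination(proto_shapes, proto_textures, novel_shape, novel_texture):
    """Estimate coefficients c such that the novel shape and (shape-normalized)
    texture are approximated by the same linear combination of the prototypes.
    proto_shapes: (n, d_shape); proto_textures: (n, d_texture)."""
    P = np.hstack([proto_shapes, proto_textures])      # (n, d_shape + d_texture)
    y = np.concatenate([novel_shape, novel_texture])   # (d_shape + d_texture,)
    c, *_ = np.linalg.lstsq(P.T, y, rcond=None)        # solve P.T @ c ≈ y
    return c

def synthesize(proto_shapes, proto_textures, c):
    """Reconstruct the novel object's shape vector and texture vector
    from the estimated combination coefficients."""
    return c @ proto_shapes, c @ proto_textures
```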
Abstract:
We investigated adaptive neural control of precision grip forces during object lifting. A model is presented that adjusts reactive and anticipatory grip forces to a level just above that needed to stabilize lifted objects in the hand. The model obeys principles of cerebellar structure and function by using slip sensations as error signals to adapt phasic motor commands to tonic force generators associated with output synergies controlling grip aperture. The learned phasic commands are weight- and texture-dependent. Simulations of the new circuit model reproduce key aspects of experimental observations of force application. Over learning trials, the onset of grip force buildup comes to lead the load force buildup, and the rate-of-rise of grip force, but not load force, scales inversely with the friction of the gripped object.
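A toy, hedged sketch of the adaptation principle described above: an anticipatory (phasic) grip command stored per object context (weight, texture) is incremented across lifting trials in proportion to a slip-based error signal; the update rule, learning rate, and numbers are illustrative assumptions, not the published circuit model.

```python
# Toy slip-error-driven adaptation of an anticipatory grip command,
# keyed by object context (weight, texture). Illustrative only.
def adapt_grip(commands, context, slip_error, lr=0.3):
    """Increase the stored anticipatory command for this context in
    proportion to the slip error sensed on the current lift."""
    commands[context] = commands.get(context, 0.0) + lr * slip_error
    return commands[context]

commands = {}
for trial in range(5):
    grip = commands.get(("heavy", "slippery"), 0.0)
    required = 4.0                      # hypothetical force needed to prevent slip
    slip = max(0.0, required - grip)    # slip sensation acts as the error signal
    adapt_grip(commands, ("heavy", "slippery"), slip)
# Over trials the anticipatory command approaches the required level,
# so grip force builds up before slip occurs on later lifts.
```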
Abstract:
How do humans rapidly recognize a scene? How can neural models capture this biological competence to achieve state-of-the-art scene classification? The ARTSCENE neural system classifies natural scene photographs by using multiple spatial scales to efficiently accumulate evidence for gist and texture. ARTSCENE embodies a coarse-to-fine Texture Size Ranking Principle whereby spatial attention processes multiple scales of scenic information, ranging from global gist to local properties of textures. The model can incrementally learn and predict scene identity by gist information alone and can improve performance through selective attention to scenic textures of progressively smaller size. ARTSCENE discriminates four landscape scene categories (coast, forest, mountain and countryside) with up to 91.58% correct on a test set, outperforms alternative models in the literature that use biologically implausible computations, and outperforms component systems that use either gist or texture information alone. Model simulations further show that adjacent textures form higher-order features that are also informative for scene recognition.
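A minimal, hedged sketch of the coarse-to-fine idea only (not the ARTSCENE architecture itself): evidence for each scene class is accumulated first from a coarse gist feature and then from texture statistics computed at progressively smaller block sizes, using a simple nearest-prototype score; the feature definitions, the additive evidence rule, and the example-image dictionary are assumptions for illustration.

```python
import numpy as np

def gist(img, grid=4):
    """Coarse 'gist': mean intensity over a grid x grid partition of the image."""
    h, w = img.shape
    return np.array([img[i*h//grid:(i+1)*h//grid, j*w//grid:(j+1)*w//grid].mean()
                     for i in range(grid) for j in range(grid)])

def texture(img, block):
    """Simple texture statistics at one spatial scale: mean and spread of
    local contrast over non-overlapping block x block patches."""
    h, w = img.shape
    stds = [img[i:i+block, j:j+block].std()
            for i in range(0, h - block + 1, block)
            for j in range(0, w - block + 1, block)]
    return np.array([np.mean(stds), np.std(stds)])

def classify(img, examples, blocks=(32, 16, 8)):
    """Coarse-to-fine evidence accumulation: gist first, then textures of
    progressively smaller size. `examples` maps class -> list of images."""
    stages = [gist] + [lambda im, b=b: texture(im, b) for b in blocks]
    scores = {c: 0.0 for c in examples}
    for feat in stages:
        f = feat(img)
        for c, imgs in examples.items():
            # nearest-prototype evidence: closer to class examples = higher score
            scores[c] -= np.mean([np.linalg.norm(f - feat(e)) for e in imgs])
    return max(scores, key=scores.get)
```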
Abstract:
When we look at a scene, how do we consciously see surfaces infused with lightness and color at the correct depths? Random Dot Stereograms (RDS) probe how binocular disparity between the two eyes can generate such conscious surface percepts. Dense RDS do so despite the fact that they include multiple false binocular matches. Sparse stereograms do so even across large contrast-free regions with no binocular matches. Stereograms that define occluding and occluded surfaces lead to surface percepts wherein partially occluded textured surfaces are completed behind occluding textured surfaces at a spatial scale much larger than that of the texture elements themselves. Earlier models suggest how the brain detects binocular disparity, but not how RDS generate conscious percepts of 3D surfaces. A neural model predicts how the layered circuits of visual cortex generate these 3D surface percepts using interactions between visual boundary and surface representations that obey complementary computational rules.
Abstract:
An analysis of the reset of visual cortical circuits responsible for the binding or segmentation of visual features into coherent visual forms yields a model that explains properties of visual persistence. The reset mechanisms prevent massive smearing of visual percepts in response to rapidly moving images. The model simulates relationships among psychophysical data showing inverse relations of persistence to flash luminance and duration, greater persistence of illusory contours than real contours, a U-shaped temporal function for persistence of illusory contours, a reduction of persistence due to adaptation with a stimulus of like orientation, an increase of persistence due to adaptation with a stimulus of perpendicular orientation, and an increase of persistence with spatial separation of a masking stimulus. The model suggests that a combination of habituative, opponent, and endstopping mechanisms prevents smearing and limits persistence. Earlier work with the model has analyzed data about boundary formation, texture segregation, shape-from-shading, and figure-ground separation. Thus, several types of data support each model mechanism and new predictions are made.
Abstract:
This study considered the optimisation of granola breakfast cereal manufacturing processes by wet granulation and pneumatic conveying. Granola is an aggregated food product used as a breakfast cereal and in cereal bars. Processing of granola involves mixing the dry ingredients (typically oats, nuts, etc.) followed by the addition of a binder, which can contain honey, water and/or oil. This work incorporated the design and operation of two parallel wet granulation processes to produce aggregate granola products: (a) a high shear mixing granulation process followed by drying/toasting in an oven, and (b) a continuous fluidised bed process followed by drying/toasting in an oven. In high shear granulation, the influence of process parameters on key granule aggregate quality attributes, such as granule size distribution and the textural properties of granola, was investigated. The experimental results show that impeller rotational speed is the single most important process parameter influencing the physical and textural properties of granola; binder addition rate and wet massing time also have significant effects on granule properties. Increasing the impeller speed and wet massing time increases the median granule size and correlates positively with density. The combination of high impeller speed and low binder addition rate resulted in granules with the highest levels of hardness and crispness. In the fluidised bed granulation process, the effects of nozzle air pressure and binder spray rate on key aggregate quality attributes were studied. The experimental results show that a decrease in nozzle air pressure leads to a larger mean granule size. The combination of the lowest nozzle air pressure and the lowest binder spray rate results in granules with the highest levels of hardness and crispness. Overall, the high shear granulation process led to larger, denser, less porous and stronger (less likely to break) aggregates than the fluidised bed process. The study also examined the particle breakage during pneumatic conveying of granola produced by both the high shear and the fluidised bed granulation processes. Products were pneumatically conveyed in a purpose-built conveying rig designed to mimic product conveying and packaging. Three different conveying rig configurations were employed: a straight pipe, a rig with two 45° bends, and a rig with one 90° bend. Particle breakage increases with applied pressure drop, and the 90° bend pipe results in more attrition at all conveying velocities than the other pipe geometries. Additionally, for the granules produced in the high shear granulator, those produced at the highest impeller speed, while being the largest, also have the lowest levels of proportional breakage, whereas smaller granules produced at the lowest impeller speed have the highest levels of breakage. This effect clearly shows the importance of shear history (during granule production) on breakage during subsequent processing. For the fluidised bed granulation, no single operating parameter was found to have a significant effect on breakage during subsequent conveying. Finally, a simple power law breakage model based on process input parameters was developed for both manufacturing processes. It was found suitable for predicting the breakage of granola breakfast cereal at various applied air velocities and for a number of pipe configurations, taking shear histories into account.
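As a hedged sketch of the kind of power-law breakage model mentioned above (a generic functional form; the thesis's actual parameters and covariates are not reproduced here), breakage can be written as a power law in the conveying air velocity and fitted by linear regression in log-log space; separate (k, n) pairs would be fitted for each pipe configuration and shear history.

```python
import numpy as np

def fit_power_law(velocity, breakage):
    """Fit breakage = k * velocity**n by linear regression in log-log space.
    Returns (k, n). A generic power-law form, not the thesis's exact model."""
    n, log_k = np.polyfit(np.log(velocity), np.log(breakage), 1)
    return np.exp(log_k), n

def predict_breakage(k, n, velocity):
    """Predicted breakage at the given conveying air velocities."""
    return k * np.asarray(velocity) ** n

# Hypothetical data: fraction broken vs. conveying air velocity (m/s)
v = np.array([10.0, 15.0, 20.0, 25.0])
b = np.array([0.02, 0.05, 0.09, 0.15])
k, n = fit_power_law(v, b)
pred = predict_breakage(k, n, [12.0, 18.0])
```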
Abstract:
Increased plasmin and plasminogen levels and elevated somatic cell counts (SCC) and polymorphonuclear leucocyte (PMN) levels were evident in late lactation milk. Compositional changes in these milks were associated with increased SCC. The quality of late lactation milks was related to the nutritional status of herds, with milks from herds on a high plane of nutrition having composition and clotting properties similar to, or superior to, early-mid lactation milks. Nutritionally-deficient cows had elevated numbers of PMNs in their milk, elevated plasmin levels and increased overall proteolytic activity. The dominant effect of plasmin on proteolysis in milks of low SCC was established. When present in elevated numbers, somatic cells, and PMNs in particular, had a more significant influence on the proteolysis of both raw and pasteurised milks than plasmin. PMN protease action on the caseins showed proteolysis products of two specific enzymes, cathepsin B and elastase, which were also found in high SCC milk. Crude extracts of somatic cells had a high specificity for αs1-casein. Cheeses made from late lactation milks showed increased breakdown of αs1-casein, suggestive of the action of somatic cell proteinases, which may be linked to textural defects in cheese. Late lactation cheeses also showed decreased production of small peptides and amino acids, the reason for which is unknown. Plasmin, which is elevated in activity in late lactation milk, accelerated the ripening of Gouda-type cheese, but was not associated with defects of texture or flavour. The retention of somatic cell enzymes in cheese curd was confirmed, and a potential role in the production of bitter peptides identified. Cheeses made from milks containing high levels of PMNs had accelerated αs1-casein breakdown relative to cheeses made from low-PMN milk of the same total SCC, consistent with the demonstrated action of PMN proteinases. The two types of cheese were found to be significantly different by blind triangle testing.
Abstract:
The application of sourdough can improve the texture, structure, nutritional value, staling rate and shelf life of wheat and gluten-free breads. These quality improvements are associated with the formation of organic acids, exopolysaccharides (EPS), aroma or antifungal compounds. Initially, the suitability of two lactic acid bacteria strains to serve as sourdough starters for buckwheat, oat, quinoa, sorghum and teff flours was investigated. Wheat flour was chosen as a reference. The obligate heterofermentative lactic acid bacterium (LAB) Weissella cibaria MG1 (Wc) formed the EPS dextran (an α-1,6-glucan) from sucrose in situ, with a molecular size of 10⁶ to 10⁷ kDa. EPS formation in all breads was analysed using size exclusion chromatography, and the highest amounts were formed in buckwheat (4 g/kg) and quinoa sourdough (3 g/kg). The facultative heterofermentative Lactobacillus plantarum FST1.7 (Lp) was identified as a strong acidifier and was chosen due to its ubiquitous presence in gluten-free as well as wheat sourdoughs (Vogelmann et al. 2009). Both Wc and Lp showed the highest total titratable acids in buckwheat (16.8 ml; 26.0 ml), teff (16.2 ml; 24.5 ml) and quinoa sourdoughs (26.4 ml; 35.3 ml), correlating with higher amounts of fermentable sugars and higher buffering capacities. Sourdough incorporation reduced crumb hardness after five days of storage in buckwheat (Wc -111%), teff (Wc -39%) and wheat (Wc -206%; Lp -118%) sourdough breads. The rate of staling (N/day) was reduced in buckwheat (Ctrl 8 N; Wc 3 N; Lp 6 N), teff (Ctrl 13 N; Wc 9 N; Lp 10 N) and wheat (Ctrl 5 N; Wc 1 N; Lp 2 N) sourdough breads. Bread dough softening upon Wc and Lp sourdough incorporation accounted for increased crumb porosity in buckwheat (+10.4%; +4.7%), teff (+8.1%; +8.3%) and wheat sourdough breads (+8.7%; +6.4%). Weissella cibaria MG1 sourdough improved the aroma quality of wheat bread but had no impact on the aroma of gluten-free breads. Microbial shelf life, however, was not prolonged in any of the breads, regardless of the starter culture used. Due to the high prevalence of insulin-dependent diabetes mellitus, particularly amongst coeliac patients, glycaemic control is of great importance (Berti et al. 2004). The in vitro starch digestibility of gluten-free breads with and without sourdough addition was analysed to predict the glycaemic index (pGI). Sourdough can decrease starch hydrolysis in vitro due to the formation of resistant starch and organic acids. The predicted GI of gluten-free control breads was significantly lower than that of the reference white wheat bread (GI = 100). Starch granule size was investigated with scanning electron microscopy and was significantly smaller in quinoa flour (<2 μm). This resulted in higher enzymatic susceptibility and hence a higher pGI for quinoa bread (95). The lowest hydrolysis indices, for sorghum and teff control breads (72 and 74, respectively), correlate with higher gelatinisation peak temperatures (69 °C and 71 °C, respectively). Levels of resistant starch were not increased by the addition of Weissella cibaria MG1 (weak acidifier) or Lactobacillus plantarum FST1.7 (strong acidifier). The pGI was significantly decreased for both wheat sourdough breads (Wc 85; Lp 76). Lactic acid can promote starch interactions with gluten, hence decreasing starch susceptibility (Östman et al. 2002). For most gluten-free breads, the pGI was increased upon sourdough addition; only sorghum and teff Lp sourdough breads (69 and 68, respectively) had significantly decreased pGI. These results suggest that the increase in starch hydrolysis in gluten-free breads was related to mechanisms other than the presence of organic acids and the formation of resistant starch.
Abstract:
One problem with most three-dimensional (3D) scalar data visualization techniques is that they often fail to depict the uncertainty that comes with the 3D scalar data; they thus do not faithfully present the data and risk misleading users' interpretations, conclusions or even decisions. This thesis therefore focuses on uncertainty visualization for 3D scalar data; we seek to create better uncertainty visualization techniques and to identify the advantages and disadvantages of state-of-the-art uncertainty visualization techniques. To do this, we address three specific hypotheses: (1) the proposed Texture uncertainty visualization technique enables users to better identify scalar/error data, and provides reduced visual overload and more appropriate brightness than four state-of-the-art uncertainty visualization techniques, as demonstrated in a perceptual-effectiveness user study. (2) The proposed Linked Views and Interactive Specification (LVIS) uncertainty visualization technique enables users to better search for maximum/minimum scalar and error data than four state-of-the-art uncertainty visualization techniques, as demonstrated in a perceptual-effectiveness user study. (3) The proposed Probabilistic Query uncertainty visualization technique, in comparison to traditional Direct Volume Rendering (DVR) methods, enables radiologists/physicians to better identify possible alternative renderings relevant to a diagnosis and the classification probabilities associated with the materials that appear in these renderings; this leads to improved decision support for diagnosis, as demonstrated in the domain of medical imaging. Each hypothesis is tested by implementing a unified framework consisting of three main steps. The first step is uncertainty data modeling, which defines and generates the types of uncertainty associated with given 3D scalar data. The second step is uncertainty visualization, which transforms the 3D scalar data and the associated uncertainty generated in the first step into two-dimensional (2D) images for insight, interpretation or communication. The third step is evaluation, which transforms the 2D images generated in the second step into quantitative scores according to specific user tasks and statistically analyzes the scores. As a result, the quality of each uncertainty visualization technique is determined.
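A minimal sketch of the first step (uncertainty data modeling) under a common ensemble assumption that is not specified in the abstract: per-voxel mean and standard deviation computed from an ensemble of 3D scalar fields serve as the scalar data and its associated uncertainty, ready to be handed to a visualization stage.

```python
import numpy as np

def model_uncertainty(ensemble):
    """ensemble: array of shape (n_members, X, Y, Z) holding several realizations
    of the same 3D scalar field. Returns the per-voxel mean (the scalar data)
    and standard deviation (one simple notion of its uncertainty).
    An illustrative choice, not the thesis's definition of uncertainty."""
    ensemble = np.asarray(ensemble)
    return ensemble.mean(axis=0), ensemble.std(axis=0)

# Hypothetical usage with a tiny synthetic ensemble
rng = np.random.default_rng(0)
members = rng.normal(loc=1.0, scale=0.1, size=(8, 16, 16, 16))
scalar, error = model_uncertainty(members)   # both have shape (16, 16, 16)
```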
Abstract:
Flavour release from food is determined by the binding of flavours to other food ingredients and the partition of flavour molecules among different phases. Food emulsions are used as delivery systems for food flavours, and tailored structuring in emulsions provides novel means to better control flavour release. The current study investigated four structured oil-in-water emulsions with structuring in the oil phase, the oil-water interface, and the water phase. Oil phase structuring was achieved by the formation of monoglyceride (MG) liquid crystals in the oil droplets (MG structured emulsions). A structured interface was created by the adsorption of a whey protein isolate (WPI)-pectin double layer at the interface (multilayer emulsion). Water phase structured emulsions comprised emulsion filled protein gels (EFP gels), in which emulsion droplets were embedded in a WPI gel network, and emulsions with maltodextrins (MDs) of different dextrose-equivalent (DE) values. Flavour compounds with different physicochemical properties were added to the emulsions, and flavour release (release rate, headspace concentration and air-emulsion partition coefficient) was characterized by GC headspace analysis. Emulsion structures, including crystalline structure, particle size, emulsion stability, rheology, texture, and microstructure, were characterized using differential scanning calorimetry and X-ray diffraction, light scattering, a multisample analytical centrifuge, rheometry, texture analysis, and confocal laser scanning microscopy, respectively. In MG structured emulsions, MG self-assembled into liquid crystalline structures, and stable β-form crystals were formed after 3 days of storage at 25 °C. The inclusion of MG crystals allowed Tween 20-stabilized emulsions to exhibit viscoelastic properties, and it made WPI-stabilized emulsions more sensitive to changes in pH and NaCl concentration. Flavour compounds in MG structured emulsions had lower initial headspace concentrations and air-emulsion partition coefficients than those in unstructured emulsions. Flavour release can be modulated by changing the MG content, oil content and oil type. WPI-pectin multilayer emulsions were stable at pH 5.0, 4.0, and 3.0, but they showed extensive creaming when subjected to salt solutions with NaCl ≥ 150 mM and when mixed with artificial salivas. An increase of pH from 5.0 to 7.0 resulted in a higher headspace concentration but an unchanged release rate, and an increase of NaCl concentration led to increases in both headspace concentration and release rate. The study also showed that salivas could trigger higher release of hydrophobic flavours and lower release of hydrophilic flavours. In EFP gels, increases in protein content and oil content contributed to gels with higher storage modulus and force at breaking. Flavour compounds had significantly lower release rates and air-emulsion partition coefficients in the gels than in the corresponding ungelled emulsions, and the reduction was in line with the increase in protein content. Gels with a stronger gel network but lower oil content were prepared, and lower or unaffected release rates of the flavours were observed. In emulsions containing maltodextrins, water froze at a much lower temperature, and emulsion stability was greatly improved when subjected to freeze-thawing. Among the different MDs, MD DE 6 offered the emulsion the highest stability. Flavours had lower air-emulsion partition coefficients in the emulsions with MDs than in the emulsion without MD.
Moreover, the inclusion of MDs in the emulsions allowed most flavours to show similar release profiles before and after freeze-thaw treatment. The present study provides information on different structured emulsions as delivery systems for flavour compounds and on how food structure can be designed to modulate flavour release, which could be helpful in the development of functional foods with improved flavour profiles.
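For reference, the air-emulsion partition coefficient obtained from the headspace measurements is the standard equilibrium concentration ratio below (a textbook definition; the thesis's exact calculation procedure is not reproduced here), where C_air is the flavour concentration in the headspace measured by GC and C_emulsion is the concentration in the emulsion phase:

```latex
K_{a/e} \;=\; \left.\frac{C_{\text{air}}}{C_{\text{emulsion}}}\right|_{\text{equilibrium}}
```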
Abstract:
Gabriel Urbain Fauré lived during one of the most exciting times in music history. Spanning a life of 79 years (1845-1924), he lived through the height of Romanticism and the experimental avant-garde techniques of the early 20th century. In Fauré's music, one can find traces of Chopin, Liszt, Mendelssohn, Debussy and Poulenc. One can even argue that Fauré presages Skryabin and Shostakovich. The late works of Gabriel Fauré, chiefly those composed after 1892, testify to the argument that Fauré holds an important position in the shift from tonal to atonal composition and should be counted among such transitional composers as Gustav Mahler, Claude Debussy, Erik Satie, Richard Strauss, and Ferruccio Busoni. Fauré's unique way of fashioning harmonic impetus by almost purely linear means, resulting in a synthesis of harmonic and melodic devices, led me to craft the term mélodoharmonique. This term refers to a contrapuntally motivated technique of composition, particularly in a secondary layer of musical texture, in which a component of harmonic progression (i.e. arpeggiation, broken chord, etc.) is fused with linear motivic or thematic development. This dissertation seeks to bring to public attention, through exploration in lecture and recital format, certain works of Gabriel Fauré written after 1892. The repertoire will be selected from works for solo piano and piano in collaboration with violin, violoncello, and voice, which support the notion of Fauré as a modernist deserving larger recognition for his influence in the transition to atonal music. The recital repertoire includes the following--Song Cycles: La bonne chanson, opus 61; La chanson d'Ève, opus 95; Le jardin clos, opus 106; Mirages, opus 113; L'horizon chimérique, opus 118; Piano Works: Prelude in G minor, opus 103, No. 3; Prelude in E minor, opus 103, No. 9; Eleventh Nocturne, opus 104, No. 1; Thirteenth Nocturne, opus 119; Chamber Works: Second Violin Sonata, opus 108; First Violoncello Sonata, opus 109; Second Violoncello Sonata, opus 117.
Abstract:
The Interloping Beguiler is a nineteen-minute concerto in four movements for bass clarinet solo and orchestra. The title refers to the role of the solo instrument, which continually thrusts itself into the affairs of the orchestra, deceiving and diverting the members of the orchestra away from their task of performing a "serious" orchestral composition. The bass clarinet portrays a comical, cartoon-like character whose awkward, and sometimes goofy, interjections cause chaos. Attempts are made by various members of the orchestra, especially the horns, to regain control of the work, but the bass clarinet always succeeds in its distracting antics. By the final movement of the composition, the bass clarinet has propelled the work into a cartoon-like landscape of quickly changing textures, dissonant intervals, and overlapping themes. The first movement, Introduction, sets the serious tone of the music to follow, or so it would seem. The entrance of the bass clarinet immediately changes this texture with its out-of-rhythm alternations between high and low pitches. This gesture provides a glimpse into the personality of the bass clarinet, an instrument here to mislead the members of the orchestra. Deception truly begins in the second movement, The Interloping Initiates. The bass clarinet starts the movement with a driving theme and is immediately supported by the orchestra. As the movement progresses, the bass clarinet quickly begins altering the theme, making it more playful and cartoonish. A struggle ensues between the horns and the bass clarinet, with the bass clarinet catapulting the piece into a Latin-inspired section. The struggle continues through to the end of the movement. The third movement, Calm, is exactly what the title suggests. A sectional form distinguishes this movement from the second movement. Throughout Calm, the bass clarinet behaves with decorum, except for very large melodic leaps. The seed of anarchy planted by the bass clarinet in the second movement comes to fruition in the final movement, The Beguiling Builds. Here, the bass clarinet sends the work into chaos with sections recalling Looney Tunes cartoons, Hollywood western music, and children's folk songs.