53 results for morphing


Relevance:

20.00%

Publisher:

Abstract:

During radiotherapy treatments of head-and-neck cancer patients, the parotid glands (PGs) can be unduly irradiated as a result of inter/intra-fraction volumetric and spatial changes caused by factors such as weight loss, exposure to ionizing radiation, and the anatomical morphing of the organs involved in the irradiated areas. The present work, carried out at the Medical Physics and Radiation Oncology units of the A.O.U. of Modena as part of the Italian Ministry of Health research project (MoH2010, GR-2010-2318757) “Dose warping methods for IGRT and Adaptive RT: dose accumulation based on organ motion and anatomical variations of the patients during radiation therapy treatments”, develops a biomechanical model capable of representing the deformation process of the PGs, taking into account their geometry, elastic properties, and evolution over the course of treatment. The organ deformation model was implemented using finite element method (FEM) software. Multiple mesh surfaces, representing the geometry and evolution of the parotids during the treatment sessions, were created starting from the organ contours defined by the radiation oncologist on the planning CT image and generated automatically on the daily setup and re-positioning images by means of rigid/deformable registration algorithms. The anatomical constraints and the force field of the model were defined on the basis of simplifying assumptions, considering the structural alteration (loss of acinar cells) and the anatomical barriers due to surrounding structures. Analysis of the meshes made it possible to study the dynamics of the deformation and to identify the regions most subject to change. The morphing predictions produced by the proposed model could be integrated into a treatment planning system for Adaptive Radiation Therapy techniques.
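The inter-fraction surface evolution described above can be illustrated, in a highly simplified form, by linearly interpolating corresponding mesh vertices between the planning-CT surface and a later treatment fraction. This is a minimal sketch, not the FEM model of the thesis: the function name and the uniform-shrinkage toy data are invented for illustration.

```python
import numpy as np

def interpolate_mesh(planning_vertices, fraction_vertices, t):
    """Linearly interpolate corresponding mesh vertices between the
    planning-CT surface (t = 0) and a later treatment fraction (t = 1).

    Both arrays have shape (n_vertices, 3) and assume point-to-point
    correspondence, e.g. as produced by deformable registration.
    """
    planning_vertices = np.asarray(planning_vertices, dtype=float)
    fraction_vertices = np.asarray(fraction_vertices, dtype=float)
    if planning_vertices.shape != fraction_vertices.shape:
        raise ValueError("meshes must share vertex correspondence")
    return (1.0 - t) * planning_vertices + t * fraction_vertices

# Toy example: a 'parotid' surface that shrinks towards the origin,
# mimicking the volume loss reported during treatment (illustrative only).
planning = np.array([[10.0, 0.0, 0.0], [0.0, 10.0, 0.0], [0.0, 0.0, 10.0]])
week3 = 0.8 * planning  # 20% uniform shrinkage
midway = interpolate_mesh(planning, week3, 0.5)
```

Real inter-fraction morphing is of course non-uniform; the vertex correspondence produced by deformable registration is what makes even this naive interpolation meaningful.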

Relevance:

20.00%

Publisher:

Abstract:

The length of wind turbine rotor blades has increased over the last decades. Higher stresses arise, especially at the blade root, because of the longer lever arm. One way to reduce unsteady blade-root stresses caused by turbulence, gusts, or wind shear is to actively control the lift in the blade-tip region. One promising method uses airfoils with morphing trailing edges to control the lift and consequently the loads acting on the blade. In the present study, the steady and unsteady behavior of an airfoil with a morphing trailing edge is investigated. Two-dimensional Reynolds-averaged Navier-Stokes (RANS) simulations are performed for a typical thin wind turbine airfoil with a morphing trailing edge. Steady-state simulations are used to design the optimal geometry, size, and deflection angles of the morphing trailing edge. The resulting steady aerodynamic coefficients are then analyzed at different angles of attack to determine the effectiveness of the morphing trailing edge. To investigate the unsteady aerodynamic behavior of the optimal morphing trailing edge, time-resolved RANS simulations are performed using a deformable grid. To analyze the phase shift between the variable trailing-edge deflection and the dynamic lift coefficient, the trailing edge is deflected at four different reduced frequencies for each angle of attack. As expected, a phase shift between the deflection and the lift occurs. When the trailing edge is deflected at angles of attack near stall, an overshoot beyond the steady lift coefficient is additionally observed and evaluated.
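The two unsteady quantities discussed above, the reduced frequency of the trailing-edge motion and the phase shift between deflection and lift, can be estimated from time series as in the following sketch. The function names and parameter values are assumptions, not taken from the study; the signals are synthetic stand-ins for RANS output.

```python
import numpy as np

def reduced_frequency(freq_hz, chord, u_inf):
    """Reduced frequency k = omega * c / (2 U) = pi * f * c / U."""
    return np.pi * freq_hz * chord / u_inf

def phase_lag(deflection, lift, t, freq_hz):
    """Phase (radians) by which the lift response lags the sinusoidal
    trailing-edge deflection, estimated by complex demodulation at the
    driving frequency (assumes an integer number of periods in t)."""
    carrier = np.exp(-2j * np.pi * freq_hz * t)
    return np.angle(np.sum(deflection * carrier)) - np.angle(np.sum(lift * carrier))

# Synthetic stand-ins for the simulation time series: the lift lags the
# trailing-edge deflection by 0.4 rad and has a smaller amplitude.
t = np.linspace(0.0, 2.0, 4000, endpoint=False)   # 4 full periods at 2 Hz
f = 2.0
beta = np.sin(2 * np.pi * f * t)                  # trailing-edge deflection
cl = 0.8 * np.sin(2 * np.pi * f * t - 0.4)        # lift-coefficient response
lag = phase_lag(beta, cl, t, f)                   # recovers ~0.4 rad
k = reduced_frequency(f, chord=1.0, u_inf=20.0)   # ~0.314
```

Demodulating at the driving frequency makes the estimate insensitive to amplitude differences between the two signals, which is convenient when comparing deflection angle and lift coefficient directly.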

Relevance:

20.00%

Publisher:

Abstract:

This thesis describes a study conducted to develop a new approach for the design of compliant mechanisms (CMs). Current compliant mechanisms are based on a 2.5D design method, which limits the range of applications for which they can be used. The proposed research suggests using a 3D approach for the design of CMs to better exploit their useful properties. To test the viability of this method, a practical application related to morphing wings was chosen. During this project a working prototype of a variable-sweep and variable angle-of-attack (AoA) system was designed and built for a small unmanned aerial vehicle (SUAV). A compliant hinge allows the system to achieve two degrees of freedom (DOF). This hinge was designed using the proposed 3D design approach. Two methods were used to validate the capabilities of the design. The first was simulation: analysis software provided a basic picture of the stress and deformation of the designed mechanism. The second validation was done by means of additive manufacturing (AM). Using fused deposition modeling (FDM) and material-jetting technologies, several prototypes were manufactured. The first model showed that the DOF could be achieved. Models manufactured using material-jetting technology proved that the design could provide the desired motion and exploit the positive characteristics of CMs. The system could be manufactured successfully in one part, which makes an extensive assembly process redundant and improves its structural quality. The materials chosen for the prototypes were PLA, VeroGray, and Rigur. Their material properties were suboptimal for the final purpose, but successful results were obtained: the prototypes proved tough and were able to provide the desired motion. This shows that the proposed design method can be a useful tool for the design of improved CMs. Furthermore, the variable-sweep and AoA system could be used to boost the flight performance of SUAVs.

Relevance:

20.00%

Publisher:

Abstract:

The main goal of this thesis work is to improve morphing-generation algorithms in terms of both visual quality and their potential to attack automatic face recognition systems.
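The blending step at the core of morph generation can be sketched as follows, assuming two face images that are already aligned. Practical morphing algorithms additionally warp facial landmarks to a common geometry before blending; only the appearance-blending step is shown, and the function name is hypothetical.

```python
import numpy as np

def linear_morph(face_a, face_b, alpha=0.5):
    """Pixel-wise cross-dissolve of two pre-aligned face images.

    alpha = 0 returns face_a, alpha = 1 returns face_b; intermediate
    values mix the identities, which is what makes morphs effective
    against automatic face recognition systems.
    """
    face_a = np.asarray(face_a, dtype=float)
    face_b = np.asarray(face_b, dtype=float)
    if face_a.shape != face_b.shape:
        raise ValueError("images must be aligned to the same shape")
    blended = (1.0 - alpha) * face_a + alpha * face_b
    return np.clip(blended, 0, 255).astype(np.uint8)

# Toy 'images': uniform grey levels standing in for aligned face crops.
subject_a = np.full((4, 4), 100, dtype=np.uint8)
subject_b = np.full((4, 4), 200, dtype=np.uint8)
morph = linear_morph(subject_a, subject_b, alpha=0.5)  # all pixels 150
```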

Relevance:

10.00%

Publisher:

Abstract:

As important social stimuli, faces play a critical role in our lives. Much of our interaction with other people depends on our ability to recognize faces accurately. It has been proposed that face processing consists of different stages and interacts with other systems (Bruce & Young, 1986). At a perceptual level, the initial two stages, namely structural encoding and face recognition, are particularly relevant and are the focus of this dissertation. Event-related potentials (ERPs) are averaged EEG signals time-locked to a particular event (such as the presentation of a face). With their excellent temporal resolution, ERPs can provide important timing information about neural processes. Previous research has identified several ERP components that are especially related to face processing, including the N170, the P2 and the N250. Their nature with respect to the stages of face processing is still unclear, and is examined in Studies 1 and 2. In Study 1, participants made gender decisions on a large set of female faces interspersed with a few male faces. The ERP responses to facial characteristics of the female faces indicated that the N170 amplitude from each side of the head was affected by information from the eye region and by facial layout: the right N170 was affected by eye color and by face width, while the left N170 was affected by eye size and by the relation between the sizes of the top and bottom parts of a face. In contrast, the P100 and the N250 components were largely unaffected by facial characteristics. These results thus provided direct evidence for the link between the N170 and structural encoding of faces. In Study 2, focusing on the face recognition stage, we manipulated face identity strength by morphing individual faces to an "average" face. Participants performed a face identification task.
The effect of face identity strength was found on the late P2 and the N250 components: as identity strength decreased from an individual face to the "average" face, the late P2 increased and the N250 decreased. In contrast, the P100, the N170 and the early P2 components were not affected by face identity strength. These results suggest that face recognition occurs after 200 ms, but not earlier. Finally, because faces are often associated with social information, we investigated in Study 3 how group membership might affect ERP responses to faces. After participants learned in- and out-group memberships of the face stimuli based on arbitrarily assigned nationality and university affiliation, we found that the N170 latency differentiated in-group and out-group faces, taking longer to process the latter. In comparison, without group memberships, there was no difference in N170 latency among the faces. This dissertation provides evidence that at a neural level, structural encoding of faces, indexed by the N170, occurs within 200 ms. Face recognition, indexed by the late P2 and the N250, occurs shortly afterwards, between 200 and 300 ms. Social cognitive factors can also influence face processing; the effect is already evident as early as 130-200 ms, at the structural encoding stage.
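The core ERP computation mentioned above, averaging EEG segments time-locked to an event, can be sketched as follows. This is a minimal single-electrode illustration; the function and the toy signal are invented, not taken from the dissertation.

```python
import numpy as np

def compute_erp(eeg, event_samples, pre, post):
    """Average EEG segments time-locked to events (the ERP).

    eeg            1-D signal from one electrode, shape (n_samples,)
    event_samples  sample indices of stimulus onsets
    pre, post      samples kept before / after each event
    Returns the mean across epochs, shape (pre + post,); averaging
    attenuates activity that is not phase-locked to the events.
    """
    epochs = [eeg[s - pre:s + post] for s in event_samples
              if s - pre >= 0 and s + post <= len(eeg)]
    return np.mean(epochs, axis=0)

# Toy recording: a deflection 10 samples after each of three events.
eeg = np.zeros(1000)
events = [100, 300, 500]
for s in events:
    eeg[s + 10] = 1.0
erp = compute_erp(eeg, events, pre=50, post=100)  # peak at epoch sample 60
```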

Relevance:

10.00%

Publisher:

Abstract:

The full version of this thesis is available for individual consultation only at the Music Library of the Université de Montréal (www.bib.umontreal.ca/MU).

Relevance:

10.00%

Publisher:

Abstract:

We present a new method for rendering novel images of flexible 3D objects from a small number of example images in correspondence. The strength of the method is the ability to synthesize images whose viewing position is significantly far away from the viewing cone of the example images ("view extrapolation"), yet without ever modeling the 3D structure of the scene. The method relies on synthesizing a chain of "trilinear tensors" that governs the warping function from the example images to the novel image, together with a multi-dimensional interpolation function that synthesizes the non-rigid motions of the viewed object from the virtual camera position. We show that two closely spaced example images alone are sufficient in practice to synthesize a significant viewing cone, thus demonstrating the ability of representing an object by a relatively small number of model images --- for the purpose of cheap and fast viewers that can run on standard hardware.

Relevance:

10.00%

Publisher:

Abstract:

Most psychophysical studies of object recognition have focused on the recognition and representation of individual objects on which subjects had previously been explicitly trained. Correspondingly, modeling studies have often employed a 'grandmother'-type representation in which the objects to be recognized were represented by individual units. However, objects in the natural world are commonly members of a class containing a number of visually similar objects, such as faces, for which physiology studies have provided support for a representation based on a sparse population code, which permits generalization from the learned exemplars to novel objects of that class. In this paper, we present results from psychophysical and modeling studies intended to investigate object recognition in natural ('continuous') object classes. In two experiments, subjects were trained to perform subordinate-level discrimination in a continuous object class - images of computer-rendered cars - created using a 3D morphing system. By comparing the recognition performance of trained and untrained subjects we could estimate the effects of viewpoint-specific training and infer properties of the object-class-specific representation learned as a result of training. We then compared the experimental findings to simulations, building on our recently presented HMAX model of object recognition in cortex, to investigate the computational properties of a population-based object class representation as outlined above. We find experimental evidence, supported by modeling results, that training builds a viewpoint- and class-specific representation that supplements a pre-existing representation with lower shape discriminability but possibly greater viewpoint invariance.

Relevance:

10.00%

Publisher:

Abstract:

With many visual speech animation techniques now available, there is a clear need for systematic perceptual evaluation schemes. We describe here our scheme and its application to a new video-realistic (potentially indistinguishable from real recorded video) visual-speech animation system, called Mary 101. Two types of experiments were performed: a) distinguishing visually between real and synthetic image-sequences of the same utterances ("Turing tests"), and b) gauging visual speech recognition by comparing lip-reading performance on the real and synthetic image-sequences of the same utterances ("Intelligibility tests"). Subjects who were presented randomly with either real or synthetic image-sequences could not tell the synthetic from the real sequences above chance level. The same subjects, when asked to lip-read the utterances from the same image-sequences, recognized speech from real image-sequences significantly better than from synthetic ones. However, performance for both real and synthetic sequences was at levels suggested in the literature on lip-reading. We conclude from the two experiments that the animation of Mary 101 is adequate for providing the percept of a talking head. However, additional effort is required to improve the animation for lip-reading purposes such as rehabilitation and language learning. In addition, these two tasks can be considered as explicit and implicit perceptual discrimination tasks. In the explicit task (a), each stimulus is classified directly as a synthetic or real image-sequence by detecting a possible difference between the synthetic and the real image-sequences. The implicit perceptual discrimination task (b) consists of a comparison between visual recognition of speech from real and synthetic image-sequences. Our results suggest that implicit perceptual discrimination is a more sensitive method for discriminating between synthetic and real image-sequences than explicit perceptual discrimination.
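Whether "Turing test" responses are above chance level can be checked with an exact two-sided binomial test, sketched below under the assumption of independent two-alternative (real vs synthetic) judgements. The counts in the example are invented, not taken from the study.

```python
from math import comb

def binomial_chance_test(correct, trials, p=0.5):
    """Two-sided exact binomial test against chance: the probability,
    if subjects were merely guessing, of a result at least as extreme
    as the observed number of correct real-vs-synthetic judgements."""
    pmf = [comb(trials, k) * p**k * (1 - p)**(trials - k)
           for k in range(trials + 1)]
    observed = pmf[correct]
    # Sum every outcome no more likely than the observed one.
    return min(1.0, sum(q for q in pmf if q <= observed + 1e-12))

# Invented counts: 52 correct out of 100 judgements is consistent with
# guessing (p-value well above 0.05); 90 out of 100 is not.
p_chance = binomial_chance_test(52, 100)
p_skilled = binomial_chance_test(90, 100)
```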

Relevance:

10.00%

Publisher:

Abstract:

Dynamic capabilities constitute an important contribution to business strategy. This document is developed in accordance with that premise, recognizing that the generation of competencies has become consolidated as the theoretical basis for achieving sustainability in the face of change events that may affect the stability and decision-making of organizations. Given the lack of empirical application of the concept, this paper identifies and demonstrates the tools that empirical application can give organizations and the instruments they provide for value generation. Through the ASOS.COM case study, it exemplifies the need to detect and exploit opportunities and threats, as well as the reconfiguration, renewal, and generation of second-order competencies to confront change. In this way, through the abilities created within companies with a focus on learning and innovation, an understanding of the business and the consolidation of better future scenarios are achieved.

Relevance:

10.00%

Publisher:

Abstract:

This research focuses on the dynamic capabilities that influence the operation of the La Candelaria Tourism Network of Bogotá. To this end, a survey was administered to 100 managers or owners of the companies that make up the network, a significant sample for the purposes of the research, since it makes it possible to describe, at both the firm level and the network level, the influence of the dynamic capabilities of absorption, adaptation, and innovation. The results show that at the firm level all three dynamic capabilities influence a firm's operation, with the strongest relationship found between the "Innovation - Adaptation" capabilities; at the network level the opposite occurs, since the relationship between the "Innovation - Adaptation" dynamic capabilities is null, while the "Absorption - Innovation" and "Absorption - Adaptation" relationships are strongly related to the operation of the network. These findings derive from the analysis of the tabulated data from the survey administered to the companies of the tourism network, together with the empirical studies found that propose measurement scales for the dynamic capabilities of absorption, adaptation, and innovation, and the theoretical framework developed to support this research.

Relevance:

10.00%

Publisher:

Abstract:

Although tactile representations of the two body sides are initially segregated into opposite hemispheres of the brain, behavioural interactions between body sides exist and can be revealed under conditions of tactile double simultaneous stimulation (DSS) at the hands. Here we examined to what extent vision can affect body side segregation in touch. To this aim, we changed hand-related visual input while participants performed a go/no-go task to detect a tactile stimulus delivered to one target finger (e.g., right index), stimulated alone or with a concurrent non-target finger either on the same hand (e.g., right middle finger) or on the other hand (e.g., left index finger = homologous; left middle finger = non-homologous). Across experiments, the two hands were visible or occluded from view (Experiment 1), images of the two hands were either merged using a morphing technique (Experiment 2), or were shown in a compatible vs incompatible position with respect to the actual posture (Experiment 3). Overall, the results showed reliable interference effects of DSS, as compared to target-only stimulation. This interference varied as a function of which non-target finger was stimulated, and emerged both within and between hands. These results imply that the competition between tactile events is not clearly segregated across body sides. Crucially, non-informative vision of the hand affected overall tactile performance only when a visual/proprioceptive conflict was present, while neither congruent nor morphed hand vision affected tactile DSS interference. This suggests that DSS operates at a tactile processing stage in which interactions between body sides can occur regardless of the available visual input from the body.

Relevance:

10.00%

Publisher:

Abstract:

Tropical Applications of Meteorology Using Satellite and Ground-Based Observations (TAMSAT) rainfall estimates are used extensively across Africa for operational rainfall monitoring and food security applications; thus, regional evaluations of TAMSAT are essential to ensure its reliability. This study assesses the performance of TAMSAT rainfall estimates, along with the African Rainfall Climatology (ARC), version 2; the Tropical Rainfall Measuring Mission (TRMM) 3B42 product; and the Climate Prediction Center morphing technique (CMORPH), against a dense rain gauge network over a mountainous region of Ethiopia. Overall, TAMSAT exhibits good skill in detecting rainy events but underestimates rainfall amount, while ARC underestimates both rainfall amount and rainy event frequency. Meanwhile, TRMM consistently performs best in detecting rainy events and capturing the mean rainfall and seasonal variability, while CMORPH tends to overdetect rainy events. Moreover, the mean difference in daily rainfall between the products and rain gauges shows increasing underestimation with increasing elevation. However, the distribution in satellite–gauge differences demonstrates that although 75% of retrievals underestimate rainfall, up to 25% overestimate rainfall over all elevations. Case studies using high-resolution simulations suggest underestimation in the satellite algorithms is likely due to shallow convection with warm cloud-top temperatures in addition to beam-filling effects in microwave-based retrievals from localized convective cells. The overestimation by IR-based algorithms is attributed to nonraining cirrus with cold cloud-top temperatures. These results stress the importance of understanding regional precipitation systems causing uncertainties in satellite rainfall estimates with a view toward using this knowledge to improve rainfall algorithms.
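Rainy-event detection skill of the kind reported above is commonly summarized with a probability of detection (POD) and a false-alarm ratio (FAR). The sketch below shows one standard way to compute them from paired daily series, with an assumed 1 mm rainy-day threshold; the four-day example data are invented.

```python
def verification_scores(satellite_mm, gauge_mm, threshold=1.0):
    """Probability of detection (POD) and false-alarm ratio (FAR) for
    rainy-day detection from paired daily satellite/gauge series.
    Days at or above `threshold` (mm) count as rainy."""
    hits = misses = false_alarms = 0
    for sat, gauge in zip(satellite_mm, gauge_mm):
        sat_wet, gauge_wet = sat >= threshold, gauge >= threshold
        if sat_wet and gauge_wet:
            hits += 1            # both report rain
        elif gauge_wet:
            misses += 1          # gauge rain missed by the satellite
        elif sat_wet:
            false_alarms += 1    # satellite rain not seen at the gauge
    pod = hits / (hits + misses) if hits + misses else float("nan")
    far = false_alarms / (hits + false_alarms) if hits + false_alarms else float("nan")
    return pod, far

# Invented four-day example: one hit, one miss, one false alarm.
pod, far = verification_scores([2.0, 0.0, 3.0, 0.5], [1.5, 2.0, 0.0, 0.0])
```

A product that overdetects rainy events, as CMORPH does in this evaluation, shows up as a high POD combined with a high FAR.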

Relevance:

10.00%

Publisher:

Abstract:

Catastrophe risk models used by the insurance industry are likely subject to significant uncertainty, but due to their proprietary nature and strict licensing conditions they are not available for experimentation. In addition, even if such experiments were conducted, these would not be repeatable by other researchers because commercial confidentiality issues prevent the details of proprietary catastrophe model structures from being described in public domain documents. However, such experimentation is urgently required to improve decision making in both insurance and reinsurance markets. In this paper we therefore construct our own catastrophe risk model for flooding in Dublin, Ireland, in order to assess the impact of typical precipitation data uncertainty on loss predictions. As we consider only a city region rather than a whole territory and have access to detailed data and computing resources typically unavailable to industry modellers, our model is significantly more detailed than most commercial products. The model consists of four components: a stochastic rainfall module, a hydrological and hydraulic flood hazard module, a vulnerability module, and a financial loss module. Using these we undertake a series of simulations to test the impact of driving the stochastic event generator with four different rainfall data sets: ground gauge data, gauge-corrected rainfall radar, meteorological reanalysis data (European Centre for Medium-Range Weather Forecasts Reanalysis-Interim; ERA-Interim) and a satellite rainfall product (the Climate Prediction Center morphing method; CMORPH). Catastrophe models are unusual because they use the upper three components of the modelling chain to generate a large synthetic database of unobserved and severe loss-driving events for which estimated losses are calculated.
We find the loss estimates to be more sensitive to uncertainties propagated from the driving precipitation data sets than to other uncertainties in the hazard and vulnerability modules, suggesting that the range of uncertainty within catastrophe model structures may be greater than commonly believed.
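The four-module chain (stochastic rainfall, flood hazard, vulnerability, financial loss) can be caricatured in a few lines of Monte Carlo code. Every functional form and parameter below is invented for illustration and bears no relation to the Dublin model.

```python
import random

def simulate_annual_losses(n_years, exposure, seed=0):
    """Monte Carlo sketch of the four-module chain: stochastic rainfall
    -> flood hazard -> vulnerability -> financial loss.  All forms and
    parameters are illustrative assumptions only."""
    rng = random.Random(seed)
    losses = []
    for _ in range(n_years):
        rain = rng.expovariate(1.0 / 40.0)      # stochastic rainfall module (mm)
        depth = max(0.0, (rain - 60.0) / 50.0)  # hazard: flooding above a 60 mm event
        damage = min(1.0, 0.5 * depth)          # vulnerability: depth-damage curve
        losses.append(damage * exposure)        # financial loss on insured exposure
    return losses

losses = simulate_annual_losses(5000, exposure=1_000_000.0)
aal = sum(losses) / len(losses)  # average annual loss over the synthetic catalogue
```

Swapping the rainfall draw for samples calibrated to different precipitation data sets, and comparing the resulting loss distributions, mirrors the sensitivity experiment described in the abstract.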

Relevance:

10.00%

Publisher:

Abstract:

The emergence and development of digital imaging technologies and their impact on mainstream filmmaking is perhaps the most familiar special effects narrative associated with the years 1981-1999. This is in part because some of the questions raised by the rise of the digital still concern us now, but also because key milestone films showcasing advancements in digital imaging technologies appear in this period, including Tron (1982) and its computer generated image elements, the digital morphing in The Abyss (1989) and Terminator 2: Judgment Day (1991), computer animation in Jurassic Park (1993) and Toy Story (1995), digital extras in Titanic (1997), and ‘bullet time’ in The Matrix (1999). As a result it is tempting to characterize 1981-1999 as a ‘transitional period’ in which digital imaging processes grow in prominence and technical sophistication, and what we might call ‘analogue’ special effects processes correspondingly become less common. But such a narrative risks eliding the other practices that also shape effects sequences in this period. Indeed, the 1980s and 1990s are striking for the diverse range of effects practices in evidence in both big budget films and lower budget productions, and for the extent to which analogue practices persist independently of or alongside digital effects work in a range of production and genre contexts. The chapter seeks to document and celebrate this diversity and plurality, this sustaining of earlier traditions of effects practice alongside newer processes, this experimentation with materials and technologies old and new in the service of aesthetic aspirations alongside budgetary and technical constraints. 
The common characterization of the period as a series of rapid transformations in production workflows, practices and technologies will be interrogated in relation to the persistence of certain key figures such as Douglas Trumbull, John Dykstra, and James Cameron, but also through a consideration of the contexts for and influences on creative decision-making. Comparative analyses of the processes used to articulate bodies, space and scale in effects sequences drawn from different generic sites of special effects work, including science fiction, fantasy, and horror, will provide a further frame for the chapter’s mapping of the commonalities and specificities, continuities and variations in effects practices across the period. In the process, the chapter seeks to reclaim analogue processes’ contribution both to moments of explicit spectacle, and to diegetic verisimilitude, in the decades most often associated with the digital’s ‘arrival’.