2 results for Meme
in BORIS: Bern Open Repository and Information System - Bern - Switzerland
Abstract:
Multimodality – the interdependence of semiotic resources in text – is an essential element of today’s media. The term multimodality refers to the systematic study of the social interpretation of a wide range of communicational forms used in meaning making. A primary focus of social-semiotic multimodal analysis is on mapping how modal resources are used by people in a given social context. In November 2012 the “Ola ke ase” catchphrase, a play on “Hola ¿qué hace?”, appeared for the first time in Spain and was immediately adopted as a Twitter hashtag and an image macro series. It spread virally across social networks, becoming a trending topic in several Spanish-speaking countries. The objective of the analysis is to examine how language and image work together in the “Ola ke ase” meme. The interplay between text and image in one of the original memes and some of its variations is quantitatively analysed using a social-semiotic approach. The results demonstrate how the “Ola ke ase” meme functions through its multimodal character and its non-standard orthography. The spread of countless variations of the meme reflects the social process of meaning making involving these semiotic elements.
Abstract:
The nematode Caenorhabditis elegans is a well-known model organism used to investigate fundamental questions in biology. Motility assays of this small roundworm are designed to study the relationships between genes and behavior. Commonly, motility analysis is used to classify nematode movements and characterize them quantitatively. In recent years, C. elegans motility has been studied across a wide range of environments, including crawling on substrates, swimming in fluids, and locomoting through microfluidic substrates. However, each environment often requires customized image processing tools relying on heuristic parameter tuning. In the present study, we propose a novel Multi-Environment Model Estimation (MEME) framework for automated image segmentation that is versatile across various environments. The MEME platform is built around Mixture of Gaussians (MOG) models, in which statistical models of both the background environment and the nematode appearance are explicitly learned and used to accurately segment a target nematode. Our method is designed to reduce the burden imposed on users: only a single image that includes a nematode in its environment must be provided for model learning. In addition, our platform enables the extraction of nematode ‘skeletons’ for straightforward motility quantification. We test our algorithm on various locomotive environments and compare its performance with that of an intensity-based thresholding method. Overall, MEME outperforms the threshold-based approach in the overwhelming majority of cases examined. Ultimately, MEME provides researchers with an attractive platform for C. elegans segmentation and ‘skeletonization’ across a wide range of motility assays.
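The sketch below is not the authors’ MEME implementation; it is a minimal Python illustration, assuming a grayscale frame with a dark worm on a lighter background, of the general idea the abstract describes: fitting a two-component Gaussian mixture to pixel intensities (one component standing in for the background model, one for the nematode appearance), segmenting the worm, and reducing the mask to a skeleton for motility quantification. The file name and the size threshold are hypothetical.

```python
# Illustrative sketch only (not the MEME framework itself): MOG-style
# intensity segmentation of a nematode plus skeleton extraction.
import numpy as np
from sklearn.mixture import GaussianMixture
from skimage import io, morphology

def segment_and_skeletonize(image_path):
    """Return a binary worm mask and its skeleton from a single image."""
    img = io.imread(image_path, as_gray=True).astype(np.float64)

    # Fit a two-component Gaussian mixture to the pixel intensities:
    # one component approximates the background environment, the other
    # the (typically darker) nematode body.
    gmm = GaussianMixture(n_components=2, random_state=0)
    labels = gmm.fit_predict(img.reshape(-1, 1)).reshape(img.shape)

    # Assumption: the component with the lower mean intensity is the worm.
    worm_label = int(np.argmin(gmm.means_.ravel()))
    mask = labels == worm_label

    # Remove small speckle components so essentially only the worm remains
    # (min_size of 200 pixels is an arbitrary illustrative choice).
    mask = morphology.remove_small_objects(mask, min_size=200)

    # Reduce the worm mask to a one-pixel-wide skeleton (midline), which can
    # then be used for quantitative motility measures.
    skeleton = morphology.skeletonize(mask)
    return mask, skeleton

# Example usage (hypothetical file name):
# mask, skeleton = segment_and_skeletonize("worm_frame.png")
```

In contrast to a fixed intensity threshold, the mixture model adapts its decision boundary to each image, which is the property that lets this style of approach generalize across different recording environments.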