995 results for processing capacity
Abstract:
BACKGROUND Adipose tissue lipid storage and processing capacity can be a key factor in obesity-related metabolic disorders such as insulin resistance and diabetes. Lipid uptake is the first step towards adipose tissue lipid storage. The aim of this study was to analyze the gene expression of factors involved in lipid uptake and processing in subcutaneous (SAT) and visceral (VAT) adipose tissue according to body mass index (BMI) and the degree of insulin resistance (IR). METHODS AND PRINCIPAL FINDINGS VLDL receptor (VLDLR), lipoprotein lipase (LPL), acylation stimulating protein (ASP), LDL receptor-related protein 1 (LRP1) and fatty acid binding protein 4 (FABP4) gene expression was measured in VAT and SAT from 28 morbidly obese patients with type 2 diabetes mellitus (T2DM) or high IR, 10 morbidly obese patients with low IR, 10 obese patients with low IR and 12 lean healthy controls. LPL, FABP4, LRP1 and ASP expression in VAT was higher in lean controls. In SAT, expression of LPL and FABP4 was also higher in lean controls. BMI, plasma insulin levels and HOMA-IR correlated negatively with LPL expression in both VAT and SAT, as well as with FABP4 expression in VAT. FABP4 gene expression in SAT correlated inversely with BMI and HOMA-IR. However, multiple regression analysis showed that BMI was the main variable contributing to LPL and FABP4 gene expression in both VAT and SAT. CONCLUSIONS Morbidly obese patients show lower gene expression of factors related to lipid uptake and processing than healthy lean persons.
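The degree of insulin resistance referenced above is quantified with HOMA-IR. As background (the formula is not stated in the abstract itself), the index is conventionally derived from fasting glucose and fasting insulin, as in this small sketch with illustrative values:

```python
def homa_ir(fasting_glucose_mmol_l: float, fasting_insulin_uu_ml: float) -> float:
    """Standard HOMA-IR index: (fasting glucose [mmol/L] x fasting insulin [uU/mL]) / 22.5."""
    return (fasting_glucose_mmol_l * fasting_insulin_uu_ml) / 22.5

# Example: glucose 5.5 mmol/L and insulin 10 uU/mL give HOMA-IR of about 2.4,
# a value often treated as borderline for insulin resistance (cut-offs vary by study).
print(homa_ir(5.5, 10.0))  # ~2.44
```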
Abstract:
Processing efficiency theory predicts that anxiety reduces the processing capacity of working memory and has detrimental effects on performance. When tasks place little demand on working memory, the negative effects of anxiety can be avoided by increasing effort: although performance efficiency decreases, there is no change in performance effectiveness. When tasks impose a heavy demand on working memory, however, anxiety leads to decrements in both efficiency and effectiveness. These predictions were tested using a modified table tennis task that placed low (LWM) and high (HWM) demands on working memory. Cognitive anxiety was manipulated through a competitive ranking structure and prize money. Participants' accuracy in hitting concentric circle targets in predetermined sequences was taken as a measure of performance effectiveness, while probe reaction time (PRT), perceived mental effort (RSME), visual search data and arm kinematics were recorded as measures of efficiency. Anxiety had a negative effect on performance effectiveness in both the LWM and HWM tasks. There was an increase in the frequency of gaze and in PRT and RSME values in both tasks under high versus low anxiety conditions, implying decrements in performance efficiency. However, participants spent more time tracking the ball in the HWM task and employed a shorter tau margin when anxious. Although anxiety impaired both performance effectiveness and efficiency, the decrements in efficiency were more pronounced in the HWM task than in the LWM task, providing support for processing efficiency theory.
Abstract:
In recent years, applications in domains such as telecommunications, network security and large-scale sensor networks have shown the limits of the traditional store-then-process paradigm. In this context, Stream Processing Engines emerged as a candidate solution for all these applications demanding high processing capacity with low processing latency guarantees. With Stream Processing Engines, data streams are not persisted but rather processed on the fly, producing results continuously. Current Stream Processing Engines, either centralized or distributed, do not scale with the input load due to single-node bottlenecks. Moreover, they are based on static configurations that lead to either under- or over-provisioning. This Ph.D. thesis discusses StreamCloud, an elastic parallel-distributed stream processing engine that enables the processing of large data stream volumes. StreamCloud minimizes the distribution and parallelization overhead by introducing novel techniques that split queries into parallel subqueries and allocate them to independent sets of nodes. Moreover, StreamCloud's elastic and dynamic load-balancing protocols enable effective adjustment of resources depending on the incoming load. Together with the parallelization and elasticity techniques, StreamCloud defines a novel fault-tolerance protocol that introduces minimal overhead while providing fast recovery. StreamCloud has been fully implemented and evaluated using several real-world applications, such as fraud detection and network analysis. The evaluation, conducted on a cluster with more than 300 cores, demonstrates the scalability of StreamCloud and the effectiveness of its elasticity and fault-tolerance mechanisms.
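A minimal sketch of the query-parallelization idea described above (illustrative only, not StreamCloud's actual API): a stream is hash-partitioned by key so that a query can run as parallel subqueries on independent sets of nodes, each holding only its own state.

```python
import zlib
from collections import defaultdict

NUM_SUBQUERIES = 4  # hypothetical degree of parallelism

def route(key: str) -> int:
    """Stable hash routing: tuples with the same key always reach the same
    subquery instance, so per-key state never needs to be shared."""
    return zlib.crc32(key.encode()) % NUM_SUBQUERIES

# Each subquery instance keeps a partial aggregate (e.g., per-key event counts).
partitions = [defaultdict(int) for _ in range(NUM_SUBQUERIES)]

for key in ["alice", "bob", "alice", "carol"]:  # incoming stream, never persisted
    partitions[route(key)][key] += 1  # processed on the fly
```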
Abstract:
Previous research using flanker paradigms suggests that peripheral distracter faces are automatically processed when participants have to classify a single central familiar target face. These distracter interference effects disappear when the central task contains additional anonymous (non-target) faces that load the search for the face target, but not when the central task contains additional non-face stimuli, suggesting that there are face-specific capacity limits in visual processing. Here we tested whether manipulating the format of the non-target faces in the search task affects these face-specific capacity limits. Experiment 1 replicated earlier findings that a distracter face is processed even under high load when participants look for the target name of a famous person among additional names (non-targets) in a central search array. Two further experiments showed that when targets and non-targets were faces (instead of names), distracter interference was eliminated under high load: adding non-target faces to the search array exhausted the processing capacity for peripheral faces. The novel finding was that replacing the non-target faces with images consisting of two horizontally misaligned face parts reduced distracter processing. Similar results were found when the polarity of the non-target face images was reversed. These results indicate that face-specific capacity limits are determined not by the configural properties of face processing but by face parts.
Abstract:
Three experiments investigated the effect of complexity on children's understanding of a beam balance. In nonconflict problems, weights or distances varied while the other factor was held constant. In conflict items, both weight and distance varied, and items were of three kinds: weight dominant, distance dominant, or balance (in which neither was dominant). In Experiment 1, 2-year-old children succeeded on nonconflict-weight and nonconflict-distance problems. This result was replicated in Experiment 2, but performance on conflict items did not exceed chance. In Experiment 3, 3- and 4-year-olds succeeded on all except conflict-balance problems, while 5- and 6-year-olds succeeded on all problem types. The results were interpreted in terms of relational complexity theory. Children aged 2 to 4 years succeeded on problems that entailed binary relations, but 5- and 6-year-olds also succeeded on problems that entailed ternary relations. Ternary-relations tasks from other domains (transitivity and class inclusion) accounted for 93% of the age-related variance in balance scale scores.
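As a concrete illustration of the item types above (my example, not the authors' materials): the side with the greater torque, weight times distance from the fulcrum, goes down, and conflict items are those where weight and distance favour opposite sides.

```python
def beam_side(w_left: int, d_left: int, w_right: int, d_right: int) -> str:
    """Predict beam-balance behaviour from torque = weight x distance."""
    torque_left, torque_right = w_left * d_left, w_right * d_right
    if torque_left == torque_right:
        return "balance"
    return "left down" if torque_left > torque_right else "right down"

# Nonconflict-weight item: distances equal, more weight on the left.
print(beam_side(3, 2, 1, 2))   # left down
# Conflict item (distance dominant): the right side has less weight but
# greater distance, and distance wins here (1*6 > 2*2).
print(beam_side(2, 2, 1, 6))   # right down
```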
Abstract:
Known algorithms capable of scheduling implicit-deadline sporadic tasks over identical processors at up to 100% utilisation invariably involve numerous preemptions and migrations. We respond to the challenge of devising a scheduling scheme with as few preemptions and migrations as possible, for a given guaranteed utilisation bound, with the algorithm NPS-F. It is configurable with a parameter that trades off guaranteed schedulable utilisation (up to 100%) against preemptions. For any possible configuration, NPS-F introduces fewer preemptions than any other known algorithm matching its utilisation bound. A clustered variant of the algorithm, for systems made of multicore chips, eliminates costly off-chip task migrations by dividing the processors into disjoint clusters formed by cores on the same chip (with the cluster size being a parameter). Clusters are scheduled independently, each using non-clustered NPS-F, and the utilisation bound is only moderately affected. We also formulate an important extension, applicable to both clustered and non-clustered NPS-F, which optimises the supply of processing time to executing tasks and makes it more granular. This reduces the processing capacity required for schedulability without increasing preemptions.
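A sketch of the clustering idea only (NPS-F's actual assignment of tasks to notional processors is more involved, and the identifiers here are hypothetical): cores are grouped by the chip they sit on, and each cluster is scheduled independently, so a task assigned to one cluster never migrates off-chip.

```python
from collections import defaultdict

cores = [("chip0", "core0"), ("chip0", "core1"),
         ("chip1", "core2"), ("chip1", "core3")]

clusters: dict[str, list[str]] = defaultdict(list)
for chip, core in cores:
    clusters[chip].append(core)  # one cluster = the cores of one chip

tasks = ["t1", "t2", "t3", "t4", "t5"]
chips = sorted(clusters)
# Naive round-robin placement of tasks onto clusters; within a cluster, tasks
# would then be scheduled by the non-clustered algorithm.
assignment = {t: chips[i % len(chips)] for i, t in enumerate(tasks)}
print(assignment)  # {'t1': 'chip0', 't2': 'chip1', 't3': 'chip0', ...}
```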
Abstract:
Consider the problem of scheduling sporadically arriving tasks with implicit deadlines using Earliest-Deadline-First (EDF) on a single processor. The system may undergo changes in its operational modes, and therefore the characteristics of the task set may change at run-time. We consider a well-established, previously published mode-change protocol and show that if every mode utilizes at most 50% of the processing capacity, then all deadlines are met. We also show that there exists a task set that misses a deadline although its utilization exceeds 50% by just an arbitrarily small amount. Finally, we present, for a relevant special case, an exact schedulability test for EDF with mode changes.
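The 50% condition can be phrased as a simple utilization test. Here is a minimal sketch under the standard sporadic task model (the (C, T) representation is mine, not the paper's notation):

```python
def mode_schedulable(tasks: list[tuple[float, float]]) -> bool:
    """Check the 50% utilization bound for one operational mode.

    Each task is a (C, T) pair: worst-case execution time and minimum
    inter-arrival time. With implicit deadlines, utilization is sum(C/T).
    """
    return sum(c / t for c, t in tasks) <= 0.5

# Two modes of a hypothetical system: under the studied protocol, all
# deadlines are met if every mode passes this test.
mode_a = [(1.0, 4.0), (2.0, 10.0)]   # utilization 0.45 -> within the bound
mode_b = [(2.0, 5.0), (3.0, 20.0)]   # utilization 0.55 -> bound violated
print(mode_schedulable(mode_a), mode_schedulable(mode_b))  # True False
```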
Abstract:
Consider the problem of scheduling real-time tasks on a multiprocessor with the goal of meeting deadlines. Tasks arrive sporadically and have implicit deadlines, that is, the deadline of a task is equal to its minimum inter-arrival time. Consider this problem to be solved with global static-priority scheduling. We present a priority-assignment scheme with the property that if at most 38% of the processing capacity is requested then all deadlines are met.
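The multiprocessor bound admits an equally simple capacity check. The sketch below illustrates only the 38% admission condition with a hypothetical task set; it does not reproduce the paper's priority-assignment scheme itself.

```python
def meets_38_percent_bound(tasks: list[tuple[float, float]], m: int) -> bool:
    """Check whether the task set requests at most 38% of the capacity of
    m identical processors; tasks are (C, T) pairs with implicit deadlines."""
    total_utilization = sum(c / t for c, t in tasks)
    return total_utilization <= 0.38 * m

# Hypothetical task set on a 4-processor platform: total utilization 1.25
# against a threshold of 0.38 * 4 = 1.52, so all deadlines are guaranteed.
tasks = [(1.0, 4.0), (3.0, 6.0), (5.0, 10.0)]
print(meets_38_percent_bound(tasks, m=4))  # True
```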
Abstract:
With the growth of information available on the Web and in personal and professional archives, driven both by the increase in data storage capacity and by the exponential growth of computer processing power, together with easy access to that information, an enormous flow of production and distribution of audiovisual content has been generated. However, although mechanisms exist for indexing this content so that it can be searched and accessed, they usually involve great algorithmic complexity or require highly qualified staff to verify and categorize the content. This dissertation studies solutions for collaborative content annotation and develops a tool that facilitates the annotation of an archive of audiovisual content. The implemented approach is based on the concept of Games With a Purpose (GWAP) and lets users create tags (metadata in the form of keywords) that assign meaning to an object being categorized. As a first objective, a game was developed whose purpose is not only entertainment but also the creation of audiovisual annotations for the videos presented to the player, thereby improving their indexing and categorization. The developed application also allows the categorized content and metadata to be viewed and, to create a further informative element, lets users attach a like to a particular time instant of a video. The great advantage of the application is that it attaches annotations to specific points of the video, namely to its time instants. This is a new feature, not available in other applications for collaborative annotation of audiovisual content. As a result, access to the content becomes considerably more effective, since it is possible to reach, by searching, specific points inside a video.
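A minimal sketch of the time-anchored tagging described above (the data model and function names are illustrative, not the dissertation's actual design):

```python
from collections import defaultdict

# Tags are anchored to a time instant (in seconds) within a specific video.
annotations: dict[str, list[tuple[float, str]]] = defaultdict(list)

def add_tag(video_id: str, time_s: float, tag: str) -> None:
    annotations[video_id].append((time_s, tag))

def search(tag: str) -> list[tuple[str, float]]:
    """Return (video_id, time instant) pairs where the tag was applied,
    so playback can jump straight to the matching point in the video."""
    return [(vid, t) for vid, tags in annotations.items()
            for t, label in tags if label == tag]

add_tag("video42", 73.5, "goal")
add_tag("video42", 12.0, "kickoff")
print(search("goal"))  # [('video42', 73.5)]
```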
Abstract:
The quantity and variety of multimedia content currently available pose a challenge to users, since the space in which to search for and choose sources and content exceeds users' time and processing capacity. This problem of selecting information from large heterogeneous data sets according to the user's profile is complex and requires specific tools. Recommender Systems emerged in this context: using artificial intelligence methodologies, they are able to suggest to the user items that match his or her tastes, interests or needs, i.e., his or her profile. The main objective of this thesis is to demonstrate that it is possible to recommend multimedia content in useful time from the user's personal and social profile, relying exclusively on public, heterogeneous data sources. To this end, a content-based multimedia Recommender System was designed and developed, i.e., one based on item features, on the user's history and personal preferences, and on the user's social interactions. The recommended multimedia content, i.e., the items suggested to the user, comes from the British broadcaster, the British Broadcasting Corporation (BBC), and is classified according to the categories of BBC programmes. The user profile is built from the user's history, context, personal preferences and social activities. YouTube is the source of personal history used, simulating the main source of this kind of data, the Set-Top Box (STB). The user's history consists of the set of YouTube videos and BBC programmes watched by the user. The content of the YouTube videos is classified according to YouTube's own video categories, which are then mapped to the categories of BBC programmes. The social information, which comes from the social networks Facebook and Twitter, is collected through the Beancounter platform. The user's social activities are filtered to extract films and series, which are in turn semantically enriched using open linked-data repositories. In this case, films and series are classified by IMDb genres and subsequently mapped to BBC programme categories. Finally, the user's context and explicit preferences, given through ratings of the recommended items, are also taken into account. The developed system makes real-time recommendations based on activities on the social networks Facebook and Twitter, on the history of YouTube videos and BBC programmes watched, and on explicit preferences. Tests were carried out with five users, and the system's average response time to create the initial set of recommendations was 30 s. The personalized recommendations are generated and updated at the user's explicit request.
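A minimal sketch of the category mapping and content-based scoring described above (the mapping, weights and programme names are illustrative stand-ins for the YouTube/IMDb-to-BBC pipeline):

```python
# Map item categories from external sources onto a common category space
# (standing in for the YouTube/IMDb -> BBC programme category mappings),
# then score items against a profile built from the user's watched content.
SOURCE_TO_BBC = {"Science & Technology": "Science & Nature",
                 "Comedy": "Comedy", "News & Politics": "News"}

def build_profile(watched_categories: list[str]) -> dict[str, float]:
    """Weight each BBC category by how often it appears in the user's history."""
    profile: dict[str, float] = {}
    for cat in watched_categories:
        bbc = SOURCE_TO_BBC.get(cat, cat)
        profile[bbc] = profile.get(bbc, 0.0) + 1.0
    return profile

def score(item_category: str, profile: dict[str, float]) -> float:
    return profile.get(SOURCE_TO_BBC.get(item_category, item_category), 0.0)

profile = build_profile(["Comedy", "Comedy", "Science & Technology"])
items = {"Have I Got News for You": "Comedy", "Horizon": "Science & Technology"}
ranked = sorted(items, key=lambda i: score(items[i], profile), reverse=True)
print(ranked)  # ['Have I Got News for You', 'Horizon']
```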
Abstract:
IEEE International Symposium on Circuits and Systems, pp. 724-727, Seattle, USA