995 results for processing capacity


Relevance: 60.00%

Abstract:

Brazil, noted for its spectacular "geodiversity", already ranks among the world's major producers and exporters in the dimension stone sector. Its output includes granites, slates, quartzites, marbles, travertines, soapstone, serpentinites, limestones, conglomerates, basalts, foliated gneisses and several other rocks, totalling about 6 million t/year across 600 commercial varieties extracted from 1,500 quarry faces. The 18 local productive arrangements (APLs) identified in the Brazilian stone sector involve mining and industrial activities in 10 states and 80 municipalities, in the Southeast, South, Centre-West, North and Northeast regions. More broadly, 370 municipalities are recorded as collecting CFEM (Financial Compensation for Mineral Exploitation) royalties for the extraction of dimension stone. An estimated 11,500 companies operate in the Brazilian stone sector, generating 120,000 direct jobs and supporting a processing park with sawing and polishing capacity of 50 million m²/year for granites, marbles and travertines, plus a further 40 million m²/year for simply processed rocks, chiefly slates, laminated basalts, quartzites and foliated gneisses. The sector's commercial transactions in the domestic and foreign markets, including business in machinery, equipment and supplies, amount to about US$ 2.5 billion/year. The sector's exports totalled US$ 429.4 million in 2003 and now reach about 90 countries; Brazil is already the leading supplier of processed granites to the USA and the world's second largest exporter of slate.

Relevance: 60.00%

Abstract:

Coordenação de Aperfeiçoamento de Pessoal de Nível Superior (CAPES)

Relevance: 60.00%

Abstract:

In this article, the authors investigate, from an interdisciplinary perspective, possible ethical implications of the presence of ubiquitous computing systems in human perception/action. The term ubiquitous computing is used to characterize the information-processing capacity of computers that are available everywhere and all the time, integrated into everyday objects and activities. The contrast between traditional considerations of ethical issues and the Ecological Philosophy view of the possible consequences of ubiquitous computing in the context of perception/action is the underlying theme of this paper. The focus is on an analysis of how the generalized dissemination of microprocessors in embedded systems, commanded by a ubiquitous computing system, can affect the behaviour of people considered as embodied embedded agents.

Relevance: 60.00%

Abstract:

The present paper introduces a new model of fuzzy neuron, one that increases the computational power of the artificial neuron and turns it into a symbolic processing device as well. In this model the synapses are defined both symbolically and numerically, by means of the assignment of tokens to the presynaptic and postsynaptic neurons. The matching or concatenation compatibility between these tokens is used to decide which connections among the neurons of a given net are possible. The strength of a compatible synapse is made dependent on the amount of presynaptic and postsynaptic tokens available. The symbolic and numeric processing capacity of the new fuzzy neuron is used here to build a neural net (JARGON) to disclose the knowledge existing in natural-language databases such as medical files, sets of interviews, and reports about engineering operations.
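
The token mechanism can be pictured with a small sketch. The matching rule and weight formula below are illustrative assumptions drawn only from the description above, not the paper's exact model; names such as FuzzyNeuron and synapse_weight are hypothetical.

```python
# Hedged sketch of the token-gated synapse idea: a connection between two
# neurons is allowed only if presynaptic and postsynaptic tokens are
# compatible, and the strength of a compatible synapse depends on the amount
# of matching tokens available on both sides.
from dataclasses import dataclass, field

@dataclass
class FuzzyNeuron:
    name: str
    tokens: dict = field(default_factory=dict)  # token label -> available amount

def synapse_weight(pre: FuzzyNeuron, post: FuzzyNeuron) -> float:
    """Fuzzy strength in [0, 1]: zero if no token is shared (no connection),
    otherwise limited by the scarcer side of each matching token."""
    shared = set(pre.tokens) & set(post.tokens)
    if not shared:
        return 0.0
    matched = sum(min(pre.tokens[t], post.tokens[t]) for t in shared)
    total = sum(max(pre.tokens[t], post.tokens[t]) for t in shared)
    return matched / total

pre = FuzzyNeuron("fever", {"symptom": 0.9, "onset": 0.4})
post = FuzzyNeuron("infection", {"symptom": 0.7, "severity": 0.5})
print(synapse_weight(pre, post))  # connection allowed via the shared 'symptom' token
```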

Relevance: 60.00%

Abstract:

With the rapid growth in the use of Web applications in various fields of knowledge, the term Web service has come to the fore; it refers to services of different origins and purposes, offered over local networks and, in some cases, also available on the Internet. The architecture of this type of application places data processing on the server side, which makes it attractive for running complex and slow processes, as is the case with most visualization algorithms. VTK is a library intended for visualization and features a large variety of methods and algorithms for this purpose, but its graphics engine demands considerable processing capacity. Combining these two resources can bring interesting results and contribute to performance improvements when using the VTK library. This combination is investigated in this project through testing and analysis of the communication overhead.
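
As an illustration of the server-side arrangement discussed above, the minimal sketch below exposes a VTK pipeline through a small web service that renders off-screen and returns a PNG; the Flask endpoint, the trivial pipeline and the file path are illustrative assumptions, not the project's actual setup.

```python
# Minimal sketch: a web service that runs a VTK pipeline on the server and
# returns the rendered image, so the client needs no local processing capacity.
import vtk
from flask import Flask, send_file

app = Flask(__name__)

@app.route("/render")
def render():
    # Trivial VTK pipeline: a sphere source rendered off-screen.
    sphere = vtk.vtkSphereSource()
    sphere.SetThetaResolution(64)
    sphere.SetPhiResolution(64)

    mapper = vtk.vtkPolyDataMapper()
    mapper.SetInputConnection(sphere.GetOutputPort())
    actor = vtk.vtkActor()
    actor.SetMapper(mapper)

    renderer = vtk.vtkRenderer()
    renderer.AddActor(actor)
    window = vtk.vtkRenderWindow()
    window.SetOffScreenRendering(1)   # render without a display on the server
    window.AddRenderer(renderer)
    window.Render()

    # Capture the frame buffer and send it back as a PNG.
    grabber = vtk.vtkWindowToImageFilter()
    grabber.SetInput(window)
    grabber.Update()
    writer = vtk.vtkPNGWriter()
    writer.SetFileName("/tmp/render.png")
    writer.SetInputConnection(grabber.GetOutputPort())
    writer.Write()
    return send_file("/tmp/render.png", mimetype="image/png")

if __name__ == "__main__":
    app.run()
```

Measuring the time spent in the HTTP round trip versus the time spent inside the pipeline is one simple way to quantify the communication overhead the study refers to.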

Relevance: 60.00%

Abstract:

Technologies are developing rapidly, but some of those present in computers, such as their processing capacity, are reaching their physical limits. It falls to quantum computation to offer solutions to these limitations and to the issues that may arise. In the field of information security, encryption is of paramount importance, which motivates the development of quantum methods in place of classical ones, given the computational power offered by quantum computing. In the quantum world, physical states can be interrelated, giving rise to the phenomenon called entanglement. This study presents both a theoretical essay on the fundamentals of quantum mechanics, computing, information, cryptography and quantum entropy, and some simulations, implemented in the C language, of the effects of the entanglement entropy of photons in a data transmission, using the von Neumann entropy and the Tsallis entropy.
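
To make the two entropy measures concrete, the sketch below computes the entanglement entropy of a two-qubit pure state from its reduced density matrix. The study's simulations were written in C; this NumPy version is only an illustration of the quantities involved, with an illustrative partially entangled state.

```python
# Hedged sketch: von Neumann and Tsallis entanglement entropies of a
# two-qubit pure state, obtained from the reduced density matrix of one qubit.
import numpy as np

def reduced_density_matrix(psi):
    """Trace out the second qubit of a two-qubit pure state (length-4 vector)."""
    psi = psi.reshape(2, 2)          # amplitudes indexed as [qubit A, qubit B]
    return psi @ psi.conj().T        # rho_A = Tr_B |psi><psi|

def von_neumann_entropy(rho):
    evals = np.linalg.eigvalsh(rho)
    evals = evals[evals > 1e-12]
    return float(-np.sum(evals * np.log2(evals)))

def tsallis_entropy(rho, q):
    evals = np.linalg.eigvalsh(rho)
    evals = evals[evals > 1e-12]
    return float((1.0 - np.sum(evals ** q)) / (q - 1.0))

# Partially entangled state cos(t)|00> + sin(t)|11>.
t = np.pi / 6
psi = np.array([np.cos(t), 0.0, 0.0, np.sin(t)])
rho_a = reduced_density_matrix(psi)
print(von_neumann_entropy(rho_a), tsallis_entropy(rho_a, q=2))
```

For a maximally entangled Bell state the reduced eigenvalues become 1/2 and 1/2, giving the maximum von Neumann entropy of 1 bit.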

Relevance: 60.00%

Abstract:

Bioinformational theory has been proposed by Lang (1979a), who suggests that mental images can be understood as products of the brain's information processing capacity. Imagery involves activation of a network of propositionally coded information stored in long-term memory. Propositions concerning physiological and behavioral responses provide a prototype for overt behavior. Processing of response information is associated with somatovisceral arousal. The theory has implications for imagery rehearsal in sport psychology and can account for a variety of findings in the mental practice literature. Hypotheses drawn from bioinformational theory were tested. College athletes imagined four scenes during which their heart rates were recorded. Subjects tended to show increases in heart rate when imagining scenes with which they had personal experience and which would involve cardiovascular activation if experienced in real life. Nonsignificant heart rate changes were found when the scene involved activation but was one with which subjects did not have personal experience.

Relevance: 60.00%

Abstract:

BACKGROUND: There is converging evidence for the notion that pain affects a broad range of attentional domains. This study investigated the influence of pain on the involuntary capture of attention as indexed by the P3a component in the event-related potential derived from the electroencephalogram. METHODS: Participants performed in an auditory oddball task in a pain-free and a pain condition during which they submerged a hand in cold water. Novel, infrequent and unexpected auditory stimuli were presented randomly in a series of frequent standard and infrequent target tones. P3a and P3b amplitudes were observed to novel, unexpected and target-related stimuli, respectively. RESULTS: Both electrophysiological components were characterized by reduced amplitudes in the pain compared with the pain-free condition. Hit rate and reaction time to target stimuli did not differ between the two conditions presumably because the experimental task was not difficult enough to exceed attentional capacities under pain conditions. CONCLUSIONS: These results indicate that voluntary attention serving the maintenance and control of ongoing information processing (reflected by the P3b amplitude) is impaired by pain. In addition, the involuntary capture of attention and orientation to novel, unexpected information (measured by the P3a) is also impaired by pain. Thus, neurophysiological measures examined in this study support the theoretical positions proposing that pain can reduce attentional processing capacity. These findings have potentially important implications at the theoretical level for our understanding of the interplay of pain and cognition, and at the therapeutic level for the clinical treatment of individuals experiencing ongoing pain.

Relevance: 60.00%

Abstract:

We present a novel framework for encoding latency analysis of arbitrary multiview video coding prediction structures. This framework avoids the need to consider a specific encoder architecture for encoding latency analysis by assuming unlimited processing capacity in the multiview encoder. Under this assumption, only the influence of the prediction structure and the processing times has to be considered, and the encoding latency is obtained systematically by means of a graph model. The results obtained with this model are valid for a multiview encoder with sufficient processing capacity and serve as a lower bound otherwise. Furthermore, with the objective of low-latency encoder design with a low penalty on rate-distortion performance, the graph model allows us to identify the prediction relationships that add the most encoding latency to the encoder. Experimental results for JMVM prediction structures illustrate how low-latency prediction structures with a low rate-distortion penalty can be derived in a systematic manner using the new model.
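
Under the unlimited-processing-capacity assumption, the latency given by a graph model of this kind can be sketched as a longest-path computation over the prediction dependencies. The frame names, capture instants and processing times below are illustrative assumptions, not values from the paper.

```python
# Hedged sketch: lower bound on encoding latency of a multiview prediction
# structure, assuming unlimited processing capacity (every frame is encoded
# as soon as it is captured and all its references are encoded).
from functools import lru_cache

# Prediction dependencies: frame -> frames it predicts from (a DAG).
deps = {
    "V0/T0": [],
    "V1/T0": ["V0/T0"],
    "V0/T1": ["V0/T0"],
    "V1/T1": ["V1/T0", "V0/T1"],
}
capture_time = {"V0/T0": 0, "V1/T0": 0, "V0/T1": 33, "V1/T1": 33}  # ms
proc_time = {f: 10 for f in deps}                                   # ms per frame

@lru_cache(maxsize=None)
def finish(frame):
    """Earliest time the frame is fully encoded: it cannot start before it is
    captured nor before all of its reference frames are encoded."""
    start = max([capture_time[frame]] + [finish(r) for r in deps[frame]])
    return start + proc_time[frame]

# Encoding latency of a frame = finish time minus capture time; the latency of
# the structure is the worst case over all frames.
print(max(finish(f) - capture_time[f] for f in deps))
```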

Relevance: 60.00%

Abstract:

Multi-user videoconferencing systems offer communication between more than two users, who are able to interact through their webcams, microphones and other components. The use of these systems has increased recently thanks, on the one hand, to improvements in Internet access in companies, universities and homes, where the available bandwidth has grown while the delay in sending and receiving packets has decreased. On the other hand, the advent of Rich Internet Applications (RIA) means that a large part of web application logic and control has started to be implemented in the web browser. This has allowed developers to create web applications with a level of complexity comparable to traditional desktop applications running on top of the operating system. More recently, the use of Cloud Computing systems has improved application scalability and reduced the cost of backend systems, offering the possibility of implementing web services on the Internet without large up-front expenditure on infrastructure and resources, both hardware and software. Nevertheless, there are not many initiatives that aim to implement videoconferencing systems taking advantage of Cloud systems. This dissertation proposes a set of techniques, interfaces and algorithms for the implementation of videoconferencing systems in public and private Cloud Computing infrastructures. The mechanisms proposed here are based on the implementation of a basic videoconferencing system that runs in the web browser without any prior installation requirements. To this end, the development of this thesis starts from a RIA application built with current technologies that allow users to access their webcams and microphones from the browser and to send the captured data over their Internet connections. Furthermore, interfaces have been implemented to allow end users to participate in videoconferencing rooms that are managed on the servers of different Cloud providers. To do so, this dissertation builds on the results obtained with the previous techniques, and the backend resources were implemented in the Cloud. A traditional videoconferencing service implemented in the department was modified to meet the typical requirements of Cloud Computing infrastructures. This allowed us to validate whether public Cloud Computing infrastructures are suitable for the traffic generated by this kind of system; the analysis focused on the network level and on the processing capacity and stability of the Cloud Computing systems. To broaden this validation, several more general considerations were taken into account in order to cover further cases, such as multimedia data processing in the Cloud, an area in which research activity has increased in recent years. The last stage of this dissertation is the design of a new methodology for implementing these kinds of applications in hybrid clouds, reducing the cost of videoconferencing systems. Finally, this dissertation opens up a discussion of the conclusions obtained throughout this study, yielding useful information from the different stages of the implementation of videoconferencing systems in Cloud Computing systems.

Relevance: 60.00%

Abstract:

Many applications in several domains, such as telecommunications, network security and large-scale sensor networks, require online processing of continuous data flows. They produce very high loads that require aggregating the processing capacity of many nodes. Current Stream Processing Engines do not scale with the input load due to single-node bottlenecks. Additionally, they are based on static configurations that lead to either under- or over-provisioning. In this paper, we present StreamCloud, a scalable and elastic stream processing engine for processing large data stream volumes. StreamCloud uses a novel parallelization technique that splits queries into subqueries that are allocated to independent sets of nodes in a way that minimizes the distribution overhead. Its elastic protocols exhibit low intrusiveness, enabling effective adjustment of resources to the incoming load. Elasticity is combined with dynamic load balancing to minimize the computational resources used. The paper presents the system design, implementation and a thorough evaluation of the scalability and elasticity of the fully implemented system.
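
The general parallelization idea can be pictured with a short sketch: a keyed streaming aggregation is split into independent subquery instances, and each tuple is routed by hashing its group-by key, so the aggregation state is partitioned across nodes. This is an illustration of the technique in general, not StreamCloud's actual protocol or API; the class names and the toy query are assumptions.

```python
# Hedged sketch: content-aware partitioning of a streaming aggregation so the
# processing capacity of several instances (nodes) can be aggregated.
from collections import defaultdict
from zlib import crc32

class SubqueryInstance:
    """One partition of a 'bytes per source IP' aggregation subquery."""
    def __init__(self):
        self.bytes_per_key = defaultdict(int)

    def process(self, key, value):
        self.bytes_per_key[key] += value

class PartitionedQuery:
    """Routes each tuple to a subquery instance by hashing its group-by key,
    so state is split across independent sets of nodes."""
    def __init__(self, num_instances):
        self.instances = [SubqueryInstance() for _ in range(num_instances)]

    def route(self, key):
        # Content-aware routing: the same key always reaches the same instance.
        return self.instances[crc32(key.encode()) % len(self.instances)]

    def process(self, key, value):
        self.route(key).process(key, value)

query = PartitionedQuery(num_instances=4)
for ip, size in [("10.0.0.1", 40), ("10.0.0.2", 1500), ("10.0.0.1", 60)]:
    query.process(ip, size)
print([dict(inst.bytes_per_key) for inst in query.instances])
```

Elasticity then amounts to changing the number of instances and redistributing the key space, which is where load balancing and low-intrusiveness protocols come into play.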

Relevance: 60.00%

Abstract:

This thesis presents a novel framework for the analysis and optimization of the encoding and decoding delay for multiview video. The objective of this framework is to provide a systematic methodology for the analysis of the delay in multiview encoders and decoders, and useful tools for the design of multiview encoders/decoders for applications with low delay requirements. The proposed framework first characterizes the elements that influence the delay performance: i) the multiview prediction structure, ii) the hardware model of the encoder/decoder and iii) the frame processing times. Secondly, it provides algorithms for the computation of the encoding/decoding delay of any arbitrary multiview prediction structure. The core of this framework consists of a methodology for the analysis of the multiview encoding/decoding delay that is independent of the hardware architecture of the encoder/decoder, completed with a set of models that particularize this delay analysis with the characteristics of the hardware architecture of the encoder/decoder. Among these models, those based on graph theory acquire special relevance due to their capacity to decouple the influence of the different elements on the delay performance of the encoder/decoder, by means of an abstraction of its processing capacity. To reveal possible applications of this framework, this thesis presents some examples of its use in design problems that affect multiview encoders and decoders. This application scenario covers the following cases: strategies for the design of prediction structures that take into consideration delay requirements in addition to rate-distortion performance; the design of the number of processors and the analysis of processor speed requirements in multiview encoders/decoders given a target delay; and a comparative analysis of the encoding delay performance of multiview encoders with different processing capabilities and hardware implementations.

Relevance: 60.00%

Abstract:

We present a novel framework for the analysis and optimization of encoding latency for multiview video. Firstly, we characterize the elements that have an influence on the encoding latency performance: (i) the multiview prediction structure and (ii) the hardware encoder model. Then, we provide algorithms to find the encoding latency of any arbitrary multiview prediction structure. The proposed framework relies on the directed acyclic graph encoder latency (DAGEL) model, which provides an abstraction of the processing capacity of the encoder by considering an unbounded number of processors. Using graph-theoretic algorithms, the DAGEL model allows us to compute the encoding latency of a given prediction structure and to determine the contribution of the prediction dependencies to it. As an example of a DAGEL application, we propose an algorithm to reduce the encoding latency of a given multiview prediction structure to a target value. In our approach, a minimum number of frame dependencies are pruned until the latency target value is achieved, thus minimizing the degradation of the rate-distortion performance due to the removal of the prediction dependencies. Finally, we analyze the latency performance of the DAGEL-derived prediction structures in multiview encoders with limited processing capacity.
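
The pruning idea can be sketched as follows: repeatedly remove the prediction dependency that constrains the start time of the worst-latency frame until a target latency is met. The toy structure and timings are illustrative, and this greedy critical-path heuristic is only a stand-in for the actual DAGEL algorithm, which selects a minimum number of dependencies while also weighing the rate-distortion penalty.

```python
# Hedged sketch: greedy pruning of prediction dependencies until a target
# encoding latency is reached, assuming unlimited processing capacity.
def finish_times(deps, capture, proc):
    """Earliest completion time of each frame: a frame starts after its
    capture instant and after all of its reference frames are encoded."""
    memo = {}
    def finish(f):
        if f not in memo:
            start = max([capture[f]] + [finish(r) for r in deps[f]])
            memo[f] = start + proc[f]
        return memo[f]
    return {f: finish(f) for f in deps}

def prune_to_target(deps, capture, proc, target):
    deps = {f: list(r) for f, r in deps.items()}  # work on a copy
    while True:
        fin = finish_times(deps, capture, proc)
        latency, worst = max((fin[f] - capture[f], f) for f in deps)
        if latency <= target:
            return deps
        # References that delay the worst frame beyond its own capture time.
        blocking = [r for r in deps[worst] if fin[r] > capture[worst]]
        if not blocking:
            return deps  # latency limited by the frame's own processing time
        deps[worst].remove(max(blocking, key=lambda r: fin[r]))

# Toy two-view, two-instant structure (illustrative values, in milliseconds).
deps = {"V0/T0": [], "V1/T0": ["V0/T0"], "V0/T1": ["V0/T0"],
        "V1/T1": ["V1/T0", "V0/T1"]}
capture = {"V0/T0": 0, "V1/T0": 0, "V0/T1": 33, "V1/T1": 33}
proc = {f: 10 for f in deps}
print(prune_to_target(deps, capture, proc, target=15))
```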

Relevance: 60.00%

Abstract:

Unfavorable environmental and developmental conditions may cause disturbances in protein folding in the endoplasmic reticulum (ER) that are recognized and counteracted by components of the Unfolded Protein Response (UPR) signaling pathways. The early cellular responses include transcriptional changes to increase the folding and processing capacity of the ER. In this study, we systematically screened a collection of inducible transgenic Arabidopsis plants expressing a library of transcription factors for resistance toward UPR-inducing chemicals. We identified 23 candidate genes that may function as novel regulators of the UPR, of which only three genes (bZIP10, TBF1, and NF-YB3) were previously associated with the UPR. The putative role of the identified candidate genes in UPR signaling is supported by favorable expression patterns in both developmental and stress transcriptional analyses. We demonstrated that WRKY75 is a genuine regulator of the cellular ER-stress responses, as its expression was found to respond directly to ER stress-inducing chemicals. In addition, transgenic Arabidopsis plants expressing WRKY75 showed resistance toward salt stress, connecting abiotic and ER-stress responses.

Relevance: 60.00%

Abstract:

Endoproteolytic processing of the human protein C (HPC) precursor to its mature form involves cleavage of the propeptide after amino acids Lys(-2)-Arg(-1) and removal of a Lys156-Arg157 dipeptide connecting the light and heavy chains. This processing was inefficient in the mammary gland of transgenic mice and pigs. We hypothesized that the protein processing capacity of specific animal organs may be improved by the coexpression of selected processing enzymes. We tested this by targeting expression of the human proprotein processing enzyme, named paired basic amino acid cleaving enzyme (PACE)/furin, or an enzymatically inactive mutant, PACEM, to the mouse mammary gland. In contrast to mice expressing HPC alone, or to HPC/PACEM bigenic mice, coexpression of PACE with HPC resulted in efficient conversion of the precursor to mature protein, with cleavage at the appropriate sites. These results suggest the involvement of PACE in the processing of HPC in vivo and represent an example of the engineering of animal organs into bioreactors with enhanced protein processing capacity.