31 results for Scalable Intelligence

at Universidad Politécnica de Madrid


Relevance:

20.00%

Publisher:

Abstract:

The progressive depletion of fossil fuels, and their high share of the energy supply of modern society, means that they will soon have to be replaced by renewable fuels. The dispersed and intermittent nature of renewable energy production, however, requires reducing the cost of hydrogen as an energy storage medium and energy carrier. It is therefore necessary to develop technologies for hydrogen production from all renewable sources, hydrogen storage technologies, and technologies for recovering the energy stored in hydrogen through fuel cells and cogeneration and trigeneration systems. To drive this technological development, in which hydrogen plays a key role as storage for renewable energy, the National Centre of Hydrogen and Fuel Cell Technology Experimentation in Spain is equipped with installations that enable the scientific and technological design, development, verification, certification, approval, testing and measurement of these systems; more importantly, the facility ensures continuous operation 24 hours a day, 365 days a year. At the same time, the system is scalable, allowing new technologies to be continuously adapted and incorporated into the assembly as they are developed, so that their integration can be verified at the same time as the validity of their development is checked. The transformation sector can be said to be the heart of the system: without neglecting the other sectors, it must prove the validity of hydrogen as an energy carrier and storage medium, and significant effort is required to demonstrate that fuel cells or internal combustion systems can recover the energy stored in hydrogen at prices competitive with conventional systems. The multiple roles that fuel cells must fulfil under different operating conditions require many different sizes and applications to cover their operating range. The fourth area focuses on integration, an essential complement within the installation. Not only must the electricity produced be integrated, but also the hydrogen used and the heat generated in the process of using hydrogen energy. Managing energy in its three forms (chemical energy in hydrogen, electrical and thermal) requires complex integration, with control logic and artificial intelligence to ensure maximum energy efficiency while achieving optimum utilization. Verification and approval of developments across the entire production system and, ultimately, the facility's role as a demonstrator that facilitates the simultaneous evolution of hydrogen production, storage and distribution technologies and of fuel cells, has been assessed.

Relevance:

20.00%

Publisher:

Abstract:

A highly parallel and scalable Deblocking Filter (DF) hardware architecture for the H.264/AVC and SVC video codecs is presented in this paper. The proposed architecture consists mainly of a coarse-grain systolic array obtained by replicating a single, homogeneous Functional Unit (FU), each of which implements a complete Deblocking Filter. The proposal is also based on a novel macroblock-level parallelization strategy for the filtering algorithm, which improves the final performance by exploiting specific data dependences. In this way, communication overhead is reduced and more intensive parallelism is obtained in comparison with existing state-of-the-art solutions. Furthermore, the architecture is completely flexible, since the level of parallelism can be changed according to the application requirements. The design has been implemented on a Virtex-5 FPGA, where it filters 4CIF (704 × 576 pixels @ 30 fps) video sequences in real time at clock frequencies below 10.16 MHz.
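
The macroblock-level parallelization strategy itself is not spelled out in the abstract; as a hedged illustration, the Python sketch below schedules MBs along anti-diagonal wavefronts, assuming the usual left/top filtering dependences (the dependence pattern and function names are assumptions, not taken from the paper).

    # Wavefront schedule for macroblock (MB) deblocking: assuming an MB can
    # be filtered once its left and top neighbours are done, all MBs on the
    # same anti-diagonal are independent and can run on parallel FUs.
    def wavefront_schedule(mb_cols, mb_rows, num_fus):
        waves = {}
        for y in range(mb_rows):
            for x in range(mb_cols):
                waves.setdefault(x + y, []).append((x, y))
        schedule = []
        for w in sorted(waves):                      # diagonals in order
            for i in range(0, len(waves[w]), num_fus):
                schedule.append(waves[w][i:i + num_fus])
        return schedule

    # A 4CIF frame is 44 x 36 macroblocks; assume 4 functional units.
    schedule = wavefront_schedule(44, 36, 4)
    print(len(schedule), "sequential steps; first batch:", schedule[0])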

Relevance:

20.00%

Publisher:

Abstract:

Systems relying on fixed hardware components with a static level of parallelism can suffer from underuse of logic resources, since they have to be designed for the worst-case scenario. This problem is especially important in video applications due to the emergence of new flexible standards, such as Scalable Video Coding (SVC), which offer several levels of scalability. In this paper, Dynamic and Partial Reconfiguration (DPR) of modern FPGAs is used to achieve run-time variable parallelism, by means of scalable architectures whose size can be adapted at run-time. Based on this proposal, a scalable Deblocking Filter (DF) core, compliant with the H.264/AVC and SVC standards, has been designed. This scalable DF allows run-time addition or removal of computational units working in parallel. Scalability is offered together with a scalable parallelization strategy at the macroblock (MB) level, such that when the size of the architecture changes, the MB filtering order is modified accordingly.
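
As a rough model of the idea (not the paper's DPR controller), the following sketch resizes a hypothetical array of functional units at run time and rebuilds the MB schedule for the new size; the class name ScalableDF and the wavefront order are illustrative assumptions.

    # Hypothetical model of a DPR-style scalable filter: units can be added
    # or removed at run time, and the MB filtering order is rebuilt to match.
    class ScalableDF:
        def __init__(self, mb_cols, mb_rows, num_fus=1):
            self.mb_cols, self.mb_rows, self.num_fus = mb_cols, mb_rows, num_fus

        def resize(self, num_fus):
            # In hardware this step would trigger partial reconfiguration of
            # the FPGA region holding the units; here we only record the size.
            self.num_fus = max(1, num_fus)

        def schedule(self):
            # Anti-diagonal wavefront: MBs with equal x + y have no left/top
            # dependence on each other, so each batch can run in parallel.
            waves = {}
            for y in range(self.mb_rows):
                for x in range(self.mb_cols):
                    waves.setdefault(x + y, []).append((x, y))
            batches = []
            for w in sorted(waves):
                for i in range(0, len(waves[w]), self.num_fus):
                    batches.append(waves[w][i:i + self.num_fus])
            return batches

    df = ScalableDF(44, 36, num_fus=2)
    steps_small = len(df.schedule())
    df.resize(6)                        # more units instantiated via DPR
    steps_large = len(df.schedule())    # fewer sequential steps, new MB order
    print(steps_small, "->", steps_large)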

Relevance:

20.00%

Publisher:

Abstract:

In this paper we present a scalable software architecture for on-line multi-camera video processing that guarantees a good trade-off between computational power, scalability and flexibility. The software system is modular, and its main blocks are the Processing Units (PUs) and the Central Unit. The Central Unit works as a supervisor of the running PUs, while each PU manages the acquisition phase and the processing phase. Furthermore, an approach to easily parallelize the desired processing application is presented. As a case study, we apply the proposed software architecture to a multi-camera system in order to efficiently manage multiple 2D object-detection modules in a real-time scenario. System performance has been evaluated under different load conditions, such as the number of cameras and the image sizes. The results show that the software architecture scales well with the number of cameras and works easily with different image formats while respecting the real-time constraints. Moreover, the parallelization approach can be used to speed up the processing tasks with a low level of overhead.
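
The abstract gives no implementation details; purely as an illustration of the supervisor/worker split, here is a minimal Python sketch in which a Central Unit process dispatches frames to per-camera Processing Units (the names and the queue-based protocol are hypothetical, not the authors' design).

    import multiprocessing as mp

    def processing_unit(cam_id, frames, results):
        # One PU per camera: acquisition phase, then processing phase.
        for frame in iter(frames.get, None):        # None is the stop signal
            detections = [("object", frame)]        # stand-in for 2D detection
            results.put((cam_id, detections))

    if __name__ == "__main__":
        results = mp.Queue()
        queues, pus = [], []
        for cam_id in range(4):                     # four simulated cameras
            q = mp.Queue()
            p = mp.Process(target=processing_unit, args=(cam_id, q, results))
            p.start()
            queues.append(q)
            pus.append(p)

        # Central Unit: supervise the PUs, dispatch frames, gather results.
        for frame_no in range(10):
            for q in queues:
                q.put(frame_no)
        for _ in range(10 * len(queues)):
            cam_id, detections = results.get()

        for q in queues:
            q.put(None)                             # shut the PUs down
        for p in pus:
            p.join()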

Relevance:

20.00%

Publisher:

Abstract:

In this paper we investigate whether conventional text categorization methods suffice to infer different verbal intelligence levels. This research goal relies on the hypothesis that the vocabulary speakers use reflects their verbal intelligence. Automatic estimation of a user's verbal intelligence in a spoken language dialog system may be useful for defining an optimal dialog strategy by improving the system's adaptation capabilities. The work is based on a corpus containing descriptions (i.e. monologs) of a short film by test persons with different educational backgrounds, together with the speakers' verbal intelligence scores. First, a one-way analysis of variance was performed to compare the monologs with the film transcription and to demonstrate that there are differences in the vocabulary used by test persons with different verbal intelligence levels. Then, for the classification task, the monologs were represented as feature vectors using the classical TF-IDF weighting scheme. The Naive Bayes, k-nearest neighbors and Rocchio classifiers were tested. We describe and compare these classification approaches, define the optimal classification parameters and discuss the classification results obtained.
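
As a hedged sketch of this setup (not the authors' exact configuration or data), the three classifier families named above can be combined with TF-IDF features, for example with scikit-learn, where NearestCentroid plays the role of a Rocchio-style classifier; the toy monologs and labels are invented for illustration.

    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.naive_bayes import MultinomialNB
    from sklearn.neighbors import KNeighborsClassifier, NearestCentroid
    from sklearn.pipeline import make_pipeline

    # Four toy monologs standing in for the film descriptions.
    monologs = [
        "the professor examined the box and explained the puzzle",
        "there was a box and a man and they talked",
        "the protagonist deciphers the mechanism hidden in the box",
        "a man looks at a box and then it ends",
    ]
    labels = ["higher", "lower", "higher", "lower"]  # intelligence groups

    # NearestCentroid is a Rocchio-style (nearest-prototype) classifier.
    for clf in (MultinomialNB(), KNeighborsClassifier(n_neighbors=3),
                NearestCentroid()):
        model = make_pipeline(TfidfVectorizer(), clf)
        model.fit(monologs, labels)
        print(type(clf).__name__, model.predict(["a professor opens a box"]))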

Relevance:

20.00%

Publisher:

Abstract:

One of the main challenges facing next-generation Cloud platform services is the need to simultaneously achieve ease of programming, consistency, and high scalability. Big Data applications have so far focused on batch processing. The next step for Big Data is to move to the online world. This shift will raise the requirements for transactional guarantees. CumuloNimbo is a new EC-funded project led by Universidad Politécnica de Madrid (UPM) that addresses these issues via a highly scalable multi-tier transactional platform as a service (PaaS) that bridges the gap between OLTP and Big Data applications.

Relevance:

20.00%

Publisher:

Abstract:

Many applications in domains such as telecommunications, network security and large-scale sensor networks require online processing of continuous data flows. These applications produce very high loads that require aggregating the processing capacity of many nodes. Current Stream Processing Engines do not scale with the input load due to single-node bottlenecks. Additionally, they are based on static configurations that lead to either under- or over-provisioning. In this paper, we present StreamCloud, a scalable and elastic stream processing engine for processing large data stream volumes. StreamCloud uses a novel parallelization technique that splits queries into subqueries that are allocated to independent sets of nodes in a way that minimizes the distribution overhead. Its elastic protocols exhibit low intrusiveness, enabling effective adjustment of resources to the incoming load. Elasticity is combined with dynamic load balancing to minimize the computational resources used. The paper presents the system design, implementation and a thorough evaluation of the scalability and elasticity of the fully implemented system.
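
StreamCloud's partitioning logic is not reproduced here; the toy sketch below only illustrates the general idea of splitting a stateful subquery across nodes by hashing on the key, so that each node keeps disjoint state (the names and details are assumptions).

    from collections import defaultdict

    # A per-key aggregation subquery split across nodes by hashing the key:
    # nodes keep disjoint state, so the only distribution cost is routing.
    NUM_NODES = 4
    state = [defaultdict(int) for _ in range(NUM_NODES)]  # one store per node

    def process(key, value):
        node = hash(key) % NUM_NODES    # which node owns this key
        state[node][key] += value       # local, unshared aggregation state

    for key, value in [("flow-a", 10), ("flow-b", 5), ("flow-a", 7)]:
        process(key, value)
    print([dict(s) for s in state if s])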

Relevance:

20.00%

Publisher:

Abstract:

Aiming to address requirements concerning the integration of services in the context of "big data", this paper presents an innovative approach that (i) ensures a flexible, adaptable and scalable information and computation infrastructure, and (ii) exploits the competences of stakeholders and information workers to meaningfully confront information management issues such as information characterization, classification and interpretation, thus incorporating the underlying collective intelligence. Our approach pays particular attention to the issues of usability and ease of use, requiring no particular programming expertise from the end users. We report on a series of technical issues concerning the desired flexibility of the proposed integration framework and provide related recommendations to developers of such solutions. Evaluation results are also discussed.

Relevance:

20.00%

Publisher:

Abstract:

In professional video production, users have to access huge multimedia files simultaneously in an error-free environment, and this restriction forces the use of expensive disk architectures for video servers. Previous research proposed different RAID systems for each specific task (ingest, editing, file, play-out, etc.), so video production companies have to acquire different servers with different RAID systems to support each task in the production workflow. This solution has multiple disadvantages: material duplicated across several RAIDs, material duplicated for different qualities, transfer and transcoding processes, etc. In this work, an architecture for video servers based on spreading JPEG2000 data across different RAIDs is presented. Each individual part of the data structure goes to a specific RAID type depending on the effect that the data has on the overall image quality, so the method provides redundancy correlated with the rank of the data. The global storage can be used in all the tasks of the production workflow, saving disk space, redundant files and transfer procedures.
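
As a hedged illustration of the tiering idea (the actual layer-to-RAID mapping is the paper's and is not shown here), the sketch below routes JPEG2000 quality layers to RAID types according to their impact on image quality; the mapping values are invented.

    # Invented layer-to-RAID mapping for illustration: the most significant
    # JPEG2000 quality layers get the most redundant (and expensive) arrays.
    RAID_FOR_RANK = {
        0: "RAID-6",   # base layer: must survive two disk failures
        1: "RAID-5",   # middle refinement layers
        2: "RAID-0",   # top refinement: losing it only degrades quality
    }

    def place(num_layers):
        # Map each quality layer (0 = most significant) to a RAID tier.
        top = max(RAID_FOR_RANK)
        return {layer: RAID_FOR_RANK[min(layer, top)]
                for layer in range(num_layers)}

    print(place(5))   # layers 2..4 all land on the cheap RAID-0 tier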

Relevance:

20.00%

Publisher:

Abstract:

Here, gray and white matter changes after four weeks of videogame practice were analyzed using optimized voxel-based morphometry (VBM), cortical surface and cortical thickness indices, and white matter integrity computed from several projection, commissural, and association tracts relevant to cognition. Beginning with a sample of one hundred young females, twenty right-handed participants were recruited for the study and assigned to a practice group or a control group carefully matched by their general cognitive ability scores. After the first scan, the practice group played 'Professor Layton and The Pandora's Box' four hours per week for four weeks. A second scan was obtained at the end of practice, and intelligence was measured again. Image analyses revealed gray and white matter changes in the practice group. Gray matter changes theoretically relevant to intelligence were observed for the practice group mainly in frontal clusters (Brodmann areas 9 and 10) and also in smaller parietal and temporal regions. White matter findings were focused on the hippocampal cingulum and the inferior longitudinal fasciculus. These gray and white matter changes, presumably induced by practice, did not interact with intelligence test scores.

Relevance:

20.00%

Publisher:

Abstract:

This work investigates to what degree speakers with different verbal intelligence adapt to each other. The work is based on a corpus consisting of 100 descriptions of a short film (monologues), 56 discussions about the same topic (dialogues), and the verbal intelligence scores of the test participants. Adaptation between two dialogue partners was measured using cross-referencing, the proportion of "I", "You" and "We" words, between-subject correlation, and the similarity of the texts. It was shown that lower verbal intelligence speakers repeated more nouns and adjectives from their partner and used the same linguistic categories more often than higher verbal intelligence speakers. In dialogues between strangers, participants with higher verbal intelligence showed a greater level of adaptation.
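
As a toy illustration of two of these measures (the paper's exact definitions are not reproduced), the sketch below computes the proportion of "I"/"You"/"We" words in one speaker's turns and the number of word occurrences two speakers share.

    import re
    from collections import Counter

    def pronoun_proportion(text):
        # Share of "I"/"you"/"we" words among all words of one speaker.
        words = re.findall(r"[a-z']+", text.lower())
        return sum(w in {"i", "you", "we"} for w in words) / max(len(words), 1)

    def repeated_words(speaker_a, speaker_b):
        # How many word occurrences the two speakers have in common.
        a = Counter(re.findall(r"[a-z']+", speaker_a.lower()))
        b = Counter(re.findall(r"[a-z']+", speaker_b.lower()))
        return sum((a & b).values())

    print(pronoun_proportion("I think we saw the same scene as you did"))
    print(repeated_words("the boy opened the box", "the box was empty"))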

Relevance:

20.00%

Publisher:

Abstract:

In this paper we investigated differences in the language use of speakers with different verbal intelligence when they describe the same event. The work is based on a corpus containing descriptions of a short film and the verbal intelligence scores of the speakers. For analyzing the monologues and the film transcript, the number of reused words, lemmas and n-grams, the cosine similarity and other features were calculated and compared across the verbal intelligence groups. The results showed that the monologues of higher verbal intelligence speakers were more similar to one another than those of lower and average verbal intelligence participants. A possible explanation of this phenomenon is that candidates with higher verbal intelligence have a better short-term memory. We also tested the hypothesis that differences in the vocabulary of speakers with different verbal intelligence are sufficient for good classification results. To test this hypothesis, a Nearest Neighbor classifier was trained using TF-IDF vocabulary measures. The maximum achieved accuracy was 92.86%.
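
A minimal sketch of the similarity measure, assuming a TF-IDF representation as in the abstract (the two example texts and the preprocessing are invented for illustration).

    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.metrics.pairwise import cosine_similarity

    docs = ["the professor opens the mysterious box",
            "a box is opened by the professor in the film"]
    tfidf = TfidfVectorizer().fit_transform(docs)
    # Cosine similarity of the two TF-IDF vectors: 0 (disjoint) .. 1 (identical).
    print(cosine_similarity(tfidf[0], tfidf[1])[0, 0])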

Relevance:

20.00%

Publisher:

Abstract:

In this work we investigated whether there is a relationship between the dominant behaviour of dialogue participants and their verbal intelligence. The analysis is based on a corpus containing 56 dialogues and the verbal intelligence scores of the test persons. The dialogues were divided into three groups: H-H, dialogues between higher verbal intelligence participants; L-L, dialogues between lower verbal intelligence participants; and L-H, all the other dialogues. The dominance scores of the dialogue partners from each group were analysed. The analysis showed that, for the L-L group, differences in dominance scores were positively correlated with differences in verbal intelligence coefficients. The verbal intelligence scores of the test persons were also compared with other features that may reflect dominant behaviour. This analysis showed that the number of interruptions, long utterances, the number of times the floor was grabbed, the influence diffusion model, the number of agreements and several acoustic features may be related to verbal intelligence. These features were used for the automatic classification of the dialogue partners into two groups (lower and higher verbal intelligence participants); the achieved accuracy was 89.36%.
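
Purely for illustration, a per-speaker feature vector built from counts like those listed above could feed a standard classifier; the values and the choice of logistic regression below are stand-ins, not the paper's method.

    from sklearn.linear_model import LogisticRegression

    # Invented per-speaker feature vectors, following the feature list above:
    # [interruptions, long utterances, floor grabs, agreements].
    features = [
        [12, 4, 9, 2],
        [3, 11, 4, 7],
        [10, 5, 8, 3],
        [4, 9, 5, 6],
    ]
    labels = ["lower", "higher", "lower", "higher"]
    clf = LogisticRegression().fit(features, labels)
    print(clf.predict([[5, 8, 6, 5]]))    # -> "lower" or "higher"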

Relevance:

20.00%

Publisher:

Abstract:

The growth of the Internet has increased the need for scalable congestion control mechanisms in high-speed networks. In this context, we propose a rate-based explicit congestion control mechanism in which the sources are provided with the rate at which they can transmit. These rates are computed with a distributed max-min fair algorithm, SLBN. The novelty of SLBN is that it combines two desirable features not simultaneously present in existing proposals: scalability and fast convergence to the max-min fair rates, even under high session churn. SLBN is scalable because routers maintain only a constant amount of state information (three integer variables per link) and incur only a constant amount of computation per protocol packet, independently of the number of sessions that cross the router. Additionally, SLBN does not require processing any data packet, and it converges independently of the sessions' RTTs. Finally, by design, the protocol is conservative when assigning rates, even in the presence of high churn, which helps prevent link overshoots in transient periods. We claim that, with all these features, our mechanism is a good candidate for real deployments.
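
SLBN's distributed, constant-state computation is not shown here; for reference, the sketch below is the classic centralized water-filling procedure that defines the max-min fair rates such a protocol converges to (a single link with per-session demands, names invented).

    # Water-filling computation of max-min fair rates on one link.
    def max_min_fair(capacity, demands):
        rates, remaining, cap = {}, dict(demands), capacity
        while remaining:
            share = cap / len(remaining)
            # Sessions demanding less than the fair share are capped first.
            small = {s: d for s, d in remaining.items() if d <= share}
            if not small:
                for s in remaining:
                    rates[s] = share    # everyone left gets an equal share
                return rates
            for s, d in small.items():
                rates[s] = d
                cap -= d
                del remaining[s]
        return rates

    print(max_min_fair(10.0, {"A": 2.0, "B": 6.0, "C": 6.0}))
    # -> A gets its 2.0; B and C split the remaining 8.0: 4.0 each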

Relevance:

20.00%

Publisher:

Abstract:

In recent decades we have seen the rapid development of telecommunication networks, reaching every corner of society, whether over cable or wirelessly. These networks, which are ever larger, more dynamic and more complex, integrating a growing number of services and protocols, depend on a central component: routing. Routing determines the strategies that the nodes of a network use to find the optimal routes between a source and a destination when sending information. It is difficult to devise a strategy that adapts to such highly dynamic, complex and heterogeneous environments. The classical algorithms proposed to date are usually centralized algorithms that try to manage a clearly distributed architecture; they can sustain good performance in stationary scenarios, but they do not work well in scenarios with continuous changes in the network topology or the traffic patterns. New algorithms are needed that enable distributed routing and are more adaptable to change, more robust and more scalable. Here we review the proposed algorithms inspired by nature, particularly by the collective behaviours of insect societies. We will see how, in a decentralized and self-organized way, through simple agents and local interactions, a global "intelligent" behaviour with these qualities can be achieved. Finally, we propose Abira, an ACO algorithm based on AntNet-FA that aims to improve performance and convergence by introducing mechanisms for exploration, for negative feedback such as penalization, and for communicating the best routes. After running a simulation and comparing the results with the original algorithm, we see that Abira shows better performance.
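
As a generic illustration of ant-colony routing (not Abira's actual exploration or penalization rules), the sketch below keeps a per-node pheromone table, reinforces the hops of a good route, and lets the other entries evaporate as implicit negative feedback; the topology and constants are invented.

    import random

    # pheromone[node][neighbour]: desirability of that next hop.
    pheromone = {"A": {"B": 1.0, "C": 1.0}, "B": {"D": 1.0}, "C": {"D": 1.0}}

    def choose_next_hop(node, explore=0.1):
        if random.random() < explore:                  # occasional exploration
            return random.choice(list(pheromone[node]))
        return max(pheromone[node], key=pheromone[node].get)

    def reinforce(path, trip_time, evaporation=0.1):
        # Shorter trips deposit more pheromone; everything else evaporates,
        # acting as implicit negative feedback on the worse routes.
        for node, nxt in zip(path, path[1:]):
            for n in pheromone[node]:
                pheromone[node][n] *= (1 - evaporation)
            pheromone[node][nxt] += 1.0 / trip_time

    reinforce(["A", "B", "D"], trip_time=2.0)
    print(choose_next_hop("A"))                        # "B" is now favoured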