611 results for SCIL processor


Relevance:

20.00%

Publisher:

Abstract:

Photoplethysmography (PPG) sensors allow for noninvasive and comfortable heart-rate (HR) monitoring, suitable for compact wearable devices. However, PPG signals collected from such devices often suffer from corruption caused by motion artifacts. This is typically addressed by combining the PPG signal with acceleration measurements from an inertial sensor. Recently, different energy-efficient deep learning approaches for heart-rate estimation have been proposed. To test these new solutions, in this work we developed a highly wearable platform (42 mm x 48 mm x 1.2 mm) for PPG signal acquisition and processing, based on GAP9, a parallel ultra-low-power system-on-chip featuring a nine-core RISC-V compute cluster with a neural-network accelerator and a single-core RISC-V controller. The hardware platform also integrates a complete commercial optical biosensing module and an ARM Cortex-M4 microcontroller unit (MCU) with Bluetooth Low Energy connectivity. To demonstrate the capabilities of the system, a deep learning-based approach for PPG-based HR estimation has been deployed. Thanks to the reduced power consumption of the digital computational platform, the total power budget is just 2.67 mW, providing up to 5 days of operation on a 105 mAh battery.
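
As a rough sanity check of the reported runtime (not from the abstract, which does not state the cell voltage or converter efficiency), the sketch below estimates battery life from the 2.67 mW budget and the 105 mAh battery; the 3.7 V nominal voltage and 85% efficiency are assumptions.

```python
# Rough battery-life estimate for the reported 2.67 mW power budget.
# Nominal cell voltage and conversion efficiency are assumptions,
# not figures taken from the abstract.

def battery_life_days(capacity_mah: float, voltage_v: float,
                      power_mw: float, efficiency: float = 0.85) -> float:
    """Return estimated runtime in days for a given average power draw."""
    usable_energy_mwh = capacity_mah * voltage_v * efficiency
    return usable_energy_mwh / power_mw / 24.0


if __name__ == "__main__":
    # 105 mAh battery, assumed 3.7 V nominal Li-Po cell, 2.67 mW average draw
    days = battery_life_days(105, 3.7, 2.67)
    print(f"Estimated runtime: {days:.1f} days")  # roughly 5 days
```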

Relevance:

10.00%

Publisher:

Abstract:

Objectives: To analyze the effects of low-level laser therapy (LLLT), 670 nm, with doses of 4 and 7 J/cm², on the repair of surgical wounds covered by occlusive dressings. Background Data: The effect of LLLT on the healing process of covered wounds is not well defined. Materials and Methods: For the histologic analysis with HE staining, 50 male Wistar rats were submitted to surgical incisions and divided into 10 groups (n=5): control; stimulated with 4 and 7 J/cm² daily, for 7 and 14 days, with or without occlusion. Reepithelization and the numbers of leukocytes, fibroblasts, and fibrocytes were obtained with an image processor. For the biomechanical analysis, 25 rats were submitted to a surgical incision and divided into five groups (n=5): treated for 14 days with and without occlusive dressing, and the sham group. Samples of the lesions were collected and submitted to the tensile test. One-way analysis of variance was performed, followed by post hoc analysis: a Tukey test was used on the biomechanical data, and the Tamhane test on the histologic data. A significance level of 5% was chosen (p ≤ 0.05). Results: The 4 and 7 J/cm² laser, with and without occlusive dressing, did not significantly alter the reepithelization rate of the wounds. The 7 J/cm² laser significantly reduced the number of leukocytes. The number of fibroblasts was higher in the groups treated with laser for 7 days, and the difference was significant in the covered 4 J/cm² laser group. Conclusions: Greater interference of the laser-treatment procedure was noted with 7 days of stimulation, and the occlusive dressing did not alter its biostimulatory effects.
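
As an illustration of the statistical workflow described above (one-way ANOVA followed by post hoc comparisons at p ≤ 0.05), the sketch below runs an ANOVA and a Tukey HSD test on hypothetical tensile-strength values; the group names and numbers are placeholders, not the study's data, and the Tamhane test is omitted.

```python
# Hypothetical illustration of the analysis described above:
# one-way ANOVA followed by a Tukey HSD post hoc test (alpha = 0.05).
import numpy as np
from scipy import stats
from statsmodels.stats.multicomp import pairwise_tukeyhsd

rng = np.random.default_rng(0)
groups = {
    "sham": rng.normal(10.0, 1.0, 5),             # placeholder tensile-test values
    "laser_4J_covered": rng.normal(11.0, 1.0, 5),
    "laser_7J_covered": rng.normal(12.0, 1.0, 5),
}

# Omnibus test: is there any difference among the group means?
f_stat, p_value = stats.f_oneway(*groups.values())
print(f"ANOVA: F = {f_stat:.2f}, p = {p_value:.3f}")

# Post hoc pairwise comparisons (Tukey HSD), applied when the ANOVA is significant
values = np.concatenate(list(groups.values()))
labels = np.repeat(list(groups.keys()), [len(v) for v in groups.values()])
print(pairwise_tukeyhsd(values, labels, alpha=0.05))
```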

Relevance:

10.00%

Publisher:

Abstract:

In Natural Language Processing (NLP) symbolic systems, several linguistic phenomena, for instance the thematic role relationships between sentence constituents, such as AGENT, PATIENT, and LOCATION, can be accounted for by employing a rule-based grammar. Another approach to NLP concerns the use of the connectionist model, which has the benefits of learning, generalization and fault tolerance, among others. A third option merges the two previous approaches into a hybrid one: a symbolic thematic theory is used to supply the connectionist network with initial knowledge. Inspired by neuroscience, a symbolic-connectionist hybrid system called BIO theta PRED (BIOlogically plausible thematic (theta) symbolic-connectionist PREDictor) is proposed, designed to reveal the thematic grid assigned to a sentence. Its connectionist architecture comprises, as input, a featural representation of the words (based on the verb/noun WordNet classification and on the classical semantic microfeature representation) and, as output, the thematic grid assigned to the sentence. BIO theta PRED is designed to "predict" the thematic (semantic) roles assigned to the words in a sentence context, employing a biologically inspired training algorithm and architecture and adopting a psycholinguistic view of thematic theory.
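
The sketch below is only a toy illustration of the input/output mapping described above (semantic microfeature vectors in, thematic-role labels out); it is not the BIO theta PRED architecture, and the microfeatures, examples and classifier are assumptions made for the example.

```python
# Toy classifier mapping hand-crafted semantic microfeature vectors for
# sentence constituents to thematic-role labels. Features, examples and the
# single-layer softmax model are illustrative placeholders.
import numpy as np

ROLES = ["AGENT", "PATIENT", "LOCATION"]
# Assumed microfeatures: [animate, concrete, place, affected]
X = np.array([
    [1, 1, 0, 0],   # "the boy"   -> AGENT
    [1, 1, 0, 1],   # "the dog"   -> PATIENT (affected entity)
    [0, 1, 1, 0],   # "the park"  -> LOCATION
    [1, 1, 0, 0],   # "the woman" -> AGENT
    [0, 1, 0, 1],   # "the ball"  -> PATIENT
    [0, 1, 1, 0],   # "the house" -> LOCATION
], dtype=float)
y = np.array([0, 1, 2, 0, 1, 2])

# Single-layer softmax classifier trained by gradient descent
rng = np.random.default_rng(42)
W = rng.normal(0, 0.1, (X.shape[1], len(ROLES)))
b = np.zeros(len(ROLES))
onehot = np.eye(len(ROLES))[y]
for _ in range(500):
    logits = X @ W + b
    probs = np.exp(logits) / np.exp(logits).sum(axis=1, keepdims=True)
    W -= 0.5 * (X.T @ (probs - onehot)) / len(X)
    b -= 0.5 * (probs - onehot).mean(axis=0)

test = np.array([[1, 1, 0, 0]])            # an animate, concrete, unaffected noun
pred = ROLES[int(np.argmax(test @ W + b))]
print("Predicted thematic role:", pred)    # expected: AGENT
```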

Relevance:

10.00%

Publisher:

Abstract:

This paper presents a compact embedded fuzzy system for three-phase induction-motor scalar speed control. The control strategy consists of keeping the voltage/frequency ratio of the induction-motor supply source constant. A fuzzy-control system is built on a digital signal processor, which uses the speed error and the speed-error variation to change both the fundamental voltage amplitude and the frequency of a sinusoidal pulsewidth-modulation inverter. An alternative optimized method for embedded fuzzy-system design is also proposed. The controller performance, with respect to reference and load-torque variations, is evaluated by experimental results. A comparative analysis with a conventional proportional-integral controller is also presented.
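
A minimal sketch of the control idea described above: the speed error and its variation drive a fuzzy rule table that adjusts the inverter frequency, and the voltage amplitude follows to keep V/f constant. The membership functions, rule table and numeric values are assumptions for illustration, not the paper's optimized embedded design.

```python
# Minimal sketch of a fuzzy scalar (V/f) speed controller. Membership shapes,
# rule table and gains are illustrative assumptions.

def tri(x, a, b, c):
    """Triangular membership function with vertices (a, b, c)."""
    return max(min((x - a) / (b - a + 1e-9), (c - x) / (c - b + 1e-9)), 0.0)

def fuzzify(x):
    """Degrees of membership in Negative, Zero, Positive (normalized input)."""
    return {"N": tri(x, -2, -1, 0), "Z": tri(x, -1, 0, 1), "P": tri(x, 0, 1, 2)}

# Rule table: (error, error variation) -> frequency increment in Hz (singletons)
RULES = {("N", "N"): -2.0, ("N", "Z"): -1.0, ("N", "P"): 0.0,
         ("Z", "N"): -1.0, ("Z", "Z"):  0.0, ("Z", "P"): 1.0,
         ("P", "N"):  0.0, ("P", "Z"):  1.0, ("P", "P"): 2.0}

def fuzzy_step(error, d_error):
    """Weighted-average (Sugeno-style) defuzzification of the rule table."""
    e_mu, de_mu = fuzzify(error), fuzzify(d_error)
    num = sum(min(e_mu[a], de_mu[b]) * df for (a, b), df in RULES.items())
    den = sum(min(e_mu[a], de_mu[b]) for (a, b) in RULES)
    return num / (den + 1e-9)

V_PER_HZ = 220.0 / 60.0      # constant V/f ratio (assumed 220 V at 60 Hz)
f = 30.0                     # current inverter frequency command (Hz)
error, d_error = 0.8, 0.2    # normalized speed error and its variation
f += fuzzy_step(error, d_error)
print(f"f = {f:.2f} Hz, V = {V_PER_HZ * f:.1f} V")  # amplitude tracks frequency
```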

Relevance:

10.00%

Publisher:

Abstract:

In this and a preceding paper, we provide an introduction to the Fujitsu VPP range of vector-parallel supercomputers and to some of the computational chemistry software available for the VPP. Here, we consider the implementation and performance of seven popular chemistry application packages. The codes discussed range from classical molecular dynamics to semiempirical and ab initio quantum chemistry. All have evolved from sequential codes, and have typically been parallelised using a replicated data approach. As such they are well suited to the large-memory/fast-processor architecture of the VPP. For one code, CASTEP, a distributed-memory data-driven parallelisation scheme is presented.
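
A generic sketch of the replicated-data parallelisation strategy mentioned above (not taken from any of the chemistry codes discussed): every process holds a full copy of the data, computes its own slice of the work, and partial results are combined with a global reduction. The file name and workload are assumptions.

```python
# Generic replicated-data parallelisation sketch using mpi4py.
# Run with e.g.:  mpiexec -n 4 python replicated_data.py
import numpy as np
from mpi4py import MPI

comm = MPI.COMM_WORLD
rank, size = comm.Get_rank(), comm.Get_size()

# Replicated data: every process builds (or reads) the same full array.
data = np.arange(1_000_000, dtype=np.float64)

# Each rank processes only its strided slice of the replicated data...
partial = np.sum(np.sqrt(data[rank::size]))

# ...and the partial sums are combined on every rank.
total = comm.allreduce(partial, op=MPI.SUM)
if rank == 0:
    print(f"global result: {total:.3f}")
```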

Relevance:

10.00%

Publisher:

Abstract:

Wills' Mineral Processing Technology provides practising engineers and students of mineral processing, metallurgy and mining with a review of all of the common ore-processing techniques utilized in modern processing installations. Now in its Seventh Edition, this renowned book is a standard reference for the mineral processing industry. Chapters deal with each of the major processing techniques, and coverage includes the latest technical developments in the processing of increasingly complex refractory ores, new equipment and process routes. This new edition has been prepared by the prestigious J K Minerals Research Centre of Australia, which contributes its world-class expertise and ensures that this will continue to be the book of choice for professionals and students in this field. This latest edition highlights the developments and the challenges facing the mineral processor, particularly with regard to the environmental problems posed in improving the efficiency of the existing processes and also in dealing with the waste created. The work is fully indexed and referenced.

Relevance:

10.00%

Publisher:

Abstract:

The QU-GENE Computing Cluster (QCC) is a hardware and software solution for automating and speeding up large QU-GENE (QUantitative GENEtics) simulation experiments that are designed to examine the properties of genetic models, particularly those involving factorial combinations of treatment levels. QCC automates the management of the distribution of the components of the simulation experiments among networked single-processor computers to achieve this speedup.
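
The sketch below illustrates only the general idea behind QCC: enumerate the factorial combinations of treatment levels and farm them out to workers. A local process pool stands in for the networked single-processor computers, and the factors, levels and "simulation" are hypothetical placeholders, not QU-GENE components.

```python
# Illustrative distribution of a factorial simulation experiment to workers.
from itertools import product
from multiprocessing import Pool

FACTORS = {                        # hypothetical treatment factors and levels
    "heritability": [0.2, 0.5, 0.8],
    "population_size": [100, 500],
    "selection_intensity": [0.05, 0.10],
}

def run_simulation(treatment):
    """Placeholder for one simulation run on one treatment combination."""
    params = dict(zip(FACTORS, treatment))
    score = (params["heritability"] * params["population_size"]
             * params["selection_intensity"])
    return params, score

if __name__ == "__main__":
    combinations = list(product(*FACTORS.values()))   # full factorial design
    with Pool(processes=4) as pool:                   # one worker per "machine"
        for params, score in pool.map(run_simulation, combinations):
            print(params, "->", round(score, 2))
```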

Relevance:

10.00%

Publisher:

Abstract:

Implementing monolithic DC-DC converters for low-power portable applications with a standard low-voltage CMOS technology leads to lower production costs and higher reliability. Moreover, it allows miniaturization by integrating two units in the same die: the power management unit, which regulates the supply voltage for the second unit, a dedicated signal processor that performs the required functions. This paper presents original techniques that limit spikes in the internal supply voltage of a monolithic DC-DC converter, extending the use of the same technology to both units. These spikes are mainly caused by fast current variations in the path connecting the external power supply to the internal pads of the converter power block. This path includes two parasitic inductances built into the bond wires and the package pins. Although these parasitic inductances present relatively low values compared with the typical external inductances of DC-DC converters, their effects cannot be neglected when switching high currents at high switching frequencies. The associated overvoltage frequently causes destruction, reliability problems and/or control malfunction. Different spike-reduction techniques are presented and compared. The proposed techniques were used in the design of the gate driver of a DC-DC converter included in a power management unit implemented in a standard 0.35 µm CMOS technology.
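
A back-of-the-envelope estimate of why these parasitic inductances matter, using v = L·di/dt; the inductance, current step and transition time below are illustrative assumptions, not figures from the paper.

```python
# Supply-spike estimate for the parasitic bond-wire/pin inductance: v = L * di/dt.
# All numeric values are illustrative assumptions.
L_parasitic = 5e-9      # assumed total bond-wire + package-pin inductance: 5 nH
delta_i = 1.0           # assumed switched current step: 1 A
delta_t = 5e-9          # assumed current transition time: 5 ns

v_spike = L_parasitic * delta_i / delta_t
print(f"overvoltage spike ~ {v_spike:.2f} V")   # ~1 V, significant on a 3.3 V rail
```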

Relevance:

10.00%

Publisher:

Abstract:

This paper describes an implementation of a long-distance echo canceller, operating in full-duplex hands-free mode and in real time on a single Digital Signal Processor (DSP). The proposed solution is based on short-length adaptive filters centered on the positions of the most significant echoes, which are tracked by time-delay estimators, for which we use a new approach. To deal with double-talk situations, a speech detector is employed. The floating-point DSP TMS320C6713 from Texas Instruments is used, with software written in C++ and compiler optimizations for fast execution. The resulting algorithm enables long-distance echo cancellation with low computational requirements, suited to embedded systems. It reaches greater echo return loss enhancement and shows faster convergence than the conventional approach. The experimental results approach the CCITT G.165 recommendation levels.
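
A minimal sketch of one building block described above: a short adaptive filter placed at an (assumed known) echo delay, here with a plain NLMS update on synthetic signals. The real system combines several such filters with time-delay estimation and double-talk detection, and NLMS is an assumption, not necessarily the authors' adaptation rule.

```python
# Short NLMS adaptive filter centered on an assumed-known echo delay.
import numpy as np

rng = np.random.default_rng(1)
N = 8000
far_end = rng.standard_normal(N)                 # far-end (reference) signal proxy
delay, echo_path = 200, np.array([0.6, 0.3, -0.1, 0.05])
echo = np.convolve(far_end, echo_path)[:N]
mic = np.roll(echo, delay)                       # echo arrives after a long delay
mic[:delay] = 0.0

taps, mu, eps = 16, 0.5, 1e-6
w = np.zeros(taps)
out = np.zeros(N)
for n in range(delay + taps, N):
    # Reference window taken around the estimated echo position (n - delay)
    x = far_end[n - delay - taps + 1 : n - delay + 1][::-1]
    e = mic[n] - w @ x                           # error = mic - echo estimate
    w += mu * e * x / (x @ x + eps)              # NLMS update
    out[n] = e

erle = 10 * np.log10(np.mean(mic[N//2:] ** 2) / np.mean(out[N//2:] ** 2))
print(f"ERLE after convergence: {erle:.1f} dB")
```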

Relevance:

10.00%

Publisher:

Abstract:

Objective - To evaluate the effect of prepregnancy body mass index (BMI), energy and macronutrient intakes during pregnancy, and gestational weight gain (GWG) on the body composition of full-term appropriate-for-gestational age neonates. Study Design - This is a cross-sectional study of a systematically recruited convenience sample of mother-infant pairs. Food intake during pregnancy was assessed by food frequency questionnaire and its nutritional value by the Food Processor Plus (ESHA Research Inc, Salem, OR). Neonatal body composition was assessed both by anthropometry and air displacement plethysmography. Explanatory models for neonatal body composition were tested by multiple linear regression analysis. Results - A total of 100 mother-infant pairs were included. Prepregnancy overweight was positively associated with offspring weight, weight/length, BMI, and fat-free mass in the whole sample; in males, it was also positively associated with midarm circumference, ponderal index, and fat mass. Higher energy intake from carbohydrate was positively associated with midarm circumference and weight/length in the whole sample. Higher GWG was positively associated with weight, length, and midarm circumference in females. Conclusion - Positive adjusted associations were found between both prepregnancy BMI and energy intake from carbohydrate and offspring body size in the whole sample. Positive adjusted associations were also found between prepregnancy overweight and adiposity in males, and between GWG and body size in females.
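
A sketch of the kind of adjusted model tested above (a multiple linear regression of a neonatal body-composition outcome on maternal predictors); the variable names, effect sizes and data below are synthetic placeholders, not the study's measurements.

```python
# Multiple linear regression on synthetic placeholder data.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(7)
n = 100
df = pd.DataFrame({
    "prepreg_bmi": rng.normal(24, 4, n),          # kg/m^2
    "energy_kcal": rng.normal(2200, 300, n),      # daily energy intake
    "gwg_kg": rng.normal(12, 4, n),               # gestational weight gain
})
# Synthetic outcome built from assumed effect sizes plus noise
df["fat_free_mass_g"] = (2600 + 25 * df["prepreg_bmi"] + 0.1 * df["energy_kcal"]
                         + 15 * df["gwg_kg"] + rng.normal(0, 150, n))

model = smf.ols("fat_free_mass_g ~ prepreg_bmi + energy_kcal + gwg_kg", data=df).fit()
print(model.summary().tables[1])   # adjusted coefficients with 95% CIs and p-values
```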

Relevance:

10.00%

Publisher:

Abstract:

Background: The effect of the intake of long-chain polyunsaturated fatty acids (LCPUFAs) during pregnancy on fetal body composition has been assessed by studies using mostly neonatal anthropometry. Their results have been inconsistent, probably because neonatal anthropometry has several validity limitations. Air displacement plethysmography (ADP) is a recently validated non-invasive method for assessing body composition in neonates. Objective: To determine the effect of the intake of LCPUFAs during pregnancy on the body composition of term neonates, measured by ADP. Methods: Cross-sectional study of a convenience sample of healthy full-term neonates and their mothers. The diet during pregnancy was assessed using a validated semi-quantitative food frequency questionnaire; Food Processor Plus® was used to convert food intake into nutritional values. Body composition was estimated by anthropometry and measured by ADP using Pea Pod™ (Life Measurements Inc) (fat mass - FM, fat-free mass and %FM) within the first 72 h after birth. Univariate and multivariate analyses (linear regression model) were performed. Results: 54 mother-neonate pairs were included. Multivariate analysis adjusted for maternal body mass index shows a positive association between LCPUFA intake and neonatal mid-arm circumference (β = 0.610, p = 0.019) and a negative association between the n-6:n-3 ratio intake and neonatal %FM (β = -2.744, p = 0.066). Conclusion: To the best of our knowledge, this is the first study on this subject using ADP and showing a negative association between the LCPUFA n-6:n-3 ratio intake in pregnancy and neonatal %FM. This preliminary finding requires confirmation by increasing the study power with a larger sample and by performing interventional studies.

Relevance:

10.00%

Publisher:

Abstract:

Modern real-time systems increasingly generate heavy and dynamic computational workloads, and it is becoming unrealistic to expect them to be implemented on uniprocessor systems. In fact, the shift from single-processor to multiprocessor systems can be seen, both in the general-purpose and in the embedded domain, as an energy-efficient way of improving application performance. At the same time, the proliferation of multiprocessor platforms has turned parallel programming into a topic of great interest, with dynamic parallelism rapidly gaining popularity as a programming model. The idea behind this model is to encourage programmers to expose every opportunity for parallelism by simply indicating potentially parallel regions within their applications. All these annotations are treated by the system purely as hints, and may be ignored and replaced by equivalent sequential constructs by the language itself. Thus, how the computation is actually subdivided and mapped onto the various processors is the responsibility of the compiler and of the underlying computing system. By lifting this burden from the programmer, programming complexity is considerably reduced, which usually translates into a productivity gain. However, if the underlying scheduling mechanism is not simple and fast, so as to keep the overall overhead low, the benefits of generating such fine-grained parallelism remain merely hypothetical. From this scheduling perspective, algorithms that employ a work-stealing policy are increasingly popular, with proven efficiency in terms of time, space and communication requirements. However, these algorithms neither handle timing constraints nor any other form of task prioritization, which prevents them from being applied directly to real-time systems. Moreover, they are traditionally implemented in the language runtime, creating a two-level scheduling system in which the predictability that is essential to a real-time system cannot be guaranteed. This thesis describes how the work-stealing approach can be redesigned to meet real-time requirements while preserving the fundamental principles that have delivered such good results. Very briefly, the single conventional task-management queue (deque) is replaced by a queue of deques, ordered by increasing task priority. On top of this, the well-known dynamic scheduling algorithm G-EDF is applied, the rules of both are blended, and our proposal is born: the RTWS scheduling algorithm. Taking advantage of the modularity offered by the Linux scheduler, RTWS is added as a new scheduling class in order to assess in practice whether the proposed algorithm is viable, that is, whether it guarantees the desired efficiency and schedulability. Modifying the Linux kernel is a difficult task, owing to the complexity of its internal functions and the strong interdependencies between its various subsystems. Nevertheless, one of the goals of this thesis was to make sure that RTWS is more than an interesting concept.
Thus, a significant part of this document is devoted to discussing the implementation of RTWS and to exposing problematic situations, many of them not considered in theory, such as the mismatch between several synchronization mechanisms. The experimental results show that RTWS, compared with other practical work on the dynamic scheduling of tasks with timing constraints, significantly reduces the scheduling overhead through efficient and scalable (at least up to 8 CPUs) control of migrations and context switches, while achieving good dynamic load balancing of the system at low cost. However, during the evaluation a flaw was detected in the RTWS implementation: it gives up stealing work too easily, which causes idle periods on the CPU in question when overall system utilization is low. Although the work focused on keeping the scheduling cost low and on achieving good data locality, system schedulability was never neglected. In fact, the proposed scheduling algorithm proved to be quite robust, missing no deadlines in the experiments performed. We can therefore state that some priority inversion, caused by the BAS stealing sub-policy, does not compromise the schedulability goals and even helps to reduce contention on the data structures. Even so, RTWS also supports a deterministic stealing sub-policy: PAS. The experimental evaluation, however, did not provide a clear picture of the impact of one versus the other. Overall, though, we can conclude that RTWS is a promising solution for the efficient scheduling of parallel tasks with timing constraints.
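
A minimal sketch of the central data structure described above: the single work-stealing deque is replaced by a collection of deques ordered by task priority, with the owner taking the most urgent work from one end and a thief stealing from the other. The class and method names are illustrative; this is a user-space Python toy, not the in-kernel RTWS implementation.

```python
# Priority-ordered "queue of deques" sketch for real-time work stealing.
from collections import deque

class PriorityWorkStealingQueue:
    def __init__(self):
        # priority (lower = more urgent, e.g. absolute deadline) -> deque of tasks
        self._deques = {}

    def _most_urgent(self):
        return min(self._deques) if self._deques else None

    def push(self, priority, task):
        self._deques.setdefault(priority, deque()).append(task)

    def pop_own(self):
        """Owner takes from the tail of the most urgent deque (LIFO, good locality)."""
        prio = self._most_urgent()
        if prio is None:
            return None
        task = self._deques[prio].pop()
        if not self._deques[prio]:
            del self._deques[prio]
        return task

    def steal(self):
        """Thief takes from the head of the most urgent deque (FIFO, less contention)."""
        prio = self._most_urgent()
        if prio is None:
            return None
        task = self._deques[prio].popleft()
        if not self._deques[prio]:
            del self._deques[prio]
        return task

if __name__ == "__main__":
    q = PriorityWorkStealingQueue()
    q.push(priority=20, task="late-deadline job")
    q.push(priority=5, task="urgent job A")
    q.push(priority=5, task="urgent job B")
    print(q.pop_own())   # -> urgent job B (owner end of the most urgent deque)
    print(q.steal())     # -> urgent job A (thief end of the most urgent deque)
```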

Relevance:

10.00%

Publisher:

Abstract:

Objective - The adjusted effect of long-chain polyunsaturated fatty acid (LCPUFA) intake during pregnancy on adiposity at birth of healthy full-term appropriate-for-gestational age neonates was evaluated. Study Design - In a cross-sectional convenience sample of 100 mother and infant dyads, LCPUFA intake during pregnancy was assessed by food frequency questionnaire with nutrient intake calculated using Food Processor Plus. Linear regression models for neonatal body composition measurements, assessed by air displacement plethysmography and anthropometry, were adjusted for maternal LCPUFA intakes, energy and macronutrient intakes, prepregnancy body mass index and gestational weight gain. Result - Positive associations between maternal docosahexaenoic acid intake and ponderal index in male offspring (β=0.165; 95% confidence interval (CI): 0.031–0.299; P=0.017), and between n-6:n-3 LCPUFA ratio intake and fat mass (β=0.021; 95% CI: 0.002–0.041; P=0.034) and percentage of fat mass (β=0.636; 95% CI: 0.125–1.147; P=0.016) in female offspring were found. Conclusion - Using a reliable validated method to assess body composition, adjusted positive associations between maternal docosahexaenoic acid intake and birth size in male offspring and between n-6:n-3 LCPUFA ratio intake and adiposity in female offspring were found, suggesting that maternal LCPUFA intake strongly influences fetal body composition.

Relevance:

10.00%

Publisher:

Abstract:

Dissertation submitted to obtain the Master's degree in Electrical Engineering, branch of Automation and Industrial Electronics.

Relevance:

10.00%

Publisher:

Abstract:

Due to the significant increase in vehicles and pedestrians in large cities, it became necessary to resort to the existing mechanisms for coordinating traffic. In this context, traffic lights were introduced with the aim of ordering traffic on the road network. Traffic management has undergone innovations in equipment, in the software used, in centralized management, in road monitoring and in traffic-light synchronization, making it possible to create signal programmes adjusted to the different traffic demands observed over the twenty-four hours of the day at different points of the city. Conceptually, studies were carried out to identify the relationship between speed, flow and headway over a given time interval, as well as the relationship between speed and accident rates. Until 1995, Portugal was one of the countries with the highest number of road accidents. Following this evolution, speed-control radars were installed at the end of 2006 with the aim of enforcing the speed limits imposed by the highway code and reducing road accidents in the city of Lisbon. Several years after that earlier investment, we find that there is a need to implement new technologies for detecting offences, whether speeding or red-light violations (VSV), to optimize the information made available to drivers and pedestrians, to coordinate the interaction between priority vehicles and the other vehicles on the road, to streamline the internal management of traffic offences, and to speed up procedures and computerize data collection so as to make the processes faster.