27 results for Armer, Chip


Relevance:

10.00%

Publisher:

Abstract:

Dependability is a critical factor in computer systems, requiring high-quality validation and verification procedures in the development stage. At the same time, digital devices are getting smaller and access to their internal signals and registers is increasingly complex, requiring innovative debugging methodologies. To address this issue, most recent microprocessors include an on-chip debug (OCD) infrastructure to facilitate common debugging operations. This paper proposes an enhanced OCD infrastructure with the objective of supporting the verification of fault-tolerant mechanisms through fault injection campaigns. This upgraded on-chip debug and fault injection (OCD-FI) infrastructure provides an efficient fault injection mechanism with improved capabilities and dynamic behavior. Preliminary results show that this solution provides flexibility in terms of fault triggering and allows high-speed, real-time fault injection in memory elements.
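
As a rough illustration of what a fault injection campaign over memory elements involves, the following Python sketch shows a campaign loop under invented assumptions (random fault location and bit position, a caller-supplied observe() classifier, 32-bit words); the actual OCD-FI triggering and access mechanisms are hardware-level and are not modeled here.

```python
import random

def flip_bit(word: int, bit: int) -> int:
    """Bit-flip fault model: return 'word' with one bit inverted."""
    return word ^ (1 << bit)

def run_campaign(memory: dict, n_experiments: int, observe) -> list:
    """Hypothetical campaign loop: choose a target, inject, observe, restore."""
    outcomes = []
    for _ in range(n_experiments):
        address = random.choice(list(memory))    # fault location
        bit = random.randrange(32)               # fault mask (32-bit words assumed)
        golden = memory[address]
        memory[address] = flip_bit(golden, bit)  # inject at the trigger instant
        outcomes.append(observe(memory))         # classify the effect (e.g. detected, silent)
        memory[address] = golden                 # restore before the next experiment
    return outcomes
```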

Relevance:

10.00%

Publisher:

Abstract:

Fault injection is frequently used for the verification and validation of dependable systems. When targeting real-time, microprocessor-based systems, the process becomes significantly more complex. This paper proposes two complementary solutions to improve real-time fault injection campaign execution, both in terms of performance and capabilities. The methodology is based on the use of the on-chip debug mechanisms present in modern electronic devices. The main objective is the injection of faults into microprocessor memory elements with minimum delay and intrusiveness. Different configurations were implemented and compared in terms of performance gain and logic overhead.

Relevance:

10.00%

Publisher:

Abstract:

As electronic devices get smaller and more complex, dependability assurance is becoming fundamental for many mission-critical, computer-based systems. This paper presents a case study on the possibility of using the on-chip debug infrastructures present in most current microprocessors to execute real-time fault injection campaigns. The proposed methodology is based on a debugger customized for fault injection and designed for maximum flexibility, and consists of injecting bit-flip faults into memory elements without modifying or halting the target application. The debugger design is easily portable and applicable to different architectures, providing a flexible and efficient mechanism for verifying and validating fault-tolerant components.
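
A minimal sketch of the "inject without halting" idea described above; DebugPort and its read/write methods are invented stand-ins for a vendor-specific OCD interface, stubbed with a dictionary so the example runs stand-alone.

```python
class DebugPort:
    """Hypothetical OCD access to target memory (illustrative stub only)."""
    def __init__(self):
        self.mem = {}
    def read_word(self, address: int) -> int:
        return self.mem.get(address, 0)
    def write_word(self, address: int, value: int) -> None:
        self.mem[address] = value

def inject_bit_flip(port: DebugPort, address: int, bit: int) -> None:
    # Read-modify-write through the debug port: the target application is neither
    # modified nor halted, so intrusiveness is limited to the debug-bus access.
    word = port.read_word(address)
    port.write_word(address, word ^ (1 << bit))
```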

Relevance:

10.00%

Publisher:

Abstract:

Fault injection is frequently used for the verification and validation of the fault-tolerant features of microprocessors. This paper proposes the modification of a common on-chip debugging (OCD) infrastructure to add fault injection capabilities and improve performance. The proposed solution imposes a very low logic overhead and provides a flexible and efficient mechanism for the execution of fault injection campaigns, being applicable to different target system architectures.

Relevance:

10.00%

Publisher:

Abstract:

“Many-core” systems based on a Network-on-Chip (NoC) architecture offer various opportunities in terms of performance and computing capabilities, but at the same time they pose many challenges for the deployment of real-time systems, which must fulfill specific timing requirements at runtime. It is therefore essential to identify, at design time, the parameters that have an impact on the execution time of the tasks deployed on these systems, as well as upper bounds on the other key parameters. The focus of this work is to determine an upper bound on the traversal time of a packet when it is transmitted over the NoC infrastructure. Towards this aim, we first identify and explore some limitations in the existing recursive-calculus-based approaches to computing the Worst-Case Traversal Time (WCTT) of a packet. Then, we extend the existing model by integrating the characteristics of the tasks that generate the packets. For this extended model, we propose an algorithm called “Branch and Prune” (BP). Our proposed method provides tighter, yet still safe, estimates than the existing recursive-calculus-based approaches. Finally, we introduce a more general approach, namely “Branch, Prune and Collapse” (BPC), which offers a configurable parameter that provides a flexible trade-off between computational complexity and the tightness of the computed estimate. The recursive-calculus methods and BP correspond to two special cases of BPC, obtained when the trade-off parameter is set to 1 or ∞, respectively. Through simulations, we analyze this trade-off, reason about the implications of certain choices, and also provide some case studies to observe the impact of task parameters on the WCTT estimates.
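
Purely to convey the branch/prune structure (not the paper's actual WCTT model), here is a toy Python sketch: interference scenarios are explored recursively and a branch is pruned when the task generating a competing flow could not have released a packet inside the assumed analysis window. hop_delay, min_interarrival and window are invented parameters.

```python
from dataclasses import dataclass

@dataclass
class Flow:
    hop_delay: int         # blocking this flow could add on a shared link (assumed)
    min_interarrival: int  # minimum inter-arrival time of the generating task (assumed)

def wctt_branch_and_prune(flows, window: int, base_latency: int) -> int:
    """Toy upper bound: maximise blocking over feasible interference scenarios."""
    best = base_latency

    def explore(i: int, acc: int) -> None:
        nonlocal best
        if i == len(flows):
            best = max(best, base_latency + acc)
            return
        explore(i + 1, acc)                       # branch: flow i does not interfere
        if flows[i].min_interarrival <= window:   # prune infeasible interference
            explore(i + 1, acc + flows[i].hop_delay)

    explore(0, 0)
    return best

# Example: a packet with 10-cycle base latency and two potentially interfering flows.
print(wctt_branch_and_prune([Flow(4, 50), Flow(6, 500)], window=100, base_latency=10))
```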

Relevance:

10.00%

Publisher:

Abstract:

Nowadays, the remarkable growth of the mobile device market has led to the need for location-aware applications. However, a person's location is sometimes difficult to obtain, since most of these devices only have a GPS (Global Positioning System) chip to retrieve location. In order to overcome this limitation and to provide location everywhere (even where a structured environment doesn't exist), a wearable inertial navigation system is proposed, which is a convenient way to track people in situations where other localization systems fail. The system combines pedestrian dead reckoning with GPS, using widely available, low-cost and low-power hardware components. The system's innovation lies in the information fusion and the use of probabilistic methods to learn a person's gait behavior and correct, in real time, the drift errors given by the sensors.
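
For illustration only, a minimal dead-reckoning step update and a crude GPS pull-in, assuming step events, step length and heading are already estimated (the abstract's probabilistic gait-learning is not reproduced here).

```python
import math

def pdr_step(position, heading_rad: float, step_length_m: float):
    """Advance the 2-D position estimate by one detected step along the heading."""
    x, y = position
    return (x + step_length_m * math.cos(heading_rad),
            y + step_length_m * math.sin(heading_rad))

def fuse_with_gps(pdr_pos, gps_pos, gps_weight: float = 0.3):
    """Crude fusion: pull the dead-reckoned estimate towards an available GPS fix
    to bound the accumulated drift (a stand-in for the probabilistic method)."""
    return tuple(p + gps_weight * (g - p) for p, g in zip(pdr_pos, gps_pos))

# Usage: walk two steps heading east, then correct with a GPS fix.
pos = pdr_step((0.0, 0.0), 0.0, 0.7)
pos = pdr_step(pos, 0.0, 0.7)
pos = fuse_with_gps(pos, (1.5, 0.1))
```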

Relevance:

10.00%

Publisher:

Abstract:

Nowadays there is an increase in location-aware mobile applications. However, these applications typically retrieve location only from a mobile device's GPS chip, which means that indoors, or in denser environments, they don't work properly. To provide location information everywhere, a pedestrian Inertial Navigation System (INS) is typically used, but such systems can have a large estimation error since, in order to keep the system wearable, they use low-cost and low-power sensors. In this work a pedestrian INS is proposed in which force sensors are combined with the accelerometer data to better detect the stance phase of the human gait cycle, which leads to improvements in location estimation. Besides sensor fusion, an information fusion architecture is proposed, based on the information from GPS and several inertial units placed on the pedestrian's body, which is used to learn the pedestrian's gait behavior and correct, in real time, the inertial sensor errors, thus improving location estimation.
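
A hedged sketch of the stance-phase detection idea: the foot is declared on the ground only when the force sensor and the accelerometer agree; the thresholds and sensor interfaces below are invented for illustration.

```python
GRAVITY = 9.81  # m/s^2

def is_stance(accel_xyz, force_n: float,
              accel_tol: float = 0.6, force_threshold: float = 20.0) -> bool:
    """True when the foot is likely flat on the ground.
    accel_xyz: one accelerometer sample (m/s^2); force_n: force-sensor reading (N).
    Requiring both conditions rejects false 'still' readings during the swing phase."""
    ax, ay, az = accel_xyz
    magnitude = (ax * ax + ay * ay + az * az) ** 0.5
    near_gravity_only = abs(magnitude - GRAVITY) < accel_tol  # little dynamic acceleration
    foot_loaded = force_n > force_threshold                   # weight on the foot
    return near_gravity_only and foot_loaded
```

During detected stance intervals, pedestrian INS implementations typically apply a zero-velocity update that resets the integrated velocity error, which is why a more reliable stance detector tends to improve the location estimate.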

Relevance:

10.00%

Publisher:

Abstract:

As the wireless cellular market reaches competitive levels never seen before, network operators need to make maintaining Quality of Service (QoS) a main priority if they wish to attract new subscribers while keeping existing customers satisfied. Speech quality as perceived by the end user is one major example of a characteristic in constant need of maintenance and improvement. It is in this topic that this Master's Thesis project fits, making use of an intrusive method of speech quality evaluation as a means to further study and characterize the performance of speech codecs in second-generation (2G) and third-generation (3G) technologies, trying to find further correlation between codecs with similar bit rates, along with the exploration of certain transmission parameters which may aid in the assessment of speech quality. Due to some limitations concerning the audio analyzer equipment that was to be employed, a different system for recording the test samples was sought out. Although the newly designed system is not standard, after extensive testing and optimization of the system's parameters, the final results were found to be reliable and satisfactory. Tests include a set of high and low bit rate codecs for both 2G and 3G, whose values were compared and analysed, leading to the outcome that 3G speech codecs perform better than 2G codecs under approximately the same conditions, reinforcing the idea that 3G is, without doubt, the best choice if the customer looks for the best possible listening speech quality. The transmission parameters chosen for the experiment, the Receiver Quality (RxQual) and the Received Energy per Chip to Power Density ratio (Ec/N0), were subjected to speech quality correlation tests. The final RxQual results were compared to those of prior studies from different researchers and are considered to be of significant relevance, leading to the confirmation of RxQual as a reliable indicator of speech quality. As for Ec/N0, it is not possible to state that it is a speech quality indicator; however, it shows clear thresholds at which the MOS values decrease significantly. The studied transmission parameters show that they can be used not only for network management purposes but also to give the communications engineer (or technician) an expectation of the end-to-end speech quality consequences. With the conclusion of the work, new ideas for future studies come to mind. Considering that fourth-generation (4G) cellular technologies are now beginning to take an important place in the global market, as the first all-IP network structure, it seems of great relevance that 4G speech quality should be subject to evaluation, comparing it to 3G not only in narrowband but also adding wideband scenarios with the most recent standard objective method of speech quality assessment, POLQA. Also, the new data found in the Ec/N0 tests justify further research studies with the intention of validating the assumptions made in this work.
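
For context, RxQual is a 0–7 class that GSM derives from the measured bit error rate; the ranges in the sketch below follow the commonly cited specification values (quoted from memory, so check them against the GSM link-control specification before reuse), and the correlation with MOS is precisely what the thesis establishes experimentally rather than anything implied by the mapping itself.

```python
# Approximate RxQual classes and the BER ranges (in %) they encode.
RXQUAL_BER_RANGES = {
    0: (0.0, 0.2),
    1: (0.2, 0.4),
    2: (0.4, 0.8),
    3: (0.8, 1.6),
    4: (1.6, 3.2),
    5: (3.2, 6.4),
    6: (6.4, 12.8),
    7: (12.8, 100.0),
}

def rxqual_from_ber(ber_percent: float) -> int:
    """Map a measured bit error rate (in %) to its RxQual class."""
    for rxqual, (low, high) in RXQUAL_BER_RANGES.items():
        if low <= ber_percent < high:
            return rxqual
    return 7
```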

Relevance:

10.00%

Publisher:

Abstract:

Prostate cancer (PCa) is one of the most incident cancers worldwide, but clinical and pathological parameters have limited ability to discriminate between clinically significant and indolent PCa. Altered expression of histone methyltransferases and altered histone methylation patterns are involved in prostate carcinogenesis. SMYD3 transcript levels have prognostic value and discriminate among PCa with different clinical aggressiveness, so we decided to investigate its putative oncogenic role in PCa. We silenced SMYD3 and assessed its impact through in vitro (cell viability, cell cycle, apoptosis, migration and invasion assays) and in vivo (tumor formation, angiogenesis) studies. We evaluated the impact of the SET domain on the PCa cells' phenotype. Histone mark deposition on SMYD3 putative target genes was assessed by ChIP analysis. Knockdown of SMYD3 attenuated the malignant phenotype of LNCaP and PC3 cell lines. Deletions affecting the SET domain showed a phenotypic impact similar to SMYD3 silencing, suggesting that the tumorigenic effect is mediated through its histone methyltransferase activity. Moreover, CCND2 was identified as a putative target gene for SMYD3 transcriptional regulation, through trimethylation of H4K20. Our results support a proto-oncogenic role for SMYD3 in prostate carcinogenesis, mainly due to its methyltransferase enzymatic activity. Thus, SMYD3 overexpression is a potential biomarker for clinically aggressive disease and an attractive therapeutic target in PCa.

Relevance:

10.00%

Publisher:

Abstract:

The semiconductor industry is a sector in permanent technological evolution. The trend towards miniaturization and space optimization, the need to produce increasingly complex circuits, and the tendency to increase the number of layers in each integrated circuit are the conditions that make technological evolution in this field a constant. The processes involved in semiconductor production are also in permanent evolution, given the pressure exerted by the needs described above. The equipment requires ever-increasing precision, which must be accompanied by rigorous procedures so that the quality achieved always reaches the desired level. However, this constant evolution does not always allow an adequate survey of all the causes behind some of the problems detected in semiconductor manufacturing. The objective of this work was to survey the processes involved in manufacturing semiconductors starting from a previously produced silicon wafer, identifying for each process the possible defects it may introduce, seeking to inventory the possible causes of each defect, and establishing well-defined rules and procedures that make it possible to learn from mistakes and prevent the same problems from recurring in analogous situations in other products of the same family.

Relevance:

10.00%

Publisher:

Abstract:

The continuous evolution of devices containing integrated circuits, in particular FPGAs (Field Programmable Gate Arrays) and, more recently, FPGA-based Systems on a Chip (SoCs), together with the evolution of the associated tools, has left a gap between the release of these technologies and the production of didactic materials that help engineers perform hardware/software co-design with them. In order to help reduce this time gap, this work presents the development of documents (tutorials) targeting two recent technologies: the VIVADO hardware/software development tool and the Zynq-7000 SoC, Z-7010, both developed by Xilinx. The documents produced are based on a basic project implemented entirely in programmable logic and on the same project implemented through the embedded programmable processor, so that the tool's design flow can be evaluated both for a project implemented entirely in hardware and for the same project implemented in a hardware/software structure.

Relevance:

10.00%

Publisher:

Abstract:

Nowadays, real-time systems are growing in importance and complexity. With the move from the uniprocessor to the multiprocessor environment, the work done for the former is not fully applicable to the latter, since the level of complexity differs, mainly because of the existence of multiple processors in the system. It was soon realized that the complexity of the problem does not grow linearly with the addition of processors. In fact, this complexity is a barrier to scientific progress in this area that, for now, remains largely unexplored, and this is witnessed mainly in the case of task scheduling. The move to this new environment, whether for real-time systems or not, promises to create the opportunity to carry out work that would never be possible in the uniprocessor case, providing new performance guarantees, lower monetary costs and lower energy consumption. This last factor emerged early on as perhaps the biggest barrier to the development of new uniprocessor designs, since, as new processors offering higher performance were released to the market, they approached a heat-generation limit that forced the emergence of the multiprocessor field. In the future, the number of processors on a given chip is expected to increase and, naturally, new techniques to exploit their inherent advantages have to be developed; the area of scheduling algorithms is no exception. Over the years, different categories of multiprocessor scheduling algorithms have been developed to address this problem, most notably global, partitioned and semi-partitioned algorithms. The global approach assumes the existence of a global queue that is accessible by all available processors. This makes task migration possible, that is, the execution of a task can be stopped and resumed on a different processor. At any given instant, the m highest-priority tasks in the task set are selected for execution. This category promises high utilization bounds, at the high cost of task preemptions/migrations. In contrast, partitioned algorithms place tasks into partitions, and each partition is assigned to one of the available processors, that is, one partition per processor. For this reason, task migration is not possible, so the utilization bound is not as high as in the previous case, but the number of task preemptions decreases significantly. The semi-partitioned scheme is a hybrid answer lying between the previous cases: some tasks are split so that they can be executed by a group of processors, while the others are each assigned to a single processor. The result is a solution capable of distributing the work to be performed in a more efficient and balanced way. Unfortunately, in all these cases there is a discrepancy between theory and practice, because assumptions end up being made that do not hold in real life. To address this problem, it is necessary to implement these scheduling algorithms in real operating systems and assess their applicability so that, where they fall short, the necessary changes can be made, both at the theoretical and the practical level.
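
To make the partitioned category concrete, here is a hedged Python sketch of the classic first-fit partitioning step (the task set, capacity value and the choice of first-fit are illustrative, not taken from the text): each task is statically assigned to the first processor that can still accommodate its utilization, after which no migration occurs and each processor can be scheduled with a uniprocessor policy such as EDF.

```python
def first_fit_partition(task_utilizations, n_processors: int, capacity: float = 1.0):
    """Assign each task (given by its utilization C/T) to the first processor whose
    accumulated utilization stays within 'capacity'.  Returns one task-index list per
    processor, or None if the set cannot be partitioned this way (no migration allowed)."""
    load = [0.0] * n_processors
    partitions = [[] for _ in range(n_processors)]
    for task_id, u in enumerate(task_utilizations):
        for cpu in range(n_processors):
            if load[cpu] + u <= capacity:
                load[cpu] += u
                partitions[cpu].append(task_id)
                break
        else:
            return None  # this task fits on no processor without migration
    return partitions

# Example: six tasks on two processors.
print(first_fit_partition([0.4, 0.3, 0.5, 0.2, 0.3, 0.2], 2))  # -> [[0, 1, 3], [2, 4, 5]]
```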