989 results for Memory Tests


Relevance: 20.00%

Abstract:

The complexity of computer systems has been steadily increasing, and the use of computer systems and online services is now part of our everyday working tools. In this context, the internet plays a prominent role in universities, allowing students and teachers to interact more easily. The internet and Web-based education offer remote access to any information regardless of location or time. As a consequence, anyone with an internet connection can obtain information on a given subject from the leading experts, gaining significant advantages. Remote laboratories are a highly valued solution for interconnecting technology and human resources in environments that may be separated in time or space. The creation of this kind of laboratory, and its real usefulness, is only possible because emerging communication technologies have contributed very significantly to improving their availability at a distance. Remote laboratories become indispensable for engineering research involving scarce or large-scale resources. Based on this concept, a remote laboratory was developed for engineering students who need to test digital circuits on a configurable hardware development board, allowing this resource to be used more efficiently. The work consisted of creating a low-cost remote laboratory based on open-source programming languages, using as its processing unit an ASUS router running the OpenWrt firmware, a Linux distribution for embedded systems. This remote laboratory allows digital circuits to be tested on a configurable hardware development board in real time, using the JTAG interface. A distinctive feature of the laboratory is that its processing unit is a router: using a router as the server is a very unusual solution in the implementation of remote laboratories. Compared with an ordinary computer, the router has far less processing power and memory, yet the tests carried out showed that its performance fully met expectations.
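To make the architecture concrete, here is a minimal sketch of the kind of service such a laboratory could run: an HTTP endpoint that receives a bitstream and hands it to an open-source JTAG tool. All specifics (Python as the language, OpenOCD as the JTAG tool, the config file name, the port) are illustrative assumptions; the dissertation's actual implementation is not reproduced here.

```python
# Hypothetical remote-lab endpoint: accept a bitstream upload over HTTP
# and program the development board over JTAG via an external tool.
# Tool choice, config file and port are assumptions, not the thesis code.
import subprocess
import tempfile
from http.server import BaseHTTPRequestHandler, HTTPServer

JTAG_TOOL = "openocd"  # assumed open-source JTAG tool

class LabHandler(BaseHTTPRequestHandler):
    def do_POST(self):
        length = int(self.headers.get("Content-Length", 0))
        with tempfile.NamedTemporaryFile(suffix=".svf", delete=False) as f:
            f.write(self.rfile.read(length))
            path = f.name
        # Replay the uploaded SVF file on the JTAG chain (the command line
        # is illustrative; it depends on the actual board and tool).
        result = subprocess.run(
            [JTAG_TOOL, "-f", "board.cfg", "-c", f"svf {path}; shutdown"],
            capture_output=True, text=True)
        self.send_response(200 if result.returncode == 0 else 500)
        self.end_headers()
        self.wfile.write(result.stdout.encode())

if __name__ == "__main__":
    HTTPServer(("", 8080), LabHandler).serve_forever()
```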

Relevance: 20.00%

Abstract:

The treatment efficiency of laboratory wastewaters was evaluated, and ecotoxicity tests with Chlorella vulgaris were performed on them to assess the safety of their environmental discharge. For chemical oxygen demand wastewaters, chromium (VI), mercury (II) and silver were efficiently removed by chemical treatments. A reduction of ecotoxicity was achieved; nevertheless, an EC50 (effective concentration that causes a 50% inhibition of algae growth) of 1.5% (v/v) still indicated a high level of ecotoxicity. For chloride determination wastewaters, an efficient reduction of chromium and silver was achieved after treatment. Regarding the reduction of ecotoxicity observed, the EC50 increased from 0.059% to 0.5%; even so, only a concentration of 0.02% in the aquatic environment would guarantee no effects. Wastewaters containing the phenanthroline/iron (II) complex were treated by chemical oxidation. Treatment was satisfactory concerning chemical parameters, although an increase in ecotoxicity was observed (the EC50 decreased from 0.31% to 0.21%). The wastes from the kinetic study of the persulphate and iodide reaction were treated with sodium bisulphite until the colour was removed. Although they did not reveal significant ecotoxicity, only concentrations above 1% of the untreated waste produced observable effects on the algae. Therefore, ecotoxicity tests can be considered a useful tool not only in laboratory effluent treatment, as shown, but also in hazardous wastewater management.
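For reference, EC50 values such as those quoted above are conventionally obtained by fitting a dose-response curve to growth-inhibition data. The sketch below fits a log-logistic model with SciPy; the data points are invented for illustration, and the paper's own fitting procedure is not specified here.

```python
# Illustrative EC50 estimation by fitting a log-logistic dose-response
# curve to algal growth-inhibition data. The data points are hypothetical.
import numpy as np
from scipy.optimize import curve_fit

def log_logistic(c, ec50, slope):
    """Fraction of growth inhibition at concentration c (% v/v)."""
    return 1.0 / (1.0 + (ec50 / c) ** slope)

conc = np.array([0.05, 0.1, 0.5, 1.0, 2.0, 5.0])             # % (v/v), invented
inhibition = np.array([0.05, 0.12, 0.30, 0.45, 0.60, 0.85])  # invented

(ec50, slope), _ = curve_fit(log_logistic, conc, inhibition, p0=[1.0, 1.0])
print(f"Estimated EC50 = {ec50:.2f}% (v/v), slope = {slope:.2f}")
```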

Relevance: 20.00%

Abstract:

Pesticide exposure during brain development could represent an important risk factor for the onset of neurodegenerative diseases. Previous studies investigated the effect of permethrin (PERM) administered at 34 mg/kg, a dose close to the no-observable-adverse-effect level (NOAEL), from postnatal day (PND) 6 to PND 21 in rats. Although this dose did not elicit overt signs of toxicity (i.e., a normal body-weight-gain curve), it induced striatal neurodegeneration (dopamine and Nurr1 reduction, and increased lipid peroxidation). The present study was designed to characterize the cognitive deficits in this animal model. When PERM-treated rats were tested during late adulthood for spatial working memory performance in a T-maze rewarded-alternation task, they took longer to choose the correct arm than age-matched controls. No differences between groups were found in anxiety-like state, locomotor activity, feeding behaviour or a spatial orientation task. Our findings, showing a selective effect of PERM treatment on the T-maze task, point to an involvement of frontal cortico-striatal circuitry rather than a role for the hippocampus. The predominant disturbances are dopamine (DA) depletion in the striatum and a serotonin (5-HT) and noradrenaline (NE) imbalance, together with a hypometabolic state, in the medial prefrontal cortex. In the hippocampus, an increase of NE and a decrease of DA were observed in PERM-treated rats as compared to controls. The concentration of the most representative marker of pyrethroid exposure (3-phenoxybenzoic acid) measured in the urine of the rodents 12 h after the last treatment was 41.50 µg/L, and the compound was completely eliminated after 96 h.

Relevance: 20.00%

Abstract:

Recent trends in chip architectures, with larger numbers of heterogeneous cores and non-uniform memory/non-coherent caches, bring renewed attention to the use of Software Transactional Memory (STM) as a fundamental building block for developing parallel applications. Nevertheless, although STM promises to ease concurrent and parallel software development, it relies on the possibility of aborting conflicting transactions to maintain data consistency, which impacts the responsiveness and timing guarantees required by embedded real-time systems. In these systems, contention delays must be (efficiently) limited so that the response times of tasks executing transactions are upper-bounded and task sets can be feasibly scheduled. In this paper we assess the use of STM in the development of embedded real-time software, arguing that the amount of contention can be reduced if read-only transactions access recent consistent data snapshots, progressing in a wait-free manner. We show how the required number of versions of a shared object can be calculated for a set of tasks. We also outline an algorithm to manage conflicts between update transactions that prevents starvation.
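A minimal sketch of the mechanism argued for above: update transactions install new versions of a shared object into a bounded ring, while read-only transactions read the latest committed version without retrying. The number of retained versions K is the quantity the paper shows how to derive from the task set; here it is simply a parameter, and the serialised updater is a placeholder for the paper's starvation-preventing conflict-management algorithm.

```python
# Sketch of a multiversion shared object: updaters install new versions,
# read-only transactions read the newest committed one without retrying.
# K (number of retained versions) is assumed given; the paper derives it.
import threading

class MultiVersionObject:
    def __init__(self, initial, k_versions):
        self.k = k_versions
        self.versions = [initial] + [None] * (k_versions - 1)
        self.latest = 0              # monotonic index of newest committed version
        self.lock = threading.Lock() # serialises updaters only (placeholder)

    def read(self):
        # Wait-free for readers: one read of the latest index, one read of
        # that slot. Slot reuse is safe as long as no reader lags by more
        # than K updates -- exactly the bound K is chosen to guarantee.
        return self.versions[self.latest % self.k]

    def update(self, func):
        with self.lock:
            new_val = func(self.versions[self.latest % self.k])
            nxt = self.latest + 1
            self.versions[nxt % self.k] = new_val
            self.latest = nxt  # publish with a single-word write

obj = MultiVersionObject(initial=0, k_versions=4)
obj.update(lambda v: v + 1)
print(obj.read())  # -> 1
```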

Relevance: 20.00%

Abstract:

The usage of COTS-based multicores is becoming widespread in the field of embedded systems. Providing real-time guarantees at design time is a prerequisite for deploying real-time systems on these multicores. This necessitates considering the impact of contention for shared low-level hardware resources on the Worst-Case Execution Time (WCET) of the tasks. As a step towards this aim, this paper first identifies the different factors that make WCET analysis a challenging problem in a typical COTS-based multicore system. Then, we propose, and prove correct, a method to determine tight upper bounds on the WCETs of tasks when they are co-scheduled on different cores.
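The paper's analysis is not reproduced here, but the generic shape of such bounds can be illustrated: under a round-robin-like arbiter, each memory request of a task can be delayed by at most one request from each of the other cores. The sketch below shows only that structure, with invented numbers.

```python
# Back-of-the-envelope shape of a contention-aware WCET bound. Under a
# round-robin-like bus arbiter, each of a task's memory requests can be
# delayed by at most one request from each of the other m-1 cores. This
# is only the generic structure of such bounds, not the paper's proof.
def wcet_upper_bound(wcet_isolation, n_requests, n_cores, bus_latency):
    contention = n_requests * (n_cores - 1) * bus_latency
    return wcet_isolation + contention

# Example: 1 ms in isolation, 10_000 cache misses, 4 cores, 40 ns/request.
print(wcet_upper_bound(1e-3, 10_000, 4, 40e-9))  # -> 0.0022 s
```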

Relevance: 20.00%

Abstract:

The current industry trend is towards using Commercial Off-The-Shelf (COTS) multicores for developing real-time embedded systems, as opposed to custom-made hardware. In a typical implementation of such COTS-based multicores, multiple cores access the main memory via a shared bus. This often leads to contention on this shared channel, which increases the response times of tasks. Analyzing this increased response time, considering the contention on the shared bus, is challenging on COTS-based systems, mainly because bus arbitration protocols are often undocumented and the exact instants at which the shared bus is accessed by tasks are not explicitly controlled by the operating system scheduler; they are instead a result of cache misses. This paper makes three contributions towards analyzing tasks scheduled on COTS-based multicores. Firstly, we describe a method to model the memory access patterns of a task. Secondly, we apply this model to analyze the worst-case response time for a set of tasks. Thirdly, although the parameters required to obtain the request profile can be derived by static analysis, we provide an alternative method to obtain them experimentally using performance monitoring counters (PMCs). We also compare our work against an existing approach and show that ours provides a tighter upper bound on the number of bus requests generated by a task.
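A schematic rendering of the two ingredients described above, under assumed forms: a linear request profile (burst plus rate) bounding a task's bus requests in any window, plugged into a fixed-point response-time iteration. Both the profile shape and the equation are illustrative stand-ins, not the paper's exact model.

```python
# Illustrative request profile and response-time iteration. Interference
# from a core in a window of length w is bounded by burst + rate * w
# (an assumed linear profile); the response time is the smallest fixed
# point of R = C + contention(R). Not the paper's exact equations.
def request_profile(window, rate, burst):
    """Upper bound on bus requests an interfering core issues in `window`."""
    return burst + rate * window

def response_time(wcet, interferers, bus_latency):
    r = wcet
    while True:
        delay = sum(request_profile(r, rate, burst) * bus_latency
                    for rate, burst in interferers)
        if wcet + delay <= r:          # fixed point reached
            return r
        r = wcet + delay               # diverges if the bus is saturated

# Two interfering cores, each issuing at most 50 requests plus 10^6
# requests/s; 40 ns per bus transaction; 2 ms WCET in isolation:
print(response_time(2e-3, [(1e6, 50), (1e6, 50)], 40e-9))
```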

Relevance: 20.00%

Abstract:

There is no single definition of a long-memory process. Such a process is generally defined as a series whose correlogram decays slowly, or whose spectrum is infinite at frequency zero. A series with this property is also said to be characterised by long-range dependence and non-periodic long cycles; this characteristic describes the correlation structure of a series at long lags and is conventionally expressed in terms of a power-law decay of the autocovariance function. The growing interest of international research in this topic is justified by the search for a better understanding of the dynamic nature of the time series of financial asset prices. First, the lack of consistency among existing results calls for new studies using several complementary methodologies. Second, the confirmation of long-memory processes has relevant implications for (1) theoretical and econometric modelling (i.e., martingale price models and technical trading rules), (2) statistical tests of equilibrium and valuation models, (3) optimal consumption/saving and portfolio decisions, and (4) the measurement of efficiency and rationality. Third, empirical scientific questions remain about identifying the general theoretical market model best suited to modelling the diffusion of the series. Fourth, regulators and risk managers need to know whether there are persistent, and therefore inefficient, markets that may produce abnormal returns. The aim of the research in this dissertation is twofold. On the one hand, it seeks to provide additional knowledge for the long-memory debate by examining the behaviour of the daily return series of the main EURONEXT stock indices. On the other hand, it aims to contribute to the improvement of the capital asset pricing model (CAPM) by considering an alternative risk measure capable of overcoming the constraints of the efficient market hypothesis (EMH) in the presence of financial series whose processes lack independent and identically distributed (i.i.d.) increments. The empirical study indicates that long-maturity treasury bonds (OTs) can alternatively be used to compute market returns, since their behaviour in sovereign-debt markets reflects investors' confidence in the financial condition of states and measures how investors assess the respective economies based on the performance of their assets in general. Although the price-diffusion model defined by geometric Brownian motion (gBm) claims to provide a good fit to financial time series, its assumptions of normality, stationarity and independence of the residual innovations are contradicted by the empirical data analysed. Therefore, in the search for evidence of the long-memory property in these markets, rescaled-range analysis (R/S) and detrended fluctuation analysis (DFA) are used, under the fractional Brownian motion (fBm) framework, to estimate the Hurst exponent H for the complete data series and to compute a "local" Hurst exponent H_t over moving windows. In addition, statistical hypothesis tests are carried out using the rescaled-range test (R/S), the modified rescaled-range test (M-R/S) and the fractional differencing (GPH) test.
In terms of a single conclusion from all the methods about the nature of dependence in the stock market in general, the empirical results are inconclusive. That is, the degree of long memory, and hence any classification, depends on each particular market. Nevertheless, the mostly positive overall results support the presence of long memory, in the form of persistence, in the stock returns of Belgium, the Netherlands and Portugal. This suggests that these markets are more subject to greater predictability (the "Joseph effect"), but also to trends that can be unexpectedly interrupted by discontinuities (the "Noah effect"), and therefore tend to be riskier to trade. Although the evidence of fractal dynamics has weak statistical support, in line with most international studies it refutes the random-walk hypothesis with i.i.d. increments, which underlies the weak form of the EMH. Accordingly, contributions to improving the CAPM are proposed, through a new fractal capital market line (FCML) and a new fractal security market line (FSML). The proposal suggests that the risk element (for the market and for an asset) be given by the Hurst exponent H for long lags of stock returns. The exponent H measures the degree of long memory in stock indices both when the return series follow an uncorrelated i.i.d. process described by gBm (where H = 0.5, confirming the EMH and making the CAPM adequate) and when they follow a process with statistical dependence described by fBm (where H differs from 0.5, rejecting the EMH and making the CAPM inadequate). The advantage of the FCML and the FSML is that the long-memory measure defined by H is an adequate reference for expressing risk in models applicable to data series following i.i.d. processes as well as processes with nonlinear dependence. These formulations thus include the EMH as a possible particular case.
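For reference, the rescaled-range estimation of the Hurst exponent H mentioned above can be sketched compactly: split the return series into blocks, compute each block's range of cumulative deviations rescaled by its standard deviation, and regress log(R/S) on log(n). This is the textbook procedure, not the dissertation's full methodology (which also applies DFA, the modified R/S test and the GPH test).

```python
# Minimal rescaled-range (R/S) estimator of the Hurst exponent H.
# H ~ 0.5 suggests i.i.d. increments (gBm / weak-form EMH);
# H > 0.5 suggests persistence (fBm). Textbook sketch only.
import numpy as np

def hurst_rs(returns, min_block=8):
    n = len(returns)
    sizes, rs_vals = [], []
    size = min_block
    while size <= n // 2:
        rs = []
        for start in range(0, n - size + 1, size):
            block = returns[start:start + size]
            dev = np.cumsum(block - block.mean())
            r = dev.max() - dev.min()  # range of cumulative deviations
            s = block.std(ddof=1)      # block standard deviation
            if s > 0:
                rs.append(r / s)
        sizes.append(size)
        rs_vals.append(np.mean(rs))
        size *= 2
    # Slope of log(R/S) against log(n) estimates H.
    h, _ = np.polyfit(np.log(sizes), np.log(rs_vals), 1)
    return h

rng = np.random.default_rng(0)
print(hurst_rs(rng.standard_normal(4096)))  # ~0.5 for i.i.d. noise
```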

Relevance: 20.00%

Abstract:

Contention on the memory bus in COTS-based multicore systems is becoming a major determining factor of the execution time of a task. Analyzing this extra execution time is non-trivial because (i) bus arbitration protocols in such systems are often undocumented and (ii) the times when the memory bus is requested are not explicitly controlled by the operating system scheduler; they are instead a result of cache misses. We present a method for finding an upper bound on the extra execution time of a task due to contention on the memory bus in COTS-based multicore systems. This method makes no assumptions about the bus arbitration protocol, other than that it is work-conserving.
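The intuition behind such a bound can be sketched: with a work-conserving arbiter, whenever one of the task's requests is stalled, the bus must be serving some other core's request, so the total stall time cannot exceed the co-runners' request count times the per-transaction service time. The snippet below is a schematic rendering of that argument with invented numbers, not the paper's formulation.

```python
# Schematic bound under only the work-conserving assumption: total stall
# time <= (requests issued by other cores in the window) * service time.
def extra_execution_time(other_core_requests, service_time):
    """Upper bound on a task's stall time due to bus contention."""
    return sum(other_core_requests) * service_time

# Three co-running cores issuing at most 5_000, 8_000 and 2_000 requests
# during the task's execution window; 40 ns per bus transaction:
print(extra_execution_time([5_000, 8_000, 2_000], 40e-9))  # -> 0.0006 s
```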

Relevance: 20.00%

Abstract:

The foreseen evolution of chip architectures towards a higher number of heterogeneous cores, with non-uniform memory and non-coherent caches, brings renewed attention to the use of Software Transactional Memory (STM) as an alternative to lock-based synchronisation. However, STM relies on the possibility of aborting conflicting transactions to maintain data consistency, which impacts the responsiveness and timing guarantees required by real-time systems. In these systems, contention delays must be (efficiently) limited so that the response times of tasks executing transactions are upper-bounded and task sets can be feasibly scheduled. In this paper we defend the role of the transaction contention manager in reducing the number of transaction retries and in helping the real-time scheduler ensure schedulability. For this purpose, the contention management policy should be aware of on-line scheduling information.
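As an illustration of the idea, a contention manager with access to on-line scheduling information could resolve conflicts in favour of the transaction whose task has the earlier absolute deadline, aligning aborts with what an EDF scheduler would prioritise. This policy is a hypothetical example, not the paper's algorithm.

```python
# Sketch of a scheduling-aware contention manager: on conflict, the
# transaction of the task with the earlier absolute deadline wins.
# A schematic policy for illustration, not the paper's algorithm.
from dataclasses import dataclass

@dataclass
class Transaction:
    task_id: int
    deadline: float   # absolute deadline, taken from the on-line scheduler
    retries: int = 0

def resolve_conflict(tx_a, tx_b):
    """Return (winner, loser); the loser is aborted and retried."""
    winner, loser = ((tx_a, tx_b) if tx_a.deadline <= tx_b.deadline
                     else (tx_b, tx_a))
    loser.retries += 1
    return winner, loser

t1 = Transaction(task_id=1, deadline=10.0)
t2 = Transaction(task_id=2, deadline=7.5)
winner, loser = resolve_conflict(t1, t2)
print(winner.task_id, loser.retries)  # -> 2 1
```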

Relevance: 20.00%

Abstract:

Shape Memory Alloy (SMA) Ni-Ti films have attracted much interest as functional and smart materials due to their unique properties. However, important issues remain unresolved, such as the formation of film texture and its control, as well as substrate effects. Thus, the main challenge is not only to control the microstructure, including stoichiometry and precipitates, but also to identify and control the preferential orientation, since it is a crucial factor in determining the shape memory behaviour. The aim of this PhD thesis is to study the optimisation of the deposition conditions of Ni-Ti films in order to obtain material that is fully crystallized at the end of the deposition, and to establish a clear relationship between the substrates and texture development. To achieve this objective, a two-magnetron sputter deposition chamber was used, which allows the substrate to be heated and biased. It can be mounted in the six-circle diffractometer of the Rossendorf Beamline (ROBL) at the European Synchrotron Radiation Facility (ESRF), Grenoble, France, enabling in-situ characterization of the films by X-ray diffraction (XRD) during their growth and annealing. The in-situ studies make it possible to identify the different steps of the structural evolution during deposition with a given set of parameters, as well as to evaluate the effect of changing parameters on the structural characteristics of the deposited film. Besides the in-situ studies, complementary ex-situ characterization techniques were used for fine structural characterization: XRD at a laboratory source, Rutherford backscattering spectroscopy (RBS), Auger electron spectroscopy (AES), cross-sectional transmission electron microscopy (X-TEM), scanning electron microscopy (SEM), and electrical resistivity (ER) measurements during temperature cycling. The substrates used were mainly naturally and thermally oxidized Si(100), TiN buffer layers of different thicknesses (the crystallographic orientation of the topmost TiN layer is thickness-dependent) and MgO(100) single crystals. The chosen experimental procedure led to a controlled composition and preferential orientation of the films. The type of substrate plays an important role in the texture of the sputtered Ni-Ti films, and according to the ER results, the distinct crystallographic orientations of the Ni-Ti films influence their phase transformation characteristics.

Relevance: 20.00%

Abstract:

This paper presents a layered Smart Grid architecture that enhances security and reliability, with the ability to act to maintain and correct infrastructure components without affecting client service. The architecture is grounded in sound software engineering practice and builds on standards developed over the years. The layered Smart Grid offers a base tool that eases the implementation of new standards and energy policies. A test methodology for implementing ZigBee technology in the Smart Grid is presented, along with field tests using ZigBee technology to control the new Smart Grid architecture.

Relevance: 20.00%

Abstract:

Micro-abrasion wear tests with a ball-cratering configuration are widely used. Sources of variability have been studied by different authors, and test conditions are parameterized by the BS EN 1071-6:2007 standard, which specifies silicon carbide as the abrasive. However, the use of other abrasives is possible and allowed. In this work, ball-cratering wear tests were performed using four different abrasive particles of three dissimilar materials: diamond, alumina and silicon carbide. Tests were carried out under the same conditions on a steel plate provided with a hard TiB2 coating. For each abrasive, five different test durations were used, allowing the initial wear phenomena to be understood. The composition and shape of the abrasive particles were investigated by SEM and EDS. Scar areas were observed by optical and electron microscopy in order to understand the wear effects caused by each abrasive. Scar geometry and grooves were analyzed and compared, and the wear coefficient was calculated for each situation. It was observed that diamond particles produce well-defined, circular wear scars. Different silicon carbide particles presented dissimilar results as a consequence of their distinct particle shapes and size distributions.
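For reference, wear coefficients in ball-cratering tests are conventionally derived from the crater geometry: for a crater of diameter b left by a ball of radius R (with b ≪ R), the removed volume is V = πb⁴/(64R), and the Archard-type coefficient is k = V/(S·N) for sliding distance S and normal load N. A small sketch, with hypothetical numbers:

```python
# Standard ball-cratering relations: V = pi*b**4/(64*R) for the crater
# volume, k = V/(S*N) for the wear coefficient. Values below are invented.
import math

def wear_coefficient(crater_diameter, ball_radius, sliding_distance, load):
    volume = math.pi * crater_diameter**4 / (64 * ball_radius)  # m^3
    return volume / (sliding_distance * load)                   # m^3/(N*m)

# Example: 1.2 mm crater, 12.7 mm ball radius, 50 m sliding, 0.5 N load.
k = wear_coefficient(1.2e-3, 12.7e-3, 50.0, 0.5)
print(f"k = {k:.2e} m^3 N^-1 m^-1")
```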

Relevance: 20.00%

Abstract:

Dissertation presented at the Faculdade de Ciências e Tecnologia, Universidade Nova de Lisboa, to obtain the Master's degree in Informatics Engineering

Relevance: 20.00%

Abstract:

Nanocrystalline diamond (NCD) coatings offer an excellent alternative for tribological applications, preserving most of the intrinsic mechanical properties of polycrystalline CVD diamond while adding extreme surface smoothness. Silicon nitride (Si3N4) ceramics are reported to guarantee high adhesion to CVD microcrystalline diamond coatings, but NCD adhesion to Si3N4 is not yet well established. Micro-abrasion tests are appropriate for evaluating the abrasive wear resistance of a given surface, but they also provide information on the thin film/substrate interfacial resistance, i.e., film adhesion. In this study, a comparison is made between the behaviour of NCD films deposited by hot-filament chemical vapour deposition (HFCVD) and microwave plasma assisted chemical vapour deposition (MPCVD). Silicon nitride (Si3N4) ceramic discs were selected as substrates. The NCD depositions by HFCVD and MPCVD were carried out using H2–CH4 and H2–CH4–N2 gas mixtures, respectively. An adequate set of growth parameters was chosen for each CVD technique, resulting in NCD films with a final thickness of 5 µm. A micro-abrasion tribometer was used, with 3 µm diamond grit as the abrasive slurry element. Experiments were carried out at a constant rotational speed (80 r.p.m.), varying the applied load in the range of 0.25–0.75 N. The wear rate for MPCVD NCD (3.7 ± 0.8 × 10⁻⁵ µm³ N⁻¹ m⁻¹) is compatible with those reported for microcrystalline CVD diamond. The HFCVD films displayed poorer adhesion to the Si3N4 ceramic substrates than the MPCVD ones. However, the HFCVD films showed better wear resistance, a result of their higher crystallinity according to the UV Raman data, despite evidencing premature adhesion failure.

Relevance: 20.00%

Abstract:

Ball-rotating micro-abrasion tribometers are commonly used to carry out wear tests on thin hard coatings. In these tests, different kinds of abrasives are used, such as alumina (Al2O3), silicon carbide (SiC) or diamond, and for each kind of abrasive several particle sizes are available. Some studies have evaluated the influence of abrasive particle shape on the micro-abrasion process; nevertheless, particle size has not been well correlated with the amount of material removed and the wear mechanisms. In this work, a slurry of SiC abrasive in distilled water was used, with three different particle sizes. The initial surface topography was assessed by atomic force microscopy (AFM), and coating hardness measurements were performed with a micro-hardness tester. A TiAlSiN thin hard film was used to evaluate the wear behaviour. The micro-abrasion tests were carried out with several different durations. The abrasive effect of the SiC particles was observed by scanning electron microscopy (SEM) both on the film (hard material) and, after coating perforation, on the substrate (soft material). Wear grooves and material removal rates were compared and discussed.
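A related piece of standard ball-cratering geometry: once the coating is perforated, the outer (coating) and inner (substrate) crater diameters give the local coating thickness via t = (b² − a²)/(8R), valid for craters much smaller than the ball. A sketch with invented values:

```python
# Standard calotest relation: coating thickness from the outer (b) and
# inner (a) crater diameters and the ball radius R, for b, a << R.
# The numbers in the example are invented.
def coating_thickness(outer_d, inner_d, ball_radius):
    return (outer_d**2 - inner_d**2) / (8 * ball_radius)

# Example: 620 um outer and 450 um inner diameters, 12.7 mm ball radius.
t = coating_thickness(620e-6, 450e-6, 12.7e-3)
print(f"t = {t * 1e6:.2f} um")  # ~1.79 um
```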