851 results for cache de radiance
Abstract:
The present in vitro study aimed to compare the friction produced by conventional metal brackets and by aesthetic brackets with metal and glass slots when wires of round and rectangular cross-section and different dimensions were inserted, simulating situations with no displacement, with 2 mm displacement, and with simulation of the binding effect at 3°. A total of 125 brackets from 5 commercial brands were used (Roth Standard, Composite, Elation, Invu and Radiance), with 25 brackets for each specimen group. For the laboratory tests, 5 brackets were bonded so as to simulate an upper right hemi-arch (central and lateral incisors, canine, first and second premolars) on a bracket-positioning device coupled to an EMIC DL2000 universal testing machine. Wires of 0.016, 0.018 and 0.017 x 0.025 NiTi were used for the tests without displacement and with 2 mm displacement, and wires of 0.017 x 0.025, 0.019 x 0.025 and 0.021 x 0.025 CrNi cross-section were used for the tests at zero degrees and at 3° of angulation. Analysis of Variance and Tukey's test (p < 0.05) were used to compare the brackets across the different wires and angulations. The results showed that, in the tests without displacement, the aesthetic polycarbonate bracket Composite produced the lowest friction with all wires evaluated, while the highest friction in all combinations tested was observed for the aesthetic monocrystalline ceramic bracket Radiance relative to the other brackets. In the tests with 2 mm displacement and with simulation of the 3° binding effect, the results were similar to those observed in the tests without displacement. However, there was a statistically significant difference among the five specimen groups, with friction increasing, in order, from Composite to Roth Standard, Elation, Invu and Radiance. It was concluded that frictional resistance was influenced by bracket composition, wire dimensions, and the type of test performed (2 mm displacement and 3° angulation). In addition, the insertion of a metal slot into the aesthetic polycarbonate bracket Elation reduced friction in a statistically significant way, although this friction remained higher than that produced by a conventional metal bracket. Finally, the incorporation of a glass slot into the aesthetic polycrystalline ceramic bracket Invu provided a smoother surface, reducing the irregularities and imperfections present in the slot and consequently producing a statistically significant reduction in friction, showing that the modification of its slot favoured sliding and effectively reduced frictional resistance.
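The between-bracket comparison described above relies on one-way Analysis of Variance followed by Tukey's test (p < 0.05). A minimal sketch of that kind of analysis in Python, assuming friction readings collected per bracket brand; the numbers below are placeholders, not data from the study:

```python
# Minimal sketch: one-way ANOVA followed by Tukey's HSD post-hoc test,
# as used to compare friction among bracket brands. Values are placeholders.
import numpy as np
from scipy import stats
from statsmodels.stats.multicomp import pairwise_tukeyhsd

# Hypothetical friction readings per bracket brand for one wire/angulation combination.
friction = {
    "Composite":     [0.42, 0.45, 0.40, 0.44, 0.43],
    "Roth Standard": [0.55, 0.57, 0.53, 0.56, 0.54],
    "Elation":       [0.61, 0.63, 0.60, 0.62, 0.64],
    "Invu":          [0.70, 0.72, 0.69, 0.71, 0.73],
    "Radiance":      [0.88, 0.90, 0.87, 0.91, 0.89],
}

# One-way ANOVA across the five groups.
f_stat, p_value = stats.f_oneway(*friction.values())
print(f"ANOVA: F = {f_stat:.2f}, p = {p_value:.4f}")

# Tukey's HSD test for pairwise differences at alpha = 0.05.
values = np.concatenate(list(friction.values()))
labels = np.repeat(list(friction.keys()), [len(v) for v in friction.values()])
print(pairwise_tukeyhsd(values, labels, alpha=0.05))
```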
Abstract:
Chorismate mutase is one of the essential enzymes in the shikimate pathway and is key to the survival of the organism Mycobacterium tuberculosis. The X-ray crystal structure of this enzyme from Mycobacterium tuberculosis was manipulated to prepare an initial set of in silico protein models of the active site. Known inhibitors of the enzyme were docked into the active site using the flexible ligand / flexible active site side chains approach implemented in CAChe Worksystem (Fujitsu Ltd). The resulting complexes were refined by molecular dynamics studies in explicit water using Amber 9. This yielded a further set of protein models that were used for additional rounds of ligand docking. A binding hypothesis was established for the enzyme and used to screen a database of commercially available drug-like compounds. From these results, new potential ligands were designed that fitted appropriately into the active site and matched the functional groups and binding motifs found therein. Some of these compounds and close analogues were then synthesized and submitted for biological evaluation. As a separate part of this thesis, analogues of a very active anti-tuberculosis pyridylcarboxamidrazone were also prepared. This was carried out by adding substituents to, and deleting them from, the lead compound, thereby preparing heteroaryl carboxamidrazone derivatives and related compounds. All these compounds were initially evaluated for biological activity against various Gram-positive organisms and then sent to the TAACF (USA) for screening against Mycobacterium tuberculosis. Some of the new compounds proved to be at least as potent as the original lead compound but less toxic.
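The screening step works through a database of commercially available drug-like compounds. As a loosely related illustration of the kind of drug-likeness pre-filter often applied to such databases before docking, here is a minimal sketch using RDKit and Lipinski's rule of five; the library choice, thresholds, and example SMILES are assumptions, not the thesis's actual screening protocol:

```python
# Minimal sketch of a drug-likeness pre-filter (Lipinski's rule of five),
# the kind of screen commonly applied to a compound database before docking.
# RDKit and the SMILES strings below are illustrative assumptions.
from rdkit import Chem
from rdkit.Chem import Descriptors, Lipinski

def passes_rule_of_five(smiles: str) -> bool:
    mol = Chem.MolFromSmiles(smiles)
    if mol is None:
        return False
    return (Descriptors.MolWt(mol) <= 500
            and Descriptors.MolLogP(mol) <= 5
            and Lipinski.NumHDonors(mol) <= 5
            and Lipinski.NumHAcceptors(mol) <= 10)

database = ["CC(=O)Oc1ccccc1C(=O)O",  # aspirin, as a stand-in entry
            "c1ccccc1"]               # benzene, as a stand-in entry
candidates = [s for s in database if passes_rule_of_five(s)]
print(candidates)
```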
Abstract:
The purpose of this study was to determine whether there was an objective difference in reading between four commonly available lamps of varying spectral radiance for 13 subjects with age-related maculopathy (ARM) or non-exudative age-related macular degeneration (AMD) (logMAR visual acuity between 0.04 and 0.68). At a constant illuminance of 2000 lux, there was no interaction between the ARM and AMD subgroups and no statistically significant difference between the lamps (standard clear-envelope incandescent, daylight-simulation blue-tint-envelope incandescent, compact fluorescent and halogen incandescent) for any reading outcome measure (threshold print size p = 0.67, critical print size p = 0.74, acuity reserve p = 0.84 and mean reading rate p = 0.78). For lamps typically used in low-vision rehabilitation, a clinically significant effect of spectral radiance on reading for people with ARM or non-exudative AMD is unlikely. © 2007 The College of Optometrists.
Abstract:
Motivated by the increasing demand for, and challenges of, video streaming, in this thesis we investigate methods by which the quality of streamed video can be improved. We utilise overlay networks, built from deployed relay nodes, to produce path diversity, and show through analytical and simulation models in which environments path diversity can improve the packet loss probability. We take the simulation and analytical models further by implementing a real overlay network on top of PlanetLab, and show that when the network conditions remain constant the video quality received by the client can be improved. In addition, we show that in the environments where path diversity improves the video quality, forward error correction can be used to enhance the quality further. We then investigate the effect of the IEEE 802.11e Wireless LAN standard, with quality of service enabled, on the video quality received by a wireless client. We find that assigning all the video to a single class outperforms a cross-class assignment scheme proposed by other researchers. The issue of virtual contention at the access point is also examined. We then increase the intelligence of our relay nodes and enable them to cache video, in order to maximise the usefulness of these caches. For this purpose, we introduce a measure, called the PSNR profit, and present an optimal caching method for achieving the maximum PSNR profit at the relay nodes, where partitioned video contents are stored to provide enhanced quality for the client. We also show that with the optimised cache the degradation in the video quality received by the client is more graceful than with the non-optimised system when the network experiences packet loss or is congested.
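The claim that path diversity can improve the packet loss probability can be illustrated with a simple independence argument: if a packet is duplicated over two paths, it is lost only when both copies are lost. A minimal sketch under an assumed independent-loss model, which is a simplification of the thesis's analytical and simulation models:

```python
# Minimal sketch: packet loss probability with and without path diversity,
# assuming independent packet losses on each path (a simplifying assumption).

def single_path_loss(p: float) -> float:
    """Loss probability when every packet uses one path."""
    return p

def duplicated_loss(p1: float, p2: float) -> float:
    """Loss probability when each packet is duplicated over two independent paths."""
    return p1 * p2

if __name__ == "__main__":
    p1, p2 = 0.05, 0.08  # hypothetical per-path loss rates
    print(f"single path:     {single_path_loss(p1):.4f}")
    print(f"two-path copies: {duplicated_loss(p1, p2):.4f}")
```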
Abstract:
This dissertation studies the caching of queries and how to cache them efficiently, so that retrieving previously accessed data does not need any intermediary nodes between the data-source peer and the querying peer in a super-peer P2P network. A precise algorithm was devised that demonstrated how queries can be deconstructed to provide greater flexibility for reusing their constituent elements. It showed how subsequent queries can make use of more than one previous query, and of any part of those queries, to re-establish direct data communication with one or more source peers that have supplied data previously. In effect, a new query can search and exploit the entire cached list of queries to construct the list of data locations it requires that might match any locations previously accessed. The new method increases the likelihood of repeat queries being able to reuse earlier queries and provides a viable way of bypassing shared data indexes in structured networks. It could also increase the efficiency of unstructured networks by reducing traffic and the propensity for network flooding. In addition, a performance evaluation method for predicting query routing performance using a UML sequence diagram is introduced. This new method of performance evaluation provides designers with information about when it is most beneficial to use caching and how the peer connections can optimize its exploitation.
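A minimal sketch of the idea of reusing cached query fragments to resolve data locations without going back through the shared index: the decomposition into per-term fragments and the data structures below are assumptions for illustration, not the dissertation's actual algorithm:

```python
# Minimal sketch: a query cache that maps query fragments (here, single terms)
# to the source peers that previously answered them. A new query is split into
# fragments; any fragment already cached resolves its peers locally, and only
# uncached fragments would need the super-peer index. Illustrative only.
from typing import Dict, List, Set, Tuple

class QueryCache:
    def __init__(self) -> None:
        self._fragments: Dict[str, Set[str]] = {}  # fragment -> peers seen before

    def record(self, query: str, peers: Set[str]) -> None:
        for fragment in query.split():
            self._fragments.setdefault(fragment, set()).update(peers)

    def resolve(self, query: str) -> Tuple[Set[str], List[str]]:
        peers: Set[str] = set()
        missing: List[str] = []
        for fragment in query.split():
            if fragment in self._fragments:
                peers |= self._fragments[fragment]
            else:
                missing.append(fragment)  # would be routed via the super-peer index
        return peers, missing

cache = QueryCache()
cache.record("weather london", {"peerA", "peerB"})
print(cache.resolve("weather paris"))  # reuses 'weather'; 'paris' still needs the index
```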
Abstract:
Purpose: To optimize anterior eye fluorescein viewing and image capture. Design: Prospective experimental investigation. Methods: The spectral radiance of the blue illumination of ten different slit-lamp models and the spectral transmission of three barrier filters were measured. Optimal clinical instillation of fluorescein was evaluated by comparing four different methods of instilling fluorescein in 10 subjects. Two methods used a floret, and two used minims of different concentration. The resulting fluorescence was evaluated for quenching effects and efficiency over time. Results: The spectral radiance of the blue illumination typically peaked at 460 nm. Comparison between three slit-lamps of the same model showed a similar spectral radiance distribution. Of the slit-lamps examined, 8.3% to 50.6% of the illumination output was optimal for >80% fluorescein excitation, and 1.2% to 23.5% of the illumination overlapped with that emitted by the fluorophore. The barrier filters had an average cut-off at 510 to 520 nm. Quenching was observed for all methods of fluorescein instillation. The moistened floret and the 1% minim reached a useful level of fluorescence in ∼20 s on average (∼2.5× faster than the saturated floret and 2% minim), and this lasted for ∼160 s. Conclusions: Most slit-lamps' blue light and yellow barrier filters are not optimal for fluorescein viewing and capture. Instillation of fluorescein using a moistened floret or 1% minim seems most clinically appropriate, as lower quantities and concentrations of fluorescein improve the efficiency of clinical examination. © 2006 Elsevier Inc. All rights reserved.
Abstract:
One of the main problems of corporate information systems is the precise evaluation of the speed of transactions and the speed of producing reports. The core of the problem lies in the DBMS that is used. Most DBMSs that are oriented towards high transaction performance and reliability do not give fast access to analytical and summarized data, and vice versa. It is quite difficult to estimate which class of database to use. The author of the article gives a concise overview of the problem and a possible way to solve it.
Abstract:
DBMSs (database management systems) still have a very high price for small and medium-sized enterprises in Bulgaria. Desktop versions are free, but they cannot function in a multi-user environment. We will try to build an application server that opens a desktop version of a DBMS to many users. Thus, this approach will be appropriate for client-server applications. The author of the article gives a concise overview of the problem and a possible solution.
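A minimal sketch of the application-server idea: a small server process owns the single-user desktop database and serializes queries from many network clients. SQLite stands in for the desktop DBMS and the one-statement-per-line wire format is an assumption for illustration, not the article's design:

```python
# Minimal sketch: a tiny application server that owns a single-user desktop
# database (SQLite here as a stand-in) and serves many network clients by
# serializing their queries through one connection. Illustrative only.
import socketserver
import sqlite3
import threading

DB_PATH = "shop.db"           # hypothetical desktop database file
db_lock = threading.Lock()    # serialize access to the single-user DBMS
db = sqlite3.connect(DB_PATH, check_same_thread=False)

class QueryHandler(socketserver.StreamRequestHandler):
    def handle(self) -> None:
        query = self.rfile.readline().decode().strip()  # one SQL statement per line
        with db_lock:
            try:
                rows = db.execute(query).fetchall()
                db.commit()
                reply = repr(rows)
            except sqlite3.Error as exc:
                reply = f"error: {exc}"
        self.wfile.write((reply + "\n").encode())

if __name__ == "__main__":
    with socketserver.ThreadingTCPServer(("0.0.0.0", 5433), QueryHandler) as server:
        server.serve_forever()
```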
Abstract:
Storage is a central part of computing. Driven by an exponentially increasing content generation rate and a widening performance gap between memory and secondary storage, researchers are in a perennial quest to push for further innovation. This has resulted in novel ways to "squeeze" more capacity and performance out of current and emerging storage technology. Adding intelligence and leveraging new types of storage devices has opened the door to a whole new class of optimizations to save cost, improve performance, and reduce energy consumption. In this dissertation, we first develop, analyze, and evaluate three storage extensions. Our first extension tracks application access patterns and writes data in the way individual applications most commonly access it, to benefit from the sequential throughput of disks. Our second extension uses a lower-power flash device as a cache to save energy and turn off the disk during idle periods. Our third extension is designed to leverage the characteristics of both disks and solid state devices by placing data in the most appropriate device to improve performance and save power. In developing these systems, we learned that extending the storage stack is a complex process. Implementing new ideas incurs a prolonged and cumbersome development process and requires developers to have advanced knowledge of the entire system to ensure that extensions accomplish their goal without compromising data recoverability. Furthermore, storage administrators are often reluctant to deploy specific storage extensions without understanding how they interact with other extensions and whether the extension ultimately achieves the intended goal. We address these challenges by using a combination of approaches. First, we simplify the storage extension development process with system-level infrastructure that implements core functionality commonly needed for storage extension development. Second, we develop a formal theory to assist administrators in deploying storage extensions while guaranteeing that the given high-level goals are satisfied. There are, however, some cases for which our theory is inconclusive. For such scenarios we present an experimental methodology that allows administrators to pick the extension that performs best for a given workload. Our evaluation demonstrates the benefits of both the infrastructure and the formal theory.
Abstract:
Memory (cache, DRAM, and disk) is in charge of providing data and instructions to a computer's processor. In order to maximize performance, the speeds of the memory and the processor should be equal. However, using memory that always matches the speed of the processor is prohibitively expensive. Computer hardware designers have managed to drastically lower the cost of the system through the use of memory caches, by sacrificing some performance. A cache is a small piece of fast memory that stores popular data so it can be accessed faster. Modern computers have evolved into a hierarchy of caches, where a memory level is the cache for a larger and slower memory level immediately below it. Thus, by using caches, manufacturers are able to store terabytes of data at the cost of the cheapest memory while achieving speeds close to that of the fastest one. The most important decision about managing a cache is what data to store in it. Failing to make good decisions can lead to performance overheads and over-provisioning. Surprisingly, caches choose data to store based on policies that have not changed in principle for decades. However, computing paradigms have changed radically, leading to two noticeably different trends. First, caches are now consolidated across hundreds to even thousands of processes. And second, caching is being employed at new levels of the storage hierarchy due to the availability of high-performance flash-based persistent media. This brings four problems. First, as the workloads sharing a cache increase, it is more likely that they contain duplicated data. Second, consolidation creates contention for caches, and if not managed carefully, it translates to wasted space and sub-optimal performance. Third, as contended caches are shared by more workloads, administrators need to carefully estimate specific per-workload requirements across the entire memory hierarchy in order to meet per-workload performance goals. And finally, current cache write policies are unable to simultaneously provide performance and consistency guarantees for the new levels of the storage hierarchy. We addressed these problems by modeling their impact and by proposing solutions for each of them. First, we measured and modeled the amount of duplication at the buffer cache level and contention in real production systems. Second, we created a unified model of workload cache usage under contention, to be used by administrators for provisioning or by process schedulers to decide which processes to run together. Third, we proposed methods for removing cache duplication and eliminating space wasted because of contention. And finally, we proposed a technique to improve the consistency guarantees of write-back caches while preserving their performance benefits.
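One of the problems described above is duplicated data in a consolidated cache. A minimal sketch of content-addressed deduplication in a block cache, where multiple (workload, block) keys can share a single cached copy identified by a content hash; the structure is an illustrative assumption, not the dissertation's actual design:

```python
# Minimal sketch: a block cache that deduplicates identical content across
# workloads. Each (workload, block) key maps to a content hash; the payload is
# stored once per distinct hash. Illustrative assumption, with no eviction policy.
import hashlib
from typing import Dict, Optional, Tuple

class DedupCache:
    def __init__(self) -> None:
        self._index: Dict[Tuple[str, int], str] = {}   # (workload, block#) -> hash
        self._store: Dict[str, bytes] = {}              # hash -> single cached copy

    def put(self, workload: str, block: int, data: bytes) -> None:
        digest = hashlib.sha256(data).hexdigest()
        self._index[(workload, block)] = digest
        self._store.setdefault(digest, data)            # store one copy per content

    def get(self, workload: str, block: int) -> Optional[bytes]:
        digest = self._index.get((workload, block))
        return self._store.get(digest) if digest else None

    def unique_bytes(self) -> int:
        return sum(len(d) for d in self._store.values())

cache = DedupCache()
cache.put("vm1", 0, b"same OS page")
cache.put("vm2", 7, b"same OS page")   # second workload, identical content
print(cache.unique_bytes())            # 12 bytes stored, not 24
```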
Abstract:
Electrical energy is an essential resource for the modern world. Unfortunately, its price has almost doubled in the last decade. Furthermore, energy production is currently one of the primary sources of pollution. These concerns are becoming more important in data-centers. As more computational power is required to serve hundreds of millions of users, bigger data-centers are becoming necessary, and this results in higher electrical energy consumption. Of all the energy used in data-centers, including power distribution units, lights, and cooling, computer hardware consumes as much as 80%. Consequently, there is an opportunity to make data-centers more energy efficient by designing systems with a lower energy footprint. Consuming less energy is critical not only in data-centers. It is also important in mobile devices, where battery-based energy is a scarce resource. Reducing the energy consumption of these devices will allow them to last longer and recharge less frequently. Saving energy in computer systems is a challenging problem. Improving a system's energy efficiency usually comes at the cost of compromises in other areas, such as performance or reliability. In the case of secondary storage, for example, spinning down the disks to save energy can incur high latencies if they are accessed while in this state. The challenge is to increase the energy efficiency while keeping the system as reliable and responsive as before. This thesis tackles the problem of improving energy efficiency in existing systems while reducing the impact on performance. First, we propose a new technique to achieve fine-grained energy proportionality in multi-disk systems; second, we design and implement an energy-efficient cache system using flash memory that increases disk idleness to save energy; finally, we identify and explore solutions for the page fetch-before-update problem in caching systems that can (a) better control I/O traffic to secondary storage and (b) provide critical performance improvements for energy-efficient systems.
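A minimal sketch of the flash-cache idea: reads and writes that hit the flash cache do not touch the disk, letting idle periods grow long enough to spin the disk down. The threshold, the write-absorption behaviour, and the API below are assumptions for illustration, not the thesis's implementation:

```python
# Minimal sketch: a flash cache in front of a disk. Hits are served (and new
# writes absorbed) by flash, so the disk stays idle; after an idle threshold the
# disk is "spun down". Thresholds and structure are illustrative assumptions.
import time
from typing import Dict

SPIN_DOWN_AFTER_S = 30.0          # hypothetical idleness threshold

class EnergyAwareStore:
    def __init__(self) -> None:
        self.flash: Dict[int, bytes] = {}
        self.disk: Dict[int, bytes] = {}
        self.disk_spinning = True
        self.last_disk_access = time.monotonic()

    def _touch_disk(self) -> None:
        if not self.disk_spinning:
            self.disk_spinning = True            # spin-up costs energy and latency
        self.last_disk_access = time.monotonic()

    def read(self, block: int) -> bytes:
        if block in self.flash:                  # flash hit: disk stays idle
            return self.flash[block]
        self._touch_disk()
        data = self.disk.get(block, b"")
        self.flash[block] = data                 # populate the flash cache
        return data

    def write(self, block: int, data: bytes) -> None:
        self.flash[block] = data                 # absorb writes in flash

    def maybe_spin_down(self) -> None:
        if self.disk_spinning and time.monotonic() - self.last_disk_access > SPIN_DOWN_AFTER_S:
            self.disk_spinning = False           # save energy during idleness
```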
Abstract:
The deployment of wireless communications, coupled with the popularity of portable devices, has led to significant research in the area of mobile data caching. Prior research has focused on the development of solutions that allow applications to run in wireless environments using proxy-based techniques. Most of these approaches are semantic based and do not provide adequate support for representing the context of a user (i.e., the interpreted human intention). Although the context may be treated implicitly, it is still crucial to data management. In order to address this challenge, this dissertation focuses on two characteristics: how to predict (i) the future location of the user and (ii) the locations of the fetched data for which the queried data item has valid answers. Using this approach, more complete information about the dynamics of an application environment is maintained. The contribution of this dissertation is a novel data caching mechanism for pervasive computing environments that can adapt dynamically to a mobile user's context. In this dissertation, we design and develop a conceptual model and context-aware protocols for wireless data caching management. Our replacement policy uses the validity of the data fetched from the server and the neighboring locations to decide which of the cache entries is less likely to be needed in the future, and is therefore a good candidate for eviction when cache space is needed. The context-aware prefetching algorithm exploits the query context to effectively guide the prefetching process. The query context is defined using a mobile user's movement pattern and requested information context. Numerical results and simulations show that the proposed prefetching and replacement policies significantly outperform conventional ones. Anticipated applications of these solutions include biomedical engineering, tele-health, medical information systems and business.
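A minimal sketch of the kind of replacement decision described above: each cache entry carries the spatial validity region of its answer, and the entry least likely to be useful given the user's predicted next location is evicted first. The scoring function and data layout are assumptions for illustration, not the dissertation's actual policy:

```python
# Minimal sketch: location-aware cache replacement. Each entry stores the centre
# and radius of the region where its answer is valid; the victim is the entry
# whose validity region lies farthest from the user's predicted next location.
# Scoring and structure are illustrative assumptions.
import math
from dataclasses import dataclass
from typing import Dict, Tuple

@dataclass
class Entry:
    answer: str
    centre: Tuple[float, float]   # where the cached answer is valid
    radius: float                 # validity radius

class LocationAwareCache:
    def __init__(self, capacity: int) -> None:
        self.capacity = capacity
        self.entries: Dict[str, Entry] = {}

    def _uselessness(self, e: Entry, predicted: Tuple[float, float]) -> float:
        # Distance from the predicted location to the edge of the validity region.
        return max(0.0, math.dist(e.centre, predicted) - e.radius)

    def put(self, key: str, entry: Entry, predicted: Tuple[float, float]) -> None:
        if len(self.entries) >= self.capacity and key not in self.entries:
            victim = max(self.entries,
                         key=lambda k: self._uselessness(self.entries[k], predicted))
            del self.entries[victim]
        self.entries[key] = entry

cache = LocationAwareCache(capacity=2)
cache.put("cafes", Entry("list A", (0.0, 0.0), 1.0), predicted=(0.2, 0.1))
cache.put("fuel",  Entry("list B", (5.0, 5.0), 0.5), predicted=(0.2, 0.1))
cache.put("atms",  Entry("list C", (0.1, 0.1), 1.0), predicted=(0.2, 0.1))  # evicts "fuel"
```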
Abstract:
This dissertation studies context-aware applications and the algorithms proposed for the client side. The required context-aware infrastructure is discussed in depth to illustrate that such an infrastructure collects the mobile user's context information, registers service providers, derives the mobile user's current context, distributes user context among context-aware applications, and provides tailored services. The proposed approach tries to strike a balance between the context server and the mobile devices. Context acquisition is centralized at the server to ensure the reusability of context information among mobile devices, while context reasoning remains at the application level. Hence, centralized context acquisition combined with distributed context reasoning is viewed as the better solution overall. The context-aware search application is designed and implemented at the server side. A new algorithm is proposed that takes the user context profiles into consideration. By promoting feedback on the dynamics of the system, any prior user selection is saved for further analysis, so that it may contribute to improving the results of a subsequent search. On the basis of these developments at the server side, various solutions are provided at the client side. A proxy software-based component is set up for the purpose of data collection. This research endorses the belief that the proxy at the client side should contain the context reasoning component; the implementation of such a component supports this belief in that context-aware applications are able to derive the user context profiles. Furthermore, a context cache scheme is implemented to manage the cache on the client device in order to minimize processing requirements and other resources (bandwidth, CPU cycles, power). Java and MySQL platforms are used to implement the proposed architecture and to test scenarios derived from a user's daily activities. To meet the practical demands of a testing environment without the heavy cost of establishing such a comprehensive infrastructure, a software simulation using the free Yahoo search API is provided as a means to evaluate the effectiveness of the design approach as realistically as possible. The integration of the Yahoo search engine into the context-aware architecture shows how a context-aware application can meet user demands for tailored services and products in and around the user's environment. The test results show that the overall design is highly effective, providing new features and enriching the mobile user's experience through a broad scope of potential applications.
Abstract:
The aim of this work is to evaluate the SEE sensitivity of a multi-core processor that implements ECC and parity in its cache memories. Two different application scenarios are studied. The first one configures the multi-core in Asymmetric Multi-Processing mode running a memory-bound application, whereas the second one uses the Symmetric Multi-Processing mode running a CPU-bound application. The experiments were validated through radiation ground testing performed with 14 MeV neutrons on the Freescale P2041 multi-core manufactured in 45 nm SOI technology. A deep analysis of the observed errors in the cache memories was carried out in order to reveal vulnerabilities in the cache protection mechanisms. Critical zones such as tag addresses were affected during the experiments. In addition, the results show that the sensitivity strongly depends on the application and on the multi-processing mode used.
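The cache protection mechanisms mentioned above combine parity (single-error detection) and ECC (single-error correction). A minimal sketch of the difference, using even parity over a data word and a textbook Hamming(7,4) code; this is an illustrative encoding, not the P2041's actual implementation:

```python
# Minimal sketch: even parity (detects a single bit flip) vs. Hamming(7,4) ECC
# (locates and corrects it). Textbook encodings, illustrating the difference
# between parity- and ECC-protected cache arrays.
from typing import List

def even_parity(bits: List[int]) -> int:
    return sum(bits) % 2                     # detection only: flags an odd number of flips

def hamming74_encode(d: List[int]) -> List[int]:
    d1, d2, d3, d4 = d
    p1 = d1 ^ d2 ^ d4
    p2 = d1 ^ d3 ^ d4
    p3 = d2 ^ d3 ^ d4
    return [p1, p2, d1, p3, d2, d3, d4]      # codeword positions 1..7

def hamming74_correct(code: List[int]) -> List[int]:
    c = code[:]
    s1 = c[0] ^ c[2] ^ c[4] ^ c[6]           # checks positions 1, 3, 5, 7
    s2 = c[1] ^ c[2] ^ c[5] ^ c[6]           # checks positions 2, 3, 6, 7
    s3 = c[3] ^ c[4] ^ c[5] ^ c[6]           # checks positions 4, 5, 6, 7
    pos = s1 + 2 * s2 + 4 * s3               # 0 means no single-bit error
    if pos:
        c[pos - 1] ^= 1                      # correct the flipped bit
    return [c[2], c[4], c[5], c[6]]          # recovered data bits

word = [1, 0, 1, 1]
stored_parity = even_parity(word)

code = hamming74_encode(word)
code[5] ^= 1                                  # inject a single upset (an "SEE")
assert hamming74_correct(code) == word        # ECC locates and corrects the flip

flipped = [1, 0, 0, 1]                        # the same upset in a parity-only array
print(even_parity(flipped) != stored_parity)  # True: parity detects it but cannot say where
```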
Abstract:
Thesis digitized by the Direction des bibliothèques de l'Université de Montréal.