851 results for "radiance cache"
Abstract:
MIPAS observations of temperature, water vapor, and ozone in October 2009, as derived with the scientific level-2 processor run by the Karlsruhe Institute of Technology (KIT) Institute for Meteorology and Climate Research (IMK) and the CSIC Instituto de Astrofísica de Andalucía (IAA) and retrieved from version 4.67 level-1b data, have been compared to co-located observations obtained during the MOHAVE-2009 field campaign at the Table Mountain Facility near Pasadena, California. The MIPAS measurements were validated with respect to potential profile biases and to their precision estimates. The MOHAVE-2009 campaign provided atmospheric profiles of temperature, water vapor/relative humidity, and ozone from the ground to the mesosphere, measured by a suite of instruments including radiosondes, ozonesondes, frost point hygrometers, lidars, microwave radiometers, and Fourier transform infrared (FTIR) spectrometers. For MIPAS temperatures (version V4O_T_204), no significant bias was detected in the middle stratosphere; between 22 km and the tropopause, MIPAS temperatures were found to be biased low by up to 2 K, while below the tropopause they were biased high by the same amount. These findings confirm earlier comparisons of MIPAS temperatures to ECMWF data, which revealed similar differences. From 12 km up to 45 km, MIPAS water vapor (version V4O_H2O_203) agrees to within 10% with the data of all correlative instruments. The well-known dry bias of MIPAS water vapor above 50 km, due to the neglect of non-LTE effects in the current retrievals, has been confirmed. Some instruments indicate that MIPAS water vapor might be biased high by 20 to 40% around 10 km (or 5 km below the tropopause), but a consistent picture could not be derived from all comparisons. MIPAS ozone (version V4O_O3_202) has a high bias of up to +0.9 ppmv around 37 km, which is due to an unidentified continuum-like radiance contribution. No further significant biases were detected. Cross-comparisons to co-located observations by other satellite instruments (Aura/MLS, ACE-FTS, AIRS) are provided as well.
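The bias assessment described in this abstract can be illustrated with a minimal sketch: compute the mean difference between co-located satellite and reference profiles at one altitude level and compare its magnitude to the combined precision estimate. All numbers and names below are invented for illustration, not MOHAVE-2009 data.

```python
# Hypothetical sketch of a profile bias check, as used in validation
# studies: compare co-located satellite and reference measurements and
# test whether the mean difference exceeds the combined precision.

def mean_bias(satellite, reference):
    """Mean difference (satellite - reference) over co-located matches."""
    diffs = [s - r for s, r in zip(satellite, reference)]
    return sum(diffs) / len(diffs)

def significant_bias(satellite, reference, combined_precision):
    """True if the mean bias magnitude exceeds the combined precision."""
    return abs(mean_bias(satellite, reference)) > combined_precision

# Invented temperatures (K) at one level from four co-located matches.
mipas_t = [215.1, 216.3, 214.8, 215.9]
lidar_t = [217.0, 218.1, 216.9, 217.5]

print(round(mean_bias(mipas_t, lidar_t), 2))   # negative -> low bias
```

A real validation would repeat this level by level to produce a bias profile, which is how statements like "biased low by up to 2 K below 22 km" are derived.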
Abstract:
We present the cacher and CodeDepends packages for R, which provide tools for (1) caching and analyzing the code for statistical analyses and (2) distributing these analyses to others in an efficient manner over the web. The cacher package takes objects created by evaluating R expressions and stores them in key-value databases. These databases of cached objects can subsequently be assembled into “cache packages” for distribution over the web. The cacher package also provides tools to help readers examine the data and code in a statistical analysis and reproduce, modify, or improve upon the results. In addition, readers can easily conduct alternate analyses of the data. The CodeDepends package provides complementary tools for analyzing and visualizing the code for a statistical analysis and this functionality has been integrated into the cacher package. In this chapter we describe the cacher and CodeDepends packages and provide examples of how they can be used for reproducible research.
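The caching idea behind cacher can be sketched as follows (in Python rather than R, and not cacher's actual implementation): evaluate an expression once, store the serialized result in a key-value database under a hash of its source text, and serve later evaluations from the cache.

```python
import hashlib
import pickle

class ExpressionCache:
    """Minimal sketch of expression-level caching in a key-value store.

    cacher uses on-disk key-value databases; a dict stands in here.
    """

    def __init__(self):
        self.db = {}

    def key(self, source):
        # The expression's source text determines its cache key.
        return hashlib.sha1(source.encode()).hexdigest()

    def evaluate(self, source, env):
        k = self.key(source)
        if k not in self.db:
            # First evaluation: compute and serialize the result.
            self.db[k] = pickle.dumps(eval(source, env))
        # Later evaluations deserialize the cached value instead.
        return pickle.loads(self.db[k])

cache = ExpressionCache()
print(cache.evaluate("sum(range(10))", {}))   # computed: 45
print(cache.evaluate("sum(range(10))", {}))   # served from cache: 45
```

Serializing the cached objects is what makes such a database distributable as a "cache package": a reader can load the stored values without re-running the analysis.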
Abstract:
The stashR package (a Set of Tools for Administering SHared Repositories) for R implements a simple key-value style database where character string keys are associated with data values. The key-value databases can be either stored locally on the user's computer or accessed remotely via the Internet. Methods specific to the stashR package allow users to share data repositories or access previously created remote data repositories. In particular, methods are available for the S4 classes localDB and remoteDB to insert, retrieve, or delete data from the database as well as to synchronize local copies of the data to the remote version of the database. Users efficiently access information from a remote database by retrieving only the data files indexed by user-specified keys and caching this data in a local copy of the remote database. The local and remote counterparts of the stashR package offer the potential to enhance reproducible research by allowing users of Sweave to cache their R computations for a research paper in a localDB database. This database can then be stored on the Internet as a remoteDB database. When readers of the research paper wish to reproduce the computations involved in creating a specific figure or calculating a specific numeric value, they can access the remoteDB database and obtain the R objects involved in the computation.
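The local/remote access pattern described above can be sketched as follows; the class and method names are illustrative stand-ins, not stashR's actual S4 API.

```python
class RemoteDB:
    """Stand-in for a remote key-value repository."""

    def __init__(self, data):
        self.data = data
        self.fetches = 0          # simulated network round-trips

    def get(self, key):
        self.fetches += 1
        return self.data[key]

class LocalDB:
    """Local copy that fetches remote values only on first access."""

    def __init__(self, remote):
        self.remote = remote
        self.cache = {}

    def get(self, key):
        if key not in self.cache:
            # Retrieve only the data file indexed by this key.
            self.cache[key] = self.remote.get(key)
        return self.cache[key]

remote = RemoteDB({"fig1": [1, 2, 3], "fig2": [4, 5, 6]})
local = LocalDB(remote)
local.get("fig1")
local.get("fig1")                 # second read is served locally
print(remote.fetches)             # one round-trip, not two
```

This is the efficiency argument in the abstract: a reader reproducing one figure downloads only the objects that figure needs, and repeated reads cost nothing further.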
Abstract:
An important problem in computational biology is finding the longest common subsequence (LCS) of two nucleotide sequences. This paper examines the correctness and performance of a recently proposed parallel LCS algorithm that uses successor tables and pruning rules to construct a list of sets from which an LCS can be easily reconstructed. Counterexamples are given for two pruning rules that were given with the original algorithm. Because of these errors, performance measurements originally reported cannot be validated. The work presented here shows that speedup can be reliably achieved by an implementation in Unified Parallel C that runs on an Infiniband cluster. This performance is partly facilitated by exploiting the software cache of the MuPC runtime system. In addition, this implementation achieved speedup without bulk memory copy operations and the associated programming complexity of message passing.
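For reference, the textbook sequential dynamic-programming formulation of LCS can be written as below; note that the paper's parallel algorithm uses successor tables and pruning rules instead of this table-filling approach.

```python
def lcs(a, b):
    """Standard O(len(a) * len(b)) dynamic-programming LCS."""
    m, n = len(a), len(b)
    # dp[i][j] = length of an LCS of a[:i] and b[:j]
    dp = [[0] * (n + 1) for _ in range(m + 1)]
    for i in range(m):
        for j in range(n):
            if a[i] == b[j]:
                dp[i + 1][j + 1] = dp[i][j] + 1
            else:
                dp[i + 1][j + 1] = max(dp[i][j + 1], dp[i + 1][j])
    # Reconstruct one LCS by walking back through the table.
    out = []
    i, j = m, n
    while i and j:
        if a[i - 1] == b[j - 1]:
            out.append(a[i - 1])
            i -= 1
            j -= 1
        elif dp[i - 1][j] >= dp[i][j - 1]:
            i -= 1
        else:
            j -= 1
    return "".join(reversed(out))

print(lcs("GATTACA", "GCATGCU"))
```

The quadratic work in this formulation is exactly what motivates parallel variants for long nucleotide sequences.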
Abstract:
As the performance gap between microprocessors and memory continues to widen, main memory accesses incur long latencies that limit system performance. Previous studies show that main memory access streams contain significant locality and that SDRAM devices provide parallelism through multiple banks and channels. This locality and parallelism have not been thoroughly exploited by conventional memory controllers. In this thesis, SDRAM address mapping techniques and memory access reordering mechanisms are studied and applied to memory controller design with the goal of reducing observed main memory access latency. The proposed bit-reversal address mapping distributes main memory accesses evenly across the SDRAM address space to enable bank parallelism. As memory accesses to distinct banks are interleaved, the access latencies are partially hidden and therefore reduced. By taking cache conflict misses into account, bit-reversal address mapping also directs potential row conflicts to different banks, further improving performance. The proposed burst scheduling is a novel access reordering mechanism that creates bursts by clustering accesses directed to the same rows of the same banks. Subject to a threshold, reads are allowed to preempt writes, and qualified writes are piggybacked at the end of the bursts. A sophisticated access scheduler selects accesses based on priorities and interleaves accesses to maximize SDRAM data bus utilization. Consequently, burst scheduling reduces the row conflict rate, increasing and exploiting the available row locality. Using revised SimpleScalar and M5 simulators, both techniques are evaluated and compared with existing academic and industrial solutions. With SPEC CPU2000 benchmarks, bit-reversal reduces execution time by 14% on average over traditional page-interleaving address mapping. Burst scheduling achieves a 15% reduction in execution time over conventional in-order bank scheduling. Working constructively together, bit-reversal and burst scheduling achieve a 19% speedup across the simulated benchmarks.
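The bit-reversal idea can be sketched in software as follows. Field widths and the address split are invented for the example; the point is that addresses differing only in their high-order bits, which conflict in a cache and would map to the same bank under page interleaving, are spread across different banks once the address bits are reversed before field extraction.

```python
def reverse_bits(value, width):
    """Reverse the low `width` bits of `value`."""
    out = 0
    for _ in range(width):
        out = (out << 1) | (value & 1)
        value >>= 1
    return out

def map_address(addr, width=16, bank_bits=4):
    """Illustrative bit-reversal mapping: reverse the address bits,
    then take the bank index from the (now low-order) bits, so the
    original high-order bits decide the bank."""
    rev = reverse_bits(addr, width)
    bank = rev & ((1 << bank_bits) - 1)
    row = rev >> bank_bits
    return bank, row

# Two addresses that differ only in high-order bits (typical
# cache-conflicting pair) land in different banks:
print(map_address(0x1000)[0], map_address(0x2000)[0])
```

A hardware implementation is just rewiring: the reversal costs no gates, which is part of the technique's appeal.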
Abstract:
Virtualization has become a common abstraction layer in modern data centers. By multiplexing hardware resources into multiple virtual machines (VMs), thus enabling several operating systems to run simultaneously on the same physical platform, it can effectively reduce power consumption and building size, or improve security by isolating VMs. In a virtualized system, memory resource management plays a critical role in achieving high resource utilization and performance. Insufficient memory allocation to a VM degrades its performance dramatically; conversely, over-allocation wastes memory resources. Meanwhile, a VM's memory demand may vary significantly. Effective memory resource management therefore calls for a dynamic memory balancer that, ideally, can adjust memory allocation in a timely manner for each VM based on its current memory demand, thus achieving the best memory utilization and optimal overall performance. To estimate the memory demand of each VM and to arbitrate possible memory resource contention, a widely proposed approach is to construct an LRU-based miss ratio curve (MRC), which provides not only the current working set size (WSS) but also the correlation between performance and the target memory allocation size. Unfortunately, the cost of constructing an MRC is nontrivial. In this dissertation, we first present a low-overhead LRU-based memory demand tracking scheme that includes three orthogonal optimizations: AVL-based LRU organization, dynamic hot set sizing, and intermittent memory tracking. Our evaluation shows that, for the whole SPEC CPU 2006 benchmark suite, applying the three optimizations lowers the mean overhead of MRC construction from 173% to only 2%. Based on the current WSS, we then predict its trend in the near future and apply different strategies for different prediction results. When there is sufficient physical memory on the host, memory is balanced locally among its VMs. Once local memory is insufficient and the memory pressure is predicted to persist for a sufficiently long time, a relatively expensive solution, VM live migration, is used to move one or more VMs from the overloaded host to other host(s). Finally, for transient memory pressure, a remote cache is used to alleviate the temporary performance penalty. Our experimental results show that this design achieves a 49% center-wide speedup.
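MRC construction from LRU stack distances can be sketched as follows. This is the basic Mattson-style list implementation, shown for clarity; the dissertation's AVL-based organization, dynamic hot set sizing, and intermittent tracking exist precisely because this naive list walk is too slow in practice.

```python
def miss_ratio_curve(trace, max_size):
    """LRU miss ratio for each cache size 1..max_size.

    The stack distance of an access is the number of distinct pages
    touched since the previous access to the same page; an LRU cache
    of size c hits exactly the accesses with stack distance <= c.
    """
    stack = []                           # LRU stack, most recent first
    dist_counts = [0] * (max_size + 1)
    for page in trace:
        if page in stack:
            d = stack.index(page) + 1    # 1-based stack distance
            if d <= max_size:
                dist_counts[d] += 1
            stack.remove(page)
        # First touches are compulsory misses at every cache size.
        stack.insert(0, page)
    n = len(trace)
    curve, hits = {}, 0
    for c in range(1, max_size + 1):
        hits += dist_counts[c]
        curve[c] = (n - hits) / n
    return curve

print(miss_ratio_curve(["a", "b", "a", "c", "b", "a"], 3))
```

Reading the working set size off such a curve amounts to finding the knee: the smallest allocation beyond which the miss ratio stops improving appreciably.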
Abstract:
In this report, we attempt to define the capabilities of the infrared satellite remote sensor Multifunctional Transport Satellite-2 (MTSAT-2), a geosynchronous instrument, in characterizing volcanic eruptive behavior in the highly active region of Indonesia. Sulfur dioxide data from NASA's Ozone Monitoring Instrument (OMI), a polar-orbiting instrument, are presented for validation of the processes interpreted from the thermal infrared datasets. Data from two case studies are analyzed specifically for eruptive products producing large thermal anomalies (e.g. lava flows and lava domes), volcanic ash, and SO2 clouds: three distinctly characteristic and abundant volcanic emissions. Two primary methods for detecting heat signatures are used and compared in this report: single-channel thermal radiance (4 µm) and the normalized thermal index (NTI) algorithm. For automated use, fixed thresholds must be determined for these methods. A base minimum detection limit (MDL) of 2.30 × 10⁵ W m⁻² sr⁻¹ m⁻¹ for single-channel thermal radiance and of -0.925 for NTI generates false alarm rates of 35.78% and 34.16%, respectively. A spatial comparison method, developed here specifically for use in Indonesia and applied as a second detection parameter, is implemented to address the high false alarm rate. For the single-channel thermal radiance method, the spatial comparison method eliminated 100% of the false alarms while retaining every true anomaly. The NTI algorithm showed similar results, with only two false alarms remaining. No definitive difference is observed between the two thermal detection methods for automated use; however, the single-channel thermal radiance method, coupled with the SO2 mass abundance data, can be used to interpret volcanic processes, including the identification of lava dome activity at Sinabung and the mechanism of dome emplacement (endogenous or exogenous). Only one technique, the brightness temperature difference (BTD) method, is used for the detection of ash. Trends in ash area, water/ice area, and their respective concentrations yield interpretations of increased ice formation, aggregation, and sedimentation processes that only a high-temporal-resolution instrument like MTSAT-2 can resolve. A conceptual model of a secondary zone of aggregation occurring in the migrating Kelut ash cloud, which decreases the distal fine-ash component and the hazard to flight paths, is presented in this report. Unfortunately, the SO2 data could not definitively reinforce the concept of a secondary zone of aggregation, owing to insufficient temporal resolution. However, a detailed study of the Kelut SO2 cloud shows that the eruption had no climatic impact, given the atmospheric residence time and e-folding rate of ~14 days for the SO2. This report exploits the complementary assets of a high-temporal-resolution and a high-spatial-resolution satellite, and demonstrates that together these two instruments can provide unparalleled observations of dynamic volcanic processes.
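The NTI threshold test can be sketched as follows, assuming the commonly used definition NTI = (R_mir - R_tir) / (R_mir + R_tir), where R_mir is the ~4 µm radiance and R_tir a thermal-infrared radiance. The threshold below is the report's -0.925; the radiance values are invented for illustration.

```python
NTI_THRESHOLD = -0.925   # fixed threshold from the report

def nti(r_mir, r_tir):
    """Normalized thermal index of one pixel."""
    return (r_mir - r_tir) / (r_mir + r_tir)

def is_anomaly(r_mir, r_tir, threshold=NTI_THRESHOLD):
    """Flag a pixel as a thermal anomaly when NTI exceeds the threshold.

    Hot volcanic surfaces boost the mid-infrared radiance far more than
    the thermal-infrared radiance, pushing NTI toward zero.
    """
    return nti(r_mir, r_tir) > threshold

# Invented radiances: a cool background pixel and a hot pixel.
print(is_anomaly(0.2, 9.0))   # background -> False
print(is_anomaly(1.0, 9.0))   # hot surface -> True
```

In an automated system this per-pixel test would run alongside the spatial comparison method described above, which is what suppresses the high false alarm rate of the fixed threshold alone.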
Abstract:
The article focuses on the current situation of Spanish case law on ISP liability. It starts by presenting the most salient peculiarities of the Spanish transposition of the safe harbours laid down in the E-Commerce Directive. These peculiarities relate to the knowledge requirement of the hosting safe harbour, and to the safe harbour for information location tools. The article then provides an overview of the cases decided so far with regard to each of the safe harbours. Very few cases have dealt with the mere conduit and caching safe harbours, though the latter was discussed in an interesting case involving Google's cache. Most cases relate to the hosting and linking safe harbours. With regard to hosting, the article focuses particularly on the two judgments handed down by the Supreme Court that adopt an open interpretation of actual knowledge, an issue on which courts had so far been split. Cases involving the linking safe harbour have mainly dealt with websites offering P2P download links. Accordingly, the article explores the legal actions brought against these sites, which for the moment have been unsuccessful. The new legislative initiative to fight digital piracy, the Sustainable Economy Bill, is also analyzed. After the conclusion, the article provides an Annex listing the cases that have dealt with ISP liability in Spain since the safe harbours scheme was transposed into Spanish law.
Abstract:
On 3 April 2012, the Spanish Supreme Court issued a major ruling in favour of the Google search engine, including its 'cache copy' service: Sentencia n.172/2012, of 3 April 2012, Supreme Court, Civil Chamber. The importance of this ruling lies not so much in the circumstances of the case (the Supreme Court was clearly disgusted by the claimant's 'maximalist' petitum to shut down the whole operation of the search engine), but rather in the court's going beyond the text of the Copyright Act to the general principles of the law and case law, and especially in its reading of the three-step test (in Art. 40bis TRLPI) in a positive sense so as to include all these principles. After accepting that none of the limitations listed in the Spanish Copyright statute (TRLPI) exempted the unauthorized use of fragments of the contents of a personal website through the Google search engine and cache copy service, the Supreme Court concluded against infringement, on the grounds that the three-step test (in Art. 40bis TRLPI) is to be read not only in a negative manner but also in a positive sense, taking into account that intellectual property, like any other kind of property, is limited in nature and must endure any ius usus inocui (harmless uses by third parties) and must abide by the general principles of the law, such as good faith and the prohibition of an abusive exercise of rights (Art. 7 Spanish Civil Code). The ruling is a major success for a flexible interpretation and application of the copyright statutes, especially in the scenarios raised by new technologies and market agents, and for using the three-step test as a key tool to allow for it.