146 results for Software metrics


Relevance:

20.00%

Publisher:

Abstract:

This paper evaluates the viability of user-level software management of a hybrid DRAM/NVM main memory system. We propose an operating system (OS) and programming interface for placing data from within the user application, together with a profiling tool that helps programmers decide where application data should reside in hybrid memory systems. Cycle-accurate simulation of modified applications confirms that our approach is more energy-efficient than state-of-the-art hardware or OS approaches at equivalent performance. Moreover, our results are validated on several candidate NVM technologies and a broad set of 14 benchmarks.
The key observation behind this work is that, for the workloads we evaluated, application objects are too short-lived to motivate migration. Exploiting this property significantly reduces the hardware complexity of hybrid memory systems.
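The placement interface itself is not detailed in the abstract, but the profile-guided idea can be sketched as follows. This is a minimal illustration assuming a hypothetical profiler output of (object name, size, access count) tuples; the names, capacities, and greedy policy are assumptions, not the paper's actual API.

    # Hypothetical sketch of profile-guided placement in a hybrid
    # DRAM/NVM memory; all names and the greedy policy are illustrative.
    DRAM_CAPACITY = 1 << 30  # assume 1 GiB of fast but power-hungry DRAM

    def place_objects(profile, dram_budget=DRAM_CAPACITY):
        """Greedy placement: objects with the highest access intensity
        (accesses per byte, as a profiler might report) fill DRAM first;
        everything else is allocated in NVM."""
        placement = {}
        for name, size, accesses in sorted(
                profile, key=lambda o: o[2] / o[1], reverse=True):
            if size <= dram_budget:
                placement[name] = "DRAM"
                dram_budget -= size
            else:
                placement[name] = "NVM"
        return placement

    profile = [("matrix", 512 << 20, 8e9), ("log_buf", 600 << 20, 1e6)]
    print(place_objects(profile))  # {'matrix': 'DRAM', 'log_buf': 'NVM'}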

Relevance:

20.00%

Publisher:

Abstract:

The end of Dennard scaling has pushed power consumption into a first-order concern for current systems, on par with performance. As a result, near-threshold voltage computing (NTVC) has been proposed as a potential means to tackle the limited cooling capacity of CMOS technology. Hardware operating at near-threshold voltage consumes significantly less power, at the cost of lower frequency, and thus reduced performance, as well as increased error rates. In this paper, we investigate whether a low-power system-on-chip based on ARM's asymmetric big.LITTLE technology can be an alternative to conventional high-performance multicore processors in terms of power/energy in an unreliable scenario. For our study, we use the Conjugate Gradient solver, an algorithm representative of the computations performed by a large range of scientific and engineering codes.
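For reference, the computation at the heart of this study can be shown in a short sketch: the textbook Conjugate Gradient iteration for a symmetric positive-definite system (illustrative only, not the authors' tuned implementation).

    import numpy as np

    def conjugate_gradient(A, b, tol=1e-8, max_iter=1000):
        """Textbook CG: solve Ax = b for symmetric positive-definite A."""
        x = np.zeros_like(b)
        r = b - A @ x          # residual
        p = r.copy()           # search direction
        rs = r @ r
        for _ in range(max_iter):
            Ap = A @ p
            alpha = rs / (p @ Ap)
            x += alpha * p
            r -= alpha * Ap
            rs_new = r @ r
            if np.sqrt(rs_new) < tol:
                break
            p = r + (rs_new / rs) * p
            rs = rs_new
        return x

    A = np.array([[4.0, 1.0], [1.0, 3.0]])
    b = np.array([1.0, 2.0])
    print(conjugate_gradient(A, b))  # approx [0.0909, 0.6364]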

Relevance:

20.00%

Publisher:

Abstract:

The proposed multi-table lookup architecture provides high-performance packet classification in an OpenFlow v1.1+ SDN switch. The objective of the demonstration is to show the functionality of the architecture deployed on the NetFPGA SUME platform.
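The abstract does not detail the table layout, but the general multi-table idea can be sketched: each table matches one header field and either emits a terminal action or chains to the next table. The fields, rules, and actions below are assumptions for illustration, not the demonstrated design.

    # Toy model of an OpenFlow-style multi-table pipeline: each table
    # matches one header field and either outputs or jumps onward.
    tables = [
        {"match_field": "eth_type", "rules": {0x0800: ("goto", 1)},
         "miss": ("drop", None)},
        {"match_field": "ip_dst", "rules": {"10.0.0.2": ("output", 2)},
         "miss": ("output", 1)},  # default output port (assumed)
    ]

    def classify(packet, table_id=0):
        """Walk the pipeline until a terminal action is reached."""
        while True:
            table = tables[table_id]
            key = packet.get(table["match_field"])
            action, arg = table["rules"].get(key, table["miss"])
            if action == "goto":
                table_id = arg      # continue matching in the next table
            else:
                return action, arg  # terminal action

    pkt = {"eth_type": 0x0800, "ip_dst": "10.0.0.2"}
    print(classify(pkt))  # ('output', 2)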

Relevance:

20.00%

Publisher:

Abstract:

The proposition of increased innovation in network applications and reduced cost for network operators has won the networking world over to the vision of Software-Defined Networking (SDN). With the excitement of holistic visibility across the network and the ability to program network devices, developers have rushed to present a range of new SDN-compliant hardware, software, and services. However, amidst this frenzy of activity, one key element has only recently entered the debate: network security. This article surveys security in SDN, presenting advances from both the research community and industry. It discusses the challenges of securing the network against a persistent attacker and describes the holistic approach to security architecture that SDN requires. Finally, it identifies future research directions that will be key to providing network security in SDN.

Relevance:

20.00%

Publisher:

Abstract:

Cloud data centres are critical business infrastructures and among the fastest growing service providers. Detecting anomalies in Cloud data centre operation is therefore vital. Given the vast complexity of the data centre system software stack, applications, and workloads, anomaly detection is a challenging endeavour. Current tools for detecting anomalies often use machine learning techniques, application instance behaviours, or system metric distributions, which are complex to implement in Cloud computing environments as they require training, access to application-level data, and complex processing. This paper presents LADT, a lightweight anomaly detection tool for Cloud data centres that uses rigorous correlation of system metrics, implemented by an efficient correlation algorithm without the need for training or complex infrastructure setup. LADT is based on the hypothesis that, in an anomaly-free system, metrics from data centre host nodes and their virtual machines (VMs) are strongly correlated, and an anomaly is detected whenever this correlation drops below a threshold value. We demonstrate and evaluate LADT in a Cloud environment, showing that host I/O operations per second (IOPS) are strongly correlated with aggregated VM IOPS, and that this correlation vanishes when an application stresses the disk, indicating a node-level anomaly.
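The correlation test at the core of this hypothesis can be illustrated with a short sketch: compute the Pearson correlation between host IOPS and aggregated VM IOPS over a sliding window, and flag windows where it falls below a threshold. The window length, threshold, and synthetic data are assumed values, not LADT's actual settings.

    import numpy as np

    def detect_anomalies(host_iops, vm_iops, window=30, threshold=0.7):
        """Flag intervals where host IOPS and aggregated VM IOPS decorrelate.
        window and threshold are illustrative, not LADT's actual settings."""
        alerts = []
        for start in range(len(host_iops) - window + 1):
            r = np.corrcoef(host_iops[start:start + window],
                            vm_iops[start:start + window])[0, 1]
            if r < threshold:  # correlation collapse suggests an anomaly
                alerts.append((start, r))
        return alerts

    rng = np.random.default_rng(0)
    host = 100 + 10 * np.sin(np.arange(200) / 5) + rng.normal(0, 1, 200)
    vms = host + rng.normal(0, 1, 200)       # normally tracks the host
    vms[150:] = 50 + rng.normal(0, 1, 50)    # disk stress breaks the link
    print(detect_anomalies(host, vms)[0])    # first decorrelated window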

Relevance:

20.00%

Publisher:

Abstract:

We present a rigorous methodology and new metrics for fair comparison of server and microserver platforms. Deploying our methodology and metrics, we compare a microserver with ARM cores against two servers with x86 cores running the same real-time financial analytics workload. We define workload-specific but platform-independent performance metrics for platform comparison, targeting both datacenter operators and end users. Our methodology establishes that a server based on the Xeon Phi co-processor delivers the highest performance and energy efficiency. However, by scaling out energy-efficient microservers, we achieve competitive or better energy efficiency than a power-equivalent server with two Sandy Bridge sockets, despite the microserver's slower cores. Using a new iso-QoS metric, we find that the ARM microserver scales enough to meet market throughput demand, that is, a 100% QoS in terms of timely option pricing, with as little as 55% of the energy consumed by the Sandy Bridge server.
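One way to read the iso-QoS comparison is sketched below: QoS as the fraction of option-pricing requests served within a deadline, with energy compared only among platforms that reach the same QoS target. The formulation and all numbers are illustrative assumptions, not the paper's exact metric definition or measurements.

    def qos(latencies, deadline):
        """Fraction of option-pricing requests served within the deadline."""
        return sum(l <= deadline for l in latencies) / len(latencies)

    def iso_qos_energy(platforms, target=1.0):
        """Energy of each platform that meets the QoS target, enabling an
        energy comparison at equal (iso) QoS; a sketch, not the paper's
        formulation."""
        return {name: energy
                for name, (latencies, deadline, energy) in platforms.items()
                if qos(latencies, deadline) >= target}

    platforms = {  # (latencies in s, deadline in s, energy in J): assumed
        "ARM microserver": ([0.8, 0.9, 1.0], 1.0, 55.0),
        "Sandy Bridge server": ([0.4, 0.5, 0.6], 1.0, 100.0),
    }
    print(iso_qos_energy(platforms))  # both at 100% QoS; compare energies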

Relevance:

20.00%

Publisher:

Abstract:


This paper describes how urban agriculture differs from conventional agriculture not only in the way it engages with the technologies of growing, but also in the choice of crop and the way crops are brought to market. The authors propose a new model for understanding these new relationships, analogous to a systems view of information technology: Hardware-Software-Interface.
The first component of the system is hardware, the technological component of the agricultural system. Technology is often thought of as equipment, but its linguistic roots are in the Greek 'techne', meaning 'know-how'. Urban agriculture has to engage new technologies, ones that deal with a scale of operation and a context different from those of rural agriculture: often the scale is very small, and soils are polluted. The technology could be technical, such as aquaponic systems, or soil-based, such as allotments, window boxes, or permaculture. The choice of method does not necessarily determine the crop produced or its efficiency; that is linked to the biotic component added to the hardware, which is seen as the 'software'.
The software of the system comprises its ecological parts. These produce the crop, which may or may not be determined by the technology used. For example, a hydroponic system could produce a range of crops, or even fish or edible flowers. Software choice can be driven by ideological preferences, such as permaculture, where companion planting is used to reduce disease and pests, or by economic factors, such as the local market at a particular time of year. The monetary value of the 'software' is determined by the market. Small, locally produced crops are unlikely to compete against intensive products produced globally; however, their value might be measured locally in different ways, and they might be sold on a different market. This leads to the final part of the analogy: the interface.
The interface is the link between the system and the consumer. In traditional agriculture, there is only a tenuous link between the producer of asparagus in Peru and the consumer in Europe; in fact, very little of the money spent by the consumer ever reaches the grower. Most of it goes on refrigeration, transport, and profit for agents and supermarket chains. Local or hyper-local agriculture needs to bypass or circumvent these systems and be connected more directly to the consumer: this is the interface. In hyper-localised systems, effectiveness is often more important than efficiency, and direct links between producer and consumer create new economies.

Relevance:

20.00%

Publisher:

Abstract:

The end of Dennard scaling has promoted low power consumption into a first-order concern for computing systems. However, conventional power conservation schemes such as voltage and frequency scaling are reaching their limits when used in performance-constrained environments. New technologies are required to break the power wall while sustaining performance on future processors. Low-power embedded processors and near-threshold voltage computing (NTVC) have been proposed as viable solutions to tackle the power wall in future computing systems. Unfortunately, these technologies may also compromise per-core performance and, in the case of NTVC, reliability. These limitations would make them unsuitable for HPC systems and datacenters. To demonstrate that emerging low-power processing technologies can effectively replace conventional technologies, this study relies on ARM's big.LITTLE processors as both an actual and an emulation platform, together with state-of-the-art implementations of the Conjugate Gradient (CG) solver. For NTVC in particular, the paper describes how efficient algorithm-based fault tolerance schemes preserve the power and energy benefits of very low voltage operation.
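The algorithm-based fault tolerance idea can be sketched for CG's dominant kernel, the matrix-vector product: a checksum row of A is computed once, and each product is verified against it. This is the classic ABFT checksum scheme in outline, under assumed tolerances, not the paper's specific scheme.

    import numpy as np

    def checked_matvec(A, x, colsum, tol=1e-6):
        """ABFT-style check for y = A @ x: sum(y) must equal colsum @ x,
        where colsum = ones @ A is precomputed. A mismatch beyond rounding
        tolerance signals a soft error (sketch only)."""
        y = A @ x
        expected = colsum @ x
        if abs(y.sum() - expected) > tol * (abs(expected) + 1.0):
            raise ArithmeticError("soft error detected in matrix-vector product")
        return y

    A = np.array([[4.0, 1.0], [1.0, 3.0]])
    colsum = np.ones(2) @ A              # checksum row, computed once
    print(checked_matvec(A, np.array([1.0, 2.0]), colsum))  # [6. 7.]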

Relevance:

20.00%

Publisher:

Abstract:

This paper discusses the use of primary frequency response metrics to assess the dynamics of frequency disturbance data in the presence of high system non-synchronous penetration (SNSP) and varying system inertia. The Irish power system has been chosen as the study case, as it experiences a significant level of SNSP from wind turbine generation and active power imported over HVDC interconnectors. Several recorded actual frequency disturbances, measured and collected from the Irish power system between October 2010 and June 2013, were used in the analysis. The paper shows the impact of system inertia and SNSP variation on the primary frequency response metrics, namely nadir frequency, rate of change of frequency (RoCoF), and inertial and primary frequency response.
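Two of these metrics are simple to compute from a sampled frequency trace, as sketched below; the 500 ms RoCoF averaging window and the synthetic trace are assumed conventions, not necessarily those used in the paper.

    import numpy as np

    def frequency_metrics(f, dt, nominal=50.0):
        """Nadir and maximum RoCoF from a frequency trace f (Hz) sampled
        every dt seconds; the 500 ms averaging window is an assumption."""
        nadir = f.min()                          # lowest frequency reached
        w = max(1, int(round(0.5 / dt)))         # samples per 500 ms window
        rocof = (f[w:] - f[:-w]) / (w * dt)      # windowed rate of change
        return {"nadir_hz": nadir,
                "max_rocof_hz_per_s": rocof.min(),  # steepest decline
                "max_deviation_hz": nominal - nadir}

    t = np.arange(0.0, 10.0, 0.02)
    f = 50.0 - 0.4 * np.exp(-((t - 3.0) / 1.5) ** 2)  # synthetic disturbance
    print(frequency_metrics(f, dt=0.02))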

Relevance:

20.00%

Publisher:

Abstract:

Building Information Modelling (BIM) continues to evolve as the construction industry progresses towards Level 2 maturity. However, one of the core barriers in this progression is interoperability between software packages. This research stems from a Knowledge Transfer Partnership (KTP) in which industry and academia come together to address this shortcoming within the sector. One of the core objectives of the partnership, and the aim of this study, is to investigate potential solutions to this barrier while also developing best working practices to be applied in industry. Using one of the case studies from the partnership (a temporary steel structure), this paper demonstrates a potential solution to interoperability between structural analysis and detailing packages, MasterSeries and Revit respectively. The findings indicate that a process-based approach, rather than additional software coding, is the preferred solution. The results of this preliminary research will aid the development of interoperability within the sector, while also developing the knowledge and competencies of the parties within the KTP. The findings are explored further by providing an overview of the resolution process adopted in this case study to overcome the interoperability issues that arose as the project progressed. It is envisaged that this study will assist the construction sector in its adoption of BIM technologies, while addressing the critical aspect of interoperability between software packages.