465 results for compiler backend


Relevance:

10.00%

Publisher:

Abstract:

Increasingly, scientists use collections of software tools in their research. These tools are typically used in concert, often necessitating laborious and error-prone manual data reformatting and transfer. We present an intuitive workflow environment, GPFlow, to support scientists in their research. GPFlow wraps legacy tools, presenting a high-level, interactive, web-based front end to scientists, while the workflow backend is realized by a commercial-grade workflow engine (Windows Workflow Foundation). The workflow model is inspired by spreadsheets and is novel in supporting an intuitive, interactive style of experimentation, as required by many scientists, e.g. bioinformaticians. We apply GPFlow to two bioinformatics experiments and demonstrate its flexibility and simplicity.
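
The spreadsheet-inspired interaction model can be pictured as cells that wrap legacy tools and recompute downstream results whenever an input changes. The sketch below illustrates that reading of the abstract in Python; the cell API and the echo stand-in tool are hypothetical, not GPFlow's actual interface.

```python
import subprocess

class Cell:
    """A spreadsheet-style cell: holds a value and recomputes dependants on change."""
    def __init__(self, compute=None, inputs=()):
        self.compute = compute      # callable wrapping a legacy tool, or None for raw input
        self.inputs = list(inputs)  # upstream cells
        self.dependants = []
        self.value = None
        for cell in self.inputs:
            cell.dependants.append(self)

    def set(self, value):
        self.value = value
        for d in self.dependants:
            d.refresh()

    def refresh(self):
        if self.compute and all(c.value is not None for c in self.inputs):
            self.value = self.compute(*[c.value for c in self.inputs])
            for d in self.dependants:
                d.refresh()

def run_tool(cmd_template):
    """Wrap a command-line legacy tool as a cell compute function."""
    def compute(*args):
        result = subprocess.run(cmd_template + list(args),
                                capture_output=True, text=True, check=True)
        return result.stdout
    return compute

# Hypothetical two-step pipeline: editing the input cell recomputes the chain.
sequence = Cell()
aligned = Cell(compute=run_tool(["echo"]), inputs=[sequence])  # stand-in for a real aligner
sequence.set("ACGT")
print(aligned.value)
```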

Relevance:

10.00%

Publisher:

Abstract:

Most one-round key exchange protocols provide only weak forward secrecy at best. Furthermore, one-round protocols with strong forward secrecy often break badly when faced with an adversary who can obtain ephemeral keys. We provide a characterisation of how strong forward secrecy can be achieved in one-round key exchange. Moreover, we show that protocols exist which provide strong forward secrecy and remain secure with weak forward secrecy even when the adversary is allowed to obtain ephemeral keys. We provide a compiler to achieve this for any existing secure protocol with weak forward secrecy.

Relevance:

10.00%

Publisher:

Abstract:

In this paper, we present a new algorithm for boosting visual template recall performance through a process of visual expectation. Visual expectation dynamically modifies the recognition thresholds of learnt visual templates based on recently matched templates, improving the recall of sequences of familiar places while keeping precision high, without any feedback from a mapping backend. We demonstrate the performance benefits of visual expectation using two 17 km datasets gathered in an outdoor environment at times separated by three weeks. The visual expectation algorithm provides up to a 100% improvement in recall. We also combine the visual expectation algorithm with the RatSLAM SLAM system and show how the algorithm enables successful mapping.
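
One way to read the visual-expectation mechanism is as a per-template threshold that is temporarily relaxed for the templates learnt just after a successful match, then drifts back. The sketch below encodes that reading; the thresholds, window size, and decay rate are illustrative assumptions, not the paper's values.

```python
DEFAULT_THRESHOLD = 0.8   # similarity required for a match, assumed scale [0, 1]
EXPECTED_THRESHOLD = 0.6  # relaxed threshold for "expected" templates
WINDOW = 3                # how many subsequent templates to prime

class VisualExpectation:
    def __init__(self, num_templates):
        self.thresholds = [DEFAULT_THRESHOLD] * num_templates

    def match(self, template_id, similarity):
        """Return True if the template matches under its current threshold."""
        if similarity >= self.thresholds[template_id]:
            self._prime(template_id)
            return True
        return False

    def _prime(self, matched_id):
        # Relax thresholds for templates expected to appear next in the sequence.
        end = min(matched_id + 1 + WINDOW, len(self.thresholds))
        for i in range(matched_id + 1, end):
            self.thresholds[i] = EXPECTED_THRESHOLD

    def decay(self):
        """Call once per frame: primed thresholds drift back to the default."""
        self.thresholds = [min(t + 0.05, DEFAULT_THRESHOLD) for t in self.thresholds]
```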

Relevance:

10.00%

Publisher:

Abstract:

Two-party key exchange (2PKE) protocols have been rigorously analyzed under various models considering different adversarial actions. However, the analysis of group key exchange (GKE) protocols has not been as extensive as that of 2PKE protocols. In particular, an important security attribute called key compromise impersonation (KCI) resilience has been completely ignored for the case of GKE protocols. Informally, a protocol is said to provide KCI resilience if the compromise of the long-term secret key of a protocol participant A does not allow the adversary to impersonate an honest participant B to A. In this paper, we argue that KCI resilience for GKE protocols is at least as important as it is for 2PKE protocols. Our first contribution is revised definitions of security for GKE protocols considering KCI attacks by both outsider and insider adversaries. We also give a new proof of security for an existing two-round GKE protocol under the revised security definitions, assuming random oracles. We then show how to achieve insider KCI resilience in a generic way using a known compiler in the literature. As one may expect, this additional security assurance comes at the cost of an extra round of communication. Finally, we show that a few existing protocols are not secure against outsider KCI attacks. The attacks on these protocols illustrate the necessity of considering KCI resilience for GKE protocols.

Relevance:

10.00%

Publisher:

Abstract:

Maternal deaths have been a critical issue for women living in rural and remote areas. The need to travel long distances, the shortage of primary care providers such as physicians, specialists and nurses, and the closing of small hospitals are problems identified in many rural areas. Some research has been undertaken and a few techniques have been developed to remotely measure the physiological condition of pregnant women through sophisticated ultrasound equipment. There are numerous ways to reduce maternal deaths, and an important step is to select the right approaches; one such approach is the provision of decision support systems in rural and remote areas. Decision support systems (DSSs) have already shown great potential in many health fields. This thesis proposes an ingenious decision support system (iDSS) based on a survey-instrument methodology and the statistical identification of significant variables for use in the iDSS. A survey of pregnant women was undertaken, and a factorial experimental design was chosen to determine the sample size. Variables showing good reliability under any one of the statistical techniques used, Chi-square, Cronbach's α and Classification Tree, were incorporated in the iDSS. The decision support system was developed with the significant variables Place of residence, Seeing the same doctor, Education, Tetanus injection, Baby weight, Previous baby born, Place of birth, Assisted delivery, Pregnancy parity, Doctor visits and Occupation. The iDSS was implemented with Visual Basic as the front end and Microsoft SQL Server as the backend. Its outcomes include advice on symptoms, diet and exercise for pregnant women; the system's conditional advice was reviewed and validated by a gynaecologist. A further outcome of the iDSS is better pregnancy health awareness and reduced long-distance travel, especially for women in rural areas. The proposed system has qualities such as usefulness, accuracy and accessibility.
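
As an illustration of the variable-screening step, the sketch below keeps a survey variable when a chi-square test shows an association with the outcome, and computes Cronbach's alpha for scale reliability. The column names, outcome variable, and cut-offs are hypothetical, and the thesis's Classification Tree step is not reproduced here.

```python
import numpy as np
import pandas as pd
from scipy.stats import chi2_contingency

def chi_square_significant(df, variable, outcome="complication", alpha=0.05):
    """Chi-square test of independence between a survey variable and the outcome."""
    table = pd.crosstab(df[variable], df[outcome])
    _, p_value, _, _ = chi2_contingency(table)
    return p_value < alpha

def cronbach_alpha(items: np.ndarray) -> float:
    """Cronbach's alpha for a (respondents x items) score matrix."""
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1).sum()
    total_var = items.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1 - item_vars / total_var)

# Hypothetical screening loop over candidate survey variables:
# candidates = ["Place of residence", "Education", "Tetanus injection"]
# selected = [v for v in candidates if chi_square_significant(survey_df, v)]
```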

Relevance:

10.00%

Publisher:

Abstract:

This paper addresses the development of an ingenious decision support system (iDSS) based on a survey-instrument methodology and the statistical identification of significant variables for use in the iDSS. A survey of pregnant women was undertaken, and a factorial experimental design was chosen to determine the sample size. Variables showing good reliability under any one of the statistical techniques used, Chi-square, Cronbach's α and Classification Tree, were incorporated in the iDSS. The iDSS was implemented with Visual Basic as the front end and Microsoft SQL Server as the backend. Its outcomes include advice on symptoms, diet and exercise for pregnant women.

Relevance:

10.00%

Publisher:

Abstract:

Although there are many approaches for developing secure programs, they are not necessarily helpful for evaluating the security of a pre-existing program. Software metrics promise an easy way of comparing the relative security of two programs or assessing the security impact of modifications to an existing one. Most studies in this area focus on high-level source code, but this approach fails to take compiler-specific code generation into account. In this work we describe a set of object-oriented Java bytecode security metrics which are capable of assessing the security of a compiled program from the point of view of potential information flow. These metrics can be used to compare the security of programs, or to assess the effect of program modifications on security, using a tool we have developed that automatically measures the security of a given Java bytecode program in terms of the accessibility of distinguished 'classified' attributes.
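
A bytecode metric of this kind can be illustrated, in simplified form, as the share of 'classified' attributes whose declared access makes them potentially readable from outside their class. The sketch below uses a toy class model rather than real Java bytecode, and the metric definition is a simplification of the paper's.

```python
from dataclasses import dataclass

@dataclass
class Attribute:
    name: str
    access: str        # "public", "protected", "package", or "private"
    classified: bool   # marked as containing sensitive data

def classified_attribute_accessibility(attributes):
    """Share of classified attributes declared with non-private access."""
    classified = [a for a in attributes if a.classified]
    if not classified:
        return 0.0
    exposed = [a for a in classified if a.access != "private"]
    return len(exposed) / len(classified)

attrs = [Attribute("balance", "private", True),
         Attribute("accountId", "public", True),
         Attribute("logLevel", "public", False)]
print(classified_attribute_accessibility(attrs))  # 0.5
```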

Relevance:

10.00%

Publisher:

Abstract:

Light of Extinction presents a diverse series of views into the complex antics of a semi-autonomous gaggle of robotic actants. Audiences initially enter into the 'backend' of the experience to be rudely confronted with the raw, messy operations of a horde of object-manipulating robotic forms. Seen through viewing apertures these ‘things’ deny any opportunity to grasp their imagined order. Audiences then flow on into the 'front end' of the work where now, seen through another aperture, the very same forms seemingly coordinate a stunning deep-field choreography, floating lusciously within inky landscapes of media, noise and embodied sound. As one series of conceptions slip into extinction, so others flow on in. The idea of the 'extinction of human experience' expresses a projected fear for that which will disappear when biodiverse worlds have descended into an era of permanent darkness. ‘Light Of Extinction' re-positions this anthropomorphic lament in order to suggest a more rounded acknowledgement of what might still remain - suggesting the previously unacknowledged power and place of autonomous, synthetic creation. Momentary disbelief gives way to a relieving celebration of the imagined birth of ‘things’ – without need for staples such as conventional light or the harmonious lullabies of long-extinguished sounds.

Relevance:

10.00%

Publisher:

Abstract:

Energy-efficient embedded computing enables new application scenarios in mobile devices, such as software-defined radio and video processing. The hierarchical multiprocessor considered in this work may contain dozens or hundreds of resource-efficient VLIW CPUs. Programming this number of CPU cores is a complex task requiring compiler support. The stream programming paradigm provides beneficial properties that help to support automatic partitioning. This work describes a compiler for streaming applications targeting our self-built hierarchical CoreVA-MPSoC multiprocessor platform. The compiler is supported by a programming model tailored to fit the stream programming paradigm. We present a novel simulated-annealing-based partitioning algorithm, called Smart SA. Smart SA achieves an overall speedup of 12.84 on an MPSoC with 16 CPU cores compared to a single-CPU implementation. Comparison with a state-of-the-art partitioning algorithm shows an average performance improvement of 34.07%.
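
The flavour of a simulated-annealing partitioner such as Smart SA can be sketched as follows: actors are assigned to cores, one assignment is perturbed per step, and worse solutions are accepted with a temperature-dependent probability. The cost function and cooling schedule below are illustrative stand-ins, not the paper's.

```python
import math
import random

def partition(actors, cost, num_cpus, steps=10000, temp=1.0, cooling=0.999):
    """actors: list of ids; cost(assignment) -> estimated makespan."""
    assignment = {a: random.randrange(num_cpus) for a in actors}
    best = dict(assignment)
    for _ in range(steps):
        candidate = dict(assignment)
        candidate[random.choice(actors)] = random.randrange(num_cpus)
        delta = cost(candidate) - cost(assignment)
        # Accept improvements always; accept regressions with probability exp(-delta/T).
        if delta < 0 or random.random() < math.exp(-delta / temp):
            assignment = candidate
            if cost(assignment) < cost(best):
                best = dict(assignment)
        temp *= cooling
    return best

# Toy cost model: makespan of per-actor loads, ignoring communication.
load = {"src": 3, "filter": 5, "fft": 8, "sink": 2}
def makespan(assignment, num_cpus=4):
    per_cpu = [0] * num_cpus
    for actor, cpu in assignment.items():
        per_cpu[cpu] += load[actor]
    return max(per_cpu)

print(partition(list(load), makespan, num_cpus=4))
```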

Relevance:

10.00%

Publisher:

Abstract:

Industrial production and supply chains face increased demands for mass customization and tightening regulations on the traceability of goods, leading to higher requirements concerning the flexibility, adaptability, and transparency of processes. Technologies for the 'Internet of Things', such as smart products and semantic representations, pave the way for future factories and supply chains to fulfill these challenging market demands. In this chapter, a backend-independent approach for information exchange in open-loop production processes based on Digital Product Memories (DPMs) is presented. By storing order-related data directly on the item, relevant lifecycle information is attached to the product itself. In this way, information handover between several stages of the value chain, with a focus on the manufacturing phase of a product, has been realized. To report best practices regarding the application of DPMs in the domain of industrial production, prototype system implementations focusing on the use case of producing and handling a smart drug case are illustrated.
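
The backend-independent handover can be pictured as a self-describing, append-only record that travels with the product, so each stage of the value chain can read or add lifecycle entries without contacting a shared backend. The block structure and field names in the sketch below are illustrative assumptions.

```python
import json
import time

class DigitalProductMemory:
    def __init__(self):
        self.entries = []

    def append(self, stage, payload):
        """Each value-chain stage appends its own timestamped block."""
        self.entries.append({"stage": stage, "time": time.time(), "data": payload})

    def serialize(self):
        """Serialized form that would be written to the smart label or tag."""
        return json.dumps(self.entries)

dpm = DigitalProductMemory()
dpm.append("manufacturing", {"order": "hypothetical-4711", "batch": "B17"})
dpm.append("logistics", {"max_temp_c": 24.5})
print(dpm.serialize())
```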

Relevance:

10.00%

Publisher:

Abstract:

Embedded many-core architectures contain dozens to hundreds of CPU cores that are connected via a highly scalable NoC interconnect. Our multiprocessor system-on-chip, the CoreVA-MPSoC, combines the advantages of tightly coupled bus-based communication with the scalability of NoC approaches by adding a CPU cluster as an additional level of hierarchy. In this work, we analyze different cluster interconnect implementations with 8 to 32 CPUs and compare them in terms of resource requirements and performance to hierarchical NoC approaches. Using 28 nm FD-SOI technology, the area requirement for 32 CPUs and an AXI crossbar is 5.59 mm², including 23.61% for the interconnect, at a clock frequency of 830 MHz. In comparison, a hierarchical MPSoC with 4 CPU clusters and 8 CPUs per cluster requires only 4.83 mm², including 11.61% for the interconnect. To evaluate performance, we use a compiler for streaming applications to map programs to the different MPSoC configurations. We use this approach for a design-space exploration to find the most efficient architecture and partitioning for an application.
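
The design-space exploration amounts to enumerating cluster configurations with the same total CPU count, estimating area and compiler-reported performance for each, and keeping the most efficient. The sketch below shows such a loop; both cost models are made-up placeholders, not the paper's 28 nm figures.

```python
def estimate_area(clusters, cpus_per_cluster):
    # Placeholder: crossbar area grows roughly quadratically with cluster size.
    return clusters * (0.12 * cpus_per_cluster + 0.004 * cpus_per_cluster ** 2)

def map_and_measure(clusters, cpus_per_cluster):
    # Placeholder for mapping a streaming benchmark with the compiler and
    # measuring throughput; assumes diminishing returns within a cluster.
    return clusters * cpus_per_cluster ** 0.8

def explore(total_cpus=32, configurations=((1, 32), (2, 16), (4, 8), (8, 4))):
    """Return the (efficiency, clusters, cpus_per_cluster) of the best config."""
    best = None
    for clusters, cpus in configurations:
        assert clusters * cpus == total_cpus
        efficiency = map_and_measure(clusters, cpus) / estimate_area(clusters, cpus)
        if best is None or efficiency > best[0]:
            best = (efficiency, clusters, cpus)
    return best

print(explore())
```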

Relevance:

10.00%

Publisher:

Abstract:

The 3D Water Chemistry Atlas is an intuitive, open source, Web-based system that enables the three-dimensional (3D) sub-surface visualization of groundwater monitoring data, overlaid on the local geological model (formation and aquifer strata). This paper first describes the results of evaluating existing virtual globe technologies, which led to the decision to use the Cesium open source WebGL Virtual Globe and Map Engine as the underlying platform. It then describes the backend database and the search, filtering, browse, and analysis tools that were developed to enable users to interactively explore the groundwater monitoring data and interpret it spatially and temporally relative to the local geological formations and aquifers via the Cesium interface. The result is an integrated 3D visualization system that enables environmental managers and regulators to assess groundwater conditions, identify inconsistencies in the data, manage impacts and risks, and make more informed decisions about coal seam gas extraction, waste water extraction, and water reuse.
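
A backend of this shape can be sketched as a web endpoint that filters monitoring samples by analyte and date and returns rows for the Cesium client to render in 3D. Flask, the route, and the field names below are illustrative assumptions, not the Atlas's actual API.

```python
from flask import Flask, jsonify, request

app = Flask(__name__)

SAMPLES = [  # stand-in for the groundwater monitoring database
    {"bore": "GW001", "lat": -27.5, "lon": 152.9, "depth_m": 120.0,
     "analyte": "chloride", "value_mg_l": 310.0, "date": "2013-05-02"},
]

@app.route("/samples")
def samples():
    """Filter samples by analyte and start date, both optional query parameters."""
    analyte = request.args.get("analyte")
    since = request.args.get("since", "0000-00-00")
    rows = [s for s in SAMPLES
            if (analyte is None or s["analyte"] == analyte) and s["date"] >= since]
    return jsonify(rows)  # the Cesium client would render each row at (lon, lat, -depth_m)

# Run with: flask --app atlas_backend run  (assuming this file is atlas_backend.py)
```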

Relevance:

10.00%

Publisher:

Abstract:

This paper describes the design and implementation of a high-level query language called Generalized Query-By-Rule (GQBR) which supports retrieval, insertion, deletion and update operations. This language, based on the formalism of database logic, enables users to access each database in a distributed heterogeneous environment without having to learn all the different data manipulation languages. The compiler has been implemented on a DEC 1090 system in Pascal.

Relevance:

10.00%

Publisher:

Abstract:

In this paper we develop compilation techniques for the realization of applications described in a High Level Language (HLL) onto a Runtime Reconfigurable Architecture. The compiler determines Hyper Operations (HyperOps) that are subgraphs of a data flow graph (of an application) and comprise elementary operations that have a strong producer-consumer relationship. These HyperOps are hosted on computation structures that are provisioned on demand at runtime. We also report compiler optimizations that collectively reduce the overheads of data-driven computations in runtime reconfigurable architectures. On average, HyperOps offer a 44% reduction in total execution time and an 18% reduction in management overheads compared to using basic blocks as coarse-grained operations. We show that HyperOps formed using our compiler are suitable to support data flow software pipelining.
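
HyperOp formation can be illustrated as greedily merging data-flow graph nodes joined by heavy producer-consumer edges, so tightly coupled operations land in the same HyperOp. The threshold-based union-find grouping below is an illustrative reading, not the compiler's actual subgraph criteria.

```python
def form_hyperops(nodes, edges, threshold):
    """edges: list of (producer, consumer, bytes_transferred) tuples."""
    parent = {n: n for n in nodes}

    def find(n):
        # Union-find lookup with path halving.
        while parent[n] != n:
            parent[n] = parent[parent[n]]
            n = parent[n]
        return n

    for producer, consumer, weight in sorted(edges, key=lambda e: -e[2]):
        if weight >= threshold:
            parent[find(producer)] = find(consumer)  # strong coupling: merge

    hyperops = {}
    for n in nodes:
        hyperops.setdefault(find(n), []).append(n)
    return list(hyperops.values())

# Toy graph: load->mul and mul->add exchange much data; store is loosely coupled.
print(form_hyperops(["load", "mul", "add", "store"],
                    [("load", "mul", 64), ("mul", "add", 64), ("add", "store", 4)],
                    threshold=16))
# -> [['load', 'mul', 'add'], ['store']]
```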

Relevance:

10.00%

Publisher:

Abstract:

Data-flow analysis is an integral part of any aggressive optimizing compiler. We propose a framework for improving the precision of data-flow analysis in the presence of complex control flow. We initially perform data-flow analysis to determine those control-flow merges which cause the loss in data-flow analysis precision. The control-flow graph of the program is then restructured such that performing data-flow analysis on the resulting restructured graph gives more precise results. The proposed framework is both simple, involving the familiar notion of product automata, and general, since it is applicable to any forward data-flow analysis. Apart from proving that our restructuring process is correct, we also show that restructuring is effective in that it necessarily leads to more optimization opportunities. Furthermore, the framework handles the trade-off between the increase in data-flow precision and the code-size increase inherent in the restructuring. We show that determining an optimal restructuring is NP-hard, and propose and evaluate a greedy strategy. The framework has been implemented in the Scale research compiler and instantiated for the specific problem of constant propagation. On the SPECINT 2000 benchmark suite we observe an average speedup of 4% in running times over the Wegman-Zadeck conditional constant propagation algorithm and 2% over a purely path-profile-guided approach.
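
The precision loss being targeted is easy to see in constant propagation: joining {x: 1} from one path with {x: 2} from another yields "not a constant", although each path individually has a constant x. The sketch below detects such merges; in the paper's framework, duplicating these join points (at some code-size cost) is what restores precision. The fact representation here is illustrative.

```python
NOT_CONST = object()  # lattice top: "not a constant"

def join(facts_a, facts_b):
    """Standard constant-propagation join: values agree or the fact is lost."""
    out = {}
    for var in facts_a.keys() & facts_b.keys():
        out[var] = facts_a[var] if facts_a[var] == facts_b[var] else NOT_CONST
    return out

def imprecise_vars(incoming_facts):
    """Variables whose constancy is lost at this merge: these merges are the
    candidates worth duplicating during restructuring."""
    merged = incoming_facts[0]
    for facts in incoming_facts[1:]:
        merged = join(merged, facts)
    return {v for v, c in merged.items() if c is NOT_CONST}

# The merge below loses x; duplicating the join node keeps x = 1 on one copy
# and x = 2 on the other, enabling constant folding on each path.
print(imprecise_vars([{"x": 1, "y": 5}, {"x": 2, "y": 5}]))  # {'x'}
```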