60 results for Case Based Computing
Abstract:
Hardware designers and engineers typically need to explore a multi-parametric design space to find the best configuration for their designs, using simulations that can take weeks to months to complete. For example, designers of special-purpose chips need to explore parameters such as the optimal bitwidth and data representation. This is the case for the development of complex algorithms such as Low-Density Parity-Check (LDPC) decoders used in modern communication systems. Currently, high-performance computing offers a wide set of acceleration options that range from multicore CPUs to graphics processing units (GPUs) and FPGAs. Depending on the simulation requirements, the ideal architecture can vary. In this paper we propose a new design flow based on OpenCL, a unified multiplatform programming model, which accelerates LDPC decoding simulations, thereby significantly reducing architectural exploration and design time. OpenCL-based parallel kernels are used without modification or code tuning on multicore CPUs, GPUs and FPGAs. We use SOpenCL (Silicon to OpenCL), a tool that automatically converts OpenCL kernels to RTL, to map the simulations onto FPGAs. To the best of our knowledge, this is the first time a single, unmodified OpenCL code has been used to target these three different platforms. We show that, depending on the design parameters to be explored and on the dimension and phase of the design, the GPU or the FPGA may suit different purposes more conveniently, providing different acceleration factors. For example, although simulations typically execute more than 3x faster on FPGAs than on GPUs, the overhead of circuit synthesis often outweighs the benefits of FPGA-accelerated execution.
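The data-parallel inner loops of LDPC decoding are what make it a natural fit for OpenCL kernels. As an illustration (not the paper's actual kernel code), the sketch below implements the standard min-sum check-node update in plain Python; in an OpenCL flow, each check node's update would typically map to one work-item.

```python
# Illustrative min-sum check-node update for LDPC decoding. In an
# OpenCL-based flow, this per-check-node computation is the kind of
# data-parallel loop expressed as a kernel; this Python version only
# shows the arithmetic.

def check_node_update(llrs):
    """Given the incoming LLR messages of one check node, return the
    outgoing min-sum messages: for each edge, the product of the signs
    times the minimum magnitude of all OTHER incoming messages."""
    out = []
    for i in range(len(llrs)):
        others = llrs[:i] + llrs[i + 1:]   # exclude this edge's own input
        sign = 1
        for v in others:
            if v < 0:
                sign = -sign
        mag = min(abs(v) for v in others)
        out.append(sign * mag)
    return out

print(check_node_update([2.0, -1.5, 0.5]))  # [-0.5, 0.5, -1.5]
```

Because each check node (and, in the variable-node phase, each variable node) can be updated independently, the same kernel structure parallelizes across CPUs, GPUs and FPGA pipelines.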
Abstract:
In modern semiconductor manufacturing facilities, maintenance strategies are increasingly shifting from traditional preventive maintenance (PM) approaches to more efficient and sustainable predictive maintenance (PdM) approaches. This paper describes the development of such an online PdM module for the endpoint detection system of an ion beam etch tool in semiconductor manufacturing. The developed system uses optical emission spectroscopy (OES) data from the endpoint detection system to estimate the remaining useful life (RUL) of lenses, a key detector component that degrades over time. Simulation studies on historical data for the use case demonstrate the effectiveness of the proposed PdM solution and the potential for improved sustainability that it affords.
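A common baseline for RUL estimation is to fit a degradation trend to a monitored health signal and extrapolate to a failure threshold. The sketch below shows that pattern in minimal form; the signal, threshold and linear model are illustrative assumptions, not the OES-based method the paper actually develops.

```python
# Minimal sketch of remaining-useful-life (RUL) estimation: fit a linear
# degradation trend to a health signal and extrapolate to a failure
# threshold. The "lens transmission" signal and threshold are invented
# for illustration; the paper's module works on OES spectra.

def estimate_rul(times, health, threshold):
    """Least-squares line through (time, health); return the time
    remaining until the fitted line crosses `threshold`."""
    n = len(times)
    mt = sum(times) / n
    mh = sum(health) / n
    slope = sum((t - mt) * (h - mh) for t, h in zip(times, health)) / \
            sum((t - mt) ** 2 for t in times)
    intercept = mh - slope * mt
    t_fail = (threshold - intercept) / slope   # time when health hits threshold
    return t_fail - times[-1]

# Hypothetical lens degrading ~1 unit per cycle from 100; fails below 80.
print(estimate_rul([0, 1, 2, 3], [100, 99, 98, 97], 80))  # 17.0
```

In practice such a module would refit the trend online as each new measurement arrives, so the RUL estimate tightens as the component approaches end of life.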
Abstract:
Digital signatures are an important primitive for building secure systems and are used in most real-world security protocols. However, almost all popular signature schemes are either based on the factoring assumption (RSA) or the hardness of the discrete logarithm problem (DSA/ECDSA). In the case of classical cryptanalytic advances or progress on the development of quantum computers, the hardness of these closely related problems might be seriously weakened. A potential alternative approach is the construction of signature schemes based on the hardness of certain lattice problems that are assumed to be intractable by quantum computers. Due to significant research advancements in recent years, lattice-based schemes have now become practical and appear to be a very viable alternative to number-theoretic cryptography. In this article, we focus on recent developments and the current state of the art in lattice-based digital signatures and provide a comprehensive survey discussing signature schemes with respect to practicality. Additionally, we discuss future research areas that are essential for the continued development of lattice-based cryptography.
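Many of the practical lattice-based signature schemes surveyed in this line of work follow the "Fiat-Shamir with aborts" pattern. The toy sketch below illustrates only that structure (commit to a small masking vector, hash to get a challenge, respond with z = y + c*s, abort if z would leak the secret's size); the parameters are tiny and completely insecure, and all names are illustrative.

```python
# Toy "Fiat-Shamir with aborts" signature, illustrating the structure
# behind Lyubashevsky-style lattice signatures. NOT secure: parameters
# are far too small and a single inner product stands in for the
# module-lattice relations real schemes use.

import hashlib
import random

Q = 8380417          # modulus (illustrative)
N = 8                # vector length (far too small for security)
ETA = 2              # secret coefficient bound
GAMMA = 2 ** 17      # masking bound
BETA = ETA * 60      # rejection margin: max |challenge| * ETA

def hash_challenge(w, msg):
    """Derive a small integer challenge in [-60, 60] from the commitment."""
    h = hashlib.sha256(repr(w).encode() + msg).digest()
    return (h[0] % 121) - 60

def keygen(rng):
    a = [rng.randrange(Q) for _ in range(N)]          # public random row
    s = [rng.randint(-ETA, ETA) for _ in range(N)]    # small secret
    t = sum(ai * si for ai, si in zip(a, s)) % Q      # t = <a, s> mod q
    return (a, t), s

def sign(sk, pk, msg, rng):
    a, t = pk
    while True:                                       # abort-and-retry loop
        y = [rng.randint(-GAMMA, GAMMA) for _ in range(N)]
        w = sum(ai * yi for ai, yi in zip(a, y)) % Q  # commitment <a, y>
        c = hash_challenge(w, msg)
        z = [yi + c * si for yi, si in zip(y, sk)]
        if all(abs(zi) <= GAMMA - BETA for zi in z):  # rejection sampling
            return c, z

def verify(pk, msg, sig):
    a, t = pk
    c, z = sig
    if any(abs(zi) > GAMMA for zi in z):
        return False
    w = (sum(ai * zi for ai, zi in zip(a, z)) - c * t) % Q  # <a,z> - c*t = <a,y>
    return hash_challenge(w, msg) == c

rng = random.Random(0)
pk, sk = keygen(rng)
sig = sign(sk, pk, b"hello", rng)
print(verify(pk, b"hello", sig))  # True
```

The rejection step is the characteristic ingredient: it makes the distribution of z independent of the secret s, which is what allows these schemes to be proven secure while keeping signatures short.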
Abstract:
Embedded memories account for a large fraction of the overall silicon area and power consumption in modern SoCs. While embedded memories are typically realized with SRAM, alternative solutions, such as embedded dynamic memories (eDRAM), can provide higher density and/or reduced power consumption. One major challenge that impedes the widespread adoption of eDRAM is the need for frequent refreshes, which can reduce the availability of the memory in periods of high activity and consume a significant amount of power. Reducing the refresh rate can lower this power overhead, but if refreshes are not performed in a timely manner, some cells may lose their content, potentially resulting in memory errors. In this paper, we consider extending the refresh period of gain-cell based dynamic memories beyond the worst-case point of failure, assuming that the resulting errors can be tolerated when the use cases are in the domain of inherently error-resilient applications. For example, we observe that for various data mining applications, a large number of memory failures can be accepted with tolerable imprecision in output quality. In particular, our results indicate that by allowing as many as 177 errors in a 16 kB memory, the maximum loss in output quality is 11%. We use this failure limit to study the impact of relaxing reliability constraints on memory availability and retention power for different technologies.
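The error budget quoted above (177 errors in a 16 kB memory) corresponds to a bit-error rate of roughly 0.135%. A fault-injection experiment in the style described can be sketched as follows; the all-zero memory image and random-flip model are illustrative assumptions, not the paper's actual methodology.

```python
# Minimal fault-injection sketch: flip a fixed number of randomly chosen
# bits in a 16 kB buffer, modeling retention failures when the refresh
# period is extended, and report the resulting bit-error rate. The
# 177-error budget comes from the abstract; the injection model is
# illustrative.

import random

MEM_BYTES = 16 * 1024          # 16 kB memory under test
N_ERRORS = 177                 # failure budget reported in the abstract

def inject_bit_errors(mem, n_errors, rng):
    """Flip n_errors distinct, randomly chosen bits in-place."""
    total_bits = len(mem) * 8
    for pos in rng.sample(range(total_bits), n_errors):
        mem[pos // 8] ^= 1 << (pos % 8)

rng = random.Random(42)
mem = bytearray(MEM_BYTES)     # pristine (all-zero) memory image
inject_bit_errors(mem, N_ERRORS, rng)

flipped = sum(bin(b).count("1") for b in mem)
print(flipped, flipped / (MEM_BYTES * 8))  # 177 flipped bits, ~0.00135 BER
```

Running an error-resilient application against such a corrupted buffer, rather than a pristine one, is what lets output-quality loss be measured as a function of the allowed failure count.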
Abstract:
AIM: To evaluate the association between various lifestyle factors and achalasia risk.
METHODS: A population-based case-control study was conducted in Northern Ireland, including n = 151 achalasia cases and n = 117 age- and sex-matched controls. Lifestyle factors were assessed via a face-to-face structured interview. The association between achalasia and lifestyle factors was assessed by unconditional logistic regression, producing odds ratios (OR) and 95% confidence intervals (CI).
RESULTS: Individuals who had low-class occupations were at the highest risk of achalasia (OR = 1.88, 95%CI: 1.02-3.45), implying that high-class occupation holders have a reduced risk of achalasia. A history of foreign travel, a lifestyle factor linked to upper socio-economic class, was also associated with a reduced risk of achalasia (OR = 0.59, 95%CI: 0.35-0.99). Smoking and alcohol consumption carried significantly reduced risks of achalasia, even after adjustment for socio-economic status. The presence of pets in the house was associated with a two-fold increased risk of achalasia (OR = 2.00, 95%CI: 1.17-3.42). No childhood household factors were associated with achalasia risk.
CONCLUSION: Achalasia is a disease of inequality, and individuals from low socio-economic backgrounds are at highest risk. This does not appear to be due to corresponding alcohol and smoking behaviours. An observed positive association between pet ownership and achalasia risk suggests an interaction between endotoxin and viral infection exposure in achalasia aetiology.
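The odds ratios and confidence intervals reported above come from logistic regression; for a single binary exposure they reduce to the classic 2x2-table formula, sketched below with invented counts (not the study's actual data).

```python
# Odds ratio and Wald 95% CI from a 2x2 exposure/outcome table.
# Counts are illustrative, not taken from the Northern Ireland study.

import math

def odds_ratio_ci(a, b, c, d, z=1.96):
    """a = exposed cases, b = exposed controls,
    c = unexposed cases, d = unexposed controls.
    Returns OR = (a*d)/(b*c) with a 95% CI computed on the log scale."""
    or_ = (a * d) / (b * c)
    se = math.sqrt(1 / a + 1 / b + 1 / c + 1 / d)   # SE of log(OR)
    lo = math.exp(math.log(or_) - z * se)
    hi = math.exp(math.log(or_) + z * se)
    return or_, lo, hi

or_, lo, hi = odds_ratio_ci(40, 20, 111, 97)   # hypothetical counts
print(round(or_, 2), round(lo, 2), round(hi, 2))
```

Multivariable logistic regression generalizes this by adjusting each exposure's OR for the others (as the study did for socio-economic status), which the simple table formula cannot do.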
Abstract:
For those working in the humanitarian sector, achieving positive outcomes for post-disaster communities through reconstruction projects is a pressing concern. In the wake of recent natural disasters, NGOs have become increasingly involved in the permanent reconstruction of affected communities. They have encountered significant barriers as they implement reconstruction programmes, and this paper argues that it is important to address the visible lack of innovation that is partially to blame. The theoretical bedrock of a current research project will be used as the starting point for this argument, the overall goal of which is to design a competency-based framework model that can be used by NGOs in post-disaster reconstruction projects. Drawing on established theories of management, a unique perspective has been developed from which a competency-based reconstruction theory emerges. This theoretical framework brings together three distinct fields, each vital to the success of the model: Disaster Management, Strategic Management and Project Management. The objectives of this paper are (a) to investigate the role of NGOs in post-disaster reconstruction and establish the current standard of practice; (b) to determine the extent to which NGOs have the opportunity to contribute to sustainable community development through reconstruction; (c) to outline the main factors of a theoretical framework first proposed by Von Meding et al. (2009); and (d) to identify the innovative measures that can be taken by NGOs to achieve more positive outcomes in their interventions. It is important that NGOs involved in post-disaster reconstruction become familiar with concepts and strategies such as those contained in this paper. Competency-based organizational change on the basis of this theory has the potential to help define the standard of best practice to which future NGO projects might align themselves.
Abstract:
Government communication is an important management tool during a public health crisis, but understanding its impact is difficult. Strategies may be adjusted in reaction to developments on the ground and it is challenging to evaluate the impact of communication separately from other crisis management activities. Agent-based modeling is a well-established research tool in social science to respond to similar challenges. However, there have been few such models in public health. We use the example of the TELL ME agent-based model to consider ways in which a non-predictive policy model can assist policy makers. This model concerns individuals' protective behaviors in response to an epidemic, and the communication that influences such behavior. Drawing on findings from stakeholder workshops and the results of the model itself, we suggest such a model can be useful: (i) as a teaching tool, (ii) to test theory, and (iii) to inform data collection. We also plot a path for development of similar models that could assist with communication planning for epidemics.
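The kind of model described can be sketched in a few lines: agents adopt protective behaviour when their perceived risk, driven by local prevalence plus government communication, exceeds a personal threshold. All parameters and update rules below are invented for illustration and are not those of the actual TELL ME model.

```python
# Toy agent-based model of protective behaviour during an epidemic.
# Agents protect themselves when perceived risk (prevalence plus a
# communication boost) exceeds a personal threshold; protection reduces
# their infection probability. Purely illustrative, not the TELL ME model.

import random

def run(n_agents=200, steps=30, comm_strength=0.3, rng=None):
    rng = rng or random.Random(0)
    thresholds = [rng.random() for _ in range(n_agents)]   # behaviour thresholds
    infected = [i < 5 for i in range(n_agents)]            # seed infections
    for _ in range(steps):
        prevalence = sum(infected) / n_agents
        risk = min(1.0, prevalence + comm_strength)        # communication boosts perceived risk
        protected = [risk > th for th in thresholds]
        for i in range(n_agents):
            if not infected[i]:
                p = 0.02 * prevalence * (0.2 if protected[i] else 1.0)
                infected[i] = rng.random() < p
    return sum(infected)

# Comparing communication strategies: stronger messaging should tend to
# protect more agents and so reduce the final number infected.
low = run(comm_strength=0.0, rng=random.Random(1))
high = run(comm_strength=0.6, rng=random.Random(1))
print(low, high)
```

Even a toy like this illustrates the non-predictive uses the paper identifies: it can teach how behaviour and transmission interact, expose the assumptions a communication theory makes, and show which quantities (thresholds, compliance rates) would need to be measured.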