749 results for Consortial Implementations
Abstract:
Presentation by Robert H. McDonald at the Library Network Days, October 22, 2014, in Helsinki.
Abstract:
We have recently developed a scalable Artificial Boundary Inhomogeneity (ABI) method [Chem. Phys. Lett. 366, 390–397 (2002)] based on the utilization of the Lanczos algorithm, and in this work explore an alternative iterative implementation based on the Chebyshev algorithm. Detailed comparisons between the two iterative methods have been made in terms of efficiency as well as convergence behavior. The Lanczos subspace ABI method was also further improved by the use of a simpler three-term backward recursion algorithm to solve the subspace linear system. The two iterative methods are tested on the model collinear H+H2 reactive state-to-state scattering problem.
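As a rough illustration of the kind of three-term recursion this abstract refers to, here is a minimal Python/NumPy sketch of the Chebyshev recursion T_{k+1}(H)v = 2 H T_k(H)v - T_{k-1}(H)v applied to a vector; the spectral normalization and the toy matrix are assumptions, and this is not the authors' ABI implementation.

```python
import numpy as np

def chebyshev_recursion(H_norm, v0, n_terms):
    """Generate vectors T_k(H) v0 via the three-term Chebyshev recursion
    T_{k+1} = 2 H T_k - T_{k-1}. Assumes n_terms >= 2 and that H_norm
    has been scaled so its spectrum lies in [-1, 1]."""
    t_prev = v0
    t_curr = H_norm @ v0
    vectors = [t_prev, t_curr]
    for _ in range(2, n_terms):
        t_next = 2.0 * (H_norm @ t_curr) - t_prev
        vectors.append(t_next)
        t_prev, t_curr = t_curr, t_next
    return vectors

# Toy usage on a random symmetric matrix scaled into [-1, 1].
rng = np.random.default_rng(0)
A = rng.standard_normal((50, 50))
H = (A + A.T) / 2
H /= np.abs(np.linalg.eigvalsh(H)).max()   # crude spectral normalization
v = np.zeros(50); v[0] = 1.0
basis = chebyshev_recursion(H, v, 10)
```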
Abstract:
Cryptographic software development is a challenging field: high performance must be achieved, while ensuring correctness and compliance with low-level security policies. CAO is a domain-specific language designed to assist development of cryptographic software. An important feature of this language is the design of a novel type system introducing native types such as predefined sized vectors, matrices and bit strings, residue classes modulo an integer, finite fields and finite field extensions, allowing for extensive static validation of source code. We present the formalisation, validation and implementation of this type system.
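To make concrete what static validation over sized native types can look like, the following is a purely illustrative Python sketch of two toy typing rules (sized-vector concatenation and addition within a residue class modulo an integer); the `Vec` and `Mod` classes are hypothetical stand-ins and do not reflect CAO's actual formal type system.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Vec:          # a vector type with a statically known length
    length: int

@dataclass(frozen=True)
class Mod:          # a residue class modulo a statically known integer
    modulus: int

def concat_type(a: Vec, b: Vec) -> Vec:
    # Concatenating vec[n] and vec[m] yields vec[n + m].
    return Vec(a.length + b.length)

def add_type(a: Mod, b: Mod) -> Mod:
    # Addition is only well-typed for operands in the same residue class.
    if a.modulus != b.modulus:
        raise TypeError(f"cannot add mod {a.modulus} to mod {b.modulus}")
    return Mod(a.modulus)

assert concat_type(Vec(128), Vec(64)) == Vec(192)
assert add_type(Mod(2**255 - 19), Mod(2**255 - 19)) == Mod(2**255 - 19)
```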
Abstract:
Master's thesis in Informatics Engineering
Abstract:
To increase the amount of logic available in SRAM-based FPGAs, manufacturers are using nanometric technologies to boost logic density and reduce prices. However, nanometric scales are highly vulnerable to radiation-induced faults that affect values stored in memory cells. Since the functional definition of FPGAs relies on memory cells, they become highly prone to this type of fault. Fault-tolerant implementations based on triple modular redundancy (TMR) infrastructures help to keep the circuit operating correctly. However, TMR alone is not sufficient to guarantee the safe operation of a circuit. Other issues, such as the effects of multi-bit upsets (MBUs) and fault accumulation, must also be addressed. Furthermore, when a fault occurs, the correct operation of the affected module must be restored and the current state of the circuit coherently re-established. This paper presents a solution that enables the autonomous restoration of the correct functional definition of the affected module, avoiding fault accumulation and re-establishing the correct circuit state in real time, while keeping the circuit in normal operation.
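The masking principle behind TMR can be sketched in a few lines; the bitwise majority voter below is a generic illustration, not the paper's FPGA infrastructure, which additionally handles reconfiguration and state restoration.

```python
def tmr_vote(a, b, c):
    """Bitwise majority voter: each output bit takes the value held by
    at least two of the three redundant module outputs."""
    return (a & b) | (a & c) | (b & c)

# A single-bit upset in one replica is masked by the other two.
golden = 0b1011_0101
upset = golden ^ 0b0000_0100        # one flipped bit in replica b
assert tmr_vote(golden, upset, golden) == golden
```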
Abstract:
Work presented within the scope of the Doctoral Programme in Informatics, as a partial requirement for obtaining the degree of Doctor of Informatics
Abstract:
The introduction of technologies in the workplace has led to dramatic change. These changes have come with an increased capacity to gather data about one's working performance (i.e. productivity), as well as the capacity to track one's personal responses (e.g. emotional, physiological) to this changing workplace environment. This movement of self-monitoring or self-sensing, using diverse types of wearable sensors combined with computing, has been identified as the Quantified Self. Miniaturization of sensors, reduction in cost and a non-stop increase in computing capacity have led to a plethora of wearables and sensors for tracking and analyzing all types of information. While these tools are used in the personal sphere to track information, a looming question remains: should employers use information from the Quantified Self to track their employees' performance or well-being in the workplace, and will this benefit employees? The aim of the present work is to lay out the implications and challenges associated with the use of Quantified-Self information in the workplace. The Quantified-Self movement has enabled people to understand their personal lives better by tracking multiple kinds of information and signals; such an approach could allow companies to gather knowledge on what drives productivity for their business and/or the well-being of their employees. A discussion of the implications of this approach covers 1) Monitoring health and well-being, 2) Oversight and safety, and 3) Mentoring and training. The challenges address 1) Privacy and acceptability, 2) Scalability and 3) Creativity. Even though many questions remain regarding their use in the workplace, wearable technologies and Quantified-Self data represent an exciting opportunity for industry and for the health and safety practitioners who will be using them.
Abstract:
Developing and implementing data-oriented workflows for data migration processes are complex tasks involving several problems related to the integration of data coming from different schemas. Usually, they involve very specific requirements: every process is almost unique. Having a way to abstract their representation helps us to better understand and validate them with business users, which is a crucial step for requirements validation. In this demo we present an approach that incrementally enriches conceptual models in order to support the automatic production of their corresponding physical implementation. We show how the B2K (Business to Kettle) system transforms BPMN 2.0 conceptual models into Kettle data-integration executable processes, covering the most relevant aspects of model design and enrichment, model-to-system transformation, and system execution.
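For intuition only, the snippet below is a hypothetical, highly simplified sketch of mapping one enriched BPMN task description to a generic ETL step dictionary; the stereotype names and field layout are assumptions and do not correspond to B2K's actual transformation rules or Kettle's step metadata.

```python
def bpmn_task_to_etl_step(task):
    """Map a (hypothetical) enriched BPMN task annotation to a generic
    ETL step description, keyed by the task's stereotype."""
    kind = task.get("stereotype", "script")
    mapping = {
        "table-input":  {"type": "TableInput",  "sql": task.get("query", "")},
        "table-output": {"type": "TableOutput", "table": task.get("target", "")},
        "script":       {"type": "ScriptStep",  "code": task.get("body", "")},
    }
    step = mapping.get(kind, mapping["script"])
    step["name"] = task["name"]
    return step

print(bpmn_task_to_etl_step(
    {"name": "Load customers", "stereotype": "table-input",
     "query": "SELECT * FROM customers"}))
```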
Abstract:
This work describes a test tool that allows performance testing of different end-to-end available-bandwidth estimation algorithms along with their different implementations. The goal of such tests is to find the best-performing algorithm and implementation and to use it in the congestion control mechanism of high-performance reliable transport protocols. The main idea of this paper is to describe the options that provide an available-bandwidth estimation mechanism for high-speed data transport protocols and to develop the basic functionality of such a test tool, with which it will be possible to manage instances of the test application on all involved testing hosts with the aid of middleware.
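As one example of the class of techniques such a tool could exercise, here is a minimal packet-pair dispersion estimate in Python; it is a generic textbook formula, not necessarily one of the algorithms or implementations evaluated in the paper.

```python
def packet_pair_estimate(packet_size_bytes, send_gap_s, recv_gap_s):
    """Crude packet-pair bandwidth indicator: if the receive gap has
    widened relative to the send gap, the dispersion reflects the
    bottleneck rate the pair experienced."""
    gap = max(recv_gap_s, send_gap_s)
    return 8 * packet_size_bytes / gap     # bits per second

# 1500-byte packets sent nearly back to back, arriving 1.2 ms apart.
print(packet_pair_estimate(1500, 0.0002, 0.0012))  # ~10 Mbit/s
```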
Abstract:
In this paper we investigate various algorithms for performing the Fast Fourier Transformation (FFT)/Inverse Fast Fourier Transformation (IFFT), and proper techniques for maximizing the FFT/IFFT execution speed, such as pipelining or parallel processing, and the use of memory structures with pre-computed values (look-up tables, LUTs) or other dedicated hardware components (usually multipliers). Furthermore, we discuss the optimal hardware architectures that best apply to various FFT/IFFT algorithms, along with their ability to exploit parallel processing with minimal data dependencies in the FFT/IFFT calculations. An interesting approach also considered in this paper is the application of the integrated processing-in-memory Intelligent RAM (IRAM) chip to high-speed FFT/IFFT computing. The results of the assessment study emphasize that the execution speed of the FFT/IFFT algorithms is tightly connected to the ability of the FFT/IFFT hardware to support the parallelism provided by the given algorithm. Therefore, we suggest that the basic Discrete Fourier Transform (DFT)/Inverse Discrete Fourier Transform (IDFT) can also provide high performance by utilizing a specialized FFT/IFFT hardware architecture that can exploit the parallelism of the DFT/IDFT operations. The proposed improvements include simplified multiplications over symbols given in a polar coordinate system using sine and cosine look-up tables, and an approach for performing parallel addition of N input symbols.
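A minimal software analogue of the LUT-based optimization is sketched below: a recursive radix-2 Cooley-Tukey FFT in Python that caches twiddle factors in a dictionary. This is only a sketch of the algorithmic idea; the hardware architectures discussed in the paper are far more elaborate.

```python
import cmath

def fft_radix2(x, twiddle_lut=None):
    """Recursive radix-2 Cooley-Tukey FFT; len(x) must be a power of two.
    Twiddle factors are served from a pre-computed/cached look-up table,
    mirroring the LUT-based optimization discussed above."""
    n = len(x)
    if n == 1:
        return list(x)
    if twiddle_lut is None:
        twiddle_lut = {}
    even = fft_radix2(x[0::2], twiddle_lut)
    odd = fft_radix2(x[1::2], twiddle_lut)
    out = [0j] * n
    for k in range(n // 2):
        w = twiddle_lut.setdefault((n, k), cmath.exp(-2j * cmath.pi * k / n))
        out[k] = even[k] + w * odd[k]
        out[k + n // 2] = even[k] - w * odd[k]
    return out

# Impulse train of period 4: spectrum alternates 2, 0, 2, 0, ...
print([round(abs(v), 3) for v in fft_radix2([1, 0, 0, 0, 1, 0, 0, 0])])
```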
Abstract:
In this final project, the high-availability options for the PostgreSQL database management system were explored and evaluated. The primary objective of the project was to find a reliable replication system and implement it in a production environment. The secondary objective was to explore different load-balancing methods and compare their performance. The potential replication methods were thoroughly examined, and the most promising one was implemented in a database system gathering weather information in Lithuania. The different load-balancing methods were performance-tested under different load scenarios and the results were analysed. As a result of this project, a functioning PostgreSQL database replication system was built for the Lithuanian Hydrometeorological Service's headquarters, and definite guidelines for future load-balancing needs were produced. This study includes the actual implementation of a replication system in a demanding production environment, but only guidelines for building a load-balancing system for the same production environment.
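For illustration, a minimal read/write-splitting router with round-robin distribution over replicas is sketched below; the connection strings are hypothetical, and this is not the specific load-balancing setup evaluated in the thesis.

```python
import itertools

class ReadWriteRouter:
    """Minimal read/write splitting: writes go to the primary, reads are
    distributed round-robin over streaming replicas."""
    def __init__(self, primary_dsn, replica_dsns):
        self.primary_dsn = primary_dsn
        self._replicas = itertools.cycle(replica_dsns)

    def route(self, sql):
        is_read = sql.lstrip().lower().startswith("select")
        return next(self._replicas) if is_read else self.primary_dsn

# Hypothetical DSNs for a primary and two hot-standby replicas.
router = ReadWriteRouter(
    "host=db-primary dbname=weather",
    ["host=db-replica1 dbname=weather", "host=db-replica2 dbname=weather"],
)
print(router.route("SELECT * FROM observations"))
print(router.route("INSERT INTO observations VALUES (1)"))
```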
Abstract:
As the knowledge-based economy has come into fashion over the last few decades, the concept of the professional learning community (PLC) has started being accepted by educational institutions and governments as an effective framework to improve teachers' collective work and collaboration. The purpose of this research was to compare and contrast the implementations of PLCs between Beijing schools and Ontario schools based on principals' personal narratives. In order to discover the lessons and widen the scope for understanding the PLC, this research applied a qualitative design to collect data from two principal participants in each location through semistructured interviews. Four themes emerged: (a) structure and technology, (b) identity and climate, (c) task and support, and (d) change and challenge. This research found that the root of the characteristics of the PLCs in Beijing and Ontario was the different existing teaching and learning systems, as well as the testing systems. Teaching Research Groups (TRGs) are one of the systems that help Chinese schools organize routine time and input resources to improve teachers' professional development. However, Canadian schools lack a similar system that guarantees the time and resources. Moreover, standardized testing plays different roles in China and Canada. In China, standardized tests, such as the college entrance examination, are regarded as a central purpose of education, whereas Ontario principals saw the Education Quality and Accountability Office (EQAO) assessment as a tool rather than a primary purpose. These two main differences influenced principals' beliefs, attitudes, strategies, and practices. The implications based on this finding provide new perspectives for principals, teachers, policy makers, and scholars to widen and deepen the research and practice of the PLC.
Abstract:
We describe a model-data fusion (MDF) inter-comparison project (REFLEX), which compared various algorithms for estimating carbon (C) model parameters consistent with both measured carbon fluxes and states and a simple C model. Participants were provided with the model and with both synthetic net ecosystem exchange (NEE) of CO2 and leaf area index (LAI) data, generated from the model with added noise, and observed NEE and LAI data from two eddy covariance sites. Participants endeavoured to estimate model parameters and states consistent with the model for all cases over the two years for which data were provided, and generate predictions for one additional year without observations. Nine participants contributed results using Metropolis algorithms, Kalman filters and a genetic algorithm. For the synthetic data case, parameter estimates compared well with the true values. The results of the analyses indicated that parameters linked directly to gross primary production (GPP) and ecosystem respiration, such as those related to foliage allocation and turnover, or temperature sensitivity of heterotrophic respiration, were best constrained and characterised. Poorly estimated parameters were those related to the allocation to and turnover of fine root/wood pools. Estimates of confidence intervals varied among algorithms, but several algorithms successfully located the true values of annual fluxes from synthetic experiments within relatively narrow 90% confidence intervals, achieving >80% success rate and mean NEE confidence intervals <110 gC m−2 year−1 for the synthetic case. Annual C flux estimates generated by participants generally agreed with gap-filling approaches using half-hourly data. The estimation of ecosystem respiration and GPP through MDF agreed well with outputs from partitioning studies using half-hourly data. Confidence limits on annual NEE increased by an average of 88% in the prediction year compared to the previous year, when data were available. Confidence intervals on annual NEE increased by 30% when observed data were used instead of synthetic data, reflecting and quantifying the addition of model error. Finally, our analyses indicated that incorporating additional constraints, using data on C pools (wood, soil and fine roots) would help to reduce uncertainties for model parameters poorly served by eddy covariance data.
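As a toy illustration of one of the algorithm families used by participants, here is a minimal random-walk Metropolis sampler in Python, fitted to a hypothetical linear NEE-versus-temperature "model" with synthetic data; the REFLEX experiment used a full carbon-cycle model and real flux observations, so the model, noise level and step sizes below are assumptions for demonstration only.

```python
import numpy as np

def metropolis(log_post, theta0, n_iter=5000, step=0.1, seed=0):
    """Random-walk Metropolis: propose a Gaussian perturbation and accept
    with probability min(1, posterior ratio)."""
    rng = np.random.default_rng(seed)
    theta = np.asarray(theta0, dtype=float)
    lp = log_post(theta)
    chain = []
    for _ in range(n_iter):
        prop = theta + step * rng.standard_normal(theta.shape)
        lp_prop = log_post(prop)
        if np.log(rng.random()) < lp_prop - lp:
            theta, lp = prop, lp_prop
        chain.append(theta.copy())
    return np.array(chain)

# Toy "model": NEE = a + b * temperature, with Gaussian observation noise.
rng = np.random.default_rng(1)
temp = np.linspace(0, 30, 100)
nee_obs = 2.0 + 0.3 * temp + rng.normal(0, 0.5, temp.size)

def log_post(theta):
    a, b = theta
    resid = nee_obs - (a + b * temp)
    return -0.5 * np.sum((resid / 0.5) ** 2)   # flat priors

chain = metropolis(log_post, [0.0, 0.0], step=np.array([0.05, 0.005]))
print(chain[2500:].mean(axis=0))   # should land near the true values (2.0, 0.3)
```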