145 results for general matrix-matrix multiplication


Relevance: 90.00%

Abstract:

Authenticated Encryption (AE) is the cryptographic process of providing simultaneous confidentiality and integrity protection to messages. This approach is more efficient than the two-step process of encrypting a message for confidentiality and, in a separate pass, generating a Message Authentication Code (MAC) for integrity protection. AE using symmetric ciphers can be provided either by stream ciphers with built-in authentication mechanisms or by block ciphers using appropriate modes of operation. However, stream ciphers have the potential for higher performance and a smaller footprint in hardware and/or software than block ciphers. This property makes stream ciphers suitable for resource-constrained environments, where storage and computational power are limited. There have been several recent stream cipher proposals that claim to provide AE. These ciphers can be analysed using existing techniques that consider confidentiality or integrity separately; however, there is currently no framework for the analysis of AE stream ciphers that considers these two properties simultaneously. This thesis introduces a novel framework for the analysis of AE using stream cipher algorithms, and analyses the mechanisms for providing confidentiality and for providing integrity in AE algorithms using stream ciphers. Greater emphasis is placed on the analysis of the integrity mechanisms, as there is little in the public literature on these in the context of authenticated encryption. The thesis has four main contributions, as follows. The first contribution is the design of a framework that can be used to classify AE stream ciphers based on three characteristics. The first classification applies Bellare and Namprempre's work on the order in which the encryption and authentication processes take place. The second classification is based on the method used for accumulating the input message (either directly or indirectly) into the internal states of the cipher to generate a MAC. The third classification is based on whether the sequence that is used to provide encryption and authentication is generated using a single key and initial vector, or two keys and two initial vectors. The second contribution is the application of an existing algebraic method to analyse the confidentiality algorithms of two AE stream ciphers, namely SSS and ZUC. The algebraic method is based on considering the nonlinear filter (NLF) of these ciphers as a combiner with memory. This method enables us to construct equations for the NLF that relate the inputs, outputs and memory of the combiner to the output keystream. We show that both of these ciphers are secure against this type of algebraic attack. We conclude that using a key-dependent S-box in the NLF twice, and using two different S-boxes in the NLF of ZUC, prevents this type of algebraic attack. The third contribution is a new general matrix-based model for MAC generation where the input message is injected directly into the internal state. This model describes the accumulation process when the input message is injected directly into the internal state of a nonlinear filter generator. We show that three recently proposed AE stream ciphers, namely SSS, NLSv2 and SOBER-128, can be considered instances of this model. Our model is more general than previous investigations into direct injection. Possible forgery attacks against this model are investigated. It is shown that using a nonlinear filter in the accumulation process of the input message, when either the input message or the initial state of the register is unknown, prevents forgery attacks based on collisions. The last contribution is a new general matrix-based model for MAC generation where the input message is injected indirectly into the internal state. This model uses the input message as a controller to accumulate a keystream sequence into an accumulation register. We show that three current AE stream ciphers, namely ZUC, Grain-128a and Sfinks, can be considered instances of this model. We establish the conditions under which the model is susceptible to forgery and side-channel attacks.
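For orientation, a direct-injection accumulation of the kind this matrix model generalises can be sketched as follows; the symbols (state-transition matrix A, injection matrix B, output map g) are generic illustration, not the thesis's own definitions.

```latex
% Illustrative direct injection into the internal state, over GF(2):
% S_t is the register state and m_t the message word injected at step t.
S_{t+1} = A\,S_t \oplus B\,m_t, \qquad t = 0,\dots,L-1, \qquad \mathrm{MAC} = g(S_L)
% A collision-based forgery needs two distinct messages reaching the
% same final state S_L; making g a nonlinear filter, or keeping S_0
% secret, blocks the purely linear construction of such collisions.
```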

Relevance: 90.00%

Abstract:

A common problem with the use of tensor modeling in generating quality recommendations for large datasets is scalability. In this paper, we propose the Tensor-based Recommendation using Probabilistic Ranking method, which generates the reconstructed tensor using block-striped parallel matrix multiplication and then probabilistically calculates the preferences of users to rank the recommended items. Empirical analysis on two real-world datasets shows that the proposed method is scalable for large tensor datasets and is able to outperform the benchmark methods in terms of accuracy.
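As a minimal sketch of the block-striped idea (illustrative code, not the paper's implementation): A is partitioned into horizontal stripes, and each stripe is multiplied with B independently, so the partial products can be computed in parallel.

```python
# Minimal sketch of block-striped matrix multiplication (hypothetical
# helper, not the paper's code): split A into row stripes, multiply
# each stripe by B independently, then stack the partial results.
import numpy as np

def block_striped_matmul(A, B, n_stripes=4):
    stripes = np.array_split(A, n_stripes, axis=0)  # row stripes of A
    # Each product below is independent work, so in a parallel setting
    # the stripes could be dispatched to separate processes or nodes.
    partials = [stripe @ B for stripe in stripes]
    return np.vstack(partials)  # reassemble C = A @ B

A = np.random.rand(8, 5)
B = np.random.rand(5, 3)
assert np.allclose(block_striped_matmul(A, B), A @ B)
```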

Relevance: 50.00%

Abstract:

This article explores two matrix methods to induce the "shades of meaning" (SoM) of a word. A matrix representation of a word is computed from a corpus of traces based on the given word. Non-negative Matrix Factorisation (NMF) and Singular Value Decomposition (SVD) each compute a set of vectors, each vector corresponding to a potential shade of meaning. The two methods were evaluated based on loss of conditional entropy with respect to two sets of manually tagged data: one set reflects concepts generally appearing in text, and the second comprises words used for investigations into word sense disambiguation. Results show that NMF consistently outperforms SVD for inducing both the SoM of general concepts and word senses. The problem of inducing the shades of meaning of a word is more subtle than that of word sense induction, and hence relevant to thematic analysis of opinion, where nuances of opinion can arise.
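A minimal sketch of the two factorisations, assuming a non-negative word-by-context matrix X (the article's matrix construction and conditional-entropy evaluation are not reproduced here):

```python
# Hedged sketch: factorise a word's context matrix with NMF and SVD and
# read each component vector as a candidate "shade of meaning" (SoM).
import numpy as np
from sklearn.decomposition import NMF, TruncatedSVD

rng = np.random.default_rng(0)
X = rng.random((100, 50))   # hypothetical non-negative word-by-context matrix

k = 5                       # number of candidate shades
nmf_shades = NMF(n_components=k, init="nndsvd", max_iter=500).fit(X).components_
svd_shades = TruncatedSVD(n_components=k).fit(X).components_
# Each of the k rows is one candidate SoM; the article scores such sets
# against manually tagged data via loss of conditional entropy.
print(nmf_shades.shape, svd_shades.shape)   # (5, 50) each
```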

Relevance: 50.00%

Abstract:

Objective: To discuss generalized estimating equations as an extension of generalized linear models by commenting on the paper of Ziegler and Vens, "Generalized Estimating Equations: Notes on the Choice of the Working Correlation Matrix". Methods: An international group of experts was invited to comment on this paper. Results: Several perspectives have been taken by the discussants. Econometricians have established parallels to the generalized method of moments (GMM). Statisticians discussed model assumptions and the aspect of missing data. Applied statisticians commented on practical aspects of data analysis. Conclusions: In general, careful modeling of the correlation structure is encouraged when considering estimation efficiency and other implications, and a comparison of choosing instruments in GMM and generalized estimating equations (GEE) would be worthwhile. Some theoretical drawbacks of GEE need to be further addressed and require careful analysis of the data. This particularly applies to the situation when data are missing at random.
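For reference, the estimating equations under discussion have the standard form below (textbook notation, not specific to the commented paper):

```latex
% GEE for clusters i = 1,...,K with outcomes y_i and means \mu_i(\beta):
\sum_{i=1}^{K} D_i^{\top} V_i^{-1} \left( y_i - \mu_i(\beta) \right) = 0,
\qquad V_i = A_i^{1/2} R_i(\alpha) A_i^{1/2}
% D_i = \partial\mu_i/\partial\beta, A_i is the diagonal matrix of
% marginal variances, and R_i(\alpha) is the working correlation matrix
% whose choice (independence, exchangeable, AR(1), ...) is the issue
% debated in the paper and its discussion.
```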

Relevance: 50.00%

Abstract:

Purification of drinking water is routinely achieved by the use of conventional coagulants and disinfection procedures. However, there are instances, such as flood events, when turbidity reaches extreme levels, while natural organic matter (NOM) may be an issue throughout the year. Consequently, there is a need to develop technologies which can effectively treat water of high turbidity during flood events and high NOM content year round. Our hypothesis was that pebble matrix filtration potentially offered a relatively cheap, simple and reliable means to clarify such challenging water samples. Therefore, a laboratory-scale pebble matrix filter (PMF) column was used to evaluate turbidity and NOM pre-treatment performance on 2013 Brisbane River flood water. Since the high turbidity was only a seasonal and short-term problem, the general applicability of pebble matrix filters for NOM removal was also investigated. A 1.0 m deep bed of pebbles (the matrix), partly in-filled with either sand or crushed glass, was tested, upon which was situated a layer of granular activated carbon (GAC). Turbidity was measured as a surrogate for suspended solids (SS), whereas total organic carbon (TOC) and UV absorbance at 254 nm were measured as surrogate parameters for NOM. Experiments using natural flood water showed that, without the addition of any chemical coagulants, PMF columns achieved at least 50% turbidity reduction when the source water contained moderate hardness levels; for harder water samples, above 85% turbidity reduction was obtained. The ability to remove 50% of turbidity without chemical coagulants may represent significant cost savings to water treatment plants, with added environmental benefits accruing from reduced sludge formation. A TOC reduction of 35-47% and a UV-254 nm reduction of 24-38% were also observed. In addition to turbidity removal during flood periods, the ability to remove NOM with the pebble matrix filter throughout the year may have the benefit of reducing disinfection by-product (DBP) formation potential and coagulant demand at water treatment plants. Final head losses were remarkably low, reaching only 11 cm at a filtration velocity of 0.70 m/h.

Relevance: 40.00%

Abstract:

Improved strategies were put forward to enhance model predictive control for reducing the wind-induced vibration of spatial latticed structures. The dynamic matrix control (DMC) predictive method was used, and a reference trajectory based on decaying functions was suggested for the analysis of spatial latticed structures (SLS) under wind loads. The wind-induced vibration control model of an SLS with the improved DMC predictive control was illustrated; the different feedback strategies were then investigated, and a typical SLS was taken as an example to investigate the reduction of wind-induced vibration. In addition, the robustness and reliability of the DMC strategy were discussed by varying the model configurations.
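As background, the unconstrained DMC law that such strategies modify can be sketched as follows (standard formulation with generic symbols; the paper's improved variant is not reproduced here):

```latex
% G: step-response (dynamic) matrix; w: reference trajectory;
% \hat{y}_0: predicted free response with no further control moves.
J = (w - \hat{y})^{\top} Q (w - \hat{y}) + \Delta u^{\top} R\, \Delta u,
\qquad \hat{y} = \hat{y}_0 + G\,\Delta u
% Minimising J over the move sequence gives
\Delta u = \left( G^{\top} Q G + R \right)^{-1} G^{\top} Q \left( w - \hat{y}_0 \right)
% A decaying-function reference trajectory lets w approach the setpoint
% r gradually, e.g. w_{k+j} = y_k + \left(1 - e^{-j/T}\right)(r - y_k).
```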

Relevance: 40.00%

Abstract:

The ideas for this CRC research project are based directly on Sidwell, Kennedy and Chan (2002). That research examined a number of case studies to identify the characteristics of successful projects. The findings were used to construct a matrix of best practice project delivery strategies. The purpose of this literature review is to test the decision matrix against established theory and best practice in the subject of construction project management.

Relevance: 40.00%

Abstract:

The Co-operative Research Centre for Construction Innovation (CRC-CI) is funding a project known as the Value Alignment Process for Project Delivery. The project consists of a study of best practice project delivery and the development of a suite of products, resources and services to guide project teams towards the best procurement approach for a specific project or group of projects. These resources will be focused on promoting the principles that underlie best practice project delivery rather than simply identifying an off-the-shelf procurement system. This project builds on earlier work by Sidwell, Kennedy and Chan (2002) on re-engineering the construction delivery process, which developed a procurement framework in the form of a Decision Matrix.

Relevance: 40.00%

Abstract:

The effective management of bridge stock involves deciding when to repair, remedy, or do nothing, taking into account the financial and service life implications. Such decisions require a reliable diagnosis of the cause of distress and an understanding of the likely future degradation. Such diagnoses are based on a combination of visual inspections, laboratory tests on samples, and expert opinions. In addition, the choice of appropriate laboratory tests requires an understanding of the degradation mechanisms involved. Under these circumstances, the use of expert systems or evaluation tools developed from "real-time" case studies provides a promising solution in the absence of expert knowledge. This paper addresses these issues in bridge infrastructure management in Queensland, Australia. Bridges affected by alkali-silica reaction and chloride-induced corrosion have been investigated and the results presented using a mind-mapping tool. The analysis highlights that several levels of rules are required to assess the mechanism causing distress. The systematic development of a rule-based approach is presented, and its application to a case study bridge demonstrates that preliminary results are satisfactory.
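To illustrate what "several levels of rules" can look like in such a tool, here is a toy sketch; the rule names, thresholds and fields are invented for illustration and are not taken from the paper.

```python
# Toy two-level rule base for diagnosing bridge distress mechanisms.
def diagnose(obs):
    # Level 1: visual-inspection rules propose candidate mechanisms.
    candidates = set()
    if obs.get("map_cracking"):
        candidates.add("alkali-silica reaction")
    if obs.get("rust_staining") or obs.get("spalling"):
        candidates.add("chloride-induced corrosion")
    # Level 2: laboratory-test rules reject candidates when the test
    # result is available (the 0.06% chloride threshold is illustrative).
    if "gel_detected" in obs and not obs["gel_detected"]:
        candidates.discard("alkali-silica reaction")
    if "chloride_pct" in obs and obs["chloride_pct"] < 0.06:
        candidates.discard("chloride-induced corrosion")
    return candidates or {"inconclusive"}

print(diagnose({"map_cracking": True, "gel_detected": True}))
```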

Relevance: 40.00%

Abstract:

One of the key issues facing public asset owners is the decision to refurbish aged built assets. This decision requires an assessment of the "remaining service life" of the key components in a building. The remaining service life depends significantly on the existing condition of the asset and on future degradation patterns, considering durability and functional obsolescence. Recently developed methods for residual service life modelling require sophisticated data that are not readily available. Most of the data available are in the form of reports prepared prior to undertaking major repairs, or sessional audit reports. Valuable information from these available sources can serve as benchmarks for estimating the reference service life. The authors have acquired such information from a public asset building in Melbourne. Using this information, the residual service life of a case study building façade is estimated in this paper based on state-of-the-art approaches, and the estimates are evaluated against expert opinion. Though the results are encouraging, it is clear that the state-of-the-art methodologies can only provide meaningful estimates when data of the required level and quality are available. This investigation resulted in the development of a new framework for maintenance that integrates condition assessment procedures and the factors influencing residual service life.
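One standard way to produce such estimates from a reference service life is the ISO 15686 factor method, sketched below for orientation (the paper's own estimation approach may differ in detail):

```latex
% ISO 15686 factor method (generic form):
\mathrm{ESL} = \mathrm{RSL}_{\mathrm{ref}} \times f_A \times f_B \times f_C \times f_D \times f_E \times f_F \times f_G
% f_A..f_G scale the reference service life for component quality,
% design level, work execution, indoor environment, outdoor environment,
% in-use conditions and maintenance level; several of these can be
% anchored to condition-assessment data of the kind discussed here.
```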

Relevance: 40.00%

Abstract:

The main objective of this PhD was to further develop Bayesian spatio-temporal models (specifically the conditional autoregressive (CAR) class of models) for the analysis of sparse disease outcomes such as birth defects. The motivation for the thesis arose from problems encountered when analysing a large birth defect registry in New South Wales. The specific components and related research objectives of the thesis were developed from gaps in the literature on current formulations of the CAR model, and from health service planning requirements. Data from a large probabilistically-linked database from 1990 to 2004, consisting of fields from two separate registries, the Birth Defect Registry (BDR) and the Midwives Data Collection (MDC), were used in the analyses in this thesis. The main objective was split into smaller goals. The first goal was to determine how the specification of the neighbourhood weight matrix affects the smoothing properties of the CAR model; this is the focus of chapter 6. Secondly, I evaluated the usefulness of incorporating a zero-inflated Poisson (ZIP) component as well as a shared-component model for modelling a sparse outcome; this is carried out in chapter 7. The third goal was to identify optimal sampling and sample size schemes designed to select individual-level data for a hybrid ecological spatial model; this is done in chapter 8. Finally, I combined the earlier improvements to the CAR model with demographic projections to provide forecasts for birth defects at the SLA level; chapter 9 describes how this is done. For the first objective, I examined a series of neighbourhood weight matrices and showed how smoothing the relative risk estimates according to similarity in an important covariate (i.e. maternal age) improved the model's ability to recover the underlying risk, compared to the traditional adjacency (specifically the Queen) method of applying weights. Next, to address the sparseness and excess zeros commonly encountered in the analysis of rare outcomes such as birth defects, I compared several models, including an extension of the usual Poisson model to encompass excess zeros in the data. This was achieved via a mixture model, which also encompassed the shared-component model to improve the estimation of sparse counts by borrowing strength across a shared component (e.g. latent risk factors) with a referent outcome (caesarean section was used in this example). Using the Deviance Information Criterion (DIC), I showed how the proposed model performed better than the usual models, but only when both outcomes shared a strong spatial correlation. The next objective involved identifying the optimal sampling and sample size strategy for incorporating individual-level data with areal covariates in a hybrid study design. I performed extensive simulation studies evaluating thirteen different sampling schemes along with variations in sample size, in the context of an ecological regression model that incorporated spatial correlation in the outcomes as well as accommodating both individual and areal measures of covariates. Using the average mean squared error (AMSE), I showed that a simple random sample of 20% of the SLAs, followed by selecting all cases in the chosen SLAs along with an equal number of controls, provided the lowest AMSE. The final objective involved combining the improved spatio-temporal CAR model with population (i.e. women) forecasts to provide 30-year annual estimates of birth defects at the Statistical Local Area (SLA) level in New South Wales, Australia. The projections were illustrated using sixteen different SLAs, representing the various areal measures of socio-economic status and remoteness. A sensitivity analysis of the assumptions used in the projection was also undertaken. By the end of the thesis, I show how challenges in the spatial analysis of rare diseases such as birth defects can be addressed by specifically formulating the neighbourhood weight matrix to smooth according to a key covariate (i.e. maternal age), incorporating a ZIP component to model excess zeros in outcomes, and borrowing strength from a referent outcome (i.e. caesarean counts). An efficient strategy to sample individual-level data, and sample size considerations for rare diseases, are also presented. Finally, projections in birth defect categories at the SLA level are made.
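For orientation, the two generic building blocks the thesis extends can be written as follows (standard forms, not the thesis's exact specifications):

```latex
% CAR prior for spatial random effects b_i with neighbourhood weights w_{ij}:
b_i \mid b_{-i} \;\sim\; N\!\left( \frac{\sum_j w_{ij} b_j}{\sum_j w_{ij}},\; \frac{\sigma^2}{\sum_j w_{ij}} \right)
% Chapter 6 replaces simple Queen-adjacency weights with weights that
% also reflect similarity in a covariate such as maternal age.

% Zero-inflated Poisson for sparse counts y_i with extra zeros:
P(y_i = 0) = \pi_i + (1 - \pi_i) e^{-\lambda_i}, \qquad
P(y_i = k) = (1 - \pi_i) \frac{\lambda_i^k e^{-\lambda_i}}{k!}, \quad k \geq 1
```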

Relevance: 40.00%

Abstract:

Botanical Matrix is a graphic map produced via a process involving an initial site installation (a 350 m contour transect), a botanical survey and photographic documentation of species. The site is a housing subdivision at Point Henry, on the SE coast of Western Australia, a landscape which is host to the most botanically diverse vegetation found worldwide, known locally as 'kwongan'. Notoriously difficult vegetation to measure and map, kwongan is a visual 'enigma', for paradoxically it appears to the lay person as visually bland and highly homogeneous. There is thus a critical need for the development of new forms of representation which overcome the barriers between the perception and the reality of this botanical condition. Botanical Matrix is one result of the author's research, which seeks to address this important problem.