860 results for Vibration analysis techniques


Relevance:

90.00%

Publisher:

Abstract:

String searching within a large corpus of data is an important component of digital forensic (DF) analysis techniques such as file carving. The continuing increase in the capacity of consumer storage devices requires corresponding improvements to the performance of string searching techniques. As string searching is a trivially-parallelisable problem, GPGPU approaches are a natural fit – but previous studies have found that local storage presents an insurmountable performance bottleneck. We show that this need not be the case with modern hardware, and demonstrate substantial performance improvements from the use of single and multiple GPUs when searching for strings within a typical forensic disk image.
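The trivially-parallelisable decomposition the abstract relies on can be sketched on the CPU: split the image into chunks that overlap by one byte less than the pattern length (so boundary-spanning matches are not lost), then search each chunk independently. This is an illustrative sketch, not the paper's GPU implementation; all names are invented.

```python
def find_all(haystack: bytes, needle: bytes, base: int = 0):
    """Return absolute offsets of every occurrence of needle in haystack."""
    hits, start = [], 0
    while True:
        i = haystack.find(needle, start)
        if i < 0:
            return hits
        hits.append(base + i)
        start = i + 1  # advance by one so overlapping matches are found

def chunked_search(image: bytes, needle: bytes, chunk_size: int = 1 << 20):
    """Split the image into chunks overlapping by len(needle) - 1 bytes and
    search each independently -- the trivially parallel decomposition.
    On a GPU, each chunk would be scanned by its own thread block."""
    overlap = len(needle) - 1
    offsets = set()  # a set deduplicates matches found in two chunks
    for base in range(0, len(image), chunk_size):
        chunk = image[base:base + chunk_size + overlap]
        offsets.update(find_all(chunk, needle, base))
    return sorted(offsets)
```

With a tiny chunk size the overlap still catches a match that straddles a chunk boundary, which is the correctness argument for the decomposition.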

Relevance:

80.00%

Publisher:

Abstract:

The information on climate variations is essential for research in many subjects, such as the performance of buildings and agricultural production. However, recorded meteorological data are often incomplete. There may be a limited number of locations recorded, while the number of recorded climatic variables and the time intervals can also be inadequate. Therefore, the hourly data for key weather parameters required by many building simulation programmes are typically not readily available. To overcome this gap in measured information, several empirical methods and weather data generators have been developed. They generally employ statistical analysis techniques to model the variations of individual climatic variables, while the possible interactions between different weather parameters are largely ignored. Based on a statistical analysis of 10 years of historical hourly climatic data for all capital cities in Australia, this paper reports the finding of strong correlations between several specific weather variables. It is found that there are strong linear correlations between the hourly variations of global solar irradiation (GSI) and dry bulb temperature (DBT), and between the hourly variations of DBT and relative humidity (RH). With an increase in GSI, DBT would generally increase, while RH tends to decrease. However, no such clear correlation can be found between DBT and atmospheric pressure (P), or between DBT and wind speed. These findings will be useful for research and practice in building performance simulation.
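The reported correlations are ordinary Pearson correlations between hourly series. A minimal sketch with synthetic values (not the Australian dataset) illustrates the sign pattern the paper describes:

```python
def pearson_r(x, y):
    """Pearson correlation coefficient of two equal-length sequences."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxy = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sxx = sum((a - mx) ** 2 for a in x)
    syy = sum((b - my) ** 2 for b in y)
    return sxy / (sxx * syy) ** 0.5

# Synthetic hourly values: DBT rises with GSI, RH falls as DBT rises.
gsi = [0, 100, 300, 600, 800, 700, 400, 100]   # global solar irradiation
dbt = [14, 15, 18, 23, 26, 25, 21, 16]         # dry bulb temperature
rh  = [90, 85, 70, 50, 40, 44, 60, 80]         # relative humidity
```

On these values `pearson_r(gsi, dbt)` is strongly positive and `pearson_r(dbt, rh)` strongly negative, matching the qualitative finding.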

Relevance:

80.00%

Publisher:

Abstract:

OBJECTIVE The aim of this research project was to obtain an understanding of the barriers to and facilitators of providing palliative care in neonatal nursing. This article reports the first phase of this research: the development and administration of an instrument to measure the attitudes of neonatal nurses to palliative care. METHODS The instrument developed for this research (the Neonatal Palliative Care Attitude Scale) underwent face and content validity testing with an expert panel and was pilot tested to establish temporal stability. It was then administered to a population sample of 1285 neonatal nurses in Australian NICUs, with a response rate of 50% (N = 645). Exploratory factor-analysis techniques were conducted to identify scales and subscales of the instrument. RESULTS Data-reduction techniques using principal components analysis were used. Using the criterion of eigenvalues greater than 1, the items in the Neonatal Palliative Care Attitude Scale extracted 6 factors, which accounted for 48.1% of the variance among the items. By further examining the questions within each factor and the Cronbach's α of items loading on each factor, factors were accepted or rejected. This resulted in the acceptance of 3 factors indicating the barriers to and facilitators of palliative care practice. The constructs represented by these factors related to (1) the organization in which the nurse practices, (2) the available resources to support a palliative model of care, and (3) technological imperatives and parental demands. CONCLUSIONS The subscales identified by this analysis comprised items that measured both barriers to and facilitators of palliative care practice in neonatal nursing.
While exploratory factor-analysis techniques established the preliminary reliability of the instrument, further testing with different samples of neonatal nurses using a confirmatory factor-analysis approach is necessary.
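The eigenvalue-greater-than-1 (Kaiser) extraction step described above can be sketched as follows; the item responses here are synthetic, with four items loading on a single latent factor, so exactly one component should survive the criterion:

```python
import numpy as np

def kaiser_extract(items):
    """Eigenvalues of the item correlation matrix, and the number of
    components retained under the eigenvalue > 1 (Kaiser) criterion."""
    corr = np.corrcoef(items, rowvar=False)   # items: respondents x items
    eigvals = np.sort(np.linalg.eigvalsh(corr))[::-1]  # descending
    return eigvals, int(np.sum(eigvals > 1.0))

rng = np.random.default_rng(0)
latent = rng.normal(size=(200, 1))            # one latent "attitude" score
# Four items that all load on the latent factor, plus measurement noise.
items = latent @ np.ones((1, 4)) + 0.5 * rng.normal(size=(200, 4))
eigvals, n_factors = kaiser_extract(items)
```

The eigenvalues of a correlation matrix always sum to the number of items, so a dominant first eigenvalue means one factor absorbs most of the common variance.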

Relevance:

80.00%

Publisher:

Abstract:

Denial-of-service attacks (DoS) and distributed denial-of-service attacks (DDoS) attempt to temporarily disrupt users or computer resources, causing service unavailability for legitimate users of an internetworked system. The most common type of DoS attack occurs when adversaries flood a large amount of bogus data to interfere with or disrupt the service on the server. The attack can be either a single-source attack, which originates at only one host, or a multi-source attack, in which multiple hosts coordinate to flood a large number of packets to the server. Cryptographic mechanisms in authentication schemes are one approach to helping the server validate malicious traffic. Since authentication in key establishment protocols requires the verifier to spend some resources before successfully detecting bogus messages, adversaries may be able to exploit this flaw to mount an attack that overwhelms the server's resources. The attacker is able to perform this kind of attack because many key establishment protocols incorporate strong authentication at the beginning phase, before they can identify the attack. This is an example of the DoS threats in most key establishment protocols: they have been implemented to support confidentiality and data integrity, but do not carefully consider other security objectives, such as availability. The main objective of this research is to design denial-of-service-resistant mechanisms for key establishment protocols. In particular, we focus on the design of cryptographic protocols related to key establishment that implement client puzzles to protect the server against resource exhaustion attacks. Another objective is to extend formal analysis techniques to include DoS-resistance. The formal analysis approach is used not only to carefully analyse and verify the security of a cryptographic scheme but also to help in the design stage of new protocols with a high level of security guarantee.
In this research, we focus on an analysis technique based on Meadows' cost-based framework, and we implement a DoS-resistance model using Coloured Petri Nets. Meadows' cost-based framework was proposed specifically to assess denial-of-service vulnerabilities in cryptographic protocols using mathematical proof, while Coloured Petri Nets are used to model and verify communication protocols using interactive simulations. In addition, Coloured Petri Nets help the protocol designer to clarify and reduce inconsistencies in the protocol specification. Therefore, the second objective of this research is to explore vulnerabilities in existing DoS-resistant protocols, as well as to extend a formal analysis approach into a new framework for improving DoS-resistance and evaluating the performance of the newly proposed mechanism. In summary, the specific outcomes of this research include the following results:
1. A taxonomy of denial-of-service-resistant strategies and techniques used in key establishment protocols;
2. A critical analysis of existing DoS-resistant key exchange and key establishment protocols;
3. An implementation of Meadows' cost-based framework using Coloured Petri Nets for modelling and evaluating DoS-resistant protocols; and
4. The development of new efficient and practical DoS-resistant mechanisms to improve the resistance to denial-of-service attacks in key establishment protocols.
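A client puzzle of the kind this research builds on can be sketched as a hash-based proof of work: the server issues a nonce and a difficulty, the client must search for a solution, and the server verifies with a single hash, keeping its own cost low. This is a generic illustration, not the thesis's specific mechanism; names and parameters are invented.

```python
import hashlib

def has_leading_zero_bits(digest: bytes, k: int) -> bool:
    """True if the digest starts with at least k zero bits."""
    value = int.from_bytes(digest, "big")
    return value >> (len(digest) * 8 - k) == 0

def solve_puzzle(nonce: bytes, k: int) -> int:
    """Client side: brute-force a solution (expected ~2**k hash attempts)."""
    solution = 0
    while True:
        digest = hashlib.sha256(nonce + solution.to_bytes(8, "big")).digest()
        if has_leading_zero_bits(digest, k):
            return solution
        solution += 1

def verify_puzzle(nonce: bytes, k: int, solution: int) -> bool:
    """Server side: a single hash -- verification stays cheap, which is the
    point of the asymmetry against resource exhaustion attacks."""
    digest = hashlib.sha256(nonce + solution.to_bytes(8, "big")).digest()
    return has_leading_zero_bits(digest, k)
```

The server spends one hash per connection attempt regardless of difficulty, while a flooding client must pay roughly 2^k hashes per request, inverting the cost imbalance that DoS attacks exploit.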

Relevance:

80.00%

Publisher:

Abstract:

This report fully summarises a project designed to enhance commercial real estate performance within both operational and investment contexts through the development of a model aimed at supporting improved decision-making. The model is based on a risk adjusted discounted cash flow, providing a valuable toolkit for building managers, owners, and potential investors for evaluating individual building performance in terms of financial, social and environmental criteria over the complete life-cycle of the asset. The ‘triple bottom line’ approach to the evaluation of commercial property has much significance for the administrators of public property portfolios in particular. It also has applications more generally for the wider real estate industry given that the advent of ‘green’ construction requires new methods for evaluating both new and existing building stocks. The research is unique in that it focuses on the accuracy of the input variables required for the model. These key variables were largely determined by market-based research and an extensive literature review, and have been fine-tuned with extensive testing. In essence, the project has considered probability-based risk analysis techniques that required market-based assessment. The projections listed in the partner engineers’ building audit reports of the four case study buildings were fed into the property evaluation model developed by the research team. The results are strongly consistent with previously existing, less robust evaluation techniques. And importantly, this model pioneers an approach for taking full account of the triple bottom line, establishing a benchmark for related research to follow. The project’s industry partners expressed a high degree of satisfaction with the project outcomes at a recent demonstration seminar. 
The project in its existing form has not been geared towards commercial applications but it is anticipated that QDPW and other industry partners will benefit greatly by using this tool for the performance evaluation of property assets. The project met the objectives of the original proposal as well as all the specified milestones. The project has been completed within budget and on time. This research project has achieved the objective by establishing research foci on the model structure, the key input variable identification, the drivers of the relevant property markets, the determinants of the key variables (Research Engine no.1), the examination of risk measurement, the incorporation of risk simulation exercises (Research Engine no.2), the importance of both environmental and social factors and, finally the impact of the triple bottom line measures on the asset (Research Engine no. 3).
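The core of such a model is a discounted cash flow in which the discount rate is lifted by a risk premium. A minimal sketch with invented figures (not the case-study buildings):

```python
def npv(cash_flows, risk_free_rate, risk_premium):
    """Net present value with a risk-adjusted discount rate.
    cash_flows[0] is the outlay at t = 0 (usually negative); later entries
    could bundle financial, social and environmental costs and benefits."""
    r = risk_free_rate + risk_premium
    return sum(cf / (1 + r) ** t for t, cf in enumerate(cash_flows))

# A building costing 1000 and returning 300 per year for 5 years:
flows = [-1000] + [300] * 5
base = npv(flows, 0.05, 0.00)   # no risk adjustment
adj  = npv(flows, 0.05, 0.03)   # 3% premium for building-specific risk
```

Raising the discount rate by the premium shrinks the present value of distant cash flows, so riskier assets must clear a higher hurdle before they appear attractive.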

Relevance:

80.00%

Publisher:

Abstract:

In the study of complex neurobiological movement systems, measurement indeterminacy has typically been overcome by imposing artificial modelling constraints to reduce the number of unknowns (e.g., reducing all muscle, bone and ligament forces crossing a joint to a single vector). However, this approach prevents human movement scientists from investigating more fully the role, functionality and ubiquity of coordinative structures or functional motor synergies. Advancements in measurement methods and analysis techniques are required if the contribution of individual component parts or degrees of freedom of these task-specific structural units is to be established, thereby effectively solving the indeterminacy problem by reducing the number of unknowns. A further benefit of establishing more of the unknowns is that human movement scientists will be able to gain greater insight into ubiquitous processes of physical self-organising that underpin the formation of coordinative structures and the confluence of organismic, environmental and task constraints that determine the exact morphology of these special-purpose devices.

Relevance:

80.00%

Publisher:

Abstract:

Privacy enhancing protocols (PEPs) are a family of protocols that allow secure exchange and management of sensitive user information. They are important in preserving users’ privacy in today’s open environment. Proof of the correctness of PEPs is necessary before they can be deployed. However, the traditional provable security approach, though well established for verifying cryptographic primitives, is not applicable to PEPs. We apply the formal method of Coloured Petri Nets (CPNs) to construct an executable specification of a representative PEP, namely the Private Information Escrow Bound to Multiple Conditions Protocol (PIEMCP). Formal semantics of the CPN specification allow us to reason about various security properties of PIEMCP using state space analysis techniques. This investigation provides us with preliminary insights for modeling and verification of PEPs in general, demonstrating the benefit of applying the CPN-based formal approach to proving the correctness of PEPs.
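State space analysis of the kind CPNs enable boils down to enumerating every reachable state of the model and checking properties in each. A toy illustration (a two-step request/response exchange, not PIEMCP itself):

```python
from collections import deque

def reachable(initial, transitions):
    """Breadth-first enumeration of every state reachable from `initial`
    via the enabled transitions -- the core of state space analysis."""
    seen, frontier = {initial}, deque([initial])
    while frontier:
        state = frontier.popleft()
        for t in transitions:
            for nxt in t(state):          # each transition yields successor states
                if nxt not in seen:
                    seen.add(nxt)
                    frontier.append(nxt)
    return seen

# Toy protocol: the client sends a request, then the server may reply.
def t_request(s):
    client, server = s
    return [("sent", server)] if client == "idle" else []

def t_reply(s):
    client, server = s
    return [(client, "replied")] if client == "sent" and server == "waiting" else []

states = reachable(("idle", "waiting"), [t_request, t_reply])
```

A security property such as "the server never replies before a request arrives" is then checked by confirming no reachable state violates it.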

Relevance:

80.00%

Publisher:

Abstract:

This thesis details a methodology to estimate urban stormwater quality based on a set of easy-to-measure physico-chemical parameters. These parameters can be used as surrogate parameters to estimate other key water quality parameters. The key pollutants considered in this study are nitrogen compounds, phosphorus compounds and solids. The use of surrogate parameter relationships to evaluate urban stormwater quality will reduce the cost of monitoring, so that scientists will have added capability to generate a large amount of data for more rigorous analysis of key urban stormwater quality processes, namely, pollutant build-up and wash-off. This in turn will assist in the development of more stringent stormwater quality mitigation strategies. The research methodology was based on a series of field investigations, laboratory testing and data analysis. Field investigations were conducted to collect pollutant build-up and wash-off samples from residential roads and roof surfaces. Past research has identified that these impervious surfaces are the primary pollutant sources to urban stormwater runoff. A specially designed vacuum system and rainfall simulator were used in the collection of pollutant build-up and wash-off samples. The collected samples were tested for a range of physico-chemical parameters. Data analysis was conducted using both univariate and multivariate data analysis techniques. Analysis of build-up samples showed that pollutant loads accumulated on road surfaces are higher compared to the pollutant loads on roof surfaces. Furthermore, it was found that the fraction of solids smaller than 150 µm is the most polluted particle size fraction in solids build-up on both road and roof surfaces. The analysis of wash-off data confirmed that the simulated wash-off process adopted for this research agrees well with the general understanding of the wash-off process on urban impervious surfaces.
The observed pollutant concentrations in wash-off from road surfaces were different to pollutant concentrations in wash-off from roof surfaces. Therefore, firstly, the identification of surrogate parameters was undertaken separately for road and roof surfaces. Secondly, a common set of surrogate parameter relationships was identified for both surfaces together to evaluate urban stormwater quality. Surrogate parameters were identified for nitrogen, phosphorus and solids separately. Electrical conductivity (EC), total organic carbon (TOC), dissolved organic carbon (DOC), total suspended solids (TSS), total dissolved solids (TDS), total solids (TS) and turbidity (TTU) were selected as the relatively easy-to-measure parameters. Consequently, surrogate parameters for nitrogen and phosphorus were identified from the set of easy-to-measure parameters for both road surfaces and roof surfaces. Additionally, surrogate parameters for TSS, TDS and TS, which are key indicators of solids, were obtained from EC and TTU, which can be direct field measurements. The regression relationships developed between surrogate parameters and each key parameter of interest took a similar form for road and roof surfaces, namely simple linear regression equations. The identified relationships for road surfaces were DTN-TDS:DOC, TP-TS:TOC, TSS-TTU, TDS-EC and TS-TTU:EC. The identified relationships for roof surfaces were DTN-TDS and TS-TTU:EC. Some of the relationships developed had a higher confidence interval whilst others had a relatively low confidence interval. The relationships obtained for DTN-TDS, DTN-DOC, TP-TS and TS-EC for road surfaces demonstrated good near-site portability potential. Currently, best management practices are focussed on providing treatment measures for stormwater runoff at catchment outlets, where separation of road and roof runoff is not found.
In this context, it is important to find a common set of surrogate parameter relationships for road surfaces and roof surfaces to evaluate urban stormwater quality. Consequently DTN-TDS, TS-EC and TS-TTU relationships were identified as the common relationships which are capable of providing measurements of DTN and TS irrespective of the surface type.
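Each surrogate relationship above is a simple linear regression, e.g. estimating total solids (TS) from a field measurement of electrical conductivity (EC). A sketch with synthetic values (not the thesis's measured data):

```python
def fit_linear(x, y):
    """Ordinary least squares fit of y = a + b*x; returns (a, b)."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    b = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y)) / \
        sum((xi - mx) ** 2 for xi in x)
    return my - b * mx, b

# Synthetic EC (uS/cm) vs TS (mg/L) pairs roughly following TS ~ 0.6 * EC:
ec = [100, 200, 300, 400, 500]
ts = [65, 118, 185, 242, 298]
a, b = fit_linear(ec, ts)
ts_hat = a + b * 350   # estimate TS from a direct field EC reading
```

Once such a relationship is calibrated, a cheap field EC measurement stands in for a laboratory TS determination, which is the cost saving the thesis targets.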

Relevance:

80.00%

Publisher:

Abstract:

Monitoring Internet traffic is critical in order to acquire a good understanding of threats to computer and network security and in designing efficient computer security systems. Researchers and network administrators have applied several approaches to monitoring traffic for malicious content. These techniques include monitoring network components, aggregating IDS alerts, and monitoring unused IP address spaces. Another method for monitoring and analyzing malicious traffic, which has been widely tried and accepted, is the use of honeypots. Honeypots are very valuable security resources for gathering artefacts associated with a variety of Internet attack activities. As honeypots run no production services, any contact with them is considered potentially malicious or suspicious by definition. This unique characteristic of the honeypot reduces the amount of collected traffic and makes it a more valuable source of information than other existing techniques. Currently, there is insufficient research in the honeypot data analysis field. To date, most of the work on honeypots has been devoted to the design of new honeypots or optimizing the current ones. Approaches for analyzing data collected from honeypots, especially low-interaction honeypots, are presently immature, while analysis techniques are manual and focus mainly on identifying existing attacks. This research addresses the need for developing more advanced techniques for analyzing Internet traffic data collected from low-interaction honeypots. We believe that characterizing honeypot traffic will improve the security of networks and, if the honeypot data is handled in time, give early signs of new vulnerabilities or breakouts of new automated malicious codes, such as worms. 
The outcomes of this research include:
• Identification of repeated use of attack tools and attack processes through grouping activities that exhibit similar packet inter-arrival time distributions using the cliquing algorithm;
• Application of principal component analysis to detect the structure of attackers' activities present in low-interaction honeypots and to visualize attackers' behaviors;
• Detection of new attacks in low-interaction honeypot traffic through the use of the principal component's residual space and the square prediction error statistic;
• Real-time detection of new attacks using recursive principal component analysis;
• A proof-of-concept implementation for honeypot traffic analysis and real-time monitoring.
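The residual-space detection listed above can be sketched with plain PCA: fit principal components on baseline traffic features, then flag an observation whose squared prediction error (SPE) in the residual space exceeds a threshold taken from the baseline. The feature values here are synthetic, not honeypot data:

```python
import numpy as np

def fit_pca(X, n_components):
    """Mean and the top n_components loading vectors of X (rows = samples)."""
    mean = X.mean(axis=0)
    _, _, vt = np.linalg.svd(X - mean, full_matrices=False)
    return mean, vt[:n_components]

def spe(x, mean, P):
    """Squared prediction error: squared norm of the part of x lying
    outside the retained principal-component subspace."""
    d = x - mean
    residual = d - P.T @ (P @ d)
    return float(residual @ residual)

rng = np.random.default_rng(1)
# Baseline traffic: features driven by two dominant correlated directions.
basis = rng.normal(size=(2, 5))
X = rng.normal(size=(300, 2)) @ basis + 0.01 * rng.normal(size=(300, 5))
mean, P = fit_pca(X, 2)
# Threshold from the empirical 99th percentile of baseline SPE values.
threshold = np.quantile([spe(x, mean, P) for x in X], 0.99)
novel = rng.normal(size=5) * 5   # traffic unlike the baseline structure
```

An observation that follows the baseline correlation structure has almost no residual, while structurally new traffic (a new attack pattern) lands far outside the retained subspace and exceeds the threshold.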

Relevance:

80.00%

Publisher:

Abstract:

Bayer hydrotalcites prepared using the seawater neutralisation (SWN) process of Bayer liquors are characterised using X-ray diffraction and thermal analysis techniques. The Bayer hydrotalcites were synthesised at four different temperatures (0, 25, 55, 75 °C) to determine the effect on the thermal stability of the hydrotalcite structure, and to identify other precipitates that form at these temperatures. The interlayer distance increased with increasing synthesis temperature, up to 55 °C, and then decreased by 0.14 Å for Bayer hydrotalcites prepared at 75 °C. The three mineralogical phases identified in this investigation are: 1) Bayer hydrotalcite, 2) calcium carbonate species, and 3) hydromagnesite. The DTG curve can be separated into four decomposition steps: 1) the removal of adsorbed water and free interlayer water in hydrotalcite (30 – 230 °C), 2) the dehydroxylation and decarbonation of hydrotalcite (250 – 400 °C), 3) the decarbonation of hydromagnesite (400 – 550 °C), and 4) the decarbonation of aragonite (550 – 650 °C).

Relevance:

80.00%

Publisher:

Abstract:

Aims and objectives: The purpose of this study is to explore the social construction of cultural issues in palliative care amongst oncology nurses. ---------- Background: Australia is a nation composed of people from different cultural origins with diverse linguistic, spiritual, religious and social backgrounds. The challenge of working with an increasingly culturally diverse population is a common theme expressed by many healthcare professionals from a variety of countries. ---------- Design: Grounded theory was used to investigate the processes by which nurses provide nursing care to cancer patients from diverse cultural backgrounds. ---------- Methods: Semi-structured interviews with seven Australian oncology nurses provided the data for the study; the data was analysed using grounded theory data analysis techniques. ---------- Results: The core category emerging from the study was that of accommodating cultural needs. This paper focuses on describing the series of subcategories that were identified as factors which could influence the process by which nurses would accommodate cultural needs. These factors included nurses' views and understandings of culture and cultural mores, their philosophy of cultural care, nurses' previous experiences with people from other cultures and organisational approaches to culture and cultural care. ---------- Conclusions: This study demonstrated that previous experiences with people from other cultures and organisational approaches to culture and cultural care often influenced nurses' views and understandings of culture and cultural mores and their beliefs, attitudes and behaviours in providing cultural care. ---------- Relevance to clinical practice: It is imperative to appreciate how nurses' experiences with people from other cultures can be recognised and built upon or, if necessary, challenged. 
Furthermore, nurses' cultural competence and experiences with people from other cultures need to be further investigated in clinical practice.

Relevance:

80.00%

Publisher:

Abstract:

Principal Topic: For forward-thinking companies, the environment may represent the ''biggest opportunity for enterprise and invention the industrial world has ever seen'' (Cairncross 1990). Media attention, including the promotion of Al Gore's ''An Inconvenient Truth'', has increased awareness of environmental and sustainability issues and increased demand for business processes that reduce the detrimental environmental impacts of global development (Dean & McMullen 2007). The increased demand for more environmentally sensitive products and services represents an opportunity for the development of ventures that seek to satisfy this demand through entrepreneurial action. As a consequence, recent market developments in renewable energy, carbon emissions, fuel cells, green building, and other sectors suggest that opportunities for environmental entrepreneurship are of increasing importance (Dean and McMullen 2007) and that this is an increasingly important area of business activity (Schaper 2005). In the last decade in particular, big business has sought to develop a more ''sustainability/green friendly'' orientation as a response to public pressure and increased government legislation and policy to improve environmental performance (Cohen and Winn 2007). Whilst much of the literature and media is littered with examples of the sustainability practices of large firms, nascent and young sustainability firms have only recently begun generating strong research and policy interest (Shepherd, Kuskova and Patzelt 2009): not only for their potential to generate above-average financial performance and returns owing to the greater popularity of and demand for sustainability products and services offerings, but also for their intent to lessen environmental impacts, and to provide a more accurate reflection of the ''true cost'' of market offerings taking into account carbon and environmental impacts.
More specifically, researchers have suggested that although the previous focus has been on large firms and their impact on the environment, the estimated collective impact of entries and exits of nascent and young firms in development is substantial and could outweigh the combined environmental impact of large companies (Hillary, 2000). Therefore, it may be argued that greater attention should be paid to nascent and young firms and to researching their sustainability practices, both for their impact in reducing environmental harm and for their potentially higher financial performance. Whilst acknowledging that this research only uses the first wave of a four-year longitudinal study of nascent and young firms, it can still provide an initial analysis on which to build further research. The aim of this paper therefore is to provide an overview of the emerging literature in sustainable entrepreneurship and to present some selected preliminary results from the first wave of the data collection, with comparison, where appropriate, between sustainable firms and firms that do not fulfil this criterion. ''One of the key challenges in evaluating sustainability entrepreneurship is the lack of agreement in how it is defined'' (Schaper, 2005: 10). Some evaluate sustainable entrepreneurs simply as one category of entrepreneurs, with little difference between them and traditional entrepreneurs (Dees, 1998). Other research recognises values-based sustainable enterprises as requiring a unique perspective (Parrish, 2005). Some see environmental or sustainable entrepreneurship as a subset of social entrepreneurship (Cohen & Winn, 2007; Dean & McMullen, 2007) whilst others see it as a separate, distinct theory (Archer 2009). Following one of the first definitions of sustainability, developed by the Brundtland Commission (1987), we define sustainable entrepreneurship as firms which ''seek to meet the needs and aspirations of the present without compromising the ability to meet those of the future''.
---------- Methodology/Key Propositions: In this exploratory paper we investigate sustainable entrepreneurship using Cohen et al.'s (2008) framework to identify strategies of nascent and young entrepreneurial firms. We use data from The Comprehensive Australian Study of Entrepreneurial Emergence (CAUSEE). This study shares its general empirical approach with the PSED studies in the US (Reynolds et al 1994; Reynolds & Curtin 2008). The overall study uses samples of 727 nascent (not yet operational) firms and another 674 young firms, the latter being in an operational stage but less than four years old. To generate the subsample of sustainability firms, we used content analysis techniques on the firm titles, descriptions and product descriptions provided by respondents. Two independent coders used a predefined codebook developed from our review of the sustainability entrepreneurship literature (Cohen et al. 2009) to evaluate the content based on terms such as ''sustainable'', ''eco-friendly'', ''renewable energy'' and ''environment'', amongst others. The inter-rater reliability was checked and the kappa coefficient was found to be within the acceptable range (0.746). 85 firms fulfilled the criteria for inclusion in the sustainability cohort. ---------- Results and Implications: The results for this paper are based on Wave one of the CAUSEE survey, which has been completed, and the data is available for analysis. It is expected that the findings will assist in beginning to develop an understanding of nascent and young firms that are driven to contribute to a society which is sustainable, not just from an economic perspective (Cohen et al 2008), but from an environmental and social perspective as well.
The CAUSEE study provides an opportunity to compare the characteristics of sustainability entrepreneurs with those of entrepreneurial firms without a stated environmental purpose, which constitute the majority of the new firms created each year, using a large-scale novel longitudinal dataset. The results have implications for government, in the design of better conditions for the creation of new businesses; for those who assist sustainability firms, in developing advice programs aligned with a better understanding of their needs and requirements; for individuals who may be considering becoming entrepreneurs in high-potential arenas; and for existing entrepreneurs, in making better decisions.
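The inter-rater check reported above uses the kappa statistic; Cohen's kappa for two coders can be computed directly (the ratings below are illustrative, not the CAUSEE codings):

```python
def cohens_kappa(rater_a, rater_b):
    """Cohen's kappa for two raters over the same items: observed agreement
    corrected for the agreement expected by chance."""
    n = len(rater_a)
    labels = set(rater_a) | set(rater_b)
    p_obs = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    p_exp = sum(
        (rater_a.count(l) / n) * (rater_b.count(l) / n) for l in labels
    )
    return (p_obs - p_exp) / (1 - p_exp)

# Two coders making binary include/exclude judgements on ten firms:
a = [1, 1, 0, 0, 1, 0, 1, 0, 0, 0]
b = [1, 1, 0, 0, 0, 0, 1, 0, 0, 1]
kappa = cohens_kappa(a, b)
```

A kappa of 0.746, as reported above, therefore indicates agreement well beyond what identical marginal coding rates would produce by chance.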

Relevance:

80.00%

Publisher:

Abstract:

This work seeks to fill some of the gap in the economics and behavioural economics literature pertaining to the decision-making processes of individuals under extreme environmental situations (life-and-death events). These essays specifically examine the sinkings of the R.M.S. Titanic, on 14th April 1912, and the R.M.S. Lusitania, on 7th May 1915, using econometric (multivariate) analysis techniques. The results show that even under extreme life-and-death conditions, social norms matter and are reflected in the survival probabilities of individuals onboard the Titanic. However, results from the comparative analysis of the Titanic and Lusitania show that social norms take time to organise and become effective. In the presence of such time constraints, the traditional “homo economicus” model of individual behaviour becomes evident as a survival-of-the-fittest competition.

Relevance:

80.00%

Publisher:

Abstract:

Physical infrastructure assets are important components of our society and our economy. They are usually designed to last for many years, are expected to be heavily used during their lifetime, carry considerable load, and are exposed to the natural environment. They are also normally major structures, and therefore present a heavy investment, requiring constant management over their life cycle to ensure that they perform as required by their owners and users. Given a complex and varied infrastructure life cycle, constraints on available resources, and continuing requirements for effectiveness and efficiency, good management of infrastructure is important. While there is often no one best management approach, the choice of options is improved by better identification and analysis of the issues, by the ability to prioritise objectives, and by a scientific approach to the analysis process. The abilities to better understand the effect of inputs in the infrastructure life cycle on results, to minimise uncertainty, and to better evaluate the effect of decisions in a complex environment, are important in allocating scarce resources and making sound decisions. Through the development of an infrastructure management modelling and analysis methodology, this thesis provides a process that assists the infrastructure manager in the analysis, prioritisation and decision making process. This is achieved through the use of practical, relatively simple tools, integrated in a modular flexible framework that aims to provide an understanding of the interactions and issues in the infrastructure management process. The methodology uses a combination of flowcharting and analysis techniques. It first charts the infrastructure management process and its underlying infrastructure life cycle through the time interaction diagram, a graphical flowcharting methodology that is an extension of methodologies for modelling data flows in information systems. 
This process divides the infrastructure management process over time into self-contained modules, each based on a particular set of activities, with the information flows between them defined by their interfaces and relationships. The modular approach also permits more detailed analysis, or aggregation, as the case may be. It also forms the basis of extending the infrastructure modelling and analysis process to infrastructure networks, by using individual infrastructure assets and their related projects as the basis of the network analysis process. It is recognised that the infrastructure manager is required to meet, and balance, a number of different objectives, and therefore a number of high-level outcome goals for the infrastructure management process have been developed, based on common purpose or measurement scales. These goals form the basis of classifying the larger set of multiple objectives for analysis purposes. A two-stage approach that first rationalises and then weights objectives, using a paired comparison process, ensures that the objectives to be met are both kept to the minimum number required and fairly weighted. Qualitative variables are incorporated into the weighting and scoring process, with utility functions proposed where there is risk or a trade-off situation applies. Variability is considered important in the infrastructure life cycle; the approach used is based on analytical principles but incorporates randomness in variables where required. The modular design of the process permits alternative processes to be used within particular modules, if this is considered a more appropriate way of analysis, provided that boundary conditions and requirements for linkages to other modules are met.
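The paired comparison weighting mentioned above can be illustrated with a small sketch. This is a generic toy example only: the objective names and the preference judgements are invented for illustration and are not taken from the thesis, which may use a different scoring scheme.

```python
# Toy sketch of paired-comparison weighting of objectives.
# Objective names and preference judgements are hypothetical.

objectives = ["safety", "cost", "service_level"]

# wins[i][j] = 1 if objective i was preferred over objective j
# in a pairwise comparison (invented judgements for this example).
wins = [
    [0, 1, 1],  # safety preferred over cost and service_level
    [0, 0, 1],  # cost preferred over service_level
    [0, 0, 0],
]

def paired_comparison_weights(wins):
    """Derive normalised weights from pairwise preference counts."""
    scores = [sum(row) + 1 for row in wins]  # +1 avoids zero weights
    total = sum(scores)
    return [s / total for s in scores]

weights = paired_comparison_weights(wins)
print(dict(zip(objectives, weights)))
# The most-preferred objective receives the largest weight.
```

The normalisation step ensures the weights sum to one, so they can be applied directly in a weighted scoring of alternatives.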
Development and use of the methodology has highlighted a number of infrastructure life cycle issues, including data and information aspects and the consequences of change over the life cycle, as well as variability and the other matters discussed above. It has also highlighted the need to exercise judgment where required, and for organisations that own and manage infrastructure to retain intellectual knowledge regarding that infrastructure. The methodology discussed in this thesis, which to the author's knowledge has not been developed elsewhere, may be used for the analysis of alternatives, planning, prioritisation of a number of projects, and identification of the principal issues in the infrastructure life cycle.
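The abstract notes that the approach incorporates randomness in variables where required. One common way to realise this is Monte Carlo sampling of uncertain life-cycle inputs; the sketch below is a minimal, hypothetical illustration (the cost model, distributions and parameter values are invented, not drawn from the thesis).

```python
# Toy Monte Carlo sketch of variability in life-cycle cost.
# All distributions and figures are hypothetical.

import random

random.seed(42)  # reproducible run for the example

def life_cycle_cost():
    """One sampled life-cycle cost ($M) for a hypothetical asset."""
    construction = random.gauss(100, 10)       # uncertain build cost
    annual_maintenance = random.uniform(1, 3)  # uncertain upkeep/yr
    life_years = 50
    return construction + annual_maintenance * life_years

samples = [life_cycle_cost() for _ in range(10_000)]
mean = sum(samples) / len(samples)
print(f"mean life-cycle cost: {mean:.1f} $M")
# Expected value is 100 + 2 * 50 = 200, so the mean lands nearby.
```

Repeating the sampled calculation many times yields a distribution of outcomes rather than a single point estimate, which supports the uncertainty analysis the methodology calls for.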

Relevância:

80.00% 80.00%

Publicador:

Resumo:

In a digital world, users’ Personally Identifiable Information (PII) is normally managed with a system called an Identity Management System (IMS). There are many types of IMSs, and there are situations when two or more IMSs need to communicate with each other (such as when a service provider needs to obtain identity information about a user from a trusted identity provider). Interoperability issues can arise when the communicating parties use different types of IMS. To facilitate interoperability between different IMSs, an Identity Meta System (IMetS) is normally used. An IMetS can, at least theoretically, join various types of IMSs to make them interoperable and give users the illusion that they are interacting with just one IMS. However, due to the complexity of an IMS, joining various types of IMSs is a technically challenging task, let alone assessing how well an IMetS manages to integrate them. The first contribution of this thesis is the development of a generic IMS model called the Layered Identity Infrastructure Model (LIIM). Using this model, we develop a set of properties that an ideal IMetS should provide. This idealized form is then used as a benchmark to evaluate existing IMetSs. Different types of IMS provide varying levels of privacy protection support. Unfortunately, as observed by Jøsang et al. (2007), many existing IMSs provide insufficient privacy protection. In this thesis, we study and extend a type of privacy enhancing technology known as an Anonymous Credential System (ACS). In particular, we extend the ACS built on the cryptographic primitives proposed by Camenisch, Lysyanskaya, and Shoup. We call this system the Camenisch, Lysyanskaya, Shoup - Anonymous Credential System (CLS-ACS). The goal of CLS-ACS is to let users be as anonymous as possible.
Unfortunately, CLS-ACS has problems, including (1) the concentration of power in a single entity - known as the Anonymity Revocation Manager (ARM) - who, if malicious, can trivially reveal a user’s PII (resulting in an illegal revocation of the user’s anonymity), and (2) poor performance due to the resource-intensive cryptographic operations required. The second and third contributions of this thesis are two protocols that reduce the trust dependencies on the ARM during users’ anonymity revocation. Both protocols distribute trust from the ARM to a set of n referees (n > 1), significantly reducing the probability of an anonymity revocation being performed illegally. The first protocol, the User Centric Anonymity Revocation Protocol (UCARP), allows a user’s anonymity to be revoked in a user-centric manner (that is, the user is aware that his/her anonymity is about to be revoked). The second protocol, the Anonymity Revocation Protocol with Re-encryption (ARPR), allows a user’s anonymity to be revoked by a service provider in an accountable manner (that is, there is a clear mechanism to determine which entity can eventually learn - and possibly misuse - the identity of the user). The fourth contribution of this thesis is a protocol called the Private Information Escrow bound to Multiple Conditions Protocol (PIEMCP). This protocol addresses the performance issue of CLS-ACS by applying CLS-ACS in a federated single sign-on (FSSO) environment. Our analysis shows that PIEMCP can both reduce the number of expensive modular exponentiation operations required and lower the risk of illegal revocation of users’ anonymity. Finally, the protocols proposed in this thesis are complex and need to be formally evaluated to ensure that their required security properties are satisfied. In this thesis, we use Coloured Petri nets (CPNs) and their corresponding state space analysis techniques.
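Distributing trust from a single authority to a set of n referees, as the protocols above do, is commonly realised with threshold secret sharing. The sketch below shows a textbook Shamir-style (k, n) scheme; it is a generic illustration, not the thesis’s actual UCARP or ARPR construction, and the "revocation key" value is invented.

```python
# Generic (k, n) threshold secret sharing over a prime field,
# illustrating trust distribution among n referees. Not the
# thesis's protocol; a standard Shamir-style sketch.

import random

PRIME = 2**127 - 1  # field modulus (a Mersenne prime)

def split_secret(secret, n, k):
    """Split `secret` into n shares; any k of them reconstruct it."""
    coeffs = [secret] + [random.randrange(PRIME) for _ in range(k - 1)]
    shares = []
    for x in range(1, n + 1):
        y = sum(c * pow(x, i, PRIME) for i, c in enumerate(coeffs)) % PRIME
        shares.append((x, y))
    return shares

def reconstruct(shares):
    """Lagrange interpolation at x = 0 over the prime field."""
    secret = 0
    for i, (xi, yi) in enumerate(shares):
        num, den = 1, 1
        for j, (xj, _) in enumerate(shares):
            if i != j:
                num = num * (-xj) % PRIME
                den = den * (xi - xj) % PRIME
        secret = (secret + yi * num * pow(den, -1, PRIME)) % PRIME
    return secret

revocation_key = 123456789  # hypothetical ARM revocation secret
shares = split_secret(revocation_key, n=5, k=3)
assert reconstruct(shares[:3]) == revocation_key  # any 3 of 5 suffice
```

With such a scheme, fewer than k colluding referees learn nothing about the secret, which is the sense in which the probability of an illegal revocation is reduced.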
All of the protocols proposed in this thesis have been formally modeled and verified using these formal techniques. Therefore, the fifth contribution of this thesis is a demonstration of the applicability of CPNs and their corresponding analysis techniques in modeling and verifying privacy enhancing protocols. To our knowledge, this is the first time that CPNs have been comprehensively applied to model and verify privacy enhancing protocols. From our experience, we also propose several CPN modeling approaches, including the modeling of complex cryptographic primitives (such as zero-knowledge proof protocols), attack parameterization, and others. The proposed approaches can be applied to other security protocols, not just privacy enhancing protocols.