911 results for Applied identity-based encryption
Abstract:
The material presented in this thesis may be viewed as comprising two key parts: the first part concerns batch cryptography specifically, whilst the second deals with how this form of cryptography may be applied to security-related applications, such as electronic cash, for improving the efficiency of the protocols. The objective of batch cryptography is to devise more efficient primitive cryptographic protocols. In general, these primitives make use of some property, such as homomorphism, to perform a computationally expensive operation on a collective input set. The idea is to amortise an expensive operation, such as modular exponentiation, over the input. Most of the research work in this field has concentrated on its employment as a batch verifier of digital signatures. It is shown that several new attacks may be launched against these published schemes as some weaknesses are exposed. Another common use of batch cryptography is the simultaneous generation of digital signatures. There is significantly less previous work in this area, and the existing schemes have only limited use in practical applications. Several new batch signature schemes are introduced that improve upon the existing techniques, and some practical uses are illustrated. Electronic cash is a technology that demands complex protocols in order to furnish several security properties. These typically include anonymity, traceability of a double spender, and off-line payment features. Presently, the most efficient schemes make use of coin divisibility to withdraw one large financial amount that may be progressively spent with one or more merchants. Several new cash schemes are introduced here that make use of batch cryptography for improving the withdrawal, payment, and deposit of electronic coins. The devised schemes apply both the batch signature and the batch verification techniques introduced, demonstrating improved performance over the contemporary divisibility-based structures. The solutions also provide an alternative paradigm for the construction of electronic cash systems. Whilst electronic cash is used as the vehicle for demonstrating the relevance of batch cryptography to security-related applications, the applicability of the techniques introduced extends well beyond this.
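As a concrete illustration of the amortisation idea (and not a scheme from this thesis), the sketch below shows the well-known small-exponents test for batch verification of discrete-log claims y_i = g^x_i mod p: random 80-bit exponents (an illustrative security parameter) replace the full-length per-claim exponentiations, so the claims can be checked together much faster than one by one. The group parameters are toy values chosen only so the example runs.

```python
import secrets

def batch_verify(g, p, q, claims, sec_bits=80):
    """Small-exponents batch test for claims (x_i, y_i) asserting y_i = g^x_i mod p.

    g generates a subgroup of prime order q modulo p.  Verifying each claim
    separately costs one full exponentiation per claim; the batch test uses
    random small exponents r_i so that, except with probability about 2^-sec_bits,
    all claims hold iff a single combined equation holds.
    """
    lhs_exp = 0
    rhs = 1
    for x, y in claims:
        r = secrets.randbits(sec_bits)           # random small exponent
        lhs_exp = (lhs_exp + r * x) % q          # accumulate exponent for g
        rhs = (rhs * pow(y, r, p)) % p           # accumulate product of y_i^r_i
    return pow(g, lhs_exp, p) == rhs

# Toy parameters (illustrative only; real schemes use much larger groups).
p, q, g = 23, 11, 4                               # g has order 11 mod 23
claims = [(x, pow(g, x, p)) for x in (2, 5, 7)]
assert batch_verify(g, p, q, claims)
```

The same idea underlies batch verifiers for signature schemes, where the per-item exponentiations dominate the cost.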
Abstract:
In a digital world, users' Personally Identifiable Information (PII) is normally managed with a system called an Identity Management System (IMS). There are many types of IMSs. There are situations when two or more IMSs need to communicate with each other (such as when a service provider needs to obtain some identity information about a user from a trusted identity provider). There could be interoperability issues when the communicating parties use different types of IMS. To facilitate interoperability between different IMSs, an Identity Meta System (IMetS) is normally used. An IMetS can, at least theoretically, join various types of IMSs to make them interoperable and give users the illusion that they are interacting with just one IMS. However, due to the complexity of an IMS, attempting to join various types of IMSs is a technically challenging task, let alone assessing how well an IMetS manages to integrate these IMSs. The first contribution of this thesis is the development of a generic IMS model called the Layered Identity Infrastructure Model (LIIM). Using this model, we develop a set of properties that an ideal IMetS should provide. This idealized form is then used as a benchmark to evaluate existing IMetSs. Different types of IMS provide varying levels of privacy protection support. Unfortunately, as observed by Jøsang et al. (2007), there is insufficient privacy protection in many of the existing IMSs. In this thesis, we study and extend a type of privacy enhancing technology known as an Anonymous Credential System (ACS). In particular, we extend the ACS which is built on the cryptographic primitives proposed by Camenisch, Lysyanskaya, and Shoup. We call this system the Camenisch, Lysyanskaya, Shoup - Anonymous Credential System (CLS-ACS). The goal of CLS-ACS is to let users be as anonymous as possible. Unfortunately, CLS-ACS has problems, including (1) the concentration of power in a single entity - known as the Anonymity Revocation Manager (ARM) - who, if malicious, can trivially reveal a user's PII (resulting in an illegal revocation of the user's anonymity), and (2) poor performance due to the resource-intensive cryptographic operations required. The second and third contributions of this thesis are two protocols that reduce the trust dependencies on the ARM during users' anonymity revocation. Both protocols distribute trust from the ARM to a set of n referees (n > 1), resulting in a significant reduction of the probability of an anonymity revocation being performed illegally. The first protocol, called the User Centric Anonymity Revocation Protocol (UCARP), allows a user's anonymity to be revoked in a user-centric manner (that is, the user is aware that his/her anonymity is about to be revoked). The second protocol, called the Anonymity Revocation Protocol with Re-encryption (ARPR), allows a user's anonymity to be revoked by a service provider in an accountable manner (that is, there is a clear mechanism to determine which entity can eventually learn - and possibly misuse - the identity of the user). The fourth contribution of this thesis is the proposal of a protocol called the Private Information Escrow bound to Multiple Conditions Protocol (PIEMCP). This protocol is designed to address the performance issue of CLS-ACS by applying CLS-ACS in a federated single sign-on (FSSO) environment.
Our analysis shows that PIEMCP can both reduce the amount of expensive modular exponentiation operations required and lower the risk of illegal revocation of users' anonymity. Finally, the protocols proposed in this thesis are complex and need to be formally evaluated to ensure that their required security properties are satisfied. In this thesis, we use Coloured Petri nets (CPNs) and their corresponding state space analysis techniques. All of the protocols proposed in this thesis have been formally modeled and verified using these formal techniques. Therefore, the fifth contribution of this thesis is a demonstration of the applicability of CPNs and their corresponding analysis techniques in modeling and verifying privacy enhancing protocols. To our knowledge, this is the first time that CPNs have been comprehensively applied to model and verify privacy enhancing protocols. From our experience, we also propose several CPN modeling approaches, including the modeling of complex cryptographic primitives (such as zero-knowledge proof protocols), attack parameterization, and others. The proposed approaches can be applied to other security protocols, not just privacy enhancing protocols.
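UCARP and ARPR themselves are not reproduced here; purely to illustrate the idea of spreading revocation power from a single ARM across n referees, the following sketch splits a stand-in revocation key with (t, n) Shamir secret sharing, so that no coalition of fewer than t referees can act alone. The field prime and function names are assumptions made for this sketch.

```python
import secrets

P = 2**127 - 1  # a prime field large enough for a toy revocation key

def share(secret, t, n):
    """Split `secret` into n shares; any t of them reconstruct it (Shamir)."""
    coeffs = [secret] + [secrets.randbelow(P) for _ in range(t - 1)]
    return [(i, sum(c * pow(i, k, P) for k, c in enumerate(coeffs)) % P)
            for i in range(1, n + 1)]

def reconstruct(shares):
    """Lagrange interpolation at x = 0 over the field GF(P)."""
    secret = 0
    for i, (xi, yi) in enumerate(shares):
        num, den = 1, 1
        for j, (xj, _) in enumerate(shares):
            if i != j:
                num = (num * -xj) % P
                den = (den * (xi - xj)) % P
        secret = (secret + yi * num * pow(den, -1, P)) % P
    return secret

key = secrets.randbelow(P)             # stand-in for an anonymity-revocation key
shares = share(key, t=3, n=5)          # power distributed over 5 referees
assert reconstruct(shares[:3]) == key  # any 3 referees can act together
```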
Abstract:
Community-based activism against proposed construction projects is growing. Many protests are poorly managed and escalate into long-term and sometimes acrimonious disputes which damage communities, firms and the construction industry as a whole. Using a thematic storytelling approach which draws on ethnographic method, within a single case study framework, new insights into the social forces that shape and sustain community-based protest against construction projects are provided. A conceptual model of protest movement continuity is presented which highlights the factors that sustain protest continuity over time. The model illustrates how social contagion leads to common community perceptions of development risk and opportunity, to a positive internalization of collective values and identity, to a strategic utilization of social capital and an awareness of the need to manage the emotional dynamics of protest through mechanisms such as symbolic artefacts.
Abstract:
The quick detection of abrupt (unknown) parameter changes in an observed hidden Markov model (HMM) is important in several applications. Motivated by the recent application of relative entropy concepts in the robust sequential change detection problem (and the related model selection problem), this paper proposes a sequential unknown change detection algorithm based on a relative entropy based HMM parameter estimator. Our proposed approach is able to overcome the lack of knowledge of post-change parameters, and is shown to have performance similar to that of the popular cumulative sum (CUSUM) algorithm (which requires knowledge of the post-change parameter values) when examined, on both simulated and real data, in a vision-based aircraft manoeuvre detection problem.
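For readers unfamiliar with the baseline mentioned above, the following is a minimal sketch of a classical CUSUM detector for a known mean shift in independent Gaussian observations; unlike the paper's relative entropy based HMM estimator, it assumes the post-change parameter is known. All names and numbers are illustrative.

```python
import random

def cusum(observations, mu0, mu1, sigma, threshold):
    """Page's CUSUM for a known mean shift mu0 -> mu1 in Gaussian noise.

    Accumulates the per-sample log-likelihood ratio and resets at 0;
    an alarm is raised once the statistic exceeds `threshold`.
    Returns the (0-based) alarm index, or None if no alarm is raised.
    """
    s = 0.0
    for k, y in enumerate(observations):
        llr = (mu1 - mu0) * (y - (mu0 + mu1) / 2.0) / sigma**2
        s = max(0.0, s + llr)
        if s > threshold:
            return k
    return None

random.seed(0)
pre = [random.gauss(0.0, 1.0) for _ in range(200)]    # pre-change regime
post = [random.gauss(1.5, 1.0) for _ in range(100)]   # change occurs at index 200
print(cusum(pre + post, mu0=0.0, mu1=1.5, sigma=1.0, threshold=8.0))
```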
Abstract:
Internet services are an important part of daily activities for most of us. These services come with sophisticated authentication requirements which may not be handled by average Internet users. The management of secure passwords, for example, creates an extra overhead which is often neglected for usability reasons. Furthermore, password-based approaches are applicable only for initial logins and do not protect against unlocked workstation attacks. In this paper, we provide a non-intrusive identity verification scheme based on behavioral biometrics, where keystroke dynamics based on free text is used continuously for verifying the identity of a user in real time. We improve existing keystroke dynamics based verification schemes in four aspects. First, we improve scalability by using a constant number of users instead of the whole user space to verify the identity of a target user. Second, we provide an adaptive user model which enables our solution to take changes in user behavior into consideration in the verification decision. Third, we identify a new distance measure which enables us to verify the identity of a user with shorter text. Fourth, we decrease the number of false results. Our solution is evaluated on a data set which we collected from users while they were interacting with their mailboxes during their daily activities.
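The paper's own distance measure and adaptive user model are not given in the abstract; as a generic illustration of free-text keystroke verification, the sketch below compares mean digraph latencies shared between an enrolled profile and a new sample, accepting the claim when the distance falls below a tuned threshold. The function names, threshold and timing data are assumptions.

```python
from statistics import mean

def digraph_latencies(keystrokes):
    """Map each consecutive key pair to its observed latencies (ms).

    `keystrokes` is a sequence of (key, press_time_ms) events.
    """
    out = {}
    for (k1, t1), (k2, t2) in zip(keystrokes, keystrokes[1:]):
        out.setdefault(k1 + k2, []).append(t2 - t1)
    return out

def profile_distance(profile, sample):
    """Mean absolute difference of average digraph latencies over shared digraphs."""
    shared = set(profile) & set(sample)
    if not shared:
        return float("inf")
    return mean(abs(mean(profile[d]) - mean(sample[d])) for d in shared)

enrolled = digraph_latencies([("t", 0), ("h", 110), ("e", 205), ("t", 420), ("h", 540)])
attempt  = digraph_latencies([("t", 0), ("h", 118), ("e", 210)])
# Accept the claimed identity if the distance is below a tuned threshold.
print(profile_distance(enrolled, attempt) < 25.0)
```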
Abstract:
Previous studies have enabled exact prediction of probabilities of identity-by-descent (IBD) in random-mating populations for a few loci (up to four or so), with extension to more using approximate regression methods. Here we present a precise predictor of multiple-locus IBD using simple formulas based on exact results for two loci. In particular, the probability X_ABC of non-IBD at each of the ordered loci A, B, and C can be well approximated by X_ABC = X_AB X_BC / X_B, which generalizes to X_123...k = X_12 X_23 ... X_{k-1,k} / X^(k-2), where X is the probability of non-IBD at each locus. Predictions from this chain rule are very precise with population bottlenecks and migration, but are rather poorer in the presence of mutation. From these coefficients, the probabilities of multilocus IBD and non-IBD can also be computed for genomic regions as functions of population size, time, and map distances. An approximate but simple recurrence formula is also developed, which generally is less accurate than the chain rule but is more robust with mutation. Used together with the chain rule, it leads to explicit equations for non-IBD in a region. The results can be applied to detection of quantitative trait loci (QTL) by computing the probability of IBD at candidate loci in terms of identity-by-state at neighboring markers.
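A small numerical sketch of the chain rule just described, assuming (as in the formula) a common single-locus non-IBD probability X; the pairwise values are invented purely to show the arithmetic.

```python
def chain_rule_non_ibd(pairwise, x_single):
    """Approximate multi-locus non-IBD probability for ordered loci 1..k.

    pairwise[i] is the two-locus non-IBD probability X_{i,i+1} for adjacent
    loci, and x_single is the single-locus non-IBD probability X (assumed the
    same at every locus), so that
    X_{1...k} ~= (X_12 * X_23 * ... * X_{k-1,k}) / X^(k-2).
    """
    result = 1.0
    for x_adj in pairwise:
        result *= x_adj
    return result / x_single ** (len(pairwise) - 1)

# Three ordered loci A, B, C: X_ABC ~= X_AB * X_BC / X  (illustrative values).
print(chain_rule_non_ibd([0.90, 0.88], x_single=0.95))
```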
Abstract:
Iris-based identity verification is highly reliable, but it can also be subject to attacks. Pupil dilation or constriction stimulated by the application of drugs is an example of a sample presentation security attack which can lead to higher false rejection rates. Suspects on a watch list can potentially circumvent an iris-based system using such methods. This paper investigates a new approach using multiple parts of the iris (instances) and multiple iris samples in a sequential decision fusion framework that can yield robust performance. Results are presented and compared with the standard full-iris based approach for a number of iris degradations. An advantage of the proposed fusion scheme is that the trade-off between detection errors can be controlled by setting parameters such as the number of instances and the number of samples used in the system. The system can then be operated to match security threat levels. It is shown that, for optimal values of these parameters, the fused system also has a lower total error rate.
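The abstract notes that the error trade-off is controlled by the number of instances and samples. For statistically independent per-attempt decisions, a simple "every instance must be accepted within m attempts" rule already gives closed-form fused error rates, as sketched below; this illustrative rule is not necessarily the paper's exact fusion scheme.

```python
def fused_error_rates(far, frr, n_instances, n_samples):
    """Fused FAR/FRR for an AND-of-instances, OR-of-samples decision rule.

    Each of n_instances iris parts must be accepted at least once within
    n_samples attempts; per-attempt errors are assumed independent.
    """
    p_accept_genuine = 1.0 - frr ** n_samples            # genuine part accepted in some attempt
    p_accept_impostor = 1.0 - (1.0 - far) ** n_samples   # impostor part accepted in some attempt
    fused_frr = 1.0 - p_accept_genuine ** n_instances
    fused_far = p_accept_impostor ** n_instances
    return fused_far, fused_frr

# More instances suppress false accepts; extra samples recover false rejects.
print(fused_error_rates(far=0.01, frr=0.05, n_instances=3, n_samples=2))
```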
Abstract:
Authenticated Encryption (AE) is the cryptographic process of providing simultaneous confidentiality and integrity protection to messages. This approach is more efficient than applying a two-step process of providing confidentiality for a message by encrypting the message, and in a separate pass providing integrity protection by generating a Message Authentication Code (MAC). AE using symmetric ciphers can be provided by either stream ciphers with built-in authentication mechanisms or block ciphers using appropriate modes of operation. However, stream ciphers have the potential for higher performance and smaller footprint in hardware and/or software than block ciphers. This property makes stream ciphers suitable for resource-constrained environments, where storage and computational power are limited. There have been several recent stream cipher proposals that claim to provide AE. These ciphers can be analysed using existing techniques that consider confidentiality or integrity separately; however, there is currently no framework for the analysis of AE stream ciphers that analyses these two properties simultaneously. This thesis introduces a novel framework for the analysis of AE using stream cipher algorithms. It analyses the mechanisms for providing confidentiality and for providing integrity in AE algorithms using stream ciphers. There is a greater emphasis on the analysis of the integrity mechanisms, as there is little in the public literature on this in the context of authenticated encryption. The thesis has four main contributions, as follows. The first contribution is the design of a framework that can be used to classify AE stream ciphers based on three characteristics. The first classification applies Bellare and Namprempre's work on the order in which the encryption and authentication processes take place. The second classification is based on the method used for accumulating the input message (either directly or indirectly) into the internal states of the cipher to generate a MAC. The third classification is based on whether the sequence that is used to provide encryption and authentication is generated using a single key and initial vector, or two keys and two initial vectors. The second contribution is the application of an existing algebraic method to analyse the confidentiality algorithms of two AE stream ciphers, namely SSS and ZUC. The algebraic method is based on considering the nonlinear filter (NLF) of these ciphers as a combiner with memory. This method enables us to construct equations for the NLF that relate the inputs, outputs and memory of the combiner to the output keystream. We show that both of these ciphers are secure from this type of algebraic attack. We conclude that using a key-dependent S-box in the NLF twice, and using two different S-boxes in the NLF of ZUC, prevents this type of algebraic attack. The third contribution is a new general matrix-based model for MAC generation where the input message is injected directly into the internal state. This model describes the accumulation process when the input message is injected directly into the internal state of a nonlinear filter generator. We show that three recently proposed AE stream ciphers, namely SSS, NLSv2 and SOBER-128, can be considered as instances of this model. Our model is more general than previous investigations into direct injection. Possible forgery attacks against this model are investigated.
It is shown that using a nonlinear filter in the accumulation process of the input message, when either the input message or the initial state of the register is unknown, prevents forgery attacks based on collisions. The last contribution is a new general matrix-based model for MAC generation where the input message is injected indirectly into the internal state. This model uses the input message as a controller to accumulate a keystream sequence into an accumulation register. We show that three current AE stream ciphers, namely ZUC, Grain-128a and Sfinks, can be considered as instances of this model. We establish the conditions under which the model is susceptible to forgery and side-channel attacks.
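None of the ciphers analysed in the thesis (SSS, ZUC, NLSv2, SOBER-128, Grain-128a, Sfinks) is reimplemented here. As a point of comparison for the one-pass designs being classified, the sketch below shows the conventional two-pass alternative, Encrypt-then-MAC, assembled from a toy keystream and HMAC; the SHA-256 counter-mode keystream is an illustrative stand-in, not a real stream cipher.

```python
import hashlib, hmac, os

def keystream(key, iv, length):
    """Toy keystream: SHA-256 in counter mode (illustration only, not a real cipher)."""
    out = b""
    counter = 0
    while len(out) < length:
        out += hashlib.sha256(key + iv + counter.to_bytes(8, "big")).digest()
        counter += 1
    return out[:length]

def encrypt_then_mac(enc_key, mac_key, iv, message):
    """Two-pass AE: XOR-encrypt with a keystream, then MAC the IV and ciphertext."""
    ct = bytes(m ^ k for m, k in zip(message, keystream(enc_key, iv, len(message))))
    tag = hmac.new(mac_key, iv + ct, hashlib.sha256).digest()
    return ct, tag

def decrypt_and_verify(enc_key, mac_key, iv, ct, tag):
    if not hmac.compare_digest(tag, hmac.new(mac_key, iv + ct, hashlib.sha256).digest()):
        raise ValueError("forgery detected")           # integrity check before decrypting
    return bytes(c ^ k for c, k in zip(ct, keystream(enc_key, iv, len(ct))))

enc_key, mac_key, iv = os.urandom(32), os.urandom(32), os.urandom(12)
ct, tag = encrypt_then_mac(enc_key, mac_key, iv, b"attack at dawn")
assert decrypt_and_verify(enc_key, mac_key, iv, ct, tag) == b"attack at dawn"
```

The one-pass AE stream ciphers studied in the thesis aim to provide the same two guarantees while traversing the message only once.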
Abstract:
This research is an autoethnographic investigation of consumption experiences, public and quasi-public spaces, and their relationship to community within an inner city neighbourhood. The research specifically focuses on the gentrifying inner city, where class-based processes of change can have implications for people’s abilities to remain within, or feel connected to place. However, the thesis draws on broader theories of the throwntogetherness of the contemporary city (e.g., Amin and Thrift, 2002; Massey 2005) to argue that the city is a space where place-based meanings cannot be seen to be fixed, and are instead better understood as events of place – based on ever shifting interrelations between the trajectories of people and things. This perspective argues the experience of belonging to community is not just born of a social encounter, but also draws on the physical and symbolic elements of the context in which it is situated. The thesis particularly explores the ways people construct identifications within this shifting urban environment. As such, consumption practices and spaces offer one important lens through which to explore the interplay of the physical, social and symbolic. Consumer research tells us that consumption practices can facilitate experiences in which identity-defining meaning can be generated and shared. Consumption spaces can also support different kinds of collective identification – as anchoring realms for specific cultural groups or exposure realms that enable individuals to share in the identification practices of others with limited risk (Aubert-Gamet & Cova, 1999). Furthermore, the consumption-based lifestyles that gentrifying inner city neighbourhoods both support and encourage can also mean that consumption practices may be a key reason that people are moving through public space. That is, consumption practices and spaces may provide a purpose for which – and spatial frame against which – our everyday interactions and connections with people and objects are undertaken within such neighbourhoods. The purpose of this investigation then was to delve into the subjectivities at the heart of identifying with places, using the lens of our consumption-based experiences within them. The enquiry describes individual and collective identifications and emotional connections, and explores how these arise within and through our experiences within public and quasi-public spaces. It then theorises these ‘imaginings’ as representative of an experience of community. To do so, it draws on theories of imagination and its relation to community. Theories of imagined community remind us that both the values and identities of community are held together by projections that create relational links out of objects and shared practices (e.g., Benedict Anderson, 2006; Urry, 2000). Drawing on broader theories of the processes of the imagination, this thesis suggests that an interplay between reflexivity and fantasy – which are products of the critical and the fascinated consciousness – plays a role in this imagining of community (e.g., Brann, 1991; Ricoeur, 1994). This thesis therefore seeks to explore how these processes of imagining are implicated within the construction of an experience of belonging to neighbourhood-based community through consumption practices and the public and quasi-public spaces that frame them. The key question of this thesis is how do an individual’s consumption practices work to construct an imagined presence of neighbourhood-based community? 
Given the focus on public and quasi-public spaces and our experiences within them, the research also asked how do experiences in the public and quasi-public spaces that frame these practices contribute to the construction of this imagined presence? This investigation of imagining community through consumption practices is based on my own experiences of moving to, and attempting to construct community connections within, an inner city neighbourhood in Melbourne, Australia. To do so, I adopted an autoethnographic methodology. This is because autoethnography provides the methodological tools through which one can explore and make visible the subjectivities inherent within the lived experiences of interest to the thesis (Ellis, 2004). I describe imagining community through consumption as an extension of a place-based self. This self is manifest through personal identification in consumption spaces that operate as anchoring realms for specific cultural groups, as well as through a broader imagining of spaces, people, and practices as connected through experiences within realms of exposure. However, this is a process that oscillates through cycles of identification; these anchor one within place personally, but also disrupt those attachments. This instability can force one to question the orientation and motives of these imaginings, and reframe them according to different spaces and reference groups in ways that can also work to construct a more anonymous and, conversely, more achievable collective identification. All the while, the ‘I’ at the heart of this identification is in an ongoing process of negotiation, and similarly, the imagined community is never complete. That is, imagining community is a negotiation, with people and spaces – but mostly with the different identifications of the self. This thesis has been undertaken by publication, and thus the process of imagining community is explored and described through four papers. Of these, the first two focus on specific types of consumption spaces – a bar and a shopping centre – and consider the ways that anchoring and exposure within these spaces support the process of imagining community. The third paper examines the ways that the public and quasi-public spaces that make up the broader neighbourhood context are themselves throwntogether as a realm of exposure, and considers the ways this shapes my imaginings of this neighbourhood as community. The final paper develops a theory of imagined community, as a process of comparison and contrast with imagined others, to provide a summative conceptualisation of the first three papers. The first paper, chapter five, explores this process of comparison and contrast in relation to authenticity, which in itself is a subjective assessment of identity. This chapter was written as a direct response to the recent work of Zukin (2010), and draws on theories of authenticity as applied to personal and collective identification practices by consumer researchers Arnould and Price (2000). In this chapter, I describe how my assessments of the authenticity of my anchoring experiences within one specific consumption space, a neighbourhood bar, are evaluated in comparison to my observations of and affective reactions to the social practices of another group of residents in a different consumption space, the local shopping centre. Chapter five also provides an overview of the key sites and experiences that are considered in more detail in the following two chapters.
In chapter six, I again draw on my experiences within the bar introduced in chapter five, this time to explore the process of developing a regular identity within a specific consumption space. Addressing the popular theory of the cafe or bar as third place (Oldenburg, 1999), this paper considers the purpose of developing anchored relationships with people within specific consumption spaces, and explores the different ways this may be achieved in an urban context where the mobilities and lifestyle practices of residents complicate the idea of a consumption space as an anchoring or third place. In doing so, this chapter also considers the manner in which this type of regular identification may be seen to be the beginning of the process of imagining community. In chapter seven, I consider the ways the broader public spaces of the neighbourhood work cumulatively to expose different aspects of its identity by following my everyday movements through the neighbourhood’s shopping centre and main street. Drawing on the theories of Urry (2000), Massey (2005), and Amin (2007, 2008), this chapter describes how these spaces operate as exposure realms, enabling the expression of different senses of the neighbourhood’s spaces, times, cultures, and identities through their physical, social, and symbolic elements. Yet they also enable them to be united: through habitual pathways, group practices of appropriation of space, and memory traces that construct connections between objects and experiences. This chapter describes this as a process of exposure to these different elements. Our imagination begins to expand the scope of the frames onto which it projects an imagined presence; it searches for patterns within the physical, social, and symbolic environment and draws connections between people and practices across spaces. As the final paper, chapter eight, deduces, it is in making these connections that one constructs the objects and shared practices of imagined community. This chapter describes this as an imagining of neighbourhood as a place-based extension of the self, and then explores the ways in which I drew on physical, social, and symbolic elements in an attempt to construct a fit between the neighbourhood’s offerings and my desires for place-based identity definition. This was a cumulative but fragmented process, in which positive and negative experiences of interaction and identification with people and things were searched for their potential to operate as the objects and shared practices of imagined community. This chapter describes these connections as constructed through interplay between reflexivity and fantasy, as the imagination seeks balance between desires for experiences of belonging, and the complexities of constructing them within the throwntogether context of the contemporary city. The conclusion of the thesis describes the process of imagining community as a reflexive fantasy, that is, as a product of both the critical and fascinated consciousness (Ricoeur, 1994). It suggests that the fascinated consciousness imbues experiences with hope and desire, which the reflexive imagining can turn to disappointment and shame as it critically reflects on the reality of those fascinated projections. At the same time, the reflexive imagination also searches the practices of others for affirmation of those projections, effectively seeking to prove the reality of the fantasy of the imagined community.
Abstract:
Reliability of the performance of biometric identity verification systems remains a significant challenge. Individual biometric samples of the same person (identity class) are not identical at each presentation, and performance degradation arises from intra-class variability and inter-class similarity. These limitations lead to false accepts and false rejects that are dependent. It is therefore difficult to reduce the rate of one type of error without increasing the other. The focus of this dissertation is to investigate a method based on classifier fusion techniques to better control the trade-off between the verification errors, using text-dependent speaker verification as the test platform. A sequential classifier fusion architecture that integrates multi-instance and multi-sample fusion schemes is proposed. This fusion method enables a controlled trade-off between false alarms and false rejects. For statistically independent classifier decisions, analytical expressions for each type of verification error are derived using base classifier performances. As this assumption may not always be valid, these expressions are modified to incorporate the correlation between statistically dependent decisions from clients and impostors. The architecture is empirically evaluated by applying it to text-dependent speaker verification using Hidden Markov Model based digit-dependent speaker models in each stage, with multiple attempts for each digit utterance. The trade-off between the verification errors is controlled using two parameters, the number of decision stages (instances) and the number of attempts at each decision stage (samples), fine-tuned on an evaluation/tuning set. The statistical validation of the derived expressions for error estimates is evaluated on test data. The performance of the sequential method is further demonstrated to depend on the order of the combination of digits (instances) and the nature of repetitive attempts (samples). The false rejection and false acceptance rates for the proposed fusion are estimated using the base classifier performances, the variance in correlation between classifier decisions, and the sequence of classifiers with favourable dependence selected using the 'Sequential Error Ratio' criteria. The error rates are better estimated by incorporating user-dependent (such as speaker-dependent thresholds and speaker-specific digit combinations) and class-dependent (such as client-impostor dependent favourable combinations and class-error based threshold estimation) information. The proposed architecture is desirable in most speaker verification applications, such as remote authentication and telephone and internet shopping applications. The tuning of the parameters - the number of instances and samples - serves both the security and user convenience requirements of speaker-specific verification. The architecture investigated here is applicable to verification using other biometric modalities such as handwriting, fingerprints and keystrokes.
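The dissertation's analytical error expressions are not reproduced here; the sketch below shows one standard way the independence assumption can be relaxed for a two-stage accept-only-if-both-accept rule, using the correlation coefficient between two Bernoulli decisions. The formula and numbers are illustrative and are not the derivation used in the dissertation.

```python
from math import sqrt

def p_both(p1, p2, rho):
    """P(A and B) for two Bernoulli events with marginals p1, p2 and correlation rho."""
    return p1 * p2 + rho * sqrt(p1 * (1 - p1) * p2 * (1 - p2))

def two_stage_and_rule(far1, frr1, far2, frr2, rho_impostor, rho_genuine):
    """Fused error rates when a claim is accepted only if both stages accept."""
    fused_far = p_both(far1, far2, rho_impostor)               # impostor passes both stages
    fused_frr = 1 - p_both(1 - frr1, 1 - frr2, rho_genuine)    # genuine user fails somewhere
    return fused_far, fused_frr

# rho = 0 recovers the independent-decision case; positive correlation between
# the stages' decisions raises the fused FAR above far1 * far2.
print(two_stage_and_rule(0.02, 0.05, 0.03, 0.06, rho_impostor=0.2, rho_genuine=0.2))
```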
Abstract:
Proxy re-encryption (PRE) is a highly useful cryptographic primitive whereby Alice and Bob can endow a proxy with the capacity to change ciphertext recipients from Alice to Bob, without the proxy itself being able to decrypt, thereby providing delegation of decryption authority. Key-private PRE (KP-PRE) specifies an additional level of confidentiality, requiring pseudo-random proxy keys that leak no information on the identity of the delegators and delegatees. In this paper, we propose a CPA-secure KP-PRE scheme in the standard model (which we then transform into a CCA-secure scheme in the random oracle model). Both schemes enjoy highly desirable properties such as uni-directionality and multi-hop delegation. Unlike (the few) prior constructions of PRE and KP-PRE that typically rely on bilinear maps under ad hoc assumptions, the security of our construction is based on the hardness of the standard Learning-With-Errors (LWE) problem, itself reducible from worst-case lattice hard problems that are conjectured immune to quantum cryptanalysis, or “post-quantum”. Of independent interest, we further examine the practical hardness of the LWE assumption, using Kannan’s exhaustive search algorithm coupled with pruning techniques. This leads to state-of-the-art parameters not only for our scheme, but also for a number of other primitives based on LWE published in the literature.
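As background on the hardness assumption the scheme rests on (not on the PRE construction itself), the following sketch generates a toy LWE instance A, b = A·s + e mod q with small error e; recovering s, or distinguishing b from uniform, is the LWE problem. The dimensions, modulus and error bound are toy values only.

```python
import random

def lwe_sample(n=8, m=16, q=97, error_bound=2, seed=0):
    """Generate a toy LWE instance (A, b) with b = A*s + e (mod q).

    Real parameters are far larger; this only shows the shape of the problem.
    """
    rng = random.Random(seed)
    s = [rng.randrange(q) for _ in range(n)]                        # secret vector
    A = [[rng.randrange(q) for _ in range(n)] for _ in range(m)]    # public matrix
    e = [rng.randint(-error_bound, error_bound) for _ in range(m)]  # small noise
    b = [(sum(a_ij * s_j for a_ij, s_j in zip(row, s)) + e_i) % q
         for row, e_i in zip(A, e)]
    return A, b, s

A, b, s = lwe_sample()
print(len(A), len(A[0]), b[:4])
```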
Abstract:
An accumulator based on bilinear pairings was proposed at CT-RSA'05. Here, it is first demonstrated that the security model proposed by Lan Nguyen does allow a cryptographic accumulator that is not collision resistant. Secondly, it is shown that collision resistance can be provided by updating the adversary model appropriately. Finally, an improvement on Nguyen's identity escrow scheme with membership revocation based on the accumulator is proposed, which removes the trusted third party.
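The pairing-based accumulator in question is not reconstructed here. To make the accumulate/witness/verify interface concrete, the sketch below uses the older RSA-style accumulator over prime-represented elements with a toy modulus; it illustrates what an accumulator does, not Nguyen's construction or the proposed fix.

```python
from math import prod

# Toy RSA modulus (illustrative; a real accumulator needs a large modulus of
# unknown factorisation, and set elements mapped to primes via hash-to-prime).
N, g = 3233, 5              # N = 61 * 53
elements = [3, 7, 11, 13]   # the accumulated set, already prime-represented

acc = pow(g, prod(elements), N)                 # accumulator value for the set

def witness(x):
    """Membership witness for x: g raised to the product of all *other* elements."""
    return pow(g, prod(e for e in elements if e != x), N)

def verify(x, wit):
    """x is in the set iff raising its witness to x yields the accumulator."""
    return pow(wit, x, N) == acc

assert verify(7, witness(7))          # genuine member verifies
assert not verify(3, witness(7))      # mismatched element/witness fails
```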
Abstract:
The majority of research examining massively multiplayer online game (MMOG)-based social relationships has used quantitative methodologies. The present study used qualitative semi-structured interviews with 22 Australian World of Warcraft (WoW) players to examine their experiences of MMOG-based social relationships. Interview transcripts underwent thematic analysis and revealed that participants reported experiencing an MMOG-based sense of community (a sense of belonging within the gaming or WoW community), discussed a number of different MMOG-based social identities (such as gamer, WoW player and guild or group member) and stated that they derived social support (a perception that one is cared for and may access resources from others within a group) from their relationships with other players. The findings of this study confirm that MMOG players can form gaming communities. Almost all participants accessed or provided in-game social support, and some gave or received broader emotional support. Players also identified as gamers and guild members. Fewer participants identified as WoW players. Findings indicated that changes to the game environment influence these relationships and further exploration of players' experiences could determine the optimal game features to enhance positive connections with fellow players.
Abstract:
This project analyses and evaluates the integrity assurance mechanisms used in four Authenticated Encryption schemes based on symmetric block ciphers. These schemes are all cross-chaining block cipher modes that claim to provide both confidentiality and integrity assurance simultaneously, in one pass over the data. The investigations include assessing the validity of an existing forgery attack on certain schemes, applying the attack approach to other schemes, and implementing the attacks to verify claimed probabilities of successful forgeries. For these schemes, the theoretical basis of the attack was developed, the attack algorithm was implemented, and computer simulations were performed for experimental verification.
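The cross-chaining modes and the specific forgery attack examined in the project are not reproduced here. As a reminder of why integrity assurance is analysed separately from confidentiality, the sketch below shows the textbook malleability of XOR-style encryption with no integrity mechanism: an attacker who can guess the plaintext can flip ciphertext bits to produce any same-length message, undetected. The message formats are invented for the example.

```python
import os

def xor_bytes(a, b):
    return bytes(x ^ y for x, y in zip(a, b))

# Encryption with no integrity mechanism: ciphertext = plaintext XOR keystream.
keystream = os.urandom(32)
plaintext = b"PAY ALICE  $100"
ciphertext = xor_bytes(plaintext, keystream)

# An attacker who guesses the plaintext can flip bits in the ciphertext so that
# it decrypts to a chosen message of the same length, without the keystream.
tampered = xor_bytes(ciphertext, xor_bytes(plaintext, b"PAY MALLORY$900"))
assert xor_bytes(tampered, keystream) == b"PAY MALLORY$900"
```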
Abstract:
In this paper, we present an approach to estimating the fractal complexity of discrete-time signal waveforms based on computation of the area bounded by sample points of the signal at different time resolutions. The slope of the best straight-line fit to the graph of log(A(r_k)/r_k^2) versus log(1/r_k) is estimated, where A(r_k) is the area computed at time resolution r_k. The slope quantifies the complexity of the signal and is taken as an estimate of the fractal dimension (FD). The proposed approach is used to estimate the fractal dimension of parametric fractal signals with known fractal dimensions, and the method has given accurate results. The estimation accuracy of the method is compared with that of Higuchi's and Sevcik's methods. The proposed method has given more accurate results than Sevcik's method, and its results are comparable to those of Higuchi's method. The practical application of the complexity measure in detecting changes in the complexity of signals is discussed using real sleep electroencephalogram recordings from eight different subjects. The FD-based approach has shown good performance in discriminating different stages of sleep.
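One plausible reading of the area computation described above (an assumption, since the abstract does not fix the details) is a rectangular cover of the waveform with windows r_k samples wide, whose total area grows with the roughness of the signal; the slope fit then follows the log(A(r_k)/r_k^2) versus log(1/r_k) relation given in the abstract. The sketch below implements that reading and is not the authors' reference implementation.

```python
import math
import random

def cover_area(signal, r, dt=1.0):
    """Area of a rectangular cover of the waveform using windows r samples wide."""
    area = 0.0
    for start in range(0, len(signal) - r + 1, r):
        window = signal[start:start + r]
        area += (max(window) - min(window)) * r * dt
    return area

def fractal_dimension(signal, resolutions=(2, 4, 8, 16, 32)):
    """Least-squares slope of log(A(r_k)/r_k^2) versus log(1/r_k)."""
    xs = [math.log(1.0 / r) for r in resolutions]
    ys = [math.log(cover_area(signal, r) / r ** 2) for r in resolutions]
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    return (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
            / sum((x - mx) ** 2 for x in xs))

random.seed(1)
white_noise = [random.gauss(0.0, 1.0) for _ in range(4096)]
# A rough, noise-like signal should yield a noticeably higher estimate than a
# smooth signal, whose cover area scales so that the slope is close to 1.
print(fractal_dimension(white_noise))
```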