Abstract:
Overweight and obesity are two of the most important emerging public health issues of our time and are regarded by the World Health Organisation [WHO] (1998) as a worldwide epidemic. The prevalence of obesity in the USA is the highest in the world, and Australian obesity rates fall into second place. Currently, about 60% of Australian adults are overweight (BMI ≥ 25 kg/m2). The socio-demographic factors associated with overweight and/or obesity have been well demonstrated, but many of the existing studies only examined these relationships at one point in time, and did not examine whether significant relationships changed over time. Furthermore, only limited previous research has examined the relationship between perception of weight status and actual weight status, as well as factors that may impact on people's perception of their body weight status. Aims: The aims of the proposed research are to analyse the discrepancy between perceptions of weight status and actual weight status in Australian adults; to examine if there are trends in perceptions of weight status in adults between 1995 and 2004/5; and to propose a range of health promotion strategies and further research that may be useful in managing physical activity, healthy diet, and weight reduction. Hypotheses: Four alternate hypotheses are examined by the research: (1) there are associations between independent variables (e.g. socio-demographic factors, physical activity and dietary habits) and overweight and/or obesity; (2) there are associations between the same independent variables and the perception of overweight; (3) there are associations between the same independent variables and the discrepancy between weight status and perception of weight status; and (4) there are trends in overweight and/or obesity, perception of overweight, and the discrepancy in Australian adults from 1995 to 2004/5. Conceptual Framework and Methods: A conceptual framework is developed that shows the associations identified among socio-demographic factors, physical activity and dietary habits with actual weight status, as well as examining perception of weight status. The three latest National Health Survey databases (1995, 2001 and 2004/5) were used as the primary data sources. A total of 74,114 Australian adults aged 20 years and over were recruited from these databases. Descriptive statistics, bivariate analyses (One-Way ANOVA tests, unpaired t-tests and Pearson chi-square tests), and multinomial logistic regression modelling were used to analyse the data. Findings: This research reveals that gender, main language spoken at home, occupation status, household structure, private health insurance status, and exercise are related to the discrepancy between actual weight status and perception of weight status, but only gender and exercise are related to the discrepancy across the three time points. The current research provides more knowledge about perception of weight status independently. Factors which affect perception of overweight are gender, age, language spoken at home, private health insurance status, and dietary habits. The study also finds that many factors that impact overweight and/or obesity also have an effect on perception of overweight, such as age, language spoken at home, household structure, and exercise. However, some factors (i.e. private health insurance status and milk consumption) only impact on perception of overweight.
Furthermore, factors that are related to people's overweight are not totally related to people's underestimation of their body weight status in the study results. Thus, there are unknown factors which can affect people's underestimation of their body weight status. Conclusions: Health promotion and education activities should provide population-level health education and promotion, as well as targeted education for particular at-risk sub-groups. Further research should take the form of a longitudinal study designed to examine the causal relationship between overweight and/or obesity and underestimation of body weight status; it should also place more attention on the relationships between overweight and/or obesity and dietary habits, with a more comprehensive representation of SES. Moreover, further research that deals with identification of characteristics of perception of weight status, in particular the underestimation of body weight status, should be undertaken.
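For readers unfamiliar with the modelling step, the following is a minimal sketch (not the thesis code) of how a multinomial logistic regression of the perception discrepancy on socio-demographic and behavioural predictors could be set up with statsmodels; the file and column names are hypothetical.

```python
# Minimal sketch only: multinomial logistic regression of perception discrepancy.
# Column names and outcome coding are hypothetical placeholders.
import pandas as pd
import statsmodels.api as sm

nhs = pd.read_csv("nhs_adults.csv")  # hypothetical pooled 1995/2001/2004-05 extract

# Assumed outcome coding: 0 = accurate perception, 1 = underestimate, 2 = overestimate
y = nhs["perception_discrepancy"]
X = sm.add_constant(pd.get_dummies(
    nhs[["sex", "age_group", "language_at_home", "household_structure",
         "private_health_insurance", "exercise_level"]],
    drop_first=True, dtype=float))

model = sm.MNLogit(y, X).fit()
print(model.summary())  # coefficients are log relative-risk ratios vs. the base category
```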
Corneal topography with Scheimpflug imaging and videokeratography: comparative study of normal eyes
Abstract:
PURPOSE: To compare the repeatability within anterior corneal topography measurements and agreement between measurements with the Pentacam HR rotating Scheimpflug camera and with a previously validated Placido disk–based videokeratoscope (Medmont E300). SETTING: Contact Lens and Visual Optics Laboratory, School of Optometry, Queensland University of Technology, Brisbane, Queensland, Australia. METHODS: Normal eyes in 101 young adult subjects had corneal topography measured using the Scheimpflug camera (6 repeated measurements) and videokeratoscope (4 repeated measurements). The best-fitting axial power corneal spherocylinder was calculated and converted into power vectors. Corneal higher-order aberrations (HOAs) (up to the 8th Zernike order) were calculated using the corneal elevation data from each instrument. RESULTS: Both instruments showed excellent repeatability for axial power spherocylinder measurements (repeatability coefficients <0.25 diopter; intraclass correlation coefficients >0.9) and good agreement for all power vectors. Agreement between the 2 instruments was closest when the mean of multiple measurements was used in analysis. For corneal HOAs, both instruments showed reasonable repeatability for most aberration terms and good correlation and agreement for many aberrations (eg, spherical aberration, coma, higher-order root mean square). For other aberrations (eg, trefoil and tetrafoil), the 2 instruments showed relatively poor agreement. CONCLUSIONS: For normal corneas, the Scheimpflug system showed excellent repeatability and reasonable agreement with a previously validated videokeratoscope for the anterior corneal axial curvature best-fitting spherocylinder and several corneal HOAs. However, for certain aberrations with higher azimuthal frequencies, the Scheimpflug system had poor agreement with the videokeratoscope; thus, caution should be used when interpreting these corneal aberrations with the Scheimpflug system.
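As context for the analysis above, the sketch below shows the standard conversion of a spherocylinder into power vectors (M, J0, J45) and one common definition of a repeatability coefficient; it is an illustration of the metrics, not the study's actual code, and the signs depend on the cylinder convention used.

```python
import numpy as np

def power_vectors(sphere, cyl, axis_deg):
    """Convert a spherocylinder (S, C, axis) into power vectors M, J0, J45 (dioptres)."""
    ax = np.deg2rad(axis_deg)
    M   = sphere + cyl / 2.0
    J0  = -(cyl / 2.0) * np.cos(2 * ax)
    J45 = -(cyl / 2.0) * np.sin(2 * ax)
    return M, J0, J45

def repeatability_coefficient(repeated):
    """repeated: array of shape (n_subjects, n_repeats).
    One common definition: 1.96 * sqrt(2) * within-subject standard deviation."""
    sw = np.sqrt(np.mean(np.var(repeated, axis=1, ddof=1)))
    return 1.96 * np.sqrt(2) * sw
```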
Abstract:
Purpose: To investigate static upper eyelid pressure and contact with the ocular surface in a group of young adult subjects. Methods: Static upper eyelid pressure was measured for 11 subjects using a piezoresistive pressure sensor attached to a rigid contact lens. Measures of eyelid pressure were derived from an active pressure cell (1.14 mm square) beneath the central upper eyelid margin. To investigate the contact region between the upper eyelid and ocular surface, we used pressure sensitive paper and the lissamine-green staining of Marx’s line. These measures combined with the pressure sensor readings were used to derive estimates of eyelid pressure. Results: The mean contact width between the eyelids and ocular surface estimated using pressure sensitive paper was 0.60 ± 0.16 mm, while the mean width of Marx’s line was 0.09 ± 0.02 mm. The mean central upper eyelid pressure was calculated to be 3.8 ± 0.7 mmHg (assuming that the whole pressure cell was loaded), 8.0 ± 3.4 mmHg (derived using the pressure sensitive paper imprint widths) and 55 ± 26 mmHg (based on contact widths equivalent to Marx’s line). Conclusions: The pressure sensitive paper measurements suggest that a band of the eyelid margin, significantly larger than the anatomical zone of the eyelid margin known as Marx’s line, has primary contact with the ocular surface. Using these measurements as the contact between the eyelid margin and ocular surface, we believe that the mean pressure of 8.0 ± 3.4 mmHg is the most reliable estimate of static upper eyelid pressure.
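A rough back-of-the-envelope check (not the authors' per-subject calculation) shows why the three pressure estimates differ: if the force registered by the 1.14 mm pressure cell is actually concentrated over a narrower contact band, the inferred pressure scales roughly with the ratio of cell width to band width.

```python
# Rough consistency check only; the published figures were derived per subject.
cell_width_mm = 1.14
p_full_cell_mmHg = 3.8  # mean pressure assuming the whole cell is loaded

for label, w in [("pressure-sensitive paper band", 0.60), ("Marx's line", 0.09)]:
    print(f"{label}: ~{p_full_cell_mmHg * cell_width_mm / w:.0f} mmHg")
# ~7 mmHg and ~48 mmHg, the same order as the reported 8.0 and 55 mmHg means
```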
Abstract:
This 90 minute panel session is designed to explore issues relating to the teaching of drama, performance studies, and theatre studies within Higher Education. Some of the issues that will be raised include: developing an understanding of the learning that students believe they are experiencing through performance; contemporary models for teaching; and the suggestion that the body can be an important site for acquiring a variety of different knowledges. Paul Makeham will present a general position paper to commence the session (15 minutes). Maryrose Casey, Gillian Kehoul, and Delyse Ryan will each speak briefly (15 minutes) about aspects of their research into Higher Education teaching before opening the floor for a round-table discussion of issues affecting the teaching of these disciplines.
Abstract:
In the quest for shorter time-to-market, higher quality and reduced cost, model-driven software development has emerged as a promising approach to software engineering. The central idea is to promote models to first-class citizens in the development process. Starting from a set of very abstract models in the early stages of development, these are refined into more concrete models and finally, as a last step, into code. As early phases of development focus on different concepts compared to later stages, various modelling languages are employed to most accurately capture the concepts and relations under discussion. In light of this refinement process, translating between modelling languages becomes a time-consuming and error-prone necessity. This is remedied by model transformations, which provide support for reusing and automating recurring translation efforts. These transformations can typically only be used to translate a source model into a target model, but not vice versa. This poses a problem if the target model is subject to change. In this case the models get out of sync and therefore no longer constitute a coherent description of the software system, leading to erroneous results in later stages. This is a serious threat to the promised benefits of quality, cost-saving, and time-to-market. Therefore, providing a means to restore synchronisation after changes to models is crucial if the model-driven vision is to be realised. This process of reflecting changes made to a target model back to the source model is commonly known as Round-Trip Engineering (RTE). While there are a number of approaches to this problem, they impose restrictions on the nature of the model transformation. Typically, in order for a transformation to be reversed, for every change to the target model there must be exactly one change to the source model. While this makes synchronisation relatively “easy”, it is ill-suited for many practically relevant transformations as they do not have this one-to-one character. To overcome these issues and to provide a more general approach to RTE, this thesis puts forward an approach in two stages. First, a formal understanding of model synchronisation on the basis of non-injective transformations (where a number of different source models can correspond to the same target model) is established. Second, detailed techniques are devised that allow the implementation of this understanding of synchronisation. A formal underpinning for these techniques is drawn from abductive logic reasoning, which allows the inference of explanations from an observation in the context of a background theory. As non-injective transformations are the subject of this research, there might be a number of changes to the source model that all equally reflect a certain target model change. To help guide the procedure in finding “good” source changes, model metrics and heuristics are investigated. Combining abductive reasoning with best-first search and a “suitable” heuristic enables efficient computation of a number of “good” source changes. With this procedure, Round-Trip Engineering of non-injective transformations can be supported.
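The following sketch illustrates the general shape of such a search, with `transform`, `candidate_edits`, and `cost` standing in for the abductive explanation generator and the metrics/heuristics described above; it is an illustration of the idea, not the thesis implementation.

```python
# Illustrative best-first search over candidate source-model edits.
import heapq

def round_trip(source, changed_target, transform, candidate_edits, cost, max_results=5):
    frontier = [(0.0, 0, source)]       # (heuristic cost, tie-breaker, candidate source model)
    results, counter, seen = [], 1, set()
    while frontier and len(results) < max_results:
        c, _, cand = heapq.heappop(frontier)
        if transform(cand) == changed_target:     # re-run the forward transformation to check
            results.append((c, cand))
            continue
        for edited in candidate_edits(cand, changed_target):
            key = repr(edited)
            if key not in seen:
                seen.add(key)
                heapq.heappush(frontier, (cost(edited, source), counter, edited))
                counter += 1
    return results                       # several "good" source changes, cheapest first
```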
Abstract:
Background: It remains unclear whether it is possible to develop an epidemic forecasting model for transmission of dengue fever in Queensland, Australia. Objectives: To examine the potential impact of El Niño/Southern Oscillation on the transmission of dengue fever in Queensland, Australia and to explore the possibility of developing a forecast model of dengue fever. Methods: Data on the Southern Oscillation Index (SOI), an indicator of El Niño/Southern Oscillation activity, were obtained from the Australian Bureau of Meteorology. Numbers of dengue fever cases notified and numbers of postcode areas with dengue fever cases between January 1993 and December 2005 were obtained from Queensland Health, and relevant population data were obtained from the Australian Bureau of Statistics. A multivariate Seasonal Auto-regressive Integrated Moving Average model was developed and validated by dividing the data file into two datasets: the data from January 1993 to December 2003 were used to construct a model and those from January 2004 to December 2005 were used to validate it. Results: A decrease in the average SOI (ie, warmer conditions) during the preceding 3–12 months was significantly associated with an increase in the monthly numbers of postcode areas with dengue fever cases (β=−0.038; p = 0.019). Predicted values from the Seasonal Auto-regressive Integrated Moving Average model were consistent with the observed values in the validation dataset (root-mean-square percentage error: 1.93%). Conclusions: Climate variability is directly and/or indirectly associated with dengue transmission, and the development of an SOI-based epidemic forecasting system is possible for dengue fever in Queensland, Australia.
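A hedged sketch of this kind of model with statsmodels is shown below; the file name, chosen SOI lag, and ARIMA orders are placeholders rather than the fitted specification reported above.

```python
# Sketch only: seasonal ARIMA with lagged SOI as an exogenous regressor.
import pandas as pd
from statsmodels.tsa.statespace.sarimax import SARIMAX

df = pd.read_csv("dengue_soi_monthly.csv", parse_dates=["month"], index_col="month")
df["soi_lag3"] = df["soi"].shift(3)     # hypothetical 3-month lag of the SOI
df = df.dropna()

train = df.loc["1993-01":"2003-12"]     # construction period
test  = df.loc["2004-01":"2005-12"]     # validation period

model = SARIMAX(train["postcodes_with_cases"], exog=train[["soi_lag3"]],
                order=(1, 0, 1), seasonal_order=(1, 0, 1, 12)).fit(disp=False)
pred = model.forecast(steps=len(test), exog=test[["soi_lag3"]])

obs = test["postcodes_with_cases"].values
rmspe = (((pred.values - obs) / obs) ** 2).mean() ** 0.5 * 100   # root-mean-square % error
```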
Abstract:
Aim: To measure the influence of spherical intraocular lens implantation and conventional myopic laser in situ keratomileusis on peripheral ocular aberrations. Setting: Visual & Ophthalmic Optics Laboratory, School of Optometry & Institute of Health and Biomedical Innovation, Queensland University of Technology, Brisbane, Australia. Methods: Peripheral aberrations were measured using a modified commercial Hartmann-Shack aberrometer across 42° x 32° of the central visual field in 6 subjects after spherical intraocular lens (IOL) implantation and in 6 subjects after conventional laser in situ keratomileusis (LASIK) for myopia. The results were compared with those of age-matched emmetropic and myopic control groups. Results: The IOL group showed a greater rate of quadratic change of spherical equivalent refraction across the visual field, higher spherical aberration, and greater rates of change of higher-order root-mean-square aberrations and total root-mean-square aberrations across the visual field than its emmetropic control group. However, coma trends were similar for the two groups. The LASIK group had a greater rate of quadratic change of spherical equivalent refraction across the visual field, higher spherical aberration, the opposite trend in coma across the field, and greater higher-order root-mean-square aberrations and total root-mean-square aberrations than its myopic control group. Conclusion: Spherical IOL implantation and conventional myopic LASIK increase ocular peripheral aberrations. They cause a considerable increase in spherical aberration across the visual field. LASIK reverses the sign of the rate of change in coma across the field relative to that of the other groups. Keywords: refractive surgery, LASIK, IOL implantation, aberrations, peripheral aberrations
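For illustration, the snippet below shows how a higher-order RMS can be computed from OSA-indexed Zernike coefficients and how a quadratic rate of change of spherical equivalent across the field can be extracted; the data values are synthetic and the code is not the study's analysis pipeline.

```python
import numpy as np

def higher_order_rms(zernike_coeffs, first_ho_index=6):
    """RMS of Zernike terms from the 3rd radial order upward (OSA single-index j >= 6)."""
    return np.sqrt(np.sum(np.asarray(zernike_coeffs)[first_ho_index:] ** 2))

coeffs_example = np.zeros(45)
coeffs_example[12] = 0.08                 # e.g. a spherical aberration term, in micrometres
print(higher_order_rms(coeffs_example))

field_angles_deg = np.linspace(-21, 21, 9)            # e.g. across a 42-degree horizontal field
M = 0.002 * field_angles_deg ** 2 - 0.50              # synthetic spherical equivalent values (D)
a2, a1, a0 = np.polyfit(field_angles_deg, M, 2)       # a2 ~ quadratic rate of change (D/deg^2)
```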
Abstract:
Minimizing complexity of group key exchange (GKE) protocols is an important milestone towards their practical deployment. An interesting approach to achieve this goal is to simplify the design of GKE protocols by using generic building blocks. In this paper we investigate the possibility of founding GKE protocols based on a primitive called multi key encapsulation mechanism (mKEM) and describe advantages and limitations of this approach. In particular, we show how to design a one-round GKE protocol which satisfies the classical requirement of authenticated key exchange (AKE) security, yet without forward secrecy. As a result, we obtain the first one-round GKE protocol secure in the standard model. We also conduct our analysis using recent formal models that take into account both outsider and insider attacks as well as the notion of key compromise impersonation resilience (KCIR). In contrast to previous models we show how to model both outsider and insider KCIR within the definition of mutual authentication. Our analysis additionally implies that the insider security compiler by Katz and Shin from ACM CCS 2005 can be used to achieve more than what is shown in the original work, namely both outsider and insider KCIR.
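Purely as an illustration of why an mKEM enables a single round (and emphatically not the construction proven secure in the paper), the toy sketch below has each party broadcast one encapsulation addressed to all peers and derive the session key from the collected contributions; `mkem_encaps` and the key derivation are hypothetical placeholders with no security claims.

```python
# Toy illustration of the one-round message flow, not a secure protocol.
import hashlib, os

def mkem_encaps(recipient_pks):
    k = os.urandom(32)                      # one symmetric contribution per party
    ct = {pk: k for pk in recipient_pks}    # placeholder: "encapsulate" k to every recipient
    return ct, k

def round_one(my_id, peer_pks):
    ct, k = mkem_encaps(peer_pks)
    return (my_id, ct), k                   # the single broadcast message, plus local state

def derive_session_key(all_contributions, transcript):
    h = hashlib.sha256()
    for k in sorted(all_contributions):     # every party recovers the same contributions
        h.update(k)
    h.update(transcript)                    # bind the key to the protocol transcript
    return h.digest()
```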
Abstract:
The performance of an adaptive filter may be studied through the behaviour of the optimal and adaptive coefficients in a given environment. This thesis investigates the performance of finite impulse response adaptive lattice filters for two classes of input signals: (a) frequency modulated signals with polynomial phases of order p in complex Gaussian white noise (as nonstationary signals), and (b) impulsive autoregressive processes with alpha-stable distributions (as non-Gaussian signals). Initially, an overview is given of linear prediction and adaptive filtering. The convergence and tracking properties of the stochastic gradient algorithms are discussed for stationary and nonstationary input signals. It is explained that the stochastic gradient lattice algorithm has many advantages over the least-mean-square algorithm. Some of these advantages are a modular structure, easily guaranteed stability, less sensitivity to the eigenvalue spread of the input autocorrelation matrix, and easy quantization of filter coefficients (normally called reflection coefficients). We then characterize the performance of the stochastic gradient lattice algorithm for frequency modulated signals through the optimal and adaptive lattice reflection coefficients. This is a difficult task due to the nonlinear dependence of the adaptive reflection coefficients on the preceding stages and the input signal. To ease the derivations, we assume that reflection coefficients of each stage are independent of the inputs to that stage. Then the optimal lattice filter is derived for frequency modulated signals. This is performed by computing the optimal values of residual errors, reflection coefficients, and recovery errors. Next, we show the tracking behaviour of adaptive reflection coefficients for frequency modulated signals. This is carried out by computing the tracking model of these coefficients for the stochastic gradient lattice algorithm on average. The second-order convergence of the adaptive coefficients is investigated by modeling the theoretical asymptotic variance of the gradient noise at each stage. The accuracy of the analytical results is verified by computer simulations. Using the previous analytical results, we show a new property, the polynomial order reducing property of adaptive lattice filters. This property may be used to reduce the order of the polynomial phase of input frequency modulated signals. Considering two examples, we show how this property may be used in processing frequency modulated signals. In the first example, a detection procedure is carried out on a frequency modulated signal with a second-order polynomial phase in complex Gaussian white noise. We show that using this technique a better probability of detection is obtained for the reduced-order phase signals compared to that of the traditional energy detector. Also, it is empirically shown that the distribution of the gradient noise in the first adaptive reflection coefficients approximates the Gaussian law. In the second example, the instantaneous frequency of the same observed signal is estimated. We show that by using this technique a lower mean square error is achieved for the estimated frequencies at high signal-to-noise ratios in comparison to that of the adaptive line enhancer. The performance of adaptive lattice filters is then investigated for the second type of input signals, i.e., impulsive autoregressive processes with alpha-stable distributions.
The concept of alpha-stable distributions is first introduced. We discuss that the stochastic gradient algorithm, which performs well for finite variance input signals (like frequency modulated signals in noise), does not achieve fast convergence for infinite variance stable processes (due to its use of the minimum mean-square error criterion). To deal with such problems, the concepts of the minimum dispersion criterion, fractional lower order moments, and recently developed algorithms for stable processes are introduced. We then study the possibility of using the lattice structure for impulsive stable processes. Accordingly, two new algorithms, the least-mean p-norm lattice algorithm and its normalized version, are proposed for lattice filters based on the fractional lower order moments. Simulation results show that using the proposed algorithms, faster convergence speeds are achieved for parameter estimation of autoregressive stable processes with low to moderate degrees of impulsiveness in comparison to many other algorithms. Also, we discuss the effect of the impulsiveness of stable processes on generating misalignment between the estimated parameters and the true values. Due to the infinite variance of stable processes, the performance of the proposed algorithms is only investigated using extensive computer simulations.
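A textbook-style gradient adaptive lattice recursion is sketched below to make the structure concrete; it is not the thesis's exact algorithm, and the least-mean p-norm variant discussed above would replace the squared-error gradient terms with fractional lower order moment terms.

```python
# Simplified real-valued gradient adaptive lattice (illustrative sketch only).
import numpy as np

def gal_filter(x, order=4, mu=0.01, eps=1e-6):
    k = np.zeros(order)                     # reflection coefficients
    b_prev = np.zeros(order + 1)            # backward prediction errors from the previous sample
    K = np.zeros((len(x), order))           # coefficient trajectories
    for n, xn in enumerate(x):
        f = np.empty(order + 1)
        b = np.empty(order + 1)
        f[0] = b[0] = xn
        for m in range(1, order + 1):
            f[m] = f[m - 1] + k[m - 1] * b_prev[m - 1]      # forward prediction error
            b[m] = b_prev[m - 1] + k[m - 1] * f[m - 1]      # backward prediction error
            power = f[m - 1] ** 2 + b_prev[m - 1] ** 2 + eps
            # stochastic gradient step on the sum of squared stage errors, power-normalized
            k[m - 1] -= mu * (f[m] * b_prev[m - 1] + b[m] * f[m - 1]) / power
        b_prev = b
        K[n] = k
    return K
```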
Abstract:
The main goal of this research is to design an efficient compression algorithm for fingerprint images. The wavelet transform technique is the principal tool used to reduce interpixel redundancies and to obtain a parsimonious representation for these images. A specific fixed decomposition structure is designed to be used by the wavelet packet in order to save on the computation, transmission, and storage costs. This decomposition structure is based on analysis of information packing performance of several decompositions, two-dimensional power spectral density, effect of each frequency band on the reconstructed image, and the human visual sensitivities. This fixed structure is found to provide the "most" suitable representation for fingerprints, according to the chosen criteria. Different compression techniques are used for different subbands, based on their observed statistics. The decision is based on the effect of each subband on the reconstructed image according to the mean square criteria as well as the sensitivities in human vision. To design an efficient quantization algorithm, a precise model for the distribution of the wavelet coefficients is developed. The model is based on the generalized Gaussian distribution. A least squares algorithm on a nonlinear function of the distribution model shape parameter is formulated to estimate the model parameters. A noise shaping bit allocation procedure is then used to assign the bit rate among subbands. To obtain high compression ratios, vector quantization is used. In this work, lattice vector quantization (LVQ) is chosen because of its superior performance over other types of vector quantizers. The structure of a lattice quantizer is determined by its parameters known as truncation level and scaling factor. In lattice-based compression algorithms reported in the literature the lattice structure is commonly predetermined, leading to a nonoptimized quantization approach. In this research, a new technique for determining the lattice parameters is proposed. In the lattice structure design, no assumption about the lattice parameters is made and no training and multi-quantizing is required. The design is based on minimizing the quantization distortion by adapting to the statistical characteristics of the source in each subimage. Since LVQ is a multidimensional generalization of uniform quantizers, it produces minimum distortion for inputs with uniform distributions. In order to take advantage of the properties of LVQ and its fast implementation, while considering the i.i.d. nonuniform distribution of wavelet coefficients, the piecewise-uniform pyramid LVQ algorithm is proposed. The proposed algorithm quantizes almost all of the source vectors without the need to project these on the lattice outermost shell, while it properly maintains a small codebook size. It also resolves the wedge region problem commonly encountered with sharply distributed random sources. These represent some of the drawbacks of the algorithm proposed by Barlaud [26]. The proposed algorithm handles all types of lattices, not only the cubic lattices, as opposed to the algorithms developed by Fischer [29] and Jeong [42]. Furthermore, no training and multiquantizing (to determine lattice parameters) is required, as opposed to Powell's algorithm [78]. For coefficients with high-frequency content, the positive-negative mean algorithm is proposed to improve the resolution of reconstructed images.
For coefficients with low-frequency content, a lossless predictive compression scheme is used to preserve the quality of reconstructed images. A method to reduce the bit requirements of the necessary side information is also introduced. Lossless entropy coding techniques are subsequently used to remove coding redundancy. The algorithms result in high quality reconstructed images with better compression ratios than other available algorithms. To evaluate the proposed algorithms, their objective and subjective performance comparisons with other available techniques are presented. The quality of the reconstructed images is important for reliable identification. Enhancement and feature extraction on the reconstructed images are also investigated in this research. A structural-based feature extraction algorithm is proposed in which the unique properties of fingerprint textures are used to enhance the images and improve the fidelity of their characteristic features. The ridges are extracted from enhanced grey-level foreground areas based on the local ridge dominant directions. The proposed ridge extraction algorithm properly preserves the natural shape of grey-level ridges as well as precise locations of the features, as opposed to the ridge extraction algorithm in [81]. Furthermore, it is fast and operates only on foreground regions, as opposed to the adaptive floating average thresholding process in [68]. Spurious features are subsequently eliminated using the proposed post-processing scheme.
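The toy sketch below illustrates the general subband-quantization pipeline using a plain 2-D DWT and uniform scalar quantization; the thesis's fixed wavelet-packet decomposition, generalized-Gaussian modelling, and piecewise-uniform pyramid lattice vector quantizer are deliberately not reproduced here.

```python
# Much-simplified subband coding illustration (not the proposed algorithm).
import numpy as np
import pywt

def compress_decompress(img, wavelet="db4", levels=3, step=8.0):
    coeffs = pywt.wavedec2(img.astype(float), wavelet, level=levels)
    quantized = [np.round(coeffs[0] / step) * step]          # approximation subband
    for (cH, cV, cD) in coeffs[1:]:                          # detail subbands, coarse to fine
        quantized.append(tuple(np.round(c / step) * step for c in (cH, cV, cD)))
    return pywt.waverec2(quantized, wavelet)                 # reconstructed image
```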
Abstract:
This thesis deals with the problem of the instantaneous frequency (IF) estimation of sinusoidal signals. This topic plays a significant role in signal processing and communications. Depending on the type of the signal, two major approaches are considered. For IF estimation of single-tone or digitally-modulated sinusoidal signals (like frequency shift keying signals) the approach of digital phase-locked loops (DPLLs) is considered, and this is Part-I of this thesis. For FM signals the approach of time-frequency analysis is considered, and this is Part-II of the thesis. In Part-I we have utilized sinusoidal DPLLs with a non-uniform sampling scheme, as this type is widely used in communication systems. The digital tanlock loop (DTL) has introduced significant advantages over other existing DPLLs. In the last 10 years many efforts have been made to improve DTL performance. However, this loop and all of its modifications utilize a Hilbert transformer (HT) to produce a signal-independent 90-degree phase-shifted version of the input signal. A Hilbert transformer can be realized approximately using a finite impulse response (FIR) digital filter. This realization introduces further complexity in the loop in addition to approximations and frequency limitations on the input signal. We have tried to avoid the practical difficulties associated with the conventional tanlock scheme while keeping its advantages. A time-delay is utilized in the tanlock scheme of the DTL to produce a signal-dependent phase shift. This gave rise to the time-delay digital tanlock loop (TDTL). Fixed point theorems are used to analyze the behavior of the new loop. As such, TDTL combines the two major approaches in DPLLs: the non-linear approach of the sinusoidal DPLL based on fixed point analysis, and the linear tanlock approach based on arctan phase detection. TDTL preserves the main advantages of the DTL despite its reduced structure. An application of TDTL in FSK demodulation is also considered. This idea of replacing the HT by a time-delay may be of interest in other signal processing systems. Hence we have analyzed and compared the behaviors of the HT and the time-delay in the presence of additive Gaussian noise. Based on the above analysis, the behavior of the first- and second-order TDTLs has been analyzed in additive Gaussian noise. Since DPLLs need time for locking, they are normally not efficient in tracking the continuously changing frequencies of non-stationary signals, i.e. signals with time-varying spectra. Non-stationary signals are of importance in synthetic and real-life applications. An example is the frequency-modulated (FM) signals widely used in communication systems. Part-II of this thesis is dedicated to the IF estimation of non-stationary signals. For such signals the classical spectral techniques break down, due to the time-varying nature of their spectra, and more advanced techniques should be utilized. For the purpose of instantaneous frequency estimation of non-stationary signals there are two major approaches: parametric and non-parametric. We chose the non-parametric approach, which is based on time-frequency analysis. This approach is computationally less expensive and more effective in dealing with multicomponent signals, which are the main aim of this part of the thesis. A time-frequency distribution (TFD) of a signal is a two-dimensional transformation of the signal to the time-frequency domain. Multicomponent signals can be identified by multiple energy peaks in the time-frequency domain.
Many real-life and synthetic signals are of multicomponent nature and there is little in the literature concerning IF estimation of such signals. This is why we have concentrated on multicomponent signals in Part-II. An adaptive algorithm for IF estimation using quadratic time-frequency distributions has been analyzed. A class of time-frequency distributions that are more suitable for this purpose has been proposed. The kernels of this class are time-only or one-dimensional, rather than time-lag (two-dimensional) kernels. Hence this class has been named the T-class. If the parameters of these TFDs are properly chosen, they are more efficient than the existing fixed-kernel TFDs in terms of resolution (energy concentration around the IF) and artifact reduction. The T-distributions have been used in the adaptive IF algorithm and proved to be efficient in tracking rapidly changing frequencies. They also enable direct amplitude estimation for the components of a multicomponent signal.
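To make the Hilbert-transformer-versus-time-delay idea from Part-I concrete, the sketch below compares the instantaneous phase of a sinusoid obtained from the analytic signal with an arctan-type estimate that uses a quarter-period delayed sample as the quadrature component; it is an illustration only, not the TDTL loop itself.

```python
# Illustrative comparison of Hilbert-transform and time-delay quadrature phase estimates.
import numpy as np
from scipy.signal import hilbert

fs, f0 = 1000.0, 50.0
n = np.arange(2000)
x = np.cos(2 * np.pi * f0 * n / fs + 0.3)

phase_ht = np.unwrap(np.angle(hilbert(x)))          # Hilbert-transform-based phase

D = int(round(fs / (4 * f0)))                       # samples in a quarter period at f0
q = np.empty_like(x)
q[:D] = 0.0
q[D:] = x[:-D]                                      # delayed signal used as quadrature estimate
phase_td = np.unwrap(np.arctan2(q, x))              # arctan-type detection with a time delay
```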
Abstract:
Background/Rationale: Guided by the need-driven dementia-compromised behavior (NDB) model, this study examined influences of the physical environment on wandering behavior. Methods: Using a descriptive, cross-sectional design, 122 wanderers from 28 long-term care (LTC) facilities were videotaped 10 to 12 times; data on wandering, light, sound, temperature and humidity levels, location, ambiance, and crowding were obtained. Associations between environmental variables and wandering were evaluated with chi-square and t tests; the model was evaluated using logistic regression. Results: In all, 80% of wandering occurred in the resident’s own room, dayrooms, hallways, or dining rooms. When observed in other residents’ rooms, hallways, shower/baths, or off-unit locations, wanderers were likely (60%-92% of observations) to wander. The data were a good fit to the model overall (LR [logistic regression] χ2(5) = 50.38, P < .0001) and by wandering type. Conclusions: Location, light, sound, proximity of others, and ambiance are associated with wandering and may serve to inform environmental designs and care practices.
Abstract:
A group key exchange (GKE) protocol allows a set of parties to agree upon a common secret session key over a public network. In this thesis, we focus on designing efficient GKE protocols using public key techniques and appropriately revising security models for GKE protocols. For the purpose of modelling and analysing the security of GKE protocols we apply the widely accepted computational complexity approach. The contributions of the thesis to the area of GKE protocols are manifold. We propose the first GKE protocol that requires only one round of communication and is proven secure in the standard model. Our protocol is generically constructed from a key encapsulation mechanism (KEM). We also suggest an efficient KEM from the literature, which satisfies the underlying security notion, to instantiate the generic protocol. We then concentrate on enhancing the security of one-round GKE protocols. A new model of security for forward secure GKE protocols is introduced and a generic one-round GKE protocol with forward security is then presented. The security of this protocol is also proven in the standard model. We also propose an efficient forward secure encryption scheme that can be used to instantiate the generic GKE protocol. Our next contributions are to the security models of GKE protocols. We observe that the analysis of GKE protocols has not been as extensive as that of two-party key exchange protocols. Particularly, the security attribute of key compromise impersonation (KCI) resilience has so far been ignored for GKE protocols. We model the security of GKE protocols addressing KCI attacks by both outsider and insider adversaries. We then show that a few existing protocols are not secure against KCI attacks. A new proof of security for an existing GKE protocol is given under the revised model assuming random oracles. Subsequently, we treat the security of GKE protocols in the universal composability (UC) framework. We present a new UC ideal functionality for GKE protocols capturing the security attribute of contributiveness. An existing protocol with minor revisions is then shown to realize our functionality in the random oracle model. Finally, we explore the possibility of constructing GKE protocols in the attribute-based setting. We introduce the concept of attribute-based group key exchange (AB-GKE). A security model for AB-GKE and a one-round AB-GKE protocol satisfying our security notion are presented. The protocol is generically constructed from a new cryptographic primitive called encapsulation policy attribute-based KEM (EP-AB-KEM), which we introduce in this thesis. We also present a new EP-AB-KEM with a proof of security assuming generic groups and random oracles. The EP-AB-KEM can be used to instantiate our generic AB-GKE protocol.