Abstract:
Malcolm Shepherd Knowles was a key writer and theorist in the field of adult education in the United States. He died in 1997 and left a large legacy of books and journal articles. This thesis traced the development of his thinking over the 46-year period from 1950 to 1995. It examined the 25 works authored, co-authored, edited, reissued and revised by him during that period. The writings were scrutinised using a literature research methodology to expose the theoretical content, and a history of thought lens to identify and account for the development of major ideas. The methodology enabled a gradual unfolding of the history. A broadly-consistent and sequential pattern of thought focusing on the notion of andragogy emerged. The study revealed that after the initial phases of exploratory thinking, Knowles developed a practical-theoretical framework he believed could function as a comprehensive theory of adult learning. As his thinking progressed, his theory developed into a unified framework for human resource development and, later, into a model for the development of self-directed lifelong learners. The study traced the development of Knowles’ thinking through the phases of thought, identified the writings that belonged within each phase and produced a series of diagrammatic representations showing the evolution of his conceptual framework. The production of a history of the development of Knowles’ thought is the major outcome of the study. In addition to plotting the narrative sequence of thought-events, the history helps to explicate the factors and conditions that influenced Knowles’ thinking and to show the interrelationships between ideas. The study should help practitioners in their use and appreciation of Knowles’ works.
Abstract:
A growing literature seeks to explain differences in individuals' self-reported satisfaction with their jobs. The evidence so far has mainly been based on cross-sectional data, and when panel data have been used, individual unobserved heterogeneity has been modelled as an ordered probit model with random effects. This article makes use of longitudinal data for Denmark, taken from the 1995-1999 waves of the European Community Household Panel, and estimates fixed effects ordered logit models using the estimation methods proposed by Ferrer-i-Carbonell and Frijters (2004) and Das and van Soest (1999). For comparison and testing purposes a random effects ordered probit is also estimated. Estimations are carried out separately on the samples of men and women for individuals' overall satisfaction with the jobs they hold. We find that using the fixed effects approach (which clearly rejects the random effects specification) considerably reduces the number of key explanatory variables. The impact of central economic factors, though, is the same as in previous studies. Moreover, the determinants of job satisfaction differ considerably between the genders, in particular once individual fixed effects are allowed for.
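The fixed-effects estimators mentioned above can be illustrated on a toy panel. The sketch below (synthetic data, not the ECHP) shows the core of the Ferrer-i-Carbonell/Frijters idea for the simplest T = 2 case: once the response is dichotomised at an individual-specific threshold, the fixed effect drops out of the conditional logit, which reduces to a logistic regression on differenced covariates.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic two-period panel: a binary "satisfied" indicator depends on
# one covariate (true beta = 1.0) plus an individual fixed effect.
n, beta = 5000, 1.0
alpha = rng.normal(size=n)                      # unobserved heterogeneity
x = rng.normal(size=(n, 2))                     # covariate in periods 1, 2
y = (alpha[:, None] + beta * x + rng.logistic(size=(n, 2)) > 0).astype(int)

# The conditional (fixed-effects) logit uses only "switchers"; with T = 2
# the conditional likelihood is a logistic regression of y_2 on x_2 - x_1.
switch = y[:, 0] != y[:, 1]
dx = x[switch, 1] - x[switch, 0]
dy = y[switch, 1]

# One-parameter logistic fit by Newton-Raphson; alpha never appears.
b = 0.0
for _ in range(30):
    p = 1.0 / (1.0 + np.exp(-b * dx))
    grad = np.sum((dy - p) * dx)
    hess = -np.sum(p * (1 - p) * dx ** 2)
    b -= grad / hess

print(round(b, 2))    # recovers a value near the true beta of 1.0
```

Individuals whose outcome never changes carry no information about beta here, which is one reason the fixed-effects approach can thin out the set of significant explanatory variables.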
Abstract:
Ways in which humans engage with the environment have always provided a rich source of material for writers and illustrators of Australian children's literature. Currently, readers are confronted with a multiplicity of complex, competing and/or complementing networks of ideas, theories and emotions that provide narratives about human engagement with the environment at a particular historical moment. This study, entitled Reading the Environment: Narrative Constructions of Ecological Subjectivities in Australian Children's Literature, examines how a representative sample of Australian texts (19 picture books and 4 novels for children and young adults published between 1995 and 2006) constructs fictional ecological subjects in the texts, and offers readers ecological subject positions inscribed with contemporary environmental ideologies. The conceptual framework developed in this study identifies three ideologically grounded positions that humans may assume when engaging with the environment. None of these positions clearly exists independently of any other, nor are they internally homogeneous. Nevertheless they can be categorised as: (i) human dominion over the environment with little regard for environmental degradation (unrestrained anthropocentrism); (ii) human consideration for the environment driven by understandings that humans need the environment to survive (restrained anthropocentrism); and (iii) human deference towards the environment guided by understandings that humans are no more important than the environment (ecocentrism). The transdisciplinary methodological approach to textual analysis used in this thesis draws on ecocriticism, narrative theories, visual semiotics, ecofeminism and postcolonialism to discuss the difficulties and contradictions in the construction of the positions offered. Each chapter of textual analysis focuses on the construction of subjectivities in relation to one of the positions identified in the conceptual framework.
Chapter 5 is concerned with how texts highlight the negative consequences of human dominion over the environment, or, in the words of this study, living with ecocatastrophe. Chapter 6 examines representations of restrained anthropocentrism in its contemporary form, that is, sustainability. Chapter 7 examines representations of ecocentrism, a radical position with inherent difficulties of representation. According to the analysis undertaken, the focus texts convey the subtleties and complexities of human engagement with the environment and advocate ways of viewing and responding to contemporary unease about the environment. The study concludes that these ways of viewing and responding conform to and/or challenge dominant socio-cultural and political-economic opinions regarding the environment. This study, the first extended work of its kind, makes an original contribution to ecocritical study of Australian children's literature. By undertaking a comprehensive analysis of how texts for children represent human engagement with the environment at a time when important environmental concerns pose significant threats to human existence, I hope to contribute new knowledge to an area of children's literature research that to date has been significantly under-represented.
Abstract:
The main objective of this PhD was to further develop Bayesian spatio-temporal models (specifically the Conditional Autoregressive (CAR) class of models) for the analysis of sparse disease outcomes such as birth defects. The motivation for the thesis arose from problems encountered when analysing a large birth defect registry in New South Wales. The specific components and related research objectives of the thesis were developed from gaps in the literature on current formulations of the CAR model, and from health service planning requirements. Data from a large probabilistically-linked database from 1990 to 2004, consisting of fields from two separate registries, the Birth Defect Registry (BDR) and the Midwives Data Collection (MDC), were used in the analyses in this thesis. The main objective was split into smaller goals. The first goal was to determine how the specification of the neighbourhood weight matrix would affect the smoothing properties of the CAR model, and this is the focus of chapter 6. Secondly, I hoped to evaluate the usefulness of incorporating a zero-inflated Poisson (ZIP) component as well as a shared-component model for modelling a sparse outcome, and this is carried out in chapter 7. The third goal was to identify optimal sampling and sample size schemes designed to select individual-level data for a hybrid ecological spatial model, and this is done in chapter 8. Finally, I wanted to bring together the earlier improvements to the CAR model and, along with demographic projections, provide forecasts for birth defects at the SLA level. Chapter 9 describes how this is done. For the first objective, I examined a series of neighbourhood weight matrices and showed how smoothing the relative risk estimates according to similarity on an important covariate (i.e. maternal age) helped improve the model's ability to recover the underlying risk, compared with the traditional adjacency (specifically the Queen) method of applying weights.
Next, to address the sparseness and excess zeros commonly encountered in the analysis of rare outcomes such as birth defects, I compared several models, including an extension of the usual Poisson model to encompass excess zeros in the data. This was achieved via a mixture model, which also encompassed the shared-component model to improve the estimation of sparse counts by borrowing strength across a shared component (e.g. latent risk factor/s) with the referent outcome (caesarean section was used in this example). Using the Deviance Information Criterion (DIC), I showed how the proposed model performed better than the usual models, but only when both outcomes shared a strong spatial correlation. The next objective involved identifying the optimal sampling and sample size strategy for incorporating individual-level data with areal covariates in a hybrid study design. I performed extensive simulation studies, evaluating thirteen different sampling schemes along with variations in sample size. This was done in the context of an ecological regression model that incorporated spatial correlation in the outcomes, as well as accommodating both individual and areal measures of covariates. Using the Average Mean Squared Error (AMSE), I showed how a simple random sample of 20% of the SLAs, followed by selecting all cases in the chosen SLAs along with an equal number of controls, provided the lowest AMSE. The final objective involved combining the improved spatio-temporal CAR model with population (i.e. women) forecasts to provide 30-year annual estimates of birth defects at the Statistical Local Area (SLA) level in New South Wales, Australia. The projections were illustrated using sixteen different SLAs, representing the various areal measures of socio-economic status and remoteness. A sensitivity analysis of the assumptions used in the projection was also undertaken.
By the end of the thesis, I will show how challenges in the spatial analysis of rare diseases such as birth defects can be addressed: by formulating the neighbourhood weight matrix to smooth according to a key covariate (i.e. maternal age), by incorporating a ZIP component to model excess zeros in outcomes, and by borrowing strength from a referent outcome (i.e. caesarean counts). An efficient strategy for sampling individual-level data, together with sample size considerations for rare diseases, will also be presented. Finally, projections of birth defect categories at the SLA level will be made.
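The covariate-based weighting described for the first objective can be caricatured in a few lines. The sketch below uses toy numbers (not the NSW data, and a simple chain of areas rather than Queen contiguity) to contrast plain adjacency weights with weights scaled by similarity in maternal age, so that an area's risk estimate borrows more strength from demographically similar neighbours:

```python
import numpy as np

# Toy map: 5 areas in a row, so adjacency reduces to a chain.
adj = np.zeros((5, 5))
for i in range(4):
    adj[i, i + 1] = adj[i + 1, i] = 1

# Hypothetical area-level covariate (mean maternal age) and crude
# relative-risk estimates for a sparse outcome.
age = np.array([24.0, 25.0, 33.0, 25.5, 24.5])
risk = np.array([1.4, 0.2, 3.0, 0.3, 1.1])

def smooth(W, r):
    """Replace each area's risk by the weighted mean of its neighbours,
    mimicking the local shrinkage implied by a CAR prior."""
    W = W / W.sum(axis=1, keepdims=True)   # row-standardise
    return W @ r

# Traditional adjacency weights treat all neighbours equally...
plain = smooth(adj, risk)

# ...while covariate-based weights down-weight dissimilar neighbours,
# e.g. w_ij = adj_ij * exp(-|age_i - age_j|).
W_cov = adj * np.exp(-np.abs(age[:, None] - age[None, :]))
weighted = smooth(W_cov, risk)

print(plain.round(2))
print(weighted.round(2))
```

Area 1 sits between a demographically similar neighbour and a very different one; under the covariate-based weights the dissimilar area's extreme risk estimate pulls on it far less than under plain adjacency.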
Abstract:
Boards of directors are thought to provide access to a wealth of knowledge and resources for the companies they serve, and are considered important to corporate governance. Under the Resource Based View (RBV) of the firm (Wernerfelt, 1984), boards are viewed as a strategic resource available to firms. As a consequence, there has been a significant research effort aimed at establishing a link between board attributes and company performance. In this thesis I explore and extend the study of interlocking directorships (Mizruchi, 1996; Scott, 1991a) by examining the links between directors’ opportunity networks and firm performance. Specifically, I use resource dependence theory (Pfeffer & Salancik, 1978) and social capital theory (Burt, 1980b; Coleman, 1988) as the basis for a new measure of a board’s opportunity network. I contend that a director’s formal company ties and social ties together determine the opportunity network through which he or she is able to access and mobilise resources for the firm. This approach is based on recent studies suggesting that measuring interlocks at the director level, rather than at the firm level, may be a more reliable indicator of this phenomenon. This research uses publicly available data drawn from Australia’s top-105 listed companies and their directors in 1999. I employ Social Network Analysis (SNA) (Scott, 1991b), using the UCINET software, to analyse individual directors’ formal and social networks. SNA is used to measure the number of ties a director has to other directors in the top-105 company director network at both one and two degrees of separation, that is, direct ties and indirect (or ‘friend of a friend’) ties. These individual measures of director connectedness are aggregated to produce a board-level network metric for comparison with measures of a firm’s performance using multiple regression analysis. Performance is measured with accounting-based and market-based measures.
Findings indicate that better-connected boards are associated with higher market-based company performance (measured by Tobin’s q). However, weaker and mostly unreliable associations were found for the accounting-based performance measure ROA. Furthermore, formal (or corporate) network ties are a stronger predictor of market performance than total network ties (comprising social and corporate ties). Similarly, strong ties (connectedness at degree-1) are better predictors of performance than weak ties (connectedness at degree-2). My research makes four contributions to the literature on director interlocks. First, it extends a new way of measuring a board’s opportunity network based on the director, rather than the company, as the unit of interlock. Second, it establishes evidence of a relationship between market-based measures of firm performance and the connectedness of that firm’s board. Third, it establishes that directors’ formal corporate ties matter more to market-based firm performance than their social ties. Fourth, it establishes that directors’ strong direct ties are more important to market-based performance than weak ties. The thesis concludes with implications for research and practice, including a more speculative interpretation of these results. In particular, I raise the possibility of reverse causality – that is, networked directors seek to join high-performing companies. Thus, the relationship may be a result of symbolic action by companies seeking to increase the legitimacy of their firms rather than a reflection of the social capital available to the companies. This is an important consideration worthy of future investigation.
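The degree-1 and degree-2 connectedness measures can be made concrete with a small adjacency matrix. The sketch below uses a hypothetical five-director network (not the 1999 top-105 data, and plain NumPy rather than UCINET) to count each director's direct ties and 'friend of a friend' ties, then aggregate to a board-level metric:

```python
import numpy as np

# Hypothetical director network: 1 where two directors share a board
# (a formal interlock). Symmetric, zero diagonal.
A = np.array([
    [0, 1, 1, 0, 0],
    [1, 0, 0, 1, 0],
    [1, 0, 0, 0, 0],
    [0, 1, 0, 0, 1],
    [0, 0, 0, 1, 0],
])

direct = A.sum(axis=1)                 # degree-1: strong, direct ties

# Degree-2: directors reachable through one intermediary ("friend of a
# friend"), excluding self-loops and those already tied directly.
two_step = (A @ A > 0).astype(int)
np.fill_diagonal(two_step, 0)
indirect = ((two_step == 1) & (A == 0)).sum(axis=1)

# A board-level metric can then be an aggregate over its members,
# e.g. the mean connectedness of directors 0-2 sitting on one board.
board = [0, 1, 2]
print(direct, indirect, (direct[board] + indirect[board]).mean())
```

The aggregated board score is what would then sit on the right-hand side of the performance regressions alongside the usual controls.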
Abstract:
It is widely held that strong relationships exist between housing, economic status, and well-being. This is exemplified by the widespread housing stock surpluses in many countries, which threaten to destabilise many aspects of individual and community life. However, the position of housing demand and supply is not consistent across countries. The Australian position provides a distinct contrast, whereby seemingly inexorable housing demand generally remains a critical issue affecting the socio-economic landscape. Underpinned by high levels of immigration, and further buoyed by sustained historically low interest rates, increasing income levels, and increased government assistance for first home buyers, this strong housing demand ensures that elements related to housing affordability continue to gain prominence. A significant but less visible factor impacting housing affordability – particularly for new housing development – relates to holding costs. These costs are in many ways “hidden” and cannot always be easily identified. Although holding costs are only one contributor, the nature and extent of their impact require elucidation. In the simplest form, their calculation commences with the interest or opportunity cost of holding land. However, there is significantly more complexity for major new developments – particularly greenfield property development. Preliminary analysis conducted by the author suggests that even small shifts in the primary factors impacting holding costs can appreciably affect housing affordability – and notably, to a greater extent than commonly held. Even so, their importance and perceived high-level impact can be gauged from the unprecedented level of attention policy makers have given them over recent years.
This may be evidenced by the embedding of specific strategies to address burgeoning holding costs (and particularly those cost savings associated with streamlining regulatory assessment) within statutory instruments such as the Queensland Housing Affordability Strategy and the South East Queensland Regional Plan. However, several key issues require investigation. Firstly, the computation and methodology behind the calculation of holding costs varies widely. In fact, it is not only variable, but in some instances holding costs are ignored completely. Secondly, some ambiguity exists over which elements of holding costs to include, thereby affecting the assessment of their relative contribution. Perhaps this may in part be explained by their nature: such costs are not always immediately apparent. Some forms of holding costs are not as visible as the more tangible cost items associated with greenfield development, such as regulatory fees, government taxes, acquisition costs, selling fees, commissions and others. Holding costs are also more difficult to evaluate since, for the most part, they must ultimately be assessed over time in an ever-changing environment, based on their strong relationship with opportunity cost, which is in turn dependent, inter alia, upon prevailing inflation and/or interest rates. By extending research in the general area of housing affordability, this thesis seeks to provide a more detailed investigation of those elements related to holding costs and, in so doing, to determine the size of their impact specifically on the end user. This will involve the development of soundly based economic and econometric models which seek to clarify the component impacts of holding costs. Ultimately, there are significant policy implications for the framework used in Australian jurisdictions to promote, retain, or otherwise maximise the opportunities for affordable housing.
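The "simplest form" of holding cost described above (the interest or opportunity cost of capital tied up in land) can be sketched directly. All figures below are hypothetical; the point is how quickly the cost compounds with the holding period and the prevailing rate:

```python
# Simplest-form holding cost: the compound opportunity cost of capital
# tied up in land while a development is held. Hypothetical figures.
def holding_cost(land_value, annual_rate, years):
    """Opportunity cost of holding land for `years` at `annual_rate`."""
    return land_value * ((1 + annual_rate) ** years - 1)

land = 400_000           # assumed purchase price of a greenfield lot, AUD
for rate in (0.05, 0.07):
    for years in (1, 2, 3):
        cost = holding_cost(land, rate, years)
        share = cost / land * 100
        print(f"rate {rate:.0%}, {years} yr: "
              f"${cost:,.0f} ({share:.1f}% of land value)")
```

Even this crude sketch shows why small shifts in rate or approval timeframes matter: two extra percentage points held over three years adds several percent of the land value to the cost base, consistent with the abstract's claim about small shifts in primary factors.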
Abstract:
Spectrum sensing is considered to be one of the most important tasks in cognitive radio. Many sensing detectors have been proposed in the literature, with the common assumption that the primary user is either fully present or completely absent within the window of observation. In reality, there are scenarios where the primary user signal only occupies a fraction of the observed window. This paper aims to analyse the effect of the primary user duty cycle on spectrum sensing performance through the analysis of a few common detectors. Simulations show that the probability of detection degrades severely with reduced duty cycle regardless of the detection method. Furthermore, we show that reducing the duty cycle degrades performance more than lowering the signal strength does.
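A minimal simulation conveys the effect. The sketch below uses an energy detector with assumed parameters (not the paper's exact setup): the primary signal occupies only the first fraction of each sensing window, and the empirical detection probability falls sharply as the duty cycle drops.

```python
import numpy as np

rng = np.random.default_rng(1)

def detect_prob(duty_cycle, snr_db=-3.0, n=1000, trials=2000, pfa=0.01):
    """Empirical detection probability of an energy detector when the
    primary signal is present for only `duty_cycle` of the window."""
    sigma_s = 10 ** (snr_db / 20)        # signal amplitude (dB, amplitude)
    # Threshold for the target false-alarm rate, from noise-only trials.
    noise = rng.normal(size=(trials, n))
    stat0 = (noise ** 2).sum(axis=1)
    thresh = np.quantile(stat0, 1 - pfa)
    # Signal occupies only the first fraction of each observed window.
    occupied = int(duty_cycle * n)
    sig = np.zeros((trials, n))
    sig[:, :occupied] = sigma_s * rng.normal(size=(trials, occupied))
    stat1 = ((sig + rng.normal(size=(trials, n))) ** 2).sum(axis=1)
    return (stat1 > thresh).mean()

for dc in (1.0, 0.5, 0.2):
    print(dc, detect_prob(dc))
```

At full occupancy the detector is essentially certain to fire at this SNR, while at a 20% duty cycle the added signal energy barely shifts the test statistic above its noise-only spread, so missed detections become common.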
Abstract:
The main goal of this research is to design an efficient compression algorithm for fingerprint images. The wavelet transform technique is the principal tool used to reduce interpixel redundancies and to obtain a parsimonious representation for these images. A specific fixed decomposition structure is designed to be used by the wavelet packet in order to save on the computation, transmission, and storage costs. This decomposition structure is based on analysis of information packing performance of several decompositions, two-dimensional power spectral density, effect of each frequency band on the reconstructed image, and the human visual sensitivities. This fixed structure is found to provide the "most" suitable representation for fingerprints, according to the chosen criteria. Different compression techniques are used for different subbands, based on their observed statistics. The decision is based on the effect of each subband on the reconstructed image according to the mean square criterion as well as the sensitivities in human vision. To design an efficient quantization algorithm, a precise model for distribution of the wavelet coefficients is developed. The model is based on the generalized Gaussian distribution. A least squares algorithm on a nonlinear function of the distribution model shape parameter is formulated to estimate the model parameters. A noise shaping bit allocation procedure is then used to assign the bit rate among subbands. To obtain high compression ratios, vector quantization is used. In this work, the lattice vector quantization (LVQ) is chosen because of its superior performance over other types of vector quantizers. The structure of a lattice quantizer is determined by its parameters known as truncation level and scaling factor. In lattice-based compression algorithms reported in the literature the lattice structure is commonly predetermined, leading to a nonoptimized quantization approach.
In this research, a new technique for determining the lattice parameters is proposed. In the lattice structure design, no assumption about the lattice parameters is made and no training and multi-quantizing is required. The design is based on minimizing the quantization distortion by adapting to the statistical characteristics of the source in each subimage. Since LVQ is a multidimensional generalization of uniform quantizers, it produces minimum distortion for inputs with uniform distributions. In order to take advantage of the properties of LVQ and its fast implementation, while considering the i.i.d. nonuniform distribution of wavelet coefficients, the piecewise-uniform pyramid LVQ algorithm is proposed. The proposed algorithm quantizes almost all of the source vectors without the need to project these on the lattice outermost shell, while it properly maintains a small codebook size. It also resolves the wedge region problem commonly encountered with sharply distributed random sources. These represent some of the drawbacks of the algorithm proposed by Barlaud [26]. The proposed algorithm handles all types of lattices, not only the cubic lattices, as opposed to the algorithms developed by Fischer [29] and Jeong [42]. Furthermore, no training and multiquantizing (to determine lattice parameters) is required, as opposed to Powell's algorithm [78]. For coefficients with high-frequency content, the positive-negative mean algorithm is proposed to improve the resolution of reconstructed images. For coefficients with low-frequency content, a lossless predictive compression scheme is used to preserve the quality of reconstructed images. A method to reduce bit requirements of necessary side information is also introduced. Lossless entropy coding techniques are subsequently used to remove coding redundancy. The algorithms result in high quality reconstructed images with better compression ratios than other available algorithms.
To evaluate the proposed algorithms, objective and subjective performance comparisons with other available techniques are presented. The quality of the reconstructed images is important for reliable identification. Enhancement and feature extraction on the reconstructed images are also investigated in this research. A structural-based feature extraction algorithm is proposed in which the unique properties of fingerprint textures are used to enhance the images and improve the fidelity of their characteristic features. The ridges are extracted from enhanced grey-level foreground areas based on the local ridge dominant directions. The proposed ridge extraction algorithm properly preserves the natural shape of grey-level ridges as well as the precise locations of the features, as opposed to the ridge extraction algorithm in [81]. Furthermore, it is fast and operates only on foreground regions, as opposed to the adaptive floating average thresholding process in [68]. Spurious features are subsequently eliminated using the proposed post-processing scheme.
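The role of the wavelet transform in reducing interpixel redundancy can be sketched with the simplest member of the family. The code below performs a one-level 2D Haar split (a deliberately minimal stand-in for the thesis's tuned wavelet-packet structure): for a smooth image, nearly all the energy lands in the LL subband, which is what makes coarse quantisation of the detail subbands affordable.

```python
import numpy as np

def haar2d(img):
    """One-level 2D Haar transform: split an image into the LL (coarse)
    and LH/HL/HH (detail) subbands used in subband coding."""
    a = img.astype(float)
    # Rows: orthonormal average and difference of adjacent pixel pairs.
    lo = (a[:, 0::2] + a[:, 1::2]) / np.sqrt(2)
    hi = (a[:, 0::2] - a[:, 1::2]) / np.sqrt(2)
    # Columns: the same split applied to both row outputs.
    ll = (lo[0::2] + lo[1::2]) / np.sqrt(2)
    lh = (lo[0::2] - lo[1::2]) / np.sqrt(2)
    hl = (hi[0::2] + hi[1::2]) / np.sqrt(2)
    hh = (hi[0::2] - hi[1::2]) / np.sqrt(2)
    return ll, lh, hl, hh

# A smooth synthetic "image" (not a fingerprint): most energy should
# land in LL, leaving the detail subbands cheap to code.
x, y = np.meshgrid(np.arange(8), np.arange(8))
img = np.sin(x / 4.0) + np.cos(y / 4.0)
ll, lh, hl, hh = haar2d(img)

total = (img ** 2).sum()
print(round((ll ** 2).sum() / total, 3))   # fraction of energy in LL
```

Because the transform is orthonormal, the subband energies sum exactly to the image energy; the compression leverage comes entirely from how unevenly that energy is distributed across subbands.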
Abstract:
A group key exchange (GKE) protocol allows a set of parties to agree upon a common secret session key over a public network. In this thesis, we focus on designing efficient GKE protocols using public key techniques and appropriately revising security models for GKE protocols. For the purpose of modelling and analysing the security of GKE protocols we apply the widely accepted computational complexity approach. The contributions of the thesis to the area of GKE protocols are manifold. We propose the first GKE protocol that requires only one round of communication and is proven secure in the standard model. Our protocol is generically constructed from a key encapsulation mechanism (KEM). We also suggest an efficient KEM from the literature, which satisfies the underlying security notion, to instantiate the generic protocol. We then concentrate on enhancing the security of one-round GKE protocols. A new model of security for forward secure GKE protocols is introduced and a generic one-round GKE protocol with forward security is then presented. The security of this protocol is also proven in the standard model. We also propose an efficient forward secure encryption scheme that can be used to instantiate the generic GKE protocol. Our next contributions are to the security models of GKE protocols. We observe that the analysis of GKE protocols has not been as extensive as that of two-party key exchange protocols. Particularly, the security attribute of key compromise impersonation (KCI) resilience has so far been ignored for GKE protocols. We model the security of GKE protocols addressing KCI attacks by both outsider and insider adversaries. We then show that a few existing protocols are not secure against KCI attacks. A new proof of security for an existing GKE protocol is given under the revised model assuming random oracles. Subsequently, we treat the security of GKE protocols in the universal composability (UC) framework. 
We present a new UC ideal functionality for GKE protocols capturing the security attribute of contributiveness. An existing protocol with minor revisions is then shown to realize our functionality in the random oracle model. Finally, we explore the possibility of constructing GKE protocols in the attribute-based setting. We introduce the concept of attribute-based group key exchange (AB-GKE). A security model for AB-GKE and a one-round AB-GKE protocol satisfying our security notion are presented. The protocol is generically constructed from a new cryptographic primitive called encapsulation policy attribute-based KEM (EP-AB-KEM), which we introduce in this thesis. We also present a new EP-AB-KEM with a proof of security assuming generic groups and random oracles. The EP-AB-KEM can be used to instantiate our generic AB-GKE protocol.
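To make the GKE setting concrete, the sketch below runs the classic two-round Burmester-Desmedt protocol, a standard textbook GKE and not one of the one-round or attribute-based constructions of this thesis. The parameters are toy values, far too small for real security, and no authentication is modelled; the point is only that every party combines the public broadcasts with its own secret and derives the same session key.

```python
import random

# Toy Burmester-Desmedt group key exchange. Illustrative only.
p, g = 10007, 5          # public group parameters (small prime, base)
n = 4                    # parties arranged in a cycle
random.seed(7)
x = [random.randrange(2, p - 1) for _ in range(n)]   # secret exponents

# Round 1: each party i broadcasts z_i = g^{x_i} mod p.
z = [pow(g, xi, p) for xi in x]

# Round 2: each party i broadcasts X_i = (z_{i+1} / z_{i-1})^{x_i} mod p.
inv = lambda a: pow(a, p - 2, p)                     # modular inverse
X = [pow(z[(i + 1) % n] * inv(z[(i - 1) % n]) % p, x[i], p)
     for i in range(n)]

def session_key(i):
    """Key as computed locally by party i from public values and x_i."""
    k = pow(z[(i - 1) % n], n * x[i], p)
    for j in range(n - 1):
        k = k * pow(X[(i + j) % n], n - 1 - j, p) % p
    return k

keys = [session_key(i) for i in range(n)]
print(len(set(keys)) == 1)    # all parties agree on one session key
```

The broadcasts telescope so that every party ends up with g raised to the sum of adjacent exponent products; reducing the interaction to a single round, as the thesis's KEM-based constructions do, is precisely what makes the design and proofs harder.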
Abstract:
Sustainability has been increasingly recognised as an integral part of highway infrastructure development. In practice, however, because financial return is still a project’s top priority for many, environmental aspects tend to be overlooked or considered a burden, as they add to project costs. Sustainability and its implications have a far-reaching effect on each project over time. Therefore, given highway infrastructure’s long-term life span and huge capital demand, the consideration of environmental cost/benefit issues is all the more crucial in life-cycle cost analysis (LCCA). To date, the existing literature offers little on viable estimation methods for environmental costs. This situation presents the potential for focused studies on environmental costs and issues in the context of life-cycle cost analysis. This paper discusses a research project which aims to integrate environmental cost elements and issues into a conceptual framework for life-cycle costing analysis for highway projects. Cost elements and issues concerning the environment were first identified through the literature. Through questionnaires, these environmental cost elements will be validated by practitioners before their consolidation into an extension of existing and worked models of life-cycle costing analysis (LCCA). A holistic decision support framework is being developed to assist highway infrastructure stakeholders in evaluating their investment decisions. This will generate financial returns while maximising environmental benefits and sustainability outcomes.
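The life-cycle costing idea the framework extends can be sketched in a few lines. Every figure below is hypothetical (the cost streams, discount rate, and horizon are illustrative only); the point is simply that environmental costs enter LCCA as one more yearly stream discounted to present value alongside construction and maintenance.

```python
# Minimal life-cycle cost sketch for a highway option. All cost
# figures and the environmental cost stream are hypothetical.
def npv(costs, rate):
    """Present value of a list of (year, cost) cash flows."""
    return sum(c / (1 + rate) ** t for t, c in costs)

rate = 0.06
option = (
    [(0, 50_000_000)]                          # initial construction
    + [(t, 600_000) for t in range(1, 31)]     # routine maintenance
    + [(t, 250_000) for t in range(1, 31)]     # environmental costs
)                                              # e.g. emissions, runoff

print(f"30-year life-cycle cost: ${npv(option, rate):,.0f}")
```

Once environmental costs are expressed this way, two design alternatives can be compared on a single discounted total, which is the decision-support use the framework targets.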
Abstract:
Poor patient compliance with peritoneal dialysis (PD) has significant adverse effects on morbidity and mortality rates in individuals with chronic kidney disease (CKD). It also adds to the resource burdens of healthcare services and providers. This paper explores the notion of PD compliance in patients with CKD with reference to the relevant published literature. The analysis of the literature reveals that ‘PD compliance’ is a complex and challenging construct for both patients and health professionals. There is no universal definition of compliance that is widely adopted in practice and research, and therefore no consensus on how to determine ‘compliant’ patient outcomes. There are also multiple and interconnected determinants of PD compliance that are context-bound, of which healthcare professionals must be aware, and which make a consensus on measuring PD compliance difficult to achieve. The interventions required to produce even a modest improvement in PD compliance, which are described in this paper, are significantly complex. Compliance with PD and other treatments for CKD is a multidimensional, context-bound concept that to date has tended to efface the role and needs of the renal patient. We conclude the paper with the implications for contemporary practice.
Abstract:
Research Question/Issue: Over the last four decades, research on the relationship between boards of directors and strategy has proliferated. Yet to date there is little theoretical and empirical agreement regarding the question of how boards of directors contribute to strategy. This review assesses the extant literature by highlighting emerging trends and identifying several avenues for future research. Research Findings/Results: Using a content analysis of 150 articles published in 23 management journals up to 2007, we describe and analyze how research on boards of directors and strategy has evolved over time. We illustrate how topics, theories, settings, and sources of data interact and influence insights about board–strategy relationships during three specific periods. Theoretical Implications: Our study illustrates that research on boards of directors and strategy evolved from normative and structural approaches to behavioral and cognitive approaches. Our results encourage future studies to examine the impact of institutional and context-specific factors on the (expected) contribution of boards to strategy, and to apply alternative methods to fully capture the impact of board processes and dynamics on strategy making. Practical Implications: The increasing interest in boards of directors’ contribution to strategy echoes a movement towards more strategic involvement of boards of directors. However, best governance practices and the emphasis on board independence and control may hinder the board’s contribution to strategic decision making. Our study invites investors and policy-makers to consider the requirements for an effective strategic task when they nominate board members and develop new regulations.
Abstract:
In a competitive environment, companies continuously innovate to offer superior services at lower costs. ‘Shared Services’ have been extensively adopted in practice as a means of improving organizational performance. Shared Services are considered most appropriate for support functions and are widely adopted in human resource management, finance and accounting, and, more recently, as an approach to the information systems (IS) function. As computer-based corporate information systems have become the de facto backbone of administrative systems, the technical impediments to sharing have come down dramatically. As this trend continues, CIOs and IT professionals need a deeper understanding of the Shared Services phenomenon. Yet, analysis of the IS academic literature reveals that Shared Services, though mentioned in more than 100 articles, has received little in-depth attention. This paper investigates the current status of Shared Services in the IS literature. The authors present a detailed review of the literature from the main IS journals and conferences. The paper concludes with a tentative operational definition, a list of the perceived main objectives of Shared Services, and an agenda for related future research.
Abstract:
Purpose - The purpose of this paper is to introduce a knowledge-based urban development assessment framework, constructed in order to evaluate and assist in the (re)formulation of the local and regional policy frameworks and applications necessary in knowledge city transformations. Design/methodology/approach - The research reported in this paper follows a methodological approach that includes a thorough review of the literature, development of an assessment framework intended to inform policy-making by accurately evaluating the knowledge-based development levels of cities, and application of this framework in a comparative study of Boston, Vancouver, Melbourne and Manchester. Originality/value - The paper, with its assessment framework, demonstrates an innovative way of examining the knowledge-based development capacity of cities by scrutinising their economic, socio-cultural, enviro-urban and institutional development mechanisms and capabilities. Practical implications - The paper introduces a framework developed to assess the knowledge-based development levels of cities; presents some of the generic indicators used to evaluate the knowledge-based development performance of cities; demonstrates how a city can benchmark its development level against that of other cities; and provides insights for achieving more sustainable and knowledge-based development.