876 results for Almost Convergence
Abstract:
The paper compares three different methods of including current phasor measurements from phasor measurement units (PMUs) in the conventional power system state estimator. For each of the three methods, a comprehensive formulation of the hybrid state estimator in the presence of conventional and PMU measurements is presented. The performance of the state estimator in the presence of conventional measurements and optimally placed PMUs is evaluated in terms of convergence characteristics and estimator accuracy. Test results on the IEEE 14-bus and IEEE 300-bus systems are analyzed to determine the best method of including PMU current phasor measurements.
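The hybrid-estimator formulations the abstract refers to are not reproduced here, but the basic mechanism — PMU phasors entering a weighted least-squares (WLS) estimator as highly accurate linear measurements alongside conventional ones — can be sketched on a toy linearized (DC) model. The 3-bus network, susceptances, and noise levels below are illustrative assumptions, not values from the paper:

```python
import numpy as np

# Toy DC (linearized) WLS state estimator. Bus 1 is the slack bus
# (angle fixed at 0); the states are the angles at buses 2 and 3.
b12, b13, b23 = 10.0, 5.0, 8.0          # line susceptances (p.u.), illustrative

true_theta = np.array([-0.05, -0.10])    # true angles at buses 2, 3 (rad)

# Measurement model z = H x + noise:
#   rows 1-3: conventional line-flow measurements P12, P13, P23
#   rows 4-5: PMU angle measurements at buses 2 and 3 (direct, linear)
H = np.array([
    [-b12,  0.0],    # P12 = b12 * (theta1 - theta2), theta1 = 0
    [ 0.0, -b13],    # P13
    [ b23, -b23],    # P23 = b23 * (theta2 - theta3)
    [ 1.0,  0.0],    # PMU angle, bus 2
    [ 0.0,  1.0],    # PMU angle, bus 3
])

rng = np.random.default_rng(0)
sigma = np.array([0.02, 0.02, 0.02, 0.002, 0.002])  # PMUs far more accurate
z = H @ true_theta + rng.normal(0.0, sigma)

# WLS solution: x_hat = (H^T W H)^-1 H^T W z, with W = diag(1/sigma^2)
W = np.diag(1.0 / sigma**2)
G = H.T @ W @ H                          # gain matrix
x_hat = np.linalg.solve(G, H.T @ W @ z)
print(x_hat)
```

Because the PMU rows are weighted by their much smaller variance, the estimate is pulled strongly toward the phasor measurements, which is the accuracy benefit the paper evaluates.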
Abstract:
Screen industries around the globe are evolving. While technological change has been slower to take effect upon the Australian film industry than other creative sectors such as music and publishing, all indications suggest that local screen practices are in a process of fundamental change. Fragmenting audiences, the growth of digital video, distribution and exhibition, the potential for entirely new forms of cultural expression, the proliferation of multi-platforms, and the importance of social networking and viral marketing in promoting products are challenging traditional approaches to ‘film making’. Moreover, there has been a marked transition in government policy rationales and funding models in recent years, resulting in the most significant overhaul of public finance structures for the film industry in almost 20 years. Film, Cinema, Screen evaluates the Australian film industry’s recent development – particularly in terms of Australian feature film and television series production; it also advocates new approaches to Australian film, and addresses critical issues around how screen production globally is changing, with implications for local screen industries.
Abstract:
RFID has been widely used in today's commercial and supply chain industry, due to the significant advantages it offers and its relatively low production cost. However, this ubiquitous technology has inherent problems in security and privacy. This calls for the development of simple, efficient and cost-effective mechanisms against a variety of security threats. This paper proposes a two-step authentication protocol based on the randomized hash-lock scheme proposed by S. Weis in 2003. By introducing additional measures during the authentication process, this new protocol is shown to significantly enhance the security of RFID, protecting passive tags from almost all major attacks, including tag cloning, replay, full-disclosure, tracking, and eavesdropping. Furthermore, no significant changes to the tags are required to implement this protocol, and the low complexity of the randomized hash-lock algorithm is retained.
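The paper's two-step extension is not specified in the abstract; the sketch below shows only the base randomized hash-lock scheme it builds on, in which a tag answers each query with a fresh nonce and a hash over nonce and ID, so responses are unlinkable across sessions. SHA-256 stands in for the tag's lightweight one-way hash, and all identifiers are illustrative:

```python
import hashlib
import secrets
from typing import List, Optional, Tuple

def h(data: bytes) -> bytes:
    # Hash primitive; a real tag would use a lightweight one-way hash.
    return hashlib.sha256(data).digest()

class Tag:
    def __init__(self, tag_id: bytes):
        self._id = tag_id

    def respond(self) -> Tuple[bytes, bytes]:
        # Randomized hash-lock response: a fresh nonce per query means
        # two responses from the same tag cannot be linked (anti-tracking).
        r = secrets.token_bytes(16)
        return r, h(r + self._id)

class BackendReader:
    def __init__(self, known_ids: List[bytes]):
        self._known_ids = known_ids

    def identify(self, r: bytes, digest: bytes) -> Optional[bytes]:
        # Exhaustive search over known IDs -- the scheme's main cost,
        # kept low by the simplicity of the hash.
        for tag_id in self._known_ids:
            if h(r + tag_id) == digest:
                return tag_id
        return None

reader = BackendReader([b"tag-A", b"tag-B", b"tag-C"])
tag = Tag(b"tag-B")
r, digest = tag.respond()
print(reader.identify(r, digest))
```

An eavesdropper who captures `(r, digest)` cannot recover the ID without inverting the hash, but the base scheme is still replayable within a session, which is the kind of weakness the paper's additional measures target.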
Abstract:
During the past three decades, the subject of fractional calculus (that is, calculus of integrals and derivatives of arbitrary order) has gained considerable popularity and importance, mainly due to its demonstrated applications in numerous diverse and widespread fields in science and engineering. For example, fractional calculus has been successfully applied to problems in system biology, physics, chemistry and biochemistry, hydrology, medicine, and finance. In many cases these new fractional-order models are more adequate than the previously used integer-order models, because fractional derivatives and integrals enable the description of the memory and hereditary properties inherent in various materials and processes that are governed by anomalous diffusion. Hence, there is a growing need to find the solution behaviour of these fractional differential equations. However, the analytic solutions of most fractional differential equations generally cannot be obtained. As a consequence, approximate and numerical techniques are playing an important role in identifying the solution behaviour of such fractional equations and exploring their applications. The main objective of this thesis is to develop new effective numerical methods and supporting analysis, based on the finite difference and finite element methods, for solving time, space and time-space fractional dynamical systems involving fractional derivatives in one and two spatial dimensions. A series of five published papers and one manuscript in preparation will be presented on the solution of the space fractional diffusion equation, space fractional advection-dispersion equation, time and space fractional diffusion equation, time and space fractional Fokker-Planck equation with a linear or non-linear source term, and fractional cable equation involving two time fractional derivatives, respectively.
One important contribution of this thesis is the demonstration of how to choose different approximation techniques for different fractional derivatives. Special attention has been paid to the Riesz space fractional derivative, due to its important application in the field of groundwater flow, system biology and finance. We present three numerical methods to approximate the Riesz space fractional derivative, namely the L1/L2-approximation method, the standard/shifted Grünwald method, and the matrix transform method (MTM). The first two methods are based on the finite difference method, while the MTM allows discretisation in space using either the finite difference or finite element methods. Furthermore, we prove the equivalence of the Riesz fractional derivative and the fractional Laplacian operator under homogeneous Dirichlet boundary conditions – a result that had not previously been established. This result justifies the aforementioned use of the MTM to approximate the Riesz fractional derivative. After spatial discretisation, the time-space fractional partial differential equation is transformed into a system of fractional-in-time differential equations. We then investigate numerical methods to handle time fractional derivatives, be they Caputo type or Riemann-Liouville type. This leads to new methods utilising either finite difference strategies or the Laplace transform method for advancing the solution in time. The stability and convergence of our proposed numerical methods are also investigated. Numerical experiments are carried out in support of our theoretical analysis. We also emphasise that the numerical methods we develop are applicable for many other types of fractional partial differential equations.
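As a concrete illustration of one of the three approximations named above, here is a minimal sketch of the shifted Grünwald method (shift p = 1) for a Riemann-Liouville fractional derivative, checked against the known derivative of f(x) = x, for which D^α x = x^(1-α)/Γ(2-α). The test function, order, and step size are illustrative choices, not the thesis's examples:

```python
import math

def grunwald_weights(alpha: float, n: int) -> list:
    # Grünwald-Letnikov weights g_k = (-1)^k * binom(alpha, k),
    # generated by the standard recursion g_k = g_{k-1} * (k-1-alpha)/k.
    g = [1.0]
    for k in range(1, n + 1):
        g.append(g[-1] * (k - 1 - alpha) / k)
    return g

def shifted_grunwald(f, alpha: float, x: float, h: float) -> float:
    # Shifted Grünwald approximation (shift p = 1) of the
    # Riemann-Liouville derivative of order alpha at the point x:
    #   D^alpha f(x) ~= h^(-alpha) * sum_k g_k * f(x - (k - 1) h)
    n = int(round(x / h))
    g = grunwald_weights(alpha, n + 1)
    s = sum(g[k] * f(x - (k - 1) * h) for k in range(n + 2))
    return s / h**alpha

def f(t: float) -> float:
    # f(t) = t on [0, infinity), extended by zero to the left.
    return t if t >= 0.0 else 0.0

alpha = 0.5
exact = 1.0 / math.gamma(2.0 - alpha)        # D^0.5 x at x = 1: 2/sqrt(pi)
approx = shifted_grunwald(f, alpha, 1.0, h=1e-3)
print(approx, exact)
```

The shift by one grid point is what makes the resulting difference schemes stable for space-fractional diffusion problems, which is why the thesis pairs the standard and shifted variants.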
Abstract:
This thesis explores a way to inform the architectural design process for contemporary workplace environments. It reports on both theoretical and practical outcomes through an exclusively Australian case study of a network enterprise comprised of collaborative, yet independent business entities. The internet revolution, substantial economic and cultural shifts, and an increased emphasis on lifestyle considerations have prompted a radical re-ordering of organisational relationships and the associated structures, processes, and places of doing business. The social milieu of the information age and the knowledge economy is characterised by an almost instantaneous flow of information and capital. This has culminated in a phenomenon termed by Manuel Castells as the network society, where physical locations are joined together by continuous communication and virtual connectivity. A new spatial logic encompassing redefined concepts of space and distance, and requiring a comprehensive shift in the approach to designing workplace environments for today’s adaptive, collaborative organisations in a dynamic business world, provides the backdrop for this research. Within the duality of space and an augmentation of the traditional notions of place, organisational and institutional structures pose new challenges for the design professions. The literature revealed that workplace design strategies have always had a mono-organisational focus. The phenomenon of inter-organisational collaboration thus exposed a gap in the knowledge relating to workplace design. This new context generated the formulation of a unique research construct, the NetWorkPlace™©, which captures the complexity of contemporary employment structures embracing both physical and virtual work environments and practices, and provided the basis for investigating the factors that are shaping and defining interactions within and across networked organisational settings.
The methodological orientation and the methods employed follow a qualitative approach and an abductively driven strategy comprising two distinct components, a cross-sectional study of the whole of the network and a longitudinal study focusing on a single discrete workplace site. The complexity of the context encountered dictated that a multi-dimensional investigative framework be devised. The adoption of a pluralist ontology and the reconfiguration of approaches from traditional paradigms into a collaborative, trans-disciplinary, multi-method epistemology provided an explicit and replicable method of investigation. The identification and introduction of the NetWorkPlace™© phenomenon, by necessity, spans a number of traditional disciplinary boundaries. Results confirm that in this context, architectural research, and by extension architectural practice, must engage with what other disciplines have to offer. The research concludes that no single disciplinary approach to either research or practice in this area of design can suffice. Pierre Bourdieu’s philosophy of ‘practice’ provides a framework within which the governance and technology structures, together with the mechanisms enabling the production of social order in this context, can be understood. This is achieved by applying the concepts of position and positioning to the corporate power dynamics, and integrating the conflict found to exist between enterprise-standard and ferally conceived technology systems. By extending existing theory and conceptions of ‘place’ and the ‘person-environment relationship’, relevant understandings of the tensions created between Castells’ notions of the space of place and the space of flows are established. The trans-disciplinary approach adopted, underpinned by a robust academic and practical framework, illustrates the potential for expanding the range and richness of understanding applicable to design in this context.
The outcome informs workplace design by extending theoretical horizons, and by the development of a comprehensive investigative process comprising a suite of models and techniques for both architectural and interior design research and practice, collectively entitled the NetWorkPlace™© Application Framework. This work contributes to the body of knowledge within the design disciplines in substantive, theoretical, and methodological terms, whilst potentially also influencing future organisational network theories, management practices, and information and communication technology applications. The NetWorkPlace™©, as reported in this thesis, constitutes a multi-dimensional concept with the capacity to deal with the fluidity and ambiguity characteristic of the network context, as both a topic of research and a way of going about it.
Abstract:
This article provides a critical review of the literature relevant to the conceptual foundations of health promoting palliative care. It explores the separate emergence and evolution of palliative care and health promotion as distinct concerns in health care, and reviews the early considerations given to their potential convergence. Finally, this article examines the proposal of health promoting palliative care as a specific approach to providing end of life care through a social model of palliative care. Research is needed to explore the impact for communities, health care services and policy when such an approach is implemented within palliative care organisations.
Abstract:
The performance of an adaptive filter may be studied through the behaviour of the optimal and adaptive coefficients in a given environment. This thesis investigates the performance of finite impulse response adaptive lattice filters for two classes of input signals: (a) frequency modulated signals with polynomial phases of order p in complex Gaussian white noise (as nonstationary signals), and (b) impulsive autoregressive processes with alpha-stable distributions (as non-Gaussian signals). Initially, an overview is given of linear prediction and adaptive filtering. The convergence and tracking properties of stochastic gradient algorithms are discussed for stationary and nonstationary input signals. It is explained that the stochastic gradient lattice algorithm has many advantages over the least-mean square algorithm. These advantages include a modular structure, easily guaranteed stability, lower sensitivity to the eigenvalue spread of the input autocorrelation matrix, and easy quantization of the filter coefficients (normally called reflection coefficients). We then characterize the performance of the stochastic gradient lattice algorithm for frequency modulated signals through the optimal and adaptive lattice reflection coefficients. This is a difficult task due to the nonlinear dependence of the adaptive reflection coefficients on the preceding stages and the input signal. To ease the derivations, we assume that the reflection coefficients of each stage are independent of the inputs to that stage. The optimal lattice filter is then derived for frequency modulated signals, by computing the optimal values of residual errors, reflection coefficients, and recovery errors. Next, we show the tracking behaviour of the adaptive reflection coefficients for frequency modulated signals, by computing the average tracking model of these coefficients for the stochastic gradient lattice algorithm.
The second-order convergence of the adaptive coefficients is investigated by modeling the theoretical asymptotic variance of the gradient noise at each stage. The accuracy of the analytical results is verified by computer simulations. Using the previous analytical results, we show a new property, the polynomial order reducing property of adaptive lattice filters. This property may be used to reduce the order of the polynomial phase of input frequency modulated signals. Through two examples, we show how this property may be used in processing frequency modulated signals. In the first example, a detection procedure is carried out on a frequency modulated signal with a second-order polynomial phase in complex Gaussian white noise. We show that, using this technique, a better probability of detection is obtained for the reduced-order phase signals than with the traditional energy detector. It is also empirically shown that the distribution of the gradient noise in the first adaptive reflection coefficients approximates the Gaussian law. In the second example, the instantaneous frequency of the same observed signal is estimated. We show that this technique achieves a lower mean square error for the estimated frequencies at high signal-to-noise ratios than the adaptive line enhancer. The performance of adaptive lattice filters is then investigated for the second type of input signals, i.e., impulsive autoregressive processes with alpha-stable distributions. The concept of alpha-stable distributions is first introduced. We discuss how the stochastic gradient algorithm, which performs well for finite-variance input signals (such as frequency modulated signals in noise), does not converge quickly for infinite-variance stable processes (because it relies on the minimum mean-square error criterion).
To deal with such problems, the minimum dispersion criterion, fractional lower-order moments, and recently developed algorithms for stable processes are introduced. We then study the possibility of using the lattice structure for impulsive stable processes. Accordingly, two new algorithms, the least-mean p-norm lattice algorithm and its normalized version, are proposed for lattice filters based on fractional lower-order moments. Simulation results show that the proposed algorithms achieve faster convergence in parameter estimation of autoregressive stable processes with low to moderate degrees of impulsiveness than many other algorithms. We also discuss the effect of the impulsiveness of stable processes in generating misalignment between the estimated parameters and the true values. Due to the infinite variance of stable processes, the performance of the proposed algorithms is investigated using extensive computer simulations only.
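The least-mean p-norm idea can be sketched in its simplest form: replace the squared-error cost E|e|² with E|e|^p for p below the stable characteristic exponent, so that the update stays bounded under impulsive excitation. The sketch below is a transversal, normalized variant (the thesis's contribution is the lattice version, not reproduced here); the step size, the normalization, and the Student-t stand-in for alpha-stable innovations are all illustrative assumptions:

```python
import numpy as np

def normalized_lmp(x, p=1.2, mu=0.002, order=1):
    # Normalized least-mean p-norm (LMP) adaptive linear predictor.
    # Minimizing E|e|^p instead of E|e|^2 gives the update
    #   w += mu * sign(e) * |e|^(p-1) * u / (1 + ||u||_p^p),
    # which stays bounded when samples are impulsive.
    w = np.zeros(order)
    for n in range(order, len(x)):
        u = x[n - order:n][::-1]                # past samples (regressor)
        e = x[n] - w @ u                        # prediction error
        norm = 1.0 + np.sum(np.abs(u) ** p)     # p-norm normalization
        w += mu * np.sign(e) * np.abs(e) ** (p - 1) * u / norm
    return w

rng = np.random.default_rng(1)
a = 0.7                                         # true AR(1) coefficient
# Student-t(1.5) innovations: an infinite-variance stand-in for an
# alpha-stable driving sequence (heavy tails, occasional large spikes).
v = rng.standard_t(1.5, size=50_000)
x = np.empty_like(v)
x[0] = 0.0
for n in range(1, len(x)):
    x[n] = a * x[n - 1] + v[n]

w = normalized_lmp(x)
print(w)  # should settle near the true coefficient 0.7
```

A plain LMS update (p = 2, no normalization) on the same data can be thrown far off by a single spike, which is the convergence problem the abstract describes.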
Abstract:
The main goal of this research is to design an efficient compression algorithm for fingerprint images. The wavelet transform technique is the principal tool used to reduce interpixel redundancies and to obtain a parsimonious representation for these images. A specific fixed decomposition structure is designed to be used by the wavelet packet in order to save on the computation, transmission, and storage costs. This decomposition structure is based on analysis of information packing performance of several decompositions, two-dimensional power spectral density, effect of each frequency band on the reconstructed image, and the human visual sensitivities. This fixed structure is found to provide the "most" suitable representation for fingerprints, according to the chosen criteria. Different compression techniques are used for different subbands, based on their observed statistics. The decision is based on the effect of each subband on the reconstructed image according to the mean square criteria as well as the sensitivities in human vision. To design an efficient quantization algorithm, a precise model for distribution of the wavelet coefficients is developed. The model is based on the generalized Gaussian distribution. A least squares algorithm on a nonlinear function of the distribution model shape parameter is formulated to estimate the model parameters. A noise shaping bit allocation procedure is then used to assign the bit rate among subbands. To obtain high compression ratios, vector quantization is used. In this work, the lattice vector quantization (LVQ) is chosen because of its superior performance over other types of vector quantizers. The structure of a lattice quantizer is determined by its parameters known as truncation level and scaling factor. In lattice-based compression algorithms reported in the literature the lattice structure is commonly predetermined, leading to a nonoptimized quantization approach.
In this research, a new technique for determining the lattice parameters is proposed. In the lattice structure design, no assumption about the lattice parameters is made and no training or multi-quantizing is required. The design is based on minimizing the quantization distortion by adapting to the statistical characteristics of the source in each subimage. Since LVQ is a multidimensional generalization of uniform quantizers, it produces minimum distortion for inputs with uniform distributions. In order to take advantage of the properties of LVQ and its fast implementation, while considering the i.i.d. nonuniform distribution of wavelet coefficients, the piecewise-uniform pyramid LVQ algorithm is proposed. The proposed algorithm quantizes almost all source vectors without the need to project them onto the lattice outermost shell, while properly maintaining a small codebook size. It also resolves the wedge region problem commonly encountered with sharply distributed random sources. These represent some of the drawbacks of the algorithm proposed by Barlaud [26]. The proposed algorithm handles all types of lattices, not only cubic lattices, as opposed to the algorithms developed by Fischer [29] and Jeong [42]. Furthermore, no training and multi-quantizing (to determine lattice parameters) is required, as opposed to Powell's algorithm [78]. For coefficients with high-frequency content, the positive-negative mean algorithm is proposed to improve the resolution of reconstructed images. For coefficients with low-frequency content, a lossless predictive compression scheme is used to preserve the quality of reconstructed images. A method to reduce the bit requirements of necessary side information is also introduced. Lossless entropy coding techniques are subsequently used to remove coding redundancy. The algorithms result in high-quality reconstructed images with better compression ratios than other available algorithms.
To evaluate the proposed algorithms, objective and subjective performance comparisons with other available techniques are presented. The quality of the reconstructed images is important for reliable identification. Enhancement and feature extraction on the reconstructed images are also investigated in this research. A structural-based feature extraction algorithm is proposed in which the unique properties of fingerprint textures are used to enhance the images and improve the fidelity of their characteristic features. The ridges are extracted from enhanced grey-level foreground areas based on the local ridge dominant directions. The proposed ridge extraction algorithm properly preserves the natural shape of grey-level ridges as well as the precise locations of the features, as opposed to the ridge extraction algorithm in [81]. Furthermore, it is fast and operates only on foreground regions, as opposed to the adaptive floating average thresholding process in [68]. Spurious features are subsequently eliminated using the proposed post-processing scheme.
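One ingredient of the quantizer design above — modelling subband coefficients with a generalized Gaussian distribution (GGD) and estimating its shape parameter — can be sketched with the standard moment-ratio estimator (Mallat's ratio E|x|/√E[x²], which is monotone in the shape parameter). This is a simple stand-in for the thesis's least-squares formulation, and the Laplacian test data is illustrative:

```python
import math
import numpy as np

def ggd_ratio(beta: float) -> float:
    # Mallat's ratio E|x| / sqrt(E[x^2]) for a zero-mean generalized
    # Gaussian with shape beta (beta = 2: Gaussian, beta = 1: Laplacian).
    g = math.gamma
    return g(2.0 / beta) / math.sqrt(g(1.0 / beta) * g(3.0 / beta))

def estimate_shape(x: np.ndarray) -> float:
    # Match the sample ratio to the theoretical one; the ratio is
    # monotone increasing in beta, so bisection suffices.
    target = np.mean(np.abs(x)) / math.sqrt(np.mean(x**2))
    lo, hi = 0.1, 5.0
    for _ in range(60):
        mid = 0.5 * (lo + hi)
        if ggd_ratio(mid) < target:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

rng = np.random.default_rng(2)
x = rng.laplace(0.0, 1.0, size=200_000)   # Laplacian = GGD with beta = 1
beta_hat = estimate_shape(x)
print(beta_hat)  # should be close to 1.0 for Laplacian data
```

Wavelet detail subbands of natural and fingerprint images typically yield shape parameters well below 2, which is what motivates quantizers matched to sharply peaked, heavy-tailed distributions rather than to the Gaussian case.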
Abstract:
Obesity represents a major health, social and economic burden to many developing and Westernized communities, with the prevalence increasing at a rate exceeding almost all other medical conditions. Despite major recent advances in our understanding of adipose tissue metabolism and dynamics, we still have limited insight into the regulation of adipose tissue mass in humans. Any significant increase in adipose tissue mass requires proliferation and differentiation of precursor cells (preadipocytes) present in the stromo-vascular compartment of adipose tissue. These processes are very complex and an increasing number of growth factors and hormones have been shown to modulate the expression of genes involved in preadipocyte proliferation and differentiation. A number of transcription factors, including the C/EBP family and PPARγ, have been identified as integral to adipose tissue development and preadipocyte differentiation. Together PPARγ and C/EBPα regulate important events in the activation and maintenance of the terminally differentiated phenotype. The ability of PPARγ to increase transcription through its DNA recognition site is dependent on the binding of ligands. This suggests that an endogenous PPARγ ligand may be an important regulator of adipogenesis. Adipose tissue functions as both the major site of energy storage in the body and as an endocrine organ synthesizing and secreting a number of important molecules involved in regulation of energy balance. For optimum functioning therefore, adipose tissue requires extensive vascularization, and previous studies have shown that growth of adipose tissue is preceded by development of a microvascular network. This suggests that paracrine interactions between constituent cells in adipose tissue may be involved in both new capillary formation and fat cell growth.
To address this hypothesis, the work in this project was aimed at (a) further developing a method for inducing preadipocyte differentiation in subcultured human cells; (b) establishing a method for simultaneous isolation and separate culture of both preadipocytes and microvascular endothelial cells from the same adipose tissue biopsies; (c) determining, using conditioned medium and co-culture techniques, whether endothelial cell-derived factors influence the proliferation and/or differentiation of human preadipocytes; and (d) commencing characterization of factors that may be responsible for any observed paracrine effects on aspects of human adipogenesis. Major findings of these studies were as follows: (A) Inclusion of either linoleic acid (a long-chain fatty acid reported to be a naturally occurring ligand for PPARγ) or Rosiglitazone (a member of the thiazolidinedione class of insulin-sensitizing drugs and a synthetic PPARγ ligand) in differentiation medium had markedly different effects on preadipocyte differentiation. These studies showed that human preadipocytes have the potential to accumulate triacylglycerol irrespective of their stage of biochemical differentiation, and that thiazolidinediones and fatty acids may exert their adipogenic and lipogenic effects via different biochemical pathways. It was concluded that Rosiglitazone is a more potent inducer of human preadipocyte differentiation than linoleic acid. (B) A method for isolation and culture of both endothelial cells and preadipocytes from the same adipose tissue biopsy was developed. Adipose-derived microvascular endothelial cells were found to produce factor/s which enhance both proliferation and differentiation of human preadipocytes. (C) The adipogenic effects of microvascular endothelial cells can be mimicked by exposure of preadipocytes to members of the Fibroblast Growth Factor family, specifically β-ECGF and FGF-1.
(D) Co-culture of human preadipocytes with endothelial cells, or exposure of preadipocytes to either β-ECGF or FGF-1, was found to 'prime' human preadipocytes, during their proliferative phase of growth, for thiazolidinedione-induced differentiation. (E) FGF-1 was not found to be acting as a ligand for PPARγ in this system. Findings from this project represent a significant step forward in our understanding of factors involved in growth of human adipose tissue and may lead to the development of therapeutic strategies aimed at modifying the process. Such strategies would have potential clinical utility in the treatment of obesity and obesity-related disorders such as Type II Diabetes.
Abstract:
Dasheen mosaic potyvirus (DsMV) is an important virus affecting taro. The virus has been found wherever taro is grown and infects both the edible and ornamental aroids, causing yield losses of up to 60%. The presence of DsMV, and other viruses, prevents the international movement of taro germplasm between countries. This has a significant negative impact on taro production in many countries due to the inability to access improved taro lines produced in breeding programs. To overcome this problem, sensitive and reliable virus diagnostic tests need to be developed to enable the indexing of taro germplasm. The aim of this study was to generate an antiserum against a recombinant DsMV coat protein (CP) and to develop a serological-based diagnostic test that would detect Pacific Island isolates of the virus. The CP-coding regions of 16 DsMV isolates from Papua New Guinea, Samoa, Solomon Islands, French Polynesia, New Caledonia and Vietnam were amplified, cloned and sequenced. The size of the CP-coding region ranged from 939 to 1038 nucleotides, and the encoded putative proteins ranged from 313 to 346 amino acids, with molecular masses ranging from 34 to 38 kDa. Analysis of the amino acid sequences revealed the presence of several amino acid motifs typically found in potyviruses, including DAG, WCIE/DN, RQ and AFDF. When the amino acid sequences were compared with each other and with the DsMV sequences on the database, the maximum variability was 21.9%. When the core region of the CP was analysed, the maximum variability dropped to 6%, indicating that most variability was present in the N terminus. Within the seven PNG isolates of DsMV, the maximum variability was 16.9% and 3.9% over the entire CP-coding region and the core region, respectively. The sequence of PNG isolate P1 was most similar to all other sequences. Phylogenetic analysis indicated that almost all isolates grouped according to their provenance.
Further, the seven PNG isolates were grouped according to the region within PNG from which they were obtained. Due to the extensive variability over the entire CP-coding region, the core region of the CP of PNG isolate P1 was cloned into a protein expression vector and expressed as a recombinant protein. The protein was purified by chromatography and SDS-PAGE and used as an antigen to generate antiserum in a rabbit. In western blots, the antiserum reacted with bands of approximately 45-47 kDa in extracts from purified DsMV and from known DsMV-infected plants from PNG; no bands were observed using healthy plant extracts. The antiserum was subsequently incorporated into an indirect ELISA. This procedure was found to be very sensitive and detected DsMV in sap diluted at least 1:1,000. Using both western blot and ELISA formats, the antiserum was able to detect a wide range of DsMV isolates, including those from Australia, New Zealand, Fiji, French Polynesia, New Caledonia, Papua New Guinea, Samoa, Solomon Islands and Vanuatu. These plants were verified to be infected with DsMV by RT-PCR. In specificity tests, the antiserum was also found to react with sap from plants infected with SCMV, PRSV-P and PRSV-W, but not with PVY- or CMV-infected plants.
Abstract:
The stylized facts that motivate this thesis include the diversity in growth patterns observed across countries during the process of economic development, and the divergence over time in income distributions both within and across countries. This thesis constructs a dynamic general equilibrium model in which technology adoption is costly and agents are heterogeneous in their initial holdings of resources. Given a household's resource level, this study examines how adoption costs influence the evolution of household income over time and the timing of the transition to more productive technologies. The analytical results of the model characterize three growth outcomes associated with the technology adoption process, depending on productivity differences between the technologies. These are appropriately labeled 'poverty trap', 'dual economy' and 'balanced growth'. The model is then capable of explaining the observed diversity in growth patterns across countries, as well as the divergence of incomes over time. Numerical simulations of the model furthermore illustrate features of this transition. They suggest that differences in adoption costs account for the timing of households' decisions to switch technology, which leads to a disparity in incomes across households during the adoption process. Since this determines the timing of complete adoption of the technology within a country, the implications for cross-country income differences are clear. Moreover, the timing of technology adoption appears to affect the growth patterns of households, which differ across income groups. The findings also show that, in the presence of costs associated with the adoption of more productive technologies, inequalities of income and wealth may increase over time, tending to delay the convergence of income levels.
Initial levels of inequality in resources also have an impact on the date of complete adoption of more productive technologies. The issue of increasing income inequality in the process of technology adoption opens up another direction for research. Specifically, increasing inequality implies that distributive conflicts may emerge during the transitional process, with political-economy consequences. The model is therefore extended to include such issues. Without any political considerations, taxes would lead to a reduction in inequality and convergence of incomes across agents. However, this process is delayed if politico-economic influences are taken into account. Moreover, the political outcome is suboptimal. This is essentially because there is resistance associated with complete adoption of the advanced technology.
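The threshold mechanism behind these results can be conveyed with a toy simulation, using purely illustrative parameters and functional forms (not the thesis's calibration): households must pay a fixed cost to switch to the more productive technology, so those with more initial wealth adopt earlier and incomes diverge during the transition:

```python
# Toy costly-adoption model with heterogeneous initial wealth.
# All parameter values below are illustrative assumptions.
A_LOW, A_HIGH = 1.0, 1.5     # productivity of the old vs new technology
COST = 5.0                   # one-off fixed adoption cost
SAVE = 0.25                  # fraction of income accumulated each period

def simulate(w0: float, periods: int = 30):
    """Return (adoption_period, final_wealth) for initial wealth w0."""
    w, adopted, when = w0, False, None
    for t in range(periods):
        if not adopted and w >= COST:
            w -= COST                # pay the fixed cost, switch technology
            adopted, when = True, t
        a = A_HIGH if adopted else A_LOW
        w += SAVE * a * w            # income a*w; a constant share is saved
    return when, w

poor = simulate(2.0)   # below the cost threshold: adopts late
rich = simulate(6.0)   # above the threshold: adopts immediately
print(poor, rich)
```

The rich household adopts at once while the poor household must first accumulate past the cost threshold on the slow technology, so the wealth gap widens during the transition even though both households eventually adopt, mirroring the delayed-convergence result above.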