981 results for Order of Christ.


Relevance:

100.00%

Publisher:

Abstract:

Following the intrinsically linked balance sheets in his Capital Formation Life Cycle, Lukas M. Stahl uses his Triple A Model of Accounting, Allocation and Accountability to explain the stages of the Capital Formation process from FIAT to EXIT. Based on the theoretical foundations of legal risk laid by the International Bar Association with the help of Roger McCormick and legal scholars such as Joanna Benjamin, Matthew Whalley and Tobias Mahler, and grounded in Wesley Hohfeld's category theory of jural relations, Stahl develops his mutually exclusive Four Determinants of Legal Risk: Law, Lack of Right, Liability and Limitation. These Four Determinants of Legal Risk allow us to apply, assess, and precisely describe the respective legal risk at all stages of the Capital Formation Life Cycle, as demonstrated in case studies of nine industry verticals of the proposed and currently negotiated Transatlantic Trade and Investment Partnership between the United States of America and the European Union (TTIP), as well as in the case of the often cited financing relation between the United States and the People's Republic of China. Having established the Four Determinants of Legal Risk and their application to the Capital Formation Life Cycle, Stahl then explores the theoretical foundations of capital formation, their historical basis in classical and neo-classical economics and its forefathers, such as the Austrians around Eugen von Boehm-Bawerk, Ludwig von Mises and Friedrich von Hayek and, most notably and controversially, Karl Marx, and their impact on today's exponential expansion of capital formation. Starting with the first pillar of his Triple A Model, Accounting, Stahl explains the Three Factors of Capital Formation, Man, Machines and Money, and shows how "value-added" is created with respect to the non-monetary capital factors of human resources and industrial production. In a detailed analysis of the roles of the Three Actors of Monetary Capital Formation, Central Banks, Commercial Banks and Citizens, Stahl then dismisses a number of myths regarding the creation of money, providing in-depth insight into the workings of monetary policy makers, their institutions and their ultimate beneficiaries, the corporate and consumer citizens. In his second pillar, Allocation, Stahl continues his analysis of the balance sheets of the Capital Formation Life Cycle by discussing the Five Key Accounts of Monetary Capital Formation, the Sovereign, Financial, Corporate, Private and International accounts, and the associated legal risks in the allocation of capital pursuant to his Four Determinants of Legal Risk. In his third pillar, Accountability, Stahl discusses the ever recurring Crisis-Reaction-Acceleration-Sequence-History (in short: CRASH) since the beginning of the millennium: the dot-com crash at the turn of the millennium, followed seven years later by the financial crisis of 2008 and, another seven years later, by the dislocations in the global economy we face today in 2015, with several sordid debt restructurings under way and hundreds of thousands of refugees displaced by war and increasing inequality.
Together with the regulatory reactions they have caused in the form of so-called landmark legislation, such as the Sarbanes-Oxley Act of 2002, the Dodd-Frank Act of 2010, the JOBS Act of 2012, the introduction of the Basel Accords (Basel II in 2004 and III in 2010), the European Financial Stability Facility of 2010, the European Stability Mechanism of 2012 and the European Banking Union of 2013, Stahl analyses the acceleration in the size and scope of crises, which appear to be met with often seemingly helpless bureaucratic responses, the inherent legal risks, and the complete lack of accountability on the part of those responsible. Stahl argues that the order of the day requires addressing the root cause of the problems in the form of two fundamental design defects of our Global Economic Order, namely our monetary and judicial orders. Inspired by a 1933 plan of nine University of Chicago economists to abolish the fractional reserve system, he proposes the introduction of Sovereign Money as a prerequisite for voiding misallocations by way of judicial order in the course of domestic and transnational insolvency proceedings, including the restructuring of sovereign debt, throughout the entire monetary system back to its origin, without causing domino effects of banking collapses and failed financial institutions. Recognizing Austrian-American economist Joseph Schumpeter's concept of Creative Destruction as a process of industrial mutation that incessantly revolutionizes the economic structure from within, incessantly destroying the old one, incessantly creating a new one, Stahl responds to Schumpeter's economic chemotherapy with his Concept of Equitable Default, mimicking an immunotherapy that strengthens the corpus economicus' own immune system by providing the judicial authority to terminate precisely those misallocations that have proven malignant, causing default, by invoking the centuries-old common law concept of equity that allows for the equitable reformation, rescission or restitution of contract by way of judicial order. Following a review of the proposed mechanisms of transnational dispute resolution and the current court systems with transnational jurisdiction, Stahl advocates, as a first step towards completing the Capital Formation Life Cycle from FIAT, the creation of money by way of credit, to EXIT, the termination of money by way of judicial order, the institution of a Transatlantic Trade and Investment Court, constituted by a panel of judges from the U.S. Court of International Trade and the European Court of Justice and following the model of the EFTA Court of the European Free Trade Association. Since his proposal was first made public in June 2014, after being discussed in academic circles since 2011, it and similar proposals have found numerous public supporters. Most notably, the former Vice President of the European Parliament, David Martin, tabled an amendment in June 2015 in the course of the negotiations on TTIP calling for an independent judicial body, and the Member of the European Commission, Cecilia Malmström, presented her proposal of an International Investment Court on September 16, 2015.
Stahl concludes that, for the first time in the history of our generation, there appears to be a real opportunity for reform of our Global Economic Order by curing the two fundamental design defects of our monetary and judicial orders: the abolition of the fractional reserve system and the introduction of Sovereign Money, and the institution of a democratically elected Transatlantic Trade and Investment Court that, commensurate with its jurisdiction extending to cases concerning the Transatlantic Trade and Investment Partnership, may complete the Capital Formation Life Cycle by resolving cases of default with the transnational judicial authority for the terminal resolution of misallocations, from FIAT to EXIT, in a New Global Economic Order without the ensuing dangers of systemic collapse.

Relevance:

100.00%

Publisher:

Abstract:

This study examines the pluralistic hypothesis advanced by the late Professor John Hick, viz. that all religious faiths provide equally salvific pathways to God, irrespective of their theological and doctrinal differences. The central focus of the study is a critical examination of (a) the epistemology of religious experience as advanced by Professor Hick, (b) the ontological status of the being he understands to be God, and (c) the extent to which the pluralistic view of religious experience can be harmonised with the experience with which the Christian life is understood to begin, viz. regeneration. Tracing the theological journey of Professor Hick from fundamentalist Christian to religious pluralist, the study notes the reasons given for Hick's gradual disengagement from the Christian faith. In addition to his belief that the pre-scientific worldview of the Bible was obsolete and passé, Hick took the view that modern biblical scholarship could not accommodate traditionally held Christian beliefs. He conceded that the Incarnation, if true, would be decisive evidence for the uniqueness of Christianity, but rejected it on the grounds of logical incoherence. This study affirms the view that the doctrine of the Incarnation occupies a place of crucial importance within world religion, but rejects the claim of incoherence. Professor Hick believed that God's Spirit was at work in all religions, producing a common religious experience, or spiritual awakening to God. The soteriological dimension of this spiritual awakening, he suggests, finds expression as the worshipper turns away from self-centredness to the giving of themselves to God and others. At the level of epistemology he further argued that religious experience itself provides the rational basis for belief in God. The study supports Professor Hick's assertion that religious experience ought to be trusted as a source of knowledge, on the basis of the principle of credulity, which states that a person's claim to perceive or experience something is prima facie justified unless there are compelling reasons to the contrary. Hick's argument has been extensively developed and defended by philosophers such as Alvin Plantinga and William Alston. This confirms the importance of Hick's contribution to the philosophy of religion, and further establishes his reputation within the field as an original thinker. It is recognised in this thesis, however, that in affirming only the rationality of belief, but not the obligation to believe, Professor Hick's epistemology is not fully consistent with a Christian theology of revelation. Christian theology views the created order as pre-interpreted and unambiguous in its testimony to God's existence: to disbelieve in God's existence is to violate one's epistemic duty by suppressing the truth. Professor Hick's critical realist principle, which he regards as the key to understanding what is happening in the different forms of religious experience, is also examined within this thesis. According to the critical realist principle, there are realities external to us, yet we are never aware of them as they are in themselves, but only as they appear to us given our particular cognitive machinery and conceptual resources. All awareness of God is interpreted through the lens of pre-existing, culturally relative religious forms, which in turn explains the differing theologies within the world of religion.
The critical realist principle views God as unknowable, in the sense that his inner nature is beyond the reach of human conceptual categories and linguistic systems. Professor Hick thus endorses and develops the view of God as ineffable, but employs the term transcategorial when speaking of God's ineffability. The study takes the view that the notion of transcategoriality as developed by Professor Hick appears to deny any ontological status to God, effectively arguing him out of existence. Furthermore, in attributing transcategoriality to God, Professor Hick would appear to render incoherent his own fundamental assertion that we can know nothing of God that is either true or false. The claim that the experience of regeneration with which the Christian life begins can be classed as a mere species of a genus common throughout all faiths is rejected within this thesis. Instead it is argued that Christian regeneration is a distinctive experience that cannot be reduced to a salvific experience defined merely as an awareness of, or awakening to, God, followed by a turning away from self to others. Professor Hick argued against any notion that the Christian community was the social grouping through which God's Spirit was working in an exclusively redemptive manner. He supported his view by drawing attention to (a) the presence, at times, of comparable or higher levels of morality in world religion, when contrasted with that evidenced by the followers of Christ, and (b) the presence, at times, of demonstrably lower levels of morality in the followers of Christ, when contrasted with the lives of other religious devotees. These observations are fully supported, but the conclusion reached is rejected on the grounds that, according to Christian theology, the saving work of God's Spirit is evidenced in a life that is changing from what it was before. Christian theology does not suggest or demand that such lives be demonstrably superior at every stage when contrasted with other virtuous or morally upright members of society. The study concludes by paying tribute to the contribution Professor Hick has made to the field of the epistemology of religious experience.

Relevance:

90.00%

Publisher:

Abstract:

Assessment and prediction of the impact of vehicular traffic emissions on air quality and exposure levels requires knowledge of vehicle emission factors. The aim of this study was the quantification of emission factors from an on-road measurement program conducted over twelve months at two sites in Brisbane: 1) a freeway-type site (free-flowing traffic at about 100 km/h, fleet dominated by small passenger cars - Tora St); and 2) a busy urban road with stop/start traffic, fleet comprising a significant fraction of heavy duty vehicles - Ipswich Rd. A physical model linking concentrations measured at the road under specific meteorological conditions with motor vehicle emission factors was applied for data analysis. The focus of the study was on submicrometer particles; however, the measurements also included supermicrometer particles, PM2.5, carbon monoxide, sulfur dioxide, and oxides of nitrogen. The results of the study are summarised in this paper. In particular, the emission factors for submicrometer particles were 6.08 x 10^13 and 5.15 x 10^13 particles vehicle^-1 km^-1 for Tora St and Ipswich Rd respectively, and for supermicrometer particles at Tora St, 1.48 x 10^9 particles vehicle^-1 km^-1. Emission factors of diesel vehicles at both sites were about an order of magnitude higher than those of gasoline powered vehicles. For submicrometer particles, the emission factors of gasoline vehicles were 6.08 x 10^13 and 4.34 x 10^13 particles vehicle^-1 km^-1 for Tora St and Ipswich Rd, respectively, and those of diesel vehicles were 5.35 x 10^14 and 2.03 x 10^14 particles vehicle^-1 km^-1 for Tora St and Ipswich Rd, respectively. For supermicrometer particles at Tora St, the emission factors were 2.59 x 10^9 and 1.53 x 10^12 particles vehicle^-1 km^-1 for gasoline and diesel vehicles, respectively.
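
As a rough consistency check on how the per-fuel factors combine into a fleet value, the fleet-average emission factor is a traffic-weighted mean of the per-class factors. A minimal sketch, assuming a hypothetical fleet mix (the paper does not report the fractions used here):

    # Hedged sketch: fleet-average emission factor as a traffic-weighted mean
    # of per-fuel-class factors. The fleet mix below is hypothetical.
    def fleet_average_ef(ef_by_class, fraction_by_class):
        """Weighted mean of per-class emission factors (particles vehicle^-1 km^-1)."""
        assert abs(sum(fraction_by_class.values()) - 1.0) < 1e-9
        return sum(ef * fraction_by_class[c] for c, ef in ef_by_class.items())

    # Submicrometer emission factors reported for Ipswich Rd:
    ef = {"gasoline": 4.34e13, "diesel": 2.03e14}
    # Hypothetical fleet mix, for illustration only:
    mix = {"gasoline": 0.95, "diesel": 0.05}
    print(f"fleet-average EF: {fleet_average_ef(ef, mix):.2e}")
    # ~5.14e13, close to the reported Ipswich Rd fleet value of 5.15 x 10^13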

Relevance:

90.00%

Publisher:

Abstract:

This is the lead article for an issue of M/C Journal on the theme 'obsolete.' It uses the history of the International Journal of Cultural Studies (of which the author has been editor since 1997) to investigate technological innovations and their scholarly implications in academic journal publishing, in particular the obsolescence of the print form. Print-based elements like cover design, the running order of articles, special issues, refereeing and the reading experience are all rendered obsolete by the growth of online access to individual articles. The paper argues that this individuation of reading choices may be accompanied by less welcome tendencies, such as a decline in collegiality, disciplinary innovation, and trust.

Relevance:

90.00%

Publisher:

Abstract:

We assess the increase in particle number emissions from motor vehicles driving at steady speed when forced to stop and accelerate from rest. Considering the example of a signalized pedestrian crossing on a two-way single-lane urban road, we use a complex line source method to calculate the total emissions produced by a specific number and mix of light petrol cars and diesel passenger buses, and show that the total emissions during a red light are significantly higher than during the time the light remains green. Replacing two cars with one bus increased the emissions by over an order of magnitude. Considering these large differences, we conclude that the importance attached to particle number emissions in traffic management policies should be reassessed in the future.
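
The red/green comparison amounts to summing per-vehicle emissions over the queue, with stopping and accelerating vehicles emitting far more than cruising ones. The following is a toy illustration only; the per-event numbers and the stop/go penalty are hypothetical placeholders, not the paper's line source model or its data:

    # Toy comparison of total particle number emissions at a signalized crossing.
    # All numbers are hypothetical placeholders, not the paper's data.
    STOP_GO_PENALTY = 5.0    # assumed ratio of stop/accelerate to cruise-through emissions
    EF_CAR_CRUISE = 1.0e11   # particles per petrol car passing at steady speed (hypothetical)
    EF_BUS_CRUISE = 2.0e13   # particles per diesel bus passing at steady speed (hypothetical)

    def total_emissions(n_cars, n_buses, stopped):
        """Total particles emitted by the queue during one signal phase."""
        scale = STOP_GO_PENALTY if stopped else 1.0
        return scale * (n_cars * EF_CAR_CRUISE + n_buses * EF_BUS_CRUISE)

    green = total_emissions(n_cars=10, n_buses=0, stopped=False)
    red = total_emissions(n_cars=10, n_buses=0, stopped=True)
    bus_swap = total_emissions(n_cars=8, n_buses=1, stopped=True)  # two cars -> one bus
    print(f"green: {green:.1e}  red: {red:.1e}  red with bus: {bus_swap:.1e}")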

Relevance:

90.00%

Publisher:

Abstract:

The aim of this work was to investigate ultrafine particles (< 0.1 μm) in primary school classrooms in relation to classroom activities. The investigations were conducted in three classrooms during two measuring campaigns, which together encompassed a period of 60 days. Initial investigations showed that under the normal operating conditions of the school there were many occasions in all three classrooms when indoor particle concentrations increased significantly compared to outdoor levels. By far the highest increases in the classroom resulted from art activities (painting, gluing and drawing), at times reaching over 1.4 x 10^5 particles cm^-3. The indoor particle concentrations exceeded outdoor concentrations by approximately one order of magnitude, with a count median diameter ranging from 20 to 50 nm. Significant increases also occurred during cleaning activities, when detergents were used. GC-MS analysis conducted on four samples randomly selected from about 30 different paints and glues, as well as on the detergent used in the school, showed that d-limonene was one of the main organic compounds of the detergent; however, it was not detected in the samples of the paints and the glue. Controlled experiments showed that this monoterpene, emitted from the detergent, reacted with O3 (at outdoor ambient concentrations ranging from 0.06 to 0.08 ppm) and formed secondary organic aerosols. Further investigations to identify other liquids which may be potential sources of the precursors of secondary organic aerosols were outside the scope of this project; however, it is expected that the problem identified by this study is more widespread, since most primary schools use liquid materials for art classes, and all schools use detergents for cleaning. Further studies are therefore recommended to better understand this phenomenon and to minimize school children's exposure to ultrafine particles from these indoor sources.

Relevance:

90.00%

Publisher:

Abstract:

The interactions of phenyldithioesters with gold nanoparticles (AuNPs) have been studied by monitoring changes in the surface plasmon resonance (SPR), depolarised light scattering, and surface enhanced Raman spectroscopy (SERS). Changes in the SPR indicated that an AuNP-phenyldithioester charge transfer complex forms in equilibrium with free AuNPs and phenyldithioester. Analysis of the Langmuir binding isotherms indicated that the equilibrium adsorption constant, Kads, was 2.3 ± 0.1 x 10^6 M^-1, which corresponds to a free energy of adsorption of 36 ± 1 kJ mol^-1. These values are comparable to those reported for interactions of aryl thiols with gold and are of a similar order of magnitude to moderate hydrogen bonding interactions. This has significant implications for the application of phenyldithioesters in the functionalization of AuNPs. The SERS results indicated that the phenyldithioesters interact with AuNPs through the C═S bond and that the molecules do not dissociate upon adsorption to the AuNPs. The SERS spectra are dominated by the portions of the molecule involved in the charge transfer complex with the AuNPs. The significance of this in relation to the use of phenyldithioesters for molecular barcoding of nanoparticle assemblies is discussed.
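
As a consistency check, the quoted free energy follows from the standard thermodynamic relation ΔG_ads = −RT ln K_ads, with K_ads made dimensionless against a 1 M standard state. A minimal sketch, assuming room temperature (which the abstract does not state):

    import math

    R = 8.314        # gas constant, J mol^-1 K^-1
    T = 298.15       # K; room temperature assumed (not stated in the abstract)
    K_ads = 2.3e6    # M^-1, Langmuir equilibrium adsorption constant from the study

    # dG = -RT ln K, with K referenced to a 1 M standard state to make it dimensionless
    dG = -R * T * math.log(K_ads) / 1000.0   # kJ mol^-1
    print(f"dG_ads = {dG:.1f} kJ mol^-1")    # ~ -36.3, matching the quoted 36 +/- 1 kJ mol^-1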

Relevance:

90.00%

Publisher:

Abstract:

The results of a numerical investigation into the errors for least squares estimates of function gradients are presented. The underlying algorithm is obtained by constructing a least squares problem using a truncated Taylor expansion. An error bound associated with this method contains in its numerator terms related to the Taylor series remainder, while its denominator contains the smallest singular value of the least squares matrix. Perhaps for this reason the error bounds are often found to be pessimistic by several orders of magnitude. The circumstance under which these poor estimates arise is elucidated and an empirical correction of the theoretical error bounds is conjectured and investigated numerically. This is followed by an indication of how the conjecture is supported by a rigorous argument.
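
For concreteness, the construction works as follows: for neighbours x_i of a point x0, the truncated Taylor expansion f(x_i) ≈ f(x0) + g·(x_i − x0) yields an overdetermined linear system for the gradient g, solved in the least squares sense; the smallest singular value of the system matrix is the quantity appearing in the bound's denominator. A minimal numpy sketch, with an illustrative test function and stencil not taken from the paper:

    import numpy as np

    def ls_gradient(f, x0, neighbours):
        """Least squares gradient estimate from a truncated first-order Taylor expansion.

        Each row of the system encodes f(x_i) - f(x0) = g . (x_i - x0) + O(h^2).
        """
        A = neighbours - x0                                 # displacement matrix
        b = np.array([f(x) for x in neighbours]) - f(x0)
        g, *_ = np.linalg.lstsq(A, b, rcond=None)
        sigma_min = np.linalg.svd(A, compute_uv=False)[-1]  # the bound's denominator
        return g, sigma_min

    f = lambda x: np.sin(x[0]) + x[1] ** 2                  # illustrative test function
    x0 = np.array([0.3, 0.7])
    stencil = x0 + 1e-3 * np.array([[1, 0], [0, 1], [-1, 0], [0, -1], [1, 1]])
    g, smin = ls_gradient(f, x0, stencil)
    print(g, "exact:", np.array([np.cos(0.3), 2 * 0.7]), "sigma_min:", smin)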

Relevance:

90.00%

Publisher:

Abstract:

The effectiveness of thermally activated hydrotalcite materials has been investigated for the removal of arsenate, vanadate, and molybdate from individual and mixed solutions. Results show that increasing the Mg:Al ratio to 4:1 increases the percentage of anions removed from solution. The order of affinity of the three anions analysed in this investigation is arsenate > vanadate > molybdate. By comparison with several synthetic hydrotalcite materials, the hydrotalcite structure in the seawater neutralised red mud (SWN-RM) has been determined to consist of magnesium and aluminium with a ratio between 3.5:1 and 4:1. Thermally activated seawater neutralised red mud removes at least twice the concentration of anionic species removed by thermally activated red mud alone, due to the formation of 40 to 60% Bayer hydrotalcite during the neutralisation process.

Relevance:

90.00%

Publisher:

Abstract:

Pure tungsten oxide (WO3) and iron-doped (10 at%) tungsten oxide (WO3:Fe) nanostructured thin films were prepared using a dual-crucible electron beam evaporation technique. The films were deposited at room temperature under high vacuum on glass substrates and post-heat treated at 300 °C for 1 hour. X-ray diffraction and Raman spectroscopy indicated that the as-deposited WO3 and WO3:Fe films were non-crystalline. The surface roughness of all films was of the order of 2.5 nm, as observed using Atomic Force Microscopy (AFM). X-ray Photoelectron Spectroscopy (XPS) analysis revealed tungsten oxide films with stoichiometry close to WO3. The addition of Fe to WO3 produced a smaller particle size and lower porosity, as observed using Transmission Electron Microscopy (TEM). A slight difference in optical band gap energies, 3.22 eV and 3.12 eV, was found between the as-deposited WO3 and WO3:Fe films, respectively. However, the difference in the band gap energies of the annealed films was significantly larger, with values of 3.12 eV and 2.61 eV for the WO3 and WO3:Fe films, respectively. The heat-treated samples were investigated for gas sensing applications using noise spectroscopy, and doping WO3 with Fe reduced the sensitivity to certain gases. A detailed study of the gas sensing properties of the WO3 and WO3:Fe films is the subject of another paper.

Relevance:

90.00%

Publisher:

Abstract:

There is much anecdotal evidence and academic argument that the location of a business influences its value; that is, some businesses appear to be worth more than others because of their location. This is particularly so in the tourism industry. Within the destination literature, many factors have been posited to explain why business value varies, ranging from access to markets and availability of labor to climate and surrounding services. Given that business value is such a fundamental principle underpinning the viability of the tourist industry, through its relationship with pricing, business acquisition, and investment, it is surprising that scant research has sought to quantify the relative premium associated with geographic locations. This study proposes a novel way to estimate the geographic brand premium. Specifically, the approach translates valuation techniques from financial economics to quantify the incremental value derived by businesses operating in a particular geographic region, producing a geographic brand premium. The article applies the technique to a well-known tourist destination in Australia, and the results are consistent with a positive value of brand equity in the key industries and are of a plausible order of magnitude. The article carries strong implications for business and tourism operators in terms of valuation, pricing, and investment; more generally, the approach is potentially useful to local authorities and business associations when deciding how much resource and effort should be devoted to brand protection.
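
The abstract does not spell out the valuation mechanics. One hedged reading, borrowing a simple income-capitalisation valuation from financial economics, is to compare observed business values in the destination with the values implied by fundamentals alone; all names and numbers below are hypothetical illustrations, not the article's method or data:

    # Hedged sketch: geographic brand premium under an assumed income-capitalisation
    # valuation (value = earnings / capitalisation rate). Not the article's method.
    def geographic_brand_premium(observed_value, earnings, cap_rate):
        """Premium of observed value over the value implied by fundamentals alone."""
        fundamentals_value = earnings / cap_rate
        return observed_value / fundamentals_value - 1.0

    # Hypothetical numbers, for illustration only:
    premium = geographic_brand_premium(observed_value=1.3e6, earnings=1.0e5, cap_rate=0.10)
    print(f"brand premium: {premium:.0%}")   # 30% above the fundamentals-implied value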

Relevance:

90.00%

Publisher:

Abstract:

The radiation chemistry and the grafting of a fluoropolymer, poly(tetrafluoroethylene-co-perfluoropropyl vinyl ether) (PFA), were investigated with the aim of developing a highly stable grafted support for use in solid phase organic chemistry (SPOC). A radiation-induced grafting method was used whereby the PFA was exposed to ionizing radiation to form free radicals capable of initiating graft copolymerization of styrene. To fully investigate this process, both the radiation chemistry of PFA and the grafting of styrene to PFA were examined. Radiation alone was found to have a detrimental effect on PFA when irradiated at 303 K. This was evident from the loss in mechanical properties due to chain scission reactions. This meant that when radiation was used for the grafting reactions, the total radiation dose needed to be kept as low as possible. The radicals produced when PFA was exposed to radiation were examined using electron spin resonance spectroscopy. Both main-chain (–CF2–C•F–CF2–) and end-chain (–CF2–C•F2) radicals were identified. The stability of the majority of the main-chain radicals when the polymer was heated above the glass transition temperature suggested that they were present mainly in the crystalline regions of the polymer, while the end-chain radicals were predominantly located in the amorphous regions. The radical yield at 77 K was lower than the radical yield at 303 K, suggesting that cage recombination at low temperatures inhibited free radicals from stabilizing. High-speed MAS 19F NMR was used to identify the non-volatile products after irradiation of PFA over a wide temperature range. The major products observed over the irradiation temperature range 303 to 633 K included new saturated chain ends, short fluoromethyl side chains in both the amorphous and crystalline regions, and long branch points. The proportion of the radiolytic products shifted from mainly chain scission products at low irradiation temperatures to extensive branching at higher irradiation temperatures. Calculations of G values revealed that net crosslinking only occurred when PFA was irradiated in the melt. Minor products after irradiation at elevated temperatures included internal and terminal double bonds and CF3 groups adjacent to double bonds. The volatile products after irradiation at 303 K included tetrafluoromethane (CF4) and oxygen-containing species arising from loss of the perfluoropropyl ether side chains of PFA, as identified by mass spectrometry and FTIR spectroscopy. The chemical changes induced by radiation exposure were accompanied by changes in the thermal properties of the polymer. Changes in the crystallinity and thermal stability of PFA after irradiation were examined using DSC and TGA techniques. The equilibrium melting temperature of untreated PFA was 599 K, as determined by extrapolation of the melting temperatures of imperfectly formed crystals. After low temperature irradiation, radiation-induced crystallization was prevalent due to scission of strained tie molecules, loss of perfluoropropyl ether side chains, and lowering of the molecular weight, which promoted chain alignment and hence higher crystallinity. After irradiation at high temperatures, the presence of short and long branches hindered crystallization, lowering the overall crystallinity. The thermal stability of the PFA decreased with increasing radiation dose and temperature due to the introduction of defect groups.
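
The extrapolation of melting temperatures of imperfectly formed crystals described here is presumably a Hoffman-Weeks-style construction: observed melting temperatures Tm are regressed against crystallization temperatures Tc and extrapolated to the line Tm = Tc. A minimal sketch with illustrative data points, not the thesis' measurements:

    import numpy as np

    # Hedged sketch of a Hoffman-Weeks-style extrapolation of the equilibrium melting
    # temperature: fit Tm against Tc and intersect with the line Tm = Tc.
    Tc = np.array([540.0, 550.0, 560.0, 570.0])   # crystallization temperatures (K), illustrative
    Tm = np.array([580.0, 583.0, 586.5, 590.0])   # observed melting temperatures (K), illustrative

    slope, intercept = np.polyfit(Tc, Tm, 1)
    Tm_eq = intercept / (1.0 - slope)             # solve Tm_eq = slope * Tm_eq + intercept
    print(f"equilibrium melting temperature ~ {Tm_eq:.0f} K")  # ~600 K, near the reported 599 K
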
Styrene was graft copolymerized to PFA using γ-radiation as the initiation source, with the aim of preparing a graft copolymer suitable as a support for SPOC. Various grafting conditions were studied, such as the total dose, dose rate, solvent effects and the addition of nitroxides to create "living" graft chains. The effect of dose rate was examined when grafting styrene vapour to PFA using the simultaneous grafting method. The initial rate of grafting was found to be independent of the dose rate, which implied that the reaction was diffusion controlled. When the styrene was dissolved in various solvents for the grafting reaction, the graft yield was strongly dependent on the type and concentration of the solvent used. The greatest graft yield was observed when the solvent swelled both the grafted layers and the substrate. Microprobe Raman spectroscopy was used to map the penetration of the graft into the substrate. The grafted layer was found to contain both poly(styrene) (PS) and PFA and became thicker with increasing radiation dose and graft yield, which showed that grafting began at the surface and progressively penetrated the substrate as the grafted layer was swollen. The molecular weight of the grafted PS was estimated by measuring the molecular weight of the non-covalently bonded homopolymer formed in the grafted layers using SEC. The molecular weight of the occluded homopolymer was an order of magnitude greater than that of the free homopolymer formed in the surrounding solution, suggesting that the high viscosity in the grafted regions led to long PS grafts. When a nitroxide mediated free radical polymerization was used, grafting occurred within the substrate and not on the surface, due to diffusion of styrene into the substrate at the high temperatures needed for the reaction to proceed. Loading tests were used to measure the capacity of the PS graft to be functionalized with aminomethyl groups and then further derivatized. These loading tests showed that samples grafted in a solution of styrene and methanol had a superior loading capacity over samples grafted using other solvents, due to the shallow penetration and hence better accessibility of the graft when methanol was used as a solvent.

Relevance:

90.00%

Publisher:

Abstract:

Productivity is basic statistical information for many international comparisons and country performance assessments. This study estimates the construction labour productivity of 79 selected economies. The real (purchasing power parity converted) and nominal construction expenditure from the report of the 2005 International Comparison Programme published by the World Bank, and construction employment from the database of labour statistics (LABORSTA) operated by the Bureau of Statistics of the International Labour Organization, were used in the estimation. The inferential statistics indicate that a descending order of nominal construction labour productivity from high income economies to low income economies is not established. The average construction labour productivity of low income economies is higher than that of middle income economies when the productivity calculation uses purchasing power parity converted data. Malaysia ranked 50th and 63rd among the 79 selected economies on the real and nominal measures, respectively.
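
The productivity measure implied here is simply construction expenditure divided by construction employment, computed separately on PPP-converted (real) and nominal expenditure; rankings can then differ between the two measures, which is the point the study makes about the income-group ordering. A minimal sketch with hypothetical placeholder figures, not ICP or LABORSTA data:

    # Hedged sketch of the implied productivity measure: construction expenditure
    # divided by construction employment, on both real (PPP) and nominal expenditure.
    # All figures are hypothetical placeholders, not ICP or LABORSTA data.
    economies = {
        # name: (real_expenditure_usd, nominal_expenditure_usd, employment)
        "A": (50.0e9, 40.0e9, 1.2e6),
        "B": (30.0e9, 35.0e9, 0.6e6),
        "C": (30.0e9, 6.0e9, 0.5e6),
    }

    def ranking(measure):
        idx = 0 if measure == "real" else 1
        productivity = {name: v[idx] / v[2] for name, v in economies.items()}
        return sorted(productivity, key=productivity.get, reverse=True)

    print("real ranking:   ", ranking("real"))     # ['C', 'B', 'A']
    print("nominal ranking:", ranking("nominal"))  # ['B', 'A', 'C'] -- the orders can differ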

Relevance:

90.00%

Publisher:

Abstract:

This paper discusses similarities and differences in autonomous helicopters developed at USC and CSIRO. The most significant differences are in the accuracy and sample rate of the sensor systems used for control. The USC vehicle, like a number of others, makes use of a sensor suite that costs an order of magnitude more than the vehicle. The CSIRO system, by contrast, utilizes low-cost inertial, magnetic, vision and GPS sensing to achieve the same ends. We describe the architecture of both autonomous helicopters, discuss the design issues and present comparative results.

Relevance:

90.00%

Publisher:

Abstract:

The performance of an adaptive filter may be studied through the behaviour of the optimal and adaptive coefficients in a given environment. This thesis investigates the performance of finite impulse response adaptive lattice filters for two classes of input signals: (a) frequency modulated signals with polynomial phases of order p in complex Gaussian white noise (as nonstationary signals), and (b) impulsive autoregressive processes with alpha-stable distributions (as non-Gaussian signals). Initially, an overview is given of linear prediction and adaptive filtering. The convergence and tracking properties of the stochastic gradient algorithms are discussed for stationary and nonstationary input signals. It is explained that the stochastic gradient lattice algorithm has many advantages over the least-mean square algorithm, among them a modular structure, easily guaranteed stability, lower sensitivity to the eigenvalue spread of the input autocorrelation matrix, and easy quantization of the filter coefficients (normally called reflection coefficients). We then characterize the performance of the stochastic gradient lattice algorithm for frequency modulated signals through the optimal and adaptive lattice reflection coefficients. This is a difficult task due to the nonlinear dependence of the adaptive reflection coefficients on the preceding stages and the input signal. To ease the derivations, we assume that the reflection coefficients of each stage are independent of the inputs to that stage. The optimal lattice filter is then derived for frequency modulated signals by computing the optimal values of the residual errors, reflection coefficients, and recovery errors. Next, we show the tracking behaviour of the adaptive reflection coefficients for frequency modulated signals by computing the average tracking model of these coefficients for the stochastic gradient lattice algorithm. The second-order convergence of the adaptive coefficients is investigated by modeling the theoretical asymptotic variance of the gradient noise at each stage. The accuracy of the analytical results is verified by computer simulations. Using these analytical results, we establish a new property of adaptive lattice filters: the polynomial order reducing property, which may be used to reduce the order of the polynomial phase of input frequency modulated signals. Two examples show how this property may be used in processing frequency modulated signals. In the first example, a detection procedure is carried out on a frequency modulated signal with a second-order polynomial phase in complex Gaussian white noise. We show that, using this technique, a better probability of detection is obtained for the reduced-order phase signals than with the traditional energy detector. It is also shown empirically that the distribution of the gradient noise in the first adaptive reflection coefficients approximates the Gaussian law. In the second example, the instantaneous frequency of the same observed signal is estimated. We show that this technique achieves a lower mean square error for the estimated frequencies at high signal-to-noise ratios than the adaptive line enhancer. The performance of adaptive lattice filters is then investigated for the second type of input signals, i.e., impulsive autoregressive processes with alpha-stable distributions.
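
The stochastic gradient lattice recursion referred to throughout is, per stage, a pair of forward/backward prediction error updates plus a gradient step on the reflection coefficient. A minimal single-stage sketch for real-valued signals follows; the step size and power normalization are illustrative choices, not the thesis' exact algorithm:

    import numpy as np

    def gal_stage(f_in, b_in, mu=0.01):
        """One stage of a stochastic gradient (adaptive) lattice filter, real signals.

        f_in, b_in: forward/backward prediction errors from the previous stage.
        Returns the output errors and the reflection coefficient trajectory.
        """
        n = len(f_in)
        k, power = 0.0, 1.0           # reflection coefficient; running power for step scaling
        f_out, b_out, k_hist = np.zeros(n), np.zeros(n), np.zeros(n)
        b_prev = 0.0                  # b_{m-1}(n-1), the delayed backward error
        for i in range(n):
            f_out[i] = f_in[i] + k * b_prev
            b_out[i] = b_prev + k * f_in[i]
            power = 0.99 * power + 0.01 * (f_in[i] ** 2 + b_prev ** 2)
            # stochastic gradient step on k, normalized by the running input power
            k -= mu * (f_out[i] * b_prev + b_out[i] * f_in[i]) / power
            k_hist[i] = k
            b_prev = b_in[i]
        return f_out, b_out, k_hist

    rng = np.random.default_rng(0)
    w = rng.standard_normal(5000)
    x = np.empty_like(w)              # AR(1) test input, x(n) = 0.8 x(n-1) + w(n)
    x[0] = w[0]
    for i in range(1, len(w)):
        x[i] = 0.8 * x[i - 1] + w[i]
    _, _, k = gal_stage(x, x)         # first stage: f_0 = b_0 = input signal
    print(f"final k = {k[-1]:.2f}")   # settles near -0.8, minus the lag-1 correlation
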
The concept of alpha-stable distributions is first introduced. We show that the stochastic gradient algorithm, which performs well for finite variance input signals (such as frequency modulated signals in noise), does not converge quickly for infinite variance stable processes, because it relies on the minimum mean-square error criterion. To deal with such problems, the minimum dispersion criterion, fractional lower order moments, and recently developed algorithms for stable processes are introduced. We then study the possibility of using the lattice structure for impulsive stable processes. Accordingly, two new algorithms, the least-mean P-norm lattice algorithm and its normalized version, are proposed for lattice filters based on fractional lower order moments. Simulation results show that the proposed algorithms achieve faster convergence in the parameter estimation of autoregressive stable processes with low to moderate degrees of impulsiveness than many other algorithms. We also discuss the effect of the impulsiveness of stable processes in generating misalignment between the estimated parameters and the true values. Due to the infinite variance of stable processes, the performance of the proposed algorithms is investigated using extensive computer simulations only.
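
The idea behind the least-mean P-norm family is to replace the mean-square cost E|e|^2, whose gradient misbehaves under infinite variance noise, with the dispersion E|e|^p for p below the characteristic exponent alpha. A minimal transversal (non-lattice) sketch of the p-norm update follows, with a standard Chambers-Mallows-Stuck generator for symmetric alpha-stable noise; the thesis' algorithms build this step into the lattice reflection coefficient recursions, which this sketch does not reproduce:

    import numpy as np

    def sas_noise(alpha, size, rng):
        """Symmetric alpha-stable samples via the Chambers-Mallows-Stuck method."""
        u = rng.uniform(-np.pi / 2, np.pi / 2, size)
        w = rng.exponential(1.0, size)
        return (np.sin(alpha * u) / np.cos(u) ** (1.0 / alpha)
                * (np.cos((1.0 - alpha) * u) / w) ** ((1.0 - alpha) / alpha))

    def lmp_filter(x, d, order=4, mu=0.01, p=1.2):
        """Least-mean p-norm FIR update: w += mu * |e|^(p-1) * sign(e) * u.

        Minimizing the dispersion E|e|^p with p < alpha keeps the stochastic
        gradient well behaved where the mean-square criterion does not.
        """
        w = np.zeros(order)
        for n in range(order - 1, len(x)):
            u = x[n - order + 1 : n + 1][::-1]   # regressor, most recent sample first
            e = d[n] - w @ u
            w += mu * np.abs(e) ** (p - 1) * np.sign(e) * u
        return w

    rng = np.random.default_rng(1)
    x = rng.standard_normal(50000)
    h = np.array([1.0, -0.5, 0.25, -0.125])      # illustrative channel to identify
    d = np.convolve(x, h)[: len(x)] + 0.05 * sas_noise(1.5, len(x), rng)
    print(np.round(lmp_filter(x, d), 2))         # approximately recovers h despite the impulsive noise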