Abstract:
Financial processes may possess long memory, and their probability densities may display heavy tails. Many models have been developed to deal with this tail behaviour, which reflects jumps in the sample paths. The presence of long memory, on the other hand, contradicts the efficient market hypothesis and is still a matter of debate. Together these difficulties pose the twin problems of detecting memory and of modelling the co-presence of long memory and heavy tails. This PhD project aims to respond to these challenges.

The first part aims to detect memory in a large number of financial time series of stock prices and exchange rates using their scaling properties. Financial time series often exhibit stochastic trends, a common form of nonstationarity, and strong trends in the data can lead to false detection of memory. We therefore take advantage of multifractal detrended fluctuation analysis (MF-DFA), a technique that can systematically eliminate trends of different orders. The method is based on identifying the scaling of the q-th-order moments and generalises standard detrended fluctuation analysis (DFA), which uses only the second moment (q = 2). We also consider rescaled range (R/S) analysis and the periodogram method to detect memory in financial time series and compare their results with those of MF-DFA. An interesting finding is that short memory is detected in stock prices of the American Stock Exchange (AMEX), while long memory is found in the time series of two exchange rates, namely the French franc and the Deutsche mark. Electricity price series of the five states of Australia are also found to possess long memory, and for these series heavy tails are also pronounced in the probability densities.

The second part of the thesis develops models to represent the short-memory and long-memory financial processes detected in Part I. These models take the form of continuous-time AR(∞)-type equations whose kernel is the Laplace transform of a finite Borel measure; imposing appropriate conditions on this measure produces either short memory or long memory in the dynamics of the solution. A specific form of the models, which admits a convenient MA(∞)-type representation, is presented for the short-memory case. The parameters of these models are estimated via least squares, and the models are applied to the AMEX stock prices, established in Part I to possess short memory. By selecting the kernel of the continuous-time AR(∞)-type equation to have the form of a Riemann-Liouville fractional derivative, we obtain a fractional stochastic differential equation driven by Brownian motion. Equations of this type are used to represent financial processes with long memory, whose dynamics are described by the fractional derivative in the equation. These models are estimated via quasi-likelihood, namely a continuous-time version of the Gauss-Whittle method, and are applied to the exchange rates and electricity prices of Part I with the aim of confirming the long-range dependence established by MF-DFA.

The third part of the thesis applies the results of Parts I and II to characterise and classify financial markets, focusing on the New York Stock Exchange (NYSE), the American Stock Exchange (AMEX), the NASDAQ Stock Exchange (NASDAQ) and the Toronto Stock Exchange (TSX). The parameters from MF-DFA and those of the short-memory AR(∞)-type models are employed in this classification. We propose the Fisher discriminant algorithm to find a classifier in the two- and three-dimensional data spaces and then use cross-validation to verify discriminant accuracy. This classification is useful for understanding and predicting the behaviour of different processes within the same market.

The fourth part of the thesis investigates the heavy-tailed behaviour of financial processes that may also possess long memory. We consider fractional stochastic differential equations driven by stable noise to model financial processes such as electricity prices. The long memory of electricity prices is represented by a fractional derivative, while the stable noise input models their non-Gaussianity via the tails of the probability density. A method using empirical densities and MF-DFA is provided to estimate all the parameters of the model and to simulate sample paths of the equation. The method is then applied to analyse daily spot prices for the five states of Australia, and comparisons with the results obtained from R/S analysis, the periodogram method and MF-DFA are provided. The results from the fractional SDEs agree with those from MF-DFA, which are based on multifractal scaling, while those from the periodogram, which are based on second-order moments, seem to underestimate the long-memory dynamics of the process. This highlights the need for, and usefulness of, fractal methods in modelling non-Gaussian financial processes with long memory.
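As an illustration of the memory-detection step, the following minimal Python sketch computes the generalised Hurst exponent h(q) at the heart of MF-DFA; it is not the thesis's own implementation, and the scales, detrending order and white-noise example are illustrative assumptions. Standard DFA is recovered at q = 2: for a stationary series, h(2) near 0.5 indicates no long memory, while h(2) above 0.5 indicates long memory.

import numpy as np

def mfdfa_hurst(x, scales, q=2.0, order=1):
    """Generalised Hurst exponent h(q) via MF-DFA (q != 0).
    x: 1-D series; scales: segment lengths; order: detrending polynomial order."""
    y = np.cumsum(x - np.mean(x))                     # profile of the series
    fq = []
    for s in scales:
        n_seg = len(y) // s
        # take segments from both ends so no data are discarded
        segs = [y[i * s:(i + 1) * s] for i in range(n_seg)]
        segs += [y[len(y) - (i + 1) * s:len(y) - i * s] for i in range(n_seg)]
        t = np.arange(s)
        var = np.array([np.mean((seg - np.polyval(np.polyfit(t, seg, order), t)) ** 2)
                        for seg in segs])             # detrended variance per segment
        fq.append(np.mean(var ** (q / 2.0)) ** (1.0 / q))
    # scaling law F_q(s) ~ s^h(q): h(q) is the slope in log-log coordinates
    slope, _ = np.polyfit(np.log(scales), np.log(fq), 1)
    return slope

rng = np.random.default_rng(1)
scales = np.array([16, 32, 64, 128, 256])
print(mfdfa_hurst(rng.standard_normal(4096), scales, q=2.0))   # ~0.5 for white noise

Repeating the slope estimate over a range of q values (rather than q = 2 alone) is what distinguishes the multifractal analysis from standard DFA.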
Abstract:
National culture is deeply rooted in values, which are learned and acquired when we are young (2007, p. 6) and "embedded deeply in everyday life" (Newman & Nollen, 1996, p. 754). Values have helped to shape us into who we are today. In other words, as we grow older, the cultural values we have learned and adapted to mould our daily practices, and this is reflected in our actions, behaviours and the ways in which we communicate. On this basis, it can be suggested that national culture may also influence organisational culture, as our "behaviour at work is a continuation of behaviour learned earlier" (Hofstede, 1991, p. 4). Cultural influence in an organisation can be evidenced in communication practices: how employees interact with one another as they communicate in their daily work. Earlier studies in organisational communication see communication as the heart of the organisation it serves, and as "the essence of organised activity and the basic process out of which all other functions derive" (Bavelas and Barret, cited in Redding, 1985, p. 7). Hence, understanding how culture influences communication helps in understanding organisational behaviour. This study looked at how cultural values, referred to in this thesis as culture dimensions, influenced communication practices in an organisation that was going through a change process. A single case study was conducted in a Malaysian organisation to investigate how the Malaysian culture dimensions of respect, collectivism and harmony were evidenced in communication practices. Data were collected from twelve semi-structured interviews and five observation sessions. Guided by six attributes of the Malaysian culture dimensions identified in the literature, namely 1) acknowledging seniority, knowledge and experience, 2) saving face, 3) showing loyalty to organisation and leaders, 4) demonstrating cohesiveness among members, 5) prioritising group interests over personal interests, and 6) avoiding confrontation, this study found eighteen communication practices performed by employees of the organisation. This research contributes to previous cultural work, especially in the Malaysian context, in which evidence of the Malaysian culture dimensions of respect, collectivism and harmony was displayed in communication practices: 1) acknowledging the status quo, 2) obeying orders and directions, 3) name dropping, 4) keeping silent, 5) avoiding questioning, 6) having separate conversations, 7) adding, not criticising, 8) sugar coating, 9) instilling a sense of belonging, 10) taking sides, 11) cooperating, 12) sacrificing personal interest, 13) protecting identity, 14) negotiating, 15) saying "yes" instead of "no", 16) giving politically correct answers, 17) apologising, and 18) tolerating errors. Insights from these findings help us to understand organisational challenges that rely on communication, such as organisational change. The findings will therefore be relevant to practitioners seeking to understand the impact of culture on communication practices across countries.
Abstract:
In this paper, we consider a time-space fractional diffusion equation of distributed order (TSFDEDO). The TSFDEDO is obtained from the standard advection-dispersion equation by replacing the first-order time derivative with the Caputo fractional derivative of order α ∈ (0,1], and the first-order and second-order space derivatives with Riesz fractional derivatives of orders β₁ ∈ (0,1) and β₂ ∈ (1,2], respectively. We derive the fundamental solution for the TSFDEDO with an initial condition (TSFDEDO-IC). The fundamental solution can be interpreted as a spatial probability density function evolving in time. We also investigate a discrete random walk model based on an explicit finite difference approximation for the TSFDEDO-IC.
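For readability, the equation described in words above can be written out. The following LaTeX display is a sketch based only on the abstract: the advection and dispersion coefficients a, b ≥ 0 and the nonnegative distributed-order weight w(α) are generic placeholders, since the abstract does not fix a notation.

\[
  \int_0^1 w(\alpha)\, {}^{C}\!D_t^{\alpha} u(x,t)\, d\alpha
  \;=\; -\,a\,\frac{\partial^{\beta_1} u(x,t)}{\partial |x|^{\beta_1}}
        \;+\; b\,\frac{\partial^{\beta_2} u(x,t)}{\partial |x|^{\beta_2}},
  \qquad \beta_1 \in (0,1),\ \beta_2 \in (1,2],
\]

where {}^{C}D_t^{α} is the Caputo derivative of order α ∈ (0,1] and ∂^β/∂|x|^β denotes the Riesz fractional derivative; taking w(α) to be a point mass recovers the single-order time-space fractional equation.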
Abstract:
Purpose: All currently considered parametric models used for decomposing videokeratoscopy height data are viewer-centered and hence describe what the operator sees rather than what the surface is. The purpose of this study was to ascertain the applicability of an object-centered representation to the modeling of corneal surfaces. Methods: A three-dimensional surface decomposition into a series of spherical harmonics is considered and compared with the traditional Zernike polynomial expansion for a range of videokeratoscopic height data. Results: Spherical harmonic decomposition led to significantly better fits to corneal surfaces (in terms of the root mean square error values) than the corresponding Zernike polynomial expansions with the same number of coefficients, for all considered corneal surfaces, corneal diameters, and model orders. Conclusions: Spherical harmonic decomposition is a viable alternative to Zernike polynomial decomposition. It achieves better fits to videokeratoscopic height data and has the advantage of an object-centered representation that could be particularly suited to the analysis of multiple corneal measurements.
Abstract:
Purpose: To ascertain the effectiveness of object-centered three-dimensional representations for the modeling of corneal surfaces. Methods: Three-dimensional (3D) surface decompositions into series of basis functions including (i) spherical harmonics, (ii) hemispherical harmonics, and (iii) 3D Zernike polynomials were considered and compared to the traditional viewer-centered representation of two-dimensional (2D) Zernike polynomial expansion for a range of retrospective videokeratoscopic height data from three clinical groups. The data were collected using the Medmont E300 videokeratoscope. The groups included 10 normal corneas with corneal astigmatism less than −0.75 D, 10 astigmatic corneas with corneal astigmatism between −1.07 D and 3.34 D (Mean = −1.83 D, SD = ±0.75 D), and 10 keratoconic corneas. Only data from the right eyes of the subjects were considered. Results: All object-centered decompositions led to significantly better fits to corneal surfaces (in terms of the RMS error values) than the corresponding 2D Zernike polynomial expansions with the same number of coefficients, for all considered corneal surfaces, corneal diameters (2, 4, 6, and 8 mm), and model orders (4th to 10th radial orders). The best results (smallest RMS fit error) were obtained with the spherical harmonic decomposition, which led to about a 22% reduction in the RMS fit error compared to the traditional 2D Zernike polynomials. Hemispherical harmonics and the 3D Zernike polynomials reduced the RMS fit error by about 15% and 12%, respectively. Larger reductions in RMS fit error were achieved for smaller corneal diameters and lower-order fits. Conclusions: Object-centered 3D decompositions provide viable alternatives to the traditional viewer-centered 2D Zernike polynomial expansion of a corneal surface. They achieve better fits to videokeratoscopic height data and could be particularly suited to the analysis of multiple corneal measurements, where there can be slight variations in the position of the cornea from one map acquisition to the next.
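To make the decomposition concrete, here is a minimal Python sketch of a least-squares spherical harmonic fit to sampled height data; it is not the authors' code, the synthetic spherical-cap data stand in for Medmont E300 measurements, and the degree n_max and the sampling are illustrative assumptions.

import numpy as np
from scipy.special import sph_harm

def real_sph_harm(m, n, theta, phi):
    """Real-valued spherical harmonic (theta: azimuth, phi: polar angle)."""
    Y = sph_harm(abs(m), n, theta, phi)
    if m > 0:
        return np.sqrt(2.0) * (-1) ** m * Y.real
    if m < 0:
        return np.sqrt(2.0) * (-1) ** m * Y.imag
    return Y.real

def fit_surface(heights, theta, phi, n_max):
    """Least-squares fit of sampled surface heights to all spherical
    harmonics up to degree n_max; returns coefficients and RMS fit error."""
    basis = [real_sph_harm(m, n, theta, phi)
             for n in range(n_max + 1) for m in range(-n, n + 1)]
    A = np.column_stack(basis)                        # design matrix
    coeffs, *_ = np.linalg.lstsq(A, heights, rcond=None)
    rms = np.sqrt(np.mean((heights - A @ coeffs) ** 2))
    return coeffs, rms

rng = np.random.default_rng(2)
theta = rng.uniform(0.0, 2.0 * np.pi, 500)            # azimuth samples
phi = rng.uniform(0.0, 0.3 * np.pi, 500)              # polar samples (corneal cap)
heights = np.cos(phi) + 1e-3 * rng.standard_normal(500)   # synthetic surface
coeffs, rms = fit_surface(heights, theta, phi, n_max=6)
print(len(coeffs), "coefficients, RMS error", rms)

The same pipeline applies to the hemispherical harmonic and 3D Zernike variants: only the basis-function evaluation changes, while the design matrix and least-squares step stay the same.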
Abstract:
A configurable process model describes a family of similar process models in a given domain. Such a model can be configured to obtain a specific process model that is subsequently used to handle individual cases, for instance, to process customer orders. Process configuration is notoriously difficult as there may be all kinds of interdependencies between configuration decisions. In fact, an incorrect configuration may lead to behavioral issues such as deadlocks and livelocks. To address this problem, we present a novel verification approach inspired by the "operating guidelines" used for partner synthesis. We view the configuration process as an external service, and compute a characterization of all such services which meet particular requirements using the notion of a configuration guideline. As a result, we can characterize all feasible configurations (i.e., configurations without behavioral problems) at design time, instead of repeatedly checking each individual configuration while configuring a process model.
Abstract:
Principal Topic: "In less than ten years music labels will not exist anymore." (Michael Smelli, former Global COO of Sony BMG, MCA/QUT IMP Business Lab Digital Music Think Tank, 9 May 2009, Brisbane.) Big music labels such as EMI, Sony BMG and UMG have been responsible for promoting and producing a myriad of stars in the music industry over recent decades. However, the industry structure is under enormous threat from the emergence of a new, innovative era of digital music. Recent years have seen a dramatic shift in industry power with the emergence of Napster and other file-sharing sites, iTunes and other online stores, the iPod and the MP3 revolution. Myspace.com and other social networking sites are connecting entrepreneurial artists with fans and creating online music communities independent of music labels. In 2008 the digital music business grew internationally by around 25% to US$3.7 billion. Digital platforms now account for around 20% of recorded music sales, up from 15% in 2007 (IFPI Digital Music Report 2009). CD sales have fallen by 40% from their peak levels. Global digital music sales totalled an estimated US$3 billion in 2007, an increase of 40% on 2006 figures. Digital sales accounted for an estimated 15% of the global market, up from 11% in 2006 and zero in 2003. The music industry is more advanced in terms of digital revenues than any other creative or entertainment industry except games: its digital share is more than twice that of newspapers (7%), films (3%) or books (2%). All these shifts present new possibilities for music entrepreneurs to act entrepreneurially and promote their music independently of the major music labels. Diffusion of innovations has a long tradition in both sociology (e.g. Rogers 1962, 2003) and marketing (Bass 1969; Mahajan et al. 1990). The context of the current project is theoretically interesting in two respects. First, online social networks are replacing traditional face-to-face word-of-mouth communication. Second, music is a hedonic product, which strongly influences the nature of interpersonal communications and their diffusion patterns. Both of these aspects have received very little attention in the diffusion literature to date, and no studies have investigated the influence of both simultaneously. This research project is concerned with the role of social networks in this new music industry landscape, and with how they may be leveraged by musicians willing to act entrepreneurially. The key research question we intend to address is: how do online social network communities affect the nature, pattern and speed with which music diffuses?
Methodology/Key Propositions: We expect the character of diffusion of popular, generic music genres to differ from that of specialised, niche music. To date, only Moe & Fader (2002) and Lee et al. (2003) have investigated diffusion patterns of music, and these studies focus on forecasting weekly sales of music CDs from advance purchase orders before launch, rather than taking a detailed look at diffusion patterns. Consequently, our first research questions are concerned with understanding the nature of online communications within the context of the diffusion of music and artists: RQ1: What is the nature of fan-to-fan "word of mouth" online communications for music? Does it vary by type of artist and genre of music? RQ2: What is the nature of artist-to-fan online communications for music? Does it vary by type of artist and genre of music?
What types of communication are effective? Two outcomes from social network theory are particularly relevant to understanding how music might diffuse through social networks. Weak-tie theory (Granovetter, 1973) argues that casual or infrequent contacts within a social network (weak ties) act as a link to unique information that is not normally contained within an entrepreneur's inner-circle (strong-tie) network. A related argument, structural hole theory (Burt, 1992), posits that it is the absence of direct links (structural holes) between members of a social network that offers similar informational benefits. Although these two theories argue for the informational benefits of casual linkages and diversity within a social network, others hold that a balanced network consisting of a mix of strong and weak ties is perhaps more important overall (Uzzi, 1996). It is anticipated that the network structure of the fan base for different types of artists and genres of music will vary considerably. This leads to our third research question: RQ3: How does the network structure of online social network communities affect the pattern and speed with which music diffuses? The current paper is best described as theory elaboration. It reports the first, exploratory phase, designed to develop and elaborate relevant theory (the second phase will be a quantitative study of network structure and diffusion). We intend to develop specific research propositions or hypotheses from the above research questions. To do so we will conduct three focus group discussions with independent musicians and three with fans active in online music communication on social network sites. We will also conduct five case studies of bands that have successfully built fan bases through social networking sites (e.g. myspace.com, facebook.com). The idea is to identify which communication channels they employ and the characteristics of fan interactions for different genres of music. We intend to interview each of the artists and analyse their online interaction with their fans.
Results and Implications: At the current stage, we have just begun to conduct the focus group discussions. An analysis of the themes from these focus groups will enable us to refine our research questions into testable hypotheses. Ultimately, our research will provide a better understanding of how social networks promote the diffusion of music, and of how this varies for different genres; some music entrepreneurs will thereby be able to promote their music more effectively. The results may generalise to other industries where online peer-to-peer communication is common, such as other forms of entertainment and consumer technologies.
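The study itself is qualitative (focus groups and case studies), but the intuition behind RQ3 can be illustrated with a toy simulation; the graph model, adoption probability and parameter values below are assumptions for illustration only. Rewiring a regular lattice adds the long-range shortcuts that play the role of weak ties, and the cascade typically saturates in fewer steps as the rewiring probability grows.

import random
import networkx as nx

def cascade_steps(G, p_adopt=0.2, seeds=3, rng=None):
    """Rounds taken by a simple independent-cascade adoption process to die out."""
    rng = rng or random.Random(0)
    active = set(rng.sample(list(G.nodes), seeds))    # initial adopters
    frontier, steps = set(active), 0
    while frontier:
        nxt = set()
        for u in frontier:
            for v in G.neighbors(u):
                if v not in active and rng.random() < p_adopt:
                    nxt.add(v)                        # neighbour adopts
        active |= nxt
        frontier = nxt
        steps += 1
    return steps, len(active)

# more rewiring = more weak-tie shortcuts across the fan network
for p_rewire in (0.0, 0.01, 0.1):
    G = nx.watts_strogatz_graph(n=1000, k=8, p=p_rewire, seed=1)
    print(p_rewire, cascade_steps(G))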
Abstract:
Current-voltage (I-V) curves of poly(3-hexylthiophene) (P3HT) diodes were collected to investigate hole-dominated charge transport in the polymer. At room temperature and low electric fields the I-V characteristic is purely ohmic, whereas at medium-high electric fields the experimental data show that hole transport is trap-dominated space-charge-limited current (TD-SCLC). In this regime, it is possible to extract the I-V characteristic of the P3HT/Al junction, which shows ideal Schottky diode behaviour over five orders of magnitude. At high applied electric fields, hole transport is found to be in the trap-free SCLC regime. In this regime we have measured and modelled the hole mobility to evaluate its dependence on the applied electric field and the temperature of the device.
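The abstract does not reproduce the transport equations; as a reading aid, here is a minimal Python sketch of the trap-free SCLC (Mott-Gurney) law with a Poole-Frenkel field-dependent mobility, a common choice in the organic-semiconductor literature (the study's exact model is not given in the abstract). The parameter values below (relative permittivity, zero-field mobility, field-activation factor, film thickness) are illustrative assumptions, not the device values of the study.

import numpy as np

EPS0 = 8.854e-12        # vacuum permittivity, F/m

def scl_current_density(V, L, eps_r=3.0, mu0=1e-8, gamma=3e-4):
    """Trap-free SCLC: J = (9/8) * eps * mu(E) * V^2 / L^3, with the
    Poole-Frenkel mobility mu(E) = mu0 * exp(gamma * sqrt(E)).
    V: applied voltage (V); L: film thickness (m); mu0 in m^2/(V s)."""
    E = V / L                                  # mean electric field, V/m
    mu = mu0 * np.exp(gamma * np.sqrt(E))      # field-activated mobility
    return 9.0 / 8.0 * eps_r * EPS0 * mu * V ** 2 / L ** 3

V = np.linspace(0.1, 10.0, 50)                 # sweep across a 100 nm film
J = scl_current_density(V, L=100e-9)

Fitting log J against log V (a slope near 2 in the trap-free regime) and repeating the fit over temperature is the usual way such data are reduced to mu0 and gamma.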
Abstract:
In 2009, Religious Education is a designated key learning area in Catholic schools in the Archdiocese of Brisbane and, indeed, across Australia. Over the years, though, different conceptualisations of the nature and purpose of religious education have led to the construction of different approaches to the classroom teaching of religion. By investigating the development of religious education policy in the Archdiocese of Brisbane from 1984 to 2003, the study seeks to trace the emergence of new discourses on religious education. The study understands religious education to refer to a lifelong process that occurs through a variety of forms (Moran, 1989). In Catholic schools, it refers both to co-curricular activities, such as retreats and school liturgies, and to the classroom teaching of religion. It is the policy framework for the classroom teaching of religion that this study explores. The research was undertaken using a policy case study approach to gain a detailed understanding of how new conceptualisations of religious education emerged at a particular site of policy production, in this case, the Archdiocese of Brisbane. The study draws upon Yeatman's (1998) description of policy as occurring "when social actors think about what they are doing and why in relation to different and alternative possible futures" (p. 19) and views policy as consisting of more than texts themselves. Policy texts result from struggles over meaning (Taylor, 2004) in which specific discourses are mobilised to support particular views. The study has a particular interest in the analysis of Brisbane religious education policy texts, the discursive practices that surrounded them, and the contexts in which they arose. Policy texts are conceptualised in the study as representing "temporary settlements" (Gale, 1999). Such settlements are asymmetrical, temporary and dependent on context: asymmetrical in that dominant actors are favoured; temporary because dominant actors are always under challenge by other actors in the policy arena; and context-dependent because new situations require new settlements. To investigate the official policy documents, the study used Critical Discourse Analysis (hereafter referred to as CDA) as a research tool that affords researchers the opportunity to map and chart the emergence of new discourses within the policy arena. As developed by Fairclough (2001), CDA is a three-dimensional application of critical analysis to language. In the Brisbane religious education arena, policy texts formed a genre chain (Fairclough, 2004; Taylor, 2004) which was a focus of the study. There are two features of texts that form genre chains: texts are systematically linked to one another, and systematic relations of recontextualisation exist between the texts. Fairclough's (2005) concepts of "imaginary space" and "frameworks for action" (p. 65) within the policy arena were applied to the Brisbane policy arena to investigate the relationship between policy statements and subsequent guidelines documents. Five key findings emerged from the study. First, application of CDA to policy documents revealed that a fundamental reconceptualisation of the nature and purpose of classroom religious education in Catholic schools occurred in the Brisbane policy arena over the last twenty-five years. Second, a disjuncture existed between catechetical discourses that continued to shape religious education policy statements, and educational discourses that increasingly shaped guidelines documents.
Third, recontextualisation between policy documents was evident and dependent on the particular context in which religious education occurred. Fourth, at subsequent links in the chain, actors created their own “imaginary space”, thereby altering orders of discourse within the policy arena, with different actors being either foregrounded or marginalised. Fifth, intertextuality was more evident in the later links in the genre chain (i.e. 1994 policy statement and 1997 guidelines document) than in earlier documents. On the basis of the findings of the study, six recommendations are made. First, the institutional Church should carefully consider the contribution that the Catholic school can make to the overall pastoral mission of the diocese in twenty-first century Australia. Second, policymakers should articulate a nuanced understanding of the relationship between catechesis and education with regard to the religion classroom. Third, there should be greater awareness of the connections among policies relating to Catholic schools – especially the connection between enrolment policy and religious education policy. Fourth, there should be greater consistency between policy documents. Fifth, policy documents should be helpful for those to whom they are directed (i.e. Catholic schools, teachers). Sixth, “imaginary space” (Fairclough, 2005) in policy documents needs to be constructed in a way that allows for multiple “frameworks for action” (Fairclough, 2005) through recontextualisation. The findings of this study are significant in a number of ways. For religious educators, the study highlights the need to develop a shared understanding of the nature and purpose of classroom religious education. It argues that this understanding must take into account the multifaith nature of Australian society and the changing social composition of Catholic schools themselves. Greater recognition should be given to the contribution that religious studies courses such as Study of Religion make to the overall religious development of a person. In view of the social composition of Catholic schools, there is also an issue of ecclesiological significance concerning the conceptualisation of the relationship between the institutional Catholic Church and Catholic schools. Finally, the study is of significance because of its application of CDA to religious education policy documents. Use of CDA reveals the foregrounding, marginalising, or excluding of various actors in the policy arena.
Abstract:
Background: Most Australians die in institutions, and there is evidence to suggest that the care of these patients is not always optimal. Care pathways for the dying have been designed to transfer benchmarked hospice care to other settings (e.g. acute hospitals and residential aged-care facilities) by defining goals of best care, providing guidelines on delivering that care, and documenting outcomes. Method: A retrospective audit was undertaken across a network of health-care institutions in Queensland. The 18 goals considered essential for the care of the dying within the Liverpool Care Pathway were taken as a benchmark, and documentation of the achievement of each of these goals was sought. Results: The notes of 160 patients who had died in eight institutions (four hospitals, three hospices, one nursing home) were reviewed. Several areas for improvement were identified, particularly in the goals relating to communication, resuscitation orders and care after death. Few units documented the provision of written information to families. Most patients were prescribed medications in anticipation of pain and agitation, but fewer were prescribed drugs for the other symptoms common in the dying. Most of the goals were achieved in a higher percentage of cases in hospice units, and marked differences in practice were noted between institutions. Conclusion: The audit identified several aspects of the care of the terminally ill that could be improved. End-stage pathways may provide a model for improving the care of patients dying in hospitals and institutions in Australia.
Abstract:
Now in its sixth edition, the Traffic Engineering Handbook continues to be a must-have publication in the transportation industry, as it has been for the past 60 years. The new edition provides updated information for people entering the practice and for those already practicing. The handbook is a convenient desk reference, as well as an all-in-one source of principles and proven techniques in traffic engineering. Most chapters are presented in a new format which divides the chapters into four areas: basics, current practice, emerging trends and information sources. Chapter topics include road users, vehicle characteristics, statistics, planning for operations, communications, safety, regulations, traffic calming, access management, geometrics, signs and markings, signals, parking, traffic demand, maintenance and studies. In addition, as the focus in transportation has shifted from project-based to operations-based, two new chapters have been added: "Planning for Operations" and "Managing Traffic Demand to Address Congestion: Providing Travelers with Choices." The Traffic Engineering Handbook continues to be one of the primary reference sources for study to become a certified Professional Traffic Operations Engineer™. Chapters are authored by notable and experienced authors, and reviewed and edited by a distinguished panel of traffic engineering experts.
Abstract:
Seven endemic governance problems are shown to be present in governments around the globe and at every level of government (for example, municipal and federal). These problems have roots that can be traced back through more than two thousand years of political, specifically 'democratic', history. The evidence shows that accountability, transparency, corruption, representation, campaigning methods, constitutionalism and long-term goals were problematic for the ancient Athenians as well as for modern international democratisation efforts encompassing every major global region. Why then, given the extended time period humans have had to deal with these problems, are they still present? At least part of the answer is that philosophers, academics, NGOs and MNOs have approached these endemic problems only in a piecemeal manner and with a skewed perspective on democracy, and their works have been subject to the ebbs and flows of human history, which essentially started and stopped periods of thinking. In order to investigate endemic problems in relation to democracy (the overall quest of this thesis being to generate prescriptive results for the improvement of democratic government), it was necessary to delineate what exactly is being written about when using the term 'democracy'. It is common knowledge that democracy has no one specific definition or practice, even though scholars and philosophers have been attempting to create a definition for generations. What is currently evident is that scholars are not approaching democracy in an overly simplified manner (that is, government for the people, by the people) but rather are seeking the commonalities that democracies share; in other words, those items which are common to all things democratic. Following that line of investigation, the major practiced and theoretical versions of democracy were thematically analysed. Their themes were then collapsed into larger categories, which were in turn comparatively analysed against the practiced and theoretical versions of democracy. Four democratic 'particles' (selecting officials, law, equality and communication) were found to be present in all practiced and theoretical democratic styles. The democratic particles, fused with a unique investigative perspective and in-depth political study, created a solid conceptualisation of democracy. As such, it is argued that democracy is an ever-present element of any state government, 'democratic' or not, and the particles are the bodies which comprise the democratic element. Frequency- and proximity-based analyses showed that democratic particles are related to endemic problems in international democratisation discourse. The linkages between democratic particles and endemic problems were also evident in the thematic analysis as well as in the historical review. This ultimately led to the viewpoint that mitigating endemic problems may improve democratic particles, which might strengthen the element of democracy in the governing apparatus of any state. This may actively minimise or wholly displace inefficient forms of government, leading to a government specifically tailored to the population it orders.
Once the theoretical and empirical goals were attained, this thesis provided some prescriptive measures which government, civil society, academics, professionals and/or active citizens can use to mitigate endemic problems (in any country and at any level of government) so as to improve the human condition via better democratic government.
Abstract:
The traditional searching method for model-order selection in linear regression is a nested, full-parameter-set searching procedure over the desired orders, which we call full-model order selection. By contrast, a method for model selection searches for the best sub-model within each order. In this paper, we propose using the model-selection searching method for model-order selection, which we call partial-model order selection. We show by simulations that the proposed searching method gives better accuracy than the traditional one, especially at low signal-to-noise ratios, over a wide range of model-order selection criteria (both information-theoretic and bootstrap-based). We also show that for some models the performance of the bootstrap-based criterion improves significantly with the proposed partial-model selection searching method.
Index Terms: model order estimation, model selection, information-theoretic criteria, bootstrap.
Introduction: Several model-order selection criteria can be applied to find the optimal order. Some of the more commonly used information-theoretic procedures include Akaike's information criterion (AIC) [1], corrected Akaike (AICc) [2], minimum description length (MDL) [3], normalized maximum likelihood (NML) [4], the Hannan-Quinn criterion (HQC) [5], conditional model-order estimation (CME) [6], and the efficient detection criterion (EDC) [7]. From a practical point of view, it is difficult to decide which model-order selection criterion to use. Many of them perform reasonably well when the signal-to-noise ratio (SNR) is high; the discrepancies in their performance, however, become more evident when the SNR is low. In those situations, the performance of a given technique is determined not only by the model structure (say, a polynomial trend versus a Fourier series) but, more importantly, by the relative values of the parameters within the model. This makes comparison between model-order selection algorithms difficult, as within the same model at a given order one can find examples for which a method performs favourably or fails [6, 8]. Our aim is to improve the performance of model-order selection criteria at low SNR by considering a model-selection searching procedure that takes into account not only the full-model order search but also a partial-model search within each given order. Understandably, the improvement in model-order estimation comes at the expense of additional computational complexity.
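A minimal Python sketch of the partial-model idea: for each order, every sub-model (column subset) is scored, and the overall winner fixes both the order and the included regressors. The criterion here is AIC for Gaussian linear regression, and the polynomial example data are illustrative assumptions, not the paper's simulations.

import itertools
import numpy as np

def aic(y, X):
    """AIC for least-squares regression of y on the columns of X."""
    n, k = X.shape
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    rss = np.sum((y - X @ beta) ** 2)
    return n * np.log(rss / n) + 2 * k

def partial_model_order_selection(y, X_full):
    """Search all column subsets of every size and return the best
    (score, columns); the selected order is the size of that subset."""
    p = X_full.shape[1]
    best = (np.inf, None)
    for k in range(1, p + 1):
        for cols in itertools.combinations(range(p), k):
            score = aic(y, X_full[:, cols])
            if score < best[0]:
                best = (score, cols)
    return best

rng = np.random.default_rng(0)
t = np.linspace(0.0, 1.0, 100)
X_full = np.vander(t, 6, increasing=True)      # candidate powers t^0 .. t^5
y = 2.0 * t - 1.5 * t ** 3 + 0.3 * rng.standard_normal(100)   # sparse truth
score, cols = partial_model_order_selection(y, X_full)
print("selected columns", cols, "AIC", round(score, 1))

The full-model search corresponds to restricting cols to the nested prefixes (0,), (0, 1), (0, 1, 2), and so on; the subset search trades up to 2^p - 1 model evaluations for the accuracy gains reported above.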
Abstract:
All Australian governments recognize the need to ensure that land and natural resources are used sustainably. In this context, ‘resources’ includes natural resources found on land such as trees and other vegetation, fauna, soil and minerals, and cultural resources found on land such as archaeological sites and artefacts. Regulators use a wide range of techniques to promote sustainability. To achieve their objectives, they may, for example, create economic incentives through bounties, grants and subsidies, encourage the development of self-regulatory codes, or enter into agreements with landowners specifying how the land is to be managed. A common way of regulating is by making administrative orders, determinations or decisions under powers given to regulators by Acts of Parliament (statutes) or by regulations (delegated legislation). Generally the legislation provides for specified rights or duties, and authorises a regulator to make an order or decision to apply the legislative provisions to particular land or cases. For example, legislation might empower a regulator to make an order that requires the owner of a contaminated site to remediate it. When the regulator exercises the power by making an order in relation to particular land, the owner is placed under a statutory duty to remediate. When regulators exercise their statutory powers to manage the use of private land or natural or cultural resources on private land, property law issues can arise. The owner of land has a private property right that the law will enforce against anybody else who interferes with the enjoyment of the right, without legal authority to do so. The law dealing with the enforcement of private property rights forms part of private law. This report focuses on the relationship between the law of private property and the regulation of land and resources by legislation and by administrative decisions made under powers given by legislation (statutory powers).
Abstract:
The enforcement of intellectual property rights poses one of the greatest current threats to the privacy of individuals online. Recent trends have shown that the balance between privacy and intellectual property enforcement has shifted in favour of intellectual property owners. This article discusses the ways in which the scope of preliminary discovery and Anton Piller orders has been overly expanded in actions where large amounts of electronic information are available, especially against online intermediaries (service providers and content hosts). The victim in these cases is usually the end user, whose privacy is infringed without a right of reply and sometimes without notice. This article proposes some ways in which the delicate balance can be restored, and considers some safeguards for user privacy. These safeguards include restructuring the threshold tests for discovery, limiting the scope of information disclosed, distinguishing identity discovery from information discovery, and distinguishing information preservation from preliminary discovery.