939 results for PART I


Relevance:

60.00%

Publisher:

Abstract:

This reader in popular cultural studies meets the need for an up-to-date collection of readings on contemporary youth cultures and youth music. Table of Contents: Introduction: Reading Pop(ular) Cult(ural) Stud(ie)s: Steve Redhead. Part I: Theory I. 1. Pearls and Swine: Intellectuals and the Mass Media: Simon Frith and Jon Savage. 2. Over-the-Counter Culture: Retheorising Resistance in Popular Culture: Beverly Best. Part II: Commentaries. 3. Organised Disorder: The Changing Space of the Record Shop: Will Straw. 4. Spatial Politics: A Gendered Sense of Place: Cressida Miles. 5. Let's All Have a Disco? Football, Popular Music and Democratisation: Adam Brown. 6. Rave Culture: Living Dream or Living Death?: Simon Reynolds. 7. Fear and Loathing in Wisconsin: Sarah Champion. 8. The House Sound of Chicago: Hillegonda Rietveld. 9. Cocaine Girls: Marek Kohn. 10. In the Supermarket of Style: Ted Polhemus. 11. Love Factory: The Sites, Practices and Media Relationships of Northern Soul: Kate Milestone. 12. DJ Culture: Dave Haslam. Plates: Patrick Henry. Part III: Theory II. 13. The Post-Subculturalist: David Muggleton. 14. Reading Pop: The Press, the Scholar and the Consequences of Popular Cultural Studies: Steve Jones. 15. Re-placing Popular Culture: Lawrence Grossberg. Index.

Relevance:

60.00%

Publisher:

Abstract:

Financial processes may possess long memory, and their probability densities may display heavy tails. Many models have been developed to deal with this tail behaviour, which reflects the jumps in the sample paths. On the other hand, the presence of long memory, which contradicts the efficient market hypothesis, is still an issue for further debate. These difficulties present the challenges of memory detection and of modelling the co-presence of long memory and heavy tails. This PhD project aims to respond to these challenges.

The first part aims to detect memory in a large number of financial time series of stock prices and exchange rates using their scaling properties. Since financial time series often exhibit stochastic trends, a common form of nonstationarity, strong trends in the data can lead to false detection of memory. We take advantage of a technique known as multifractal detrended fluctuation analysis (MF-DFA), which can systematically eliminate trends of different orders. This method is based on identifying the scaling of the q-th-order moments and is a generalisation of standard detrended fluctuation analysis (DFA), which uses only the second moment, that is, q = 2. We also consider rescaled range (R/S) analysis and the periodogram method to detect memory in financial time series and compare their results with the MF-DFA. An interesting finding is that short memory is detected for stock prices of the American Stock Exchange (AMEX), while long memory is found in the time series of two exchange rates, namely the French franc and the Deutsche mark. Electricity price series of five states of Australia are also found to possess long memory, and for these series heavy tails are also pronounced in the probability densities.

The second part of the thesis develops models to represent the short-memory and long-memory financial processes detected in Part I. These models take the form of continuous-time AR(∞)-type equations whose kernel is the Laplace transform of a finite Borel measure. By imposing appropriate conditions on this measure, short memory or long memory in the dynamics of the solution will result. A specific form of the models, which has a good MA(∞)-type representation, is presented for the short-memory case. Parameter estimation for this type of model is performed via least squares, and the models are applied to the AMEX stock prices, which were established in Part I to possess short memory. By selecting the kernel of the continuous-time AR(∞)-type equations to have the form of a Riemann-Liouville fractional derivative, we obtain a fractional stochastic differential equation driven by Brownian motion. This type of equation is used to represent financial processes with long memory, whose dynamics are described by the fractional derivative in the equation. These models are estimated via quasi-likelihood, namely via a continuous-time version of the Gauss-Whittle method, and are applied to the exchange rates and the electricity prices of Part I with the aim of confirming the long-range dependence established by MF-DFA.

The third part of the thesis applies the results established in Parts I and II to characterise and classify financial markets. We pay attention to the New York Stock Exchange (NYSE), the American Stock Exchange (AMEX), the NASDAQ Stock Exchange (NASDAQ) and the Toronto Stock Exchange (TSX). The parameters from MF-DFA and those of the short-memory AR(∞)-type models are employed in this classification. We propose the Fisher discriminant algorithm to find a classifier in the two- and three-dimensional spaces of the data sets and then use cross-validation to verify discriminant accuracy. This classification is useful for understanding and predicting the behaviour of different processes within the same market.

The fourth part of the thesis investigates the heavy-tailed behaviour of financial processes which may also possess long memory. We consider fractional stochastic differential equations driven by stable noise to model financial processes such as electricity prices. The long memory of electricity prices is represented by a fractional derivative, while the stable noise input models their non-Gaussianity via the tails of the probability density. A method using the empirical densities and MF-DFA is provided to estimate all the parameters of the model and simulate sample paths of the equation. The method is then applied to analyse daily spot prices for five states of Australia, and comparisons with the results obtained from R/S analysis, the periodogram method and MF-DFA are provided. The results from the fractional SDEs agree with those from MF-DFA, which are based on multifractal scaling, while those from the periodograms, which are based on second-order properties, seem to underestimate the long-memory dynamics of the process. This highlights the need for, and usefulness of, fractal methods in modelling non-Gaussian financial processes with long memory.
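As a rough illustration of the scaling idea behind MF-DFA described above, here is a minimal NumPy sketch (not the thesis code): it integrates the series, detrends segments at varying scales, forms the q-th-order fluctuation function F_q(s), and reads the generalised Hurst exponent h(q) off a log-log regression. Forward-only segmentation, q ≠ 0, and the scale grid and detrending order are simplifying, illustrative assumptions.

```python
# Minimal MF-DFA sketch (illustrative assumptions: forward-only
# segmentation, q != 0, arbitrary scale grid and detrending order).
import numpy as np

def mfdfa_hurst(x, q=2.0, order=1, scales=None):
    """Estimate the generalised Hurst exponent h(q) of the series x.

    For a stationary series, h(2) near 0.5 suggests short/no memory,
    while h(2) > 0.5 suggests long memory.
    """
    x = np.asarray(x, dtype=float)
    if scales is None:
        scales = np.unique(
            np.logspace(np.log10(16), np.log10(len(x) // 4), 12).astype(int))
    profile = np.cumsum(x - x.mean())          # integrated (profile) series
    fq = []
    for s in scales:
        n_seg = len(profile) // s
        segs = profile[:n_seg * s].reshape(n_seg, s)
        t = np.arange(s)
        var = np.empty(n_seg)
        for v, seg in enumerate(segs):         # detrend each segment
            coef = np.polyfit(t, seg, order)
            var[v] = np.mean((seg - np.polyval(coef, t)) ** 2)
        # q-th order fluctuation function F_q(s)
        fq.append(np.mean(var ** (q / 2.0)) ** (1.0 / q))
    # h(q) is the slope of log F_q(s) versus log s
    slope, _ = np.polyfit(np.log(scales), np.log(fq), 1)
    return slope

# Sanity check: white noise should give h(2) close to 0.5.
rng = np.random.default_rng(0)
print(mfdfa_hurst(rng.standard_normal(4096)))
```

Full MF-DFA additionally segments the series from both ends and scans a range of q values; h(q) varying with q is the signature of multifractality.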

Relevance:

60.00%

Publisher:

Abstract:

In this article we survey relevant international literature on the issue of parental liability and responsibility for the crimes of young offenders. In addition, as a starting point for needed cross-jurisdictional research, we focus on the different approaches that have been taken to making parents responsible for youth crime in Australia and Canada. This comparative analysis of Australian and Canadian legislative and policy approaches is situated within a broader discussion of arguments about parental responsibility, the ‘punitive turn’ in youth justice, and cross-jurisdictional criminal justice policy transfer and convergence. One unexpected finding of our literature survey is the relatively sparse attention given to the issue of parental responsibility for youth crime in the legal and criminological literature compared to the attention it receives in the media and popular-public culture. In Part I we examine the different views that have been articulated in the social science literature for and against parental responsibility laws, along with arguments about why such laws have been enacted in an increasing number of Western countries in recent years. In Part II, we situate our comparative study of Australian and Canadian legislative and policy approaches within a broader discussion of arguments about the ‘punitive turn’ in youth justice, responsibilisation, and cross-jurisdictional criminal justice policy transfer and convergence. In Part III, we identify and examine the scope of the different parental responsibility laws that have been enacted in Australia and Canada, noting significant differences in the manner and extent to which parental responsibility laws and policies have been invoked as part of the solution to youth crime. In our concluding discussion, in Part IV, we speculate on some of the reasons for these differences and set an agenda for needed future research on the topic.

Relevance:

60.00%

Publisher:

Abstract:

The concept of "fair basing" is widely acknowledged as a difficult area of patent law. This article maps the development of fair basing law to demonstrate how some of the difficulties have arisen. Part I of the article traces the development of the branches of patent law that were swept under the nomenclature of "fair basing" by British legislation in 1949. It looks at the early courts' approach to patent construction and examines the early origin of fair basing and what it was intended to achieve. Part II of the article considers the modern interpretation of fair basing, which provides a striking contrast with its historical context. Without any consistent judicial approach to construction, the doctrine has developed inappropriately, giving rise to both over-strict and over-generous approaches.

Relevance:

60.00%

Publisher:

Abstract:

Part I of this book covers the commercial and contractual background to technology licensing agreements. Part II discusses the European Community's new regime on the application and enforcement of Article 81 to technology licensing agreements. EC Council Regulation 1/2003 replaced Council Regulation 17/1962 and repealed the system under which restrictive agreements and practices could be notified to the EC Commission, and a new Commission regulation on technology transfer agreements, Regulation 772/2004, was introduced. These two enactments required consequential amendments to the chapters in Part III, where the usual terms of technology licensing agreements are analysed and exemplified by reference to decided cases.

Relevance:

60.00%

Publisher:

Abstract:

This thesis deals with the problem of instantaneous frequency (IF) estimation of sinusoidal signals. This topic plays a significant role in signal processing and communications. Depending on the type of signal, two major approaches are considered. For IF estimation of single-tone or digitally modulated sinusoidal signals (such as frequency shift keying signals), the approach of digital phase-locked loops (DPLLs) is considered; this is Part-I of the thesis. For FM signals the approach of time-frequency analysis is considered; this is Part-II of the thesis.

In Part-I we have utilized sinusoidal DPLLs with a non-uniform sampling scheme, as this type is widely used in communication systems. The digital tanlock loop (DTL) introduced significant advantages over other existing DPLLs, and in the last ten years many efforts have been made to improve DTL performance. However, this loop and all of its modifications utilize a Hilbert transformer (HT) to produce a signal-independent 90-degree phase-shifted version of the input signal. An HT can be realized approximately using a finite impulse response (FIR) digital filter, but this realization introduces further complexity in the loop, in addition to approximations and frequency limitations on the input signal. We have tried to avoid the practical difficulties associated with the conventional tanlock scheme while keeping its advantages. A time delay is utilized in the tanlock scheme of the DTL to produce a signal-dependent phase shift, giving rise to the time-delay digital tanlock loop (TDTL). Fixed-point theorems are used to analyze the behavior of the new loop. As such, the TDTL combines the two major approaches in DPLLs: the non-linear approach of the sinusoidal DPLL, based on fixed-point analysis, and the linear tanlock approach, based on arctan phase detection. The TDTL preserves the main advantages of the DTL despite its reduced structure. An application of the TDTL in FSK demodulation is also considered. The idea of replacing the HT by a time delay may be of interest in other signal processing systems, so we have analyzed and compared the behaviors of the HT and the time delay in the presence of additive Gaussian noise. Based on this analysis, the behavior of the first- and second-order TDTLs has been analyzed in additive Gaussian noise.

Since DPLLs need time for locking, they are normally not efficient in tracking the continuously changing frequencies of non-stationary signals, i.e. signals with time-varying spectra. Non-stationary signals are important in both synthetic and real-life applications; an example is the frequency-modulated (FM) signals widely used in communication systems. Part-II of this thesis is dedicated to the IF estimation of non-stationary signals. For such signals the classical spectral techniques break down, due to the time-varying nature of their spectra, and more advanced techniques must be utilized. For the purpose of IF estimation of non-stationary signals there are two major approaches: parametric and non-parametric. We chose the non-parametric approach, which is based on time-frequency analysis; it is computationally less expensive and more effective in dealing with multicomponent signals, which are the main focus of this part of the thesis. A time-frequency distribution (TFD) of a signal is a two-dimensional transformation of the signal to the time-frequency domain, in which multicomponent signals can be identified by multiple energy peaks. Many real-life and synthetic signals are of a multicomponent nature, and there is little in the literature concerning IF estimation of such signals; this is why we have concentrated on multicomponent signals in Part-II. An adaptive algorithm for IF estimation using quadratic time-frequency distributions has been analyzed, and a class of time-frequency distributions more suitable for this purpose has been proposed. The kernels of this class are time-only (one-dimensional), rather than time-lag (two-dimensional) kernels; hence this class has been named the T-class. If the parameters of these TFDs are properly chosen, they are more efficient than the existing fixed-kernel TFDs in terms of resolution (energy concentration around the IF) and artifact reduction. The T-distributions have been used in the adaptive IF algorithm and have proved efficient in tracking rapidly changing frequencies. They also enable direct amplitude estimation for the components of a multicomponent signal.
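To make the non-parametric, time-frequency approach of Part-II concrete, here is a minimal sketch of peak-based IF estimation. A plain spectrogram stands in for the adaptive quadratic TFDs (the T-class) developed in the thesis, and the chirp test signal and all parameters are invented for the example.

```python
# Peak-of-TFD IF estimation sketch; the spectrogram is only a stand-in
# for the quadratic TFDs analysed in the thesis, and all parameters
# below are illustrative.
import numpy as np
from scipy.signal import spectrogram

fs = 1000.0                                   # sampling rate (Hz)
t = np.arange(0, 1.0, 1.0 / fs)
# Linear FM (chirp): phase 2*pi*(50 t + 100 t^2), so IF = 50 + 200 t Hz.
x = np.cos(2 * np.pi * (50 * t + 100 * t ** 2))

f, tt, Sxx = spectrogram(x, fs=fs, nperseg=128, noverlap=120)
if_hat = f[np.argmax(Sxx, axis=0)]            # frequency of the energy peak
if_true = 50 + 200 * tt                       # analytic IF of the chirp

print("mean absolute IF error (Hz):", np.abs(if_hat - if_true).mean())
```

For a multicomponent signal the same idea extends to tracking several local peaks per time slice, which is where the resolution and artifact-reduction properties of the T-class TFDs become important.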

Relevance:

60.00%

Publisher:

Abstract:

Signal Processing (SP) is a subject of central importance in engineering and the applied sciences. Signals are information-bearing functions, and SP deals with the analysis and processing of signals (by dedicated systems) to extract or modify information. Signal processing is necessary because signals normally contain information that is not readily usable or understandable, or which might be disturbed by unwanted sources such as noise. Although many signals are non-electrical, it is common to convert them into electrical signals for processing. Most natural signals (such as acoustic and biomedical signals) are continuous functions of time, and these signals are referred to as analog signals. Prior to the onset of digital computers, Analog Signal Processing (ASP) and analog systems were the only tools for dealing with analog signals. Although ASP and analog systems are still widely used, Digital Signal Processing (DSP) and digital systems are attracting more attention, due in large part to the significant advantages of digital systems over their analog counterparts. These advantages include superiority in performance, speed, reliability, efficiency of storage, size and cost. In addition, DSP can solve problems that cannot be solved using ASP, like the spectral analysis of multicomponent signals, adaptive filtering, and operations at very low frequencies. Following the developments in engineering which occurred in the 1980s and 1990s, DSP became one of the world's fastest growing industries. Since that time DSP has not only impacted on traditional areas of electrical engineering, but has had far-reaching effects on other domains that deal with information, such as economics, meteorology, seismology, bioengineering, oceanology, communications, astronomy, radar engineering, control engineering and various other applications. This book is based on the lecture notes of Associate Professor Zahir M. Hussain at RMIT University (Melbourne, 2001-2009), the research of Dr. Amin Z. Sadik (at QUT & RMIT, 2005-2008), and the notes of Professor Peter O'Shea at Queensland University of Technology. Part I of the book addresses the representation of analog and digital signals and systems in the time domain and in the frequency domain. The core topics covered are convolution, transforms (Fourier, Laplace, Z, discrete-time Fourier, and discrete Fourier), filters, and random signal analysis. There is also a treatment of some important applications of DSP, including signal detection in noise, radar range estimation, banking and financial applications, and audio effects production. The design and implementation of digital systems (such as integrators, differentiators, resonators and oscillators) are also considered, along with the design of conventional digital filters. Part I is suitable for an elementary course in DSP. Part II (which is suitable for an advanced signal processing course) considers selected signal processing systems and techniques. Core topics covered are the Hilbert transformer, binary signal transmission, phase-locked loops, sigma-delta modulation, noise shaping, quantization, adaptive filters, and non-stationary signal analysis. Part III presents some selected advanced DSP topics. We hope that this book will contribute to the advancement of engineering education and that it will serve as a general reference book on digital signal processing.
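As a small taste of the Part I core topics (convolution and digital filter design), the sketch below builds a windowed-sinc FIR low-pass filter and applies it to a noisy sinusoid by direct convolution. The sampling rate, cutoff and tap count are illustrative choices, not taken from the book.

```python
# FIR low-pass filtering by convolution (all parameters illustrative).
import numpy as np

fs = 8000.0                          # sampling rate (Hz)
t = np.arange(0, 0.05, 1.0 / fs)
rng = np.random.default_rng(1)
clean = np.sin(2 * np.pi * 440 * t)  # 440 Hz tone
x = clean + 0.5 * rng.standard_normal(t.size)

# Windowed-sinc FIR low-pass: cutoff 1 kHz, 101 taps, Hamming window.
numtaps, fc = 101, 1000.0
n = np.arange(numtaps) - (numtaps - 1) / 2
h = (2 * fc / fs) * np.sinc(2 * fc / fs * n) * np.hamming(numtaps)

y = np.convolve(x, h, mode="same")   # time-domain filtering = convolution
print("noise power before:", np.var(x - clean))
print("noise power after: ", np.var(y - clean))
```

Multiplying the ideal sinc response by a Hamming window trades a wider transition band for lower stopband ripple, the standard compromise in conventional windowed filter design.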

Relevance:

60.00%

Publisher:

Abstract:

Process mining techniques are able to extract knowledge from event logs commonly available in today’s information systems. These techniques provide new means to discover, monitor, and improve processes in a variety of application domains. There are two main drivers for the growing interest in process mining. On the one hand, more and more events are being recorded, thus providing detailed information about the history of processes. On the other hand, there is a need to improve and support business processes in competitive and rapidly changing environments. This manifesto is created by the IEEE Task Force on Process Mining and aims to promote the topic of process mining. Moreover, by defining a set of guiding principles and listing important challenges, this manifesto hopes to serve as a guide for software developers, scientists, consultants, business managers, and end-users. The goal is to increase the maturity of process mining as a new tool to improve the (re)design, control, and support of operational business processes.
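As a simplified illustration of what extracting knowledge from an event log can mean in practice, the sketch below counts the directly-follows relation between activities in a toy log; this relation is the raw material for many control-flow discovery algorithms. The log and activity names are invented, and the manifesto itself prescribes no particular algorithm.

```python
# Counting the directly-follows relation in an event log (toy example).
from collections import Counter

# Each trace is the ordered list of activities observed for one case.
event_log = [
    ["register", "check", "decide", "pay"],
    ["register", "check", "check", "decide", "reject"],
    ["register", "decide", "pay"],
]

dfg = Counter()
for trace in event_log:
    for a, b in zip(trace, trace[1:]):
        dfg[(a, b)] += 1             # activity a is directly followed by b

for (a, b), count in sorted(dfg.items()):
    print(f"{a} -> {b}: {count}")
```

Discovery techniques then turn such counts into a process model, while the monitoring and improvement tasks the manifesto describes build on the same log data.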

Relevance:

60.00%

Publisher:

Abstract:

This paper explores how the design of creative clusters, a key strategy in promoting the urban creative economy, has played out in Shanghai. Creative clusters in Europe and North America emerged ‘organically’: they developed spontaneously in cities that went through a period of post-industrial decline. Creative industries grew up in these cities as part of a new urban economy in the wake of old manufacturing industries. Artists and creative entrepreneurs moved into vacant warehouses and factories and began the trend of ‘creative clusters’. Such clusters facilitate the transfer of tacit knowledge through informal learning, the efficient sourcing of skills and information, competition, collaboration and learning, and inter-cluster trading and networking. This new urban phenomenon was soon targeted by local economic development policy charged with regenerating and restructuring industrial activities in cities. Rising interest from real estate and local economic development has led to more and more planned creative clusters. Aiming to catch up with the world’s creative cities, Shanghai has planned over 100 creative clusters since 2005. Alongside these officially designated creative clusters are organically emerged clusters that are much smaller in scale and much more informal in their management. They emerged originally in old residential areas just outside the CBD and expanded into the French Concession, the most sought-after residential area at the edge of the CBD. More recently, office buildings within the CBD have been made available for creative uses. From fringe to CBD, these organic creative clusters provide crucial evidence for the design of creative clusters. This paper is organized in two parts. In the first part, I present a case study of 8 ‘official’ clusters (a title granted by local government) in Shanghai, through which I hope to develop some key indicators of the success or failure of creative clusters and to link them with their physical, social and operational efficacies. In the second part, a variety of ‘alternative’ clusters (organically formed clusters, most of which are not recognized by the government) offers the possibility of rethinking the so-called ‘cluster development strategy’: what kinds of spaces are appropriate for use by clusters? Who should manage them, and in what format? And ultimately, how should their relationship with the rest of the city be defined?

Relevance:

60.00%

Publisher:

Abstract:

The paper introduces the underlying principles and general features of a meta-method (the MAP method – Management & Analysis of Projects) developed as part of, and used in, various research, education and professional development programmes at ESC Lille. The method aims to provide an effective and efficient structure and process for acting and learning in various complex, uncertain and ambiguous managerial situations (projects, programmes, portfolios). The paper is organized in three parts. In the first part, I revisit the dominant vision of the project management knowledge field, based on the assumptions that it does not adequately address current business and management contexts and situations, and that competencies in the management of entrepreneurial activities are sources of value creation for organisations. Grounded in this new perspective, the second part presents the underlying concepts supporting the MAP method, seen as a ‘convention generator’, and shows how this meta-method inextricably links learning and practice in addressing managerial situations. The third part describes an example of application, illustrating with a brief case study how the method integrates project management governance, and gives a few examples of its use in management education and professional development.

Relevance:

60.00%

Publisher:

Abstract:

Australian law teachers are increasingly recognising that psychological distress is an issue for our students. This article describes how the Queensland University of Technology Law School is reforming its curriculum to promote student psychological well-being. Part I of the article examines the literature on law student psychological distress in Australia. It is suggested that cross-sectional and longitudinal studies undertaken in Australia provide us with different, but equally important, information with respect to law student psychological well-being. Part II describes a subject in the QUT Law School – Lawyering and Dispute Resolution – which has been specifically designed as one response to declines in law student psychological well-being. Part III then considers two key elements of the design of the subject: introducing students to the idea of a positive professional identity, and introducing students to non-adversarial lawyering and the positive role of lawyers in society as dispute resolvers. These two areas of focus specifically promote law student psychological well-being by encouraging students to engage with elements of positive psychology – in particular, hope and optimism.