938 results for PART II


Relevance:

60.00%

Publisher:

Abstract:

In this article we survey relevant international literature on the issue of parental liability and responsibility for the crimes of young offenders. In addition, as a starting point for needed cross-jurisdictional research, we focus on different approaches that have been taken to making parents responsible for youth crime in Australia and Canada. This comparative analysis of Australian and Canadian legislative and policy approaches is situated within a broader discussion of arguments about parental responsibility, the ‘punitive turn’ in youth justice, and cross-jurisdictional criminal justice policy transfer and convergence. One unexpected finding of our literature survey is the relatively sparse attention given to the issue of parental responsibility for youth crime in legal and criminological literature compared to the attention it receives in the media and popular-public culture. In Part I we examine the different views that have been articulated in the social science literature for and against parental responsibility laws, along with arguments that have been made about why such laws have been enacted in an increasing number of Western countries in recent years. In Part II, we situate our comparative study of Australian and Canadian legislative and policy approaches within a broader discussion of arguments about the ‘punitive turn’ in youth justice, responsibilisation, and cross-jurisdictional criminal justice policy transfer and convergence. In Part III, we identify and examine the scope of different parental responsibility laws that have been enacted in Australia and Canada; noting significant differences in the manner and extent to which parental responsibility laws and policies have been invoked as part of the solution to dealing with youth crime. In our concluding discussion, in Part IV, we try to speculate on some of the reasons for these differences and set an agenda for needed future research on the topic.


The concept of "fair basing" is widely acknowledged as a difficult area of patent law. This article maps the development of fair basing law to demonstrate how some of the difficulties have arisen. Part I of the article traces the development of the branches of patent law that were swept under the nomenclature of "fair basing" by British legislation in 1949. It looks at the early courts' approach to patent construction, examines the early origin of fair basing, and considers what it was intended to achieve. Part II of the article considers the modern interpretation of fair basing, which provides a striking contrast to its historical context. Without any consistent judicial approach to construction, the doctrine has developed inappropriately, giving rise to both over-strict and over-generous approaches.


Part I of this book covers the commercial and contractual background to technology licensing agreements. Part II discusses the European Community's new regime on the application and enforcement of Article 81 to technology licensing agreements. EC Council Regulation 1/2003 replaced Council Regulation 17/1962 and repealed the system under which restrictive agreements and practices could be notified to the EC Commission. A new Commission regulation on technology transfer agreements, Regulation 772/2004, was also introduced. These two enactments required consequential amendments to the chapters in Part III, where the usual terms of technology licensing agreements are analysed and exemplified by reference to decided cases.


This thesis deals with the problem of instantaneous frequency (IF) estimation of sinusoidal signals. This topic plays a significant role in signal processing and communications. Depending on the type of signal, two major approaches are considered. For IF estimation of single-tone or digitally modulated sinusoidal signals (such as frequency shift keying signals), the approach of digital phase-locked loops (DPLLs) is considered in Part I of this thesis. For FM signals, the approach of time-frequency analysis is considered in Part II. In Part I we have utilized sinusoidal DPLLs with a non-uniform sampling scheme, as this type is widely used in communication systems. The digital tanlock loop (DTL) has introduced significant advantages over other existing DPLLs. In the last 10 years many efforts have been made to improve DTL performance. However, this loop and all of its modifications utilize a Hilbert transformer (HT) to produce a signal-independent 90-degree phase-shifted version of the input signal. A Hilbert transformer can be realized approximately using a finite impulse response (FIR) digital filter. This realization introduces further complexity in the loop, in addition to approximations and frequency limitations on the input signal. We have tried to avoid the practical difficulties associated with the conventional tanlock scheme while keeping its advantages. A time delay is utilized in the tanlock scheme of the DTL to produce a signal-dependent phase shift. This gave rise to the time-delay digital tanlock loop (TDTL). Fixed-point theorems are used to analyze the behavior of the new loop. As such, the TDTL combines the two major approaches in DPLLs: the non-linear approach of the sinusoidal DPLL based on fixed-point analysis, and the linear tanlock approach based on arctan phase detection. The TDTL preserves the main advantages of the DTL despite its reduced structure. An application of the TDTL in FSK demodulation is also considered.
This idea of replacing the HT by a time delay may be of interest in other signal processing systems. Hence we have analyzed and compared the behaviors of the HT and the time delay in the presence of additive Gaussian noise. Based on this analysis, the behavior of the first- and second-order TDTLs has been analyzed in additive Gaussian noise. Since DPLLs need time for locking, they are normally not efficient in tracking the continuously changing frequencies of non-stationary signals, i.e. signals with time-varying spectra. Non-stationary signals are of importance in synthetic and real-life applications. An example is the frequency-modulated (FM) signals widely used in communication systems. Part II of this thesis is dedicated to the IF estimation of non-stationary signals. For such signals the classical spectral techniques break down, due to the time-varying nature of their spectra, and more advanced techniques should be utilized. For the purpose of instantaneous frequency estimation of non-stationary signals there are two major approaches: parametric and non-parametric. We chose the non-parametric approach, which is based on time-frequency analysis. This approach is computationally less expensive and more effective in dealing with multicomponent signals, which are the main aim of this part of the thesis. A time-frequency distribution (TFD) of a signal is a two-dimensional transformation of the signal to the time-frequency domain. Multicomponent signals can be identified by multiple energy peaks in the time-frequency domain. Many real-life and synthetic signals are of a multicomponent nature, and there is little in the literature concerning IF estimation of such signals. This is why we have concentrated on multicomponent signals in Part II. An adaptive algorithm for IF estimation using the quadratic time-frequency distributions has been analyzed, and a class of time-frequency distributions that are more suitable for this purpose has been proposed.
The kernels of this class are time-only or one-dimensional, rather than the time-lag (two-dimensional) kernels. Hence this class has been named the T-class. If the parameters of these TFDs are properly chosen, they are more efficient than the existing fixed-kernel TFDs in terms of resolution (energy concentration around the IF) and artifact reduction. The T-distributions have been used in the adaptive IF algorithm and proved to be efficient in tracking rapidly changing frequencies. They also enable direct amplitude estimation for the components of a multicomponent signal.
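The contrast that motivates the TDTL, between a Hilbert transformer's fixed 90-degree shift and a delay's signal-dependent shift, can be sketched numerically. This is a minimal illustration with made-up sampling parameters, not the thesis's loop implementation:

```python
import math

# A Hilbert transformer shifts every frequency component by exactly 90
# degrees; a time delay tau instead shifts a tone of angular frequency
# w = 2*pi*f0 by w*tau radians -- a signal-DEPENDENT phase shift, which
# is the substitution behind the TDTL. All parameters are illustrative.
fs = 1000.0   # sampling rate (Hz)
f0 = 50.0     # input tone frequency (Hz)
tau = 0.002   # delay (s)

t = [n / fs for n in range(200)]
x_delayed = [math.sin(2 * math.pi * f0 * (tn - tau)) for tn in t]

# The same samples obtained as an explicit phase shift of w*tau radians;
# a different input frequency f0 would give a different shift, unlike
# the frequency-independent 90 degrees of the Hilbert transformer.
shift = 2 * math.pi * f0 * tau
x_shifted = [math.sin(2 * math.pi * f0 * tn - shift) for tn in t]

max_err = max(abs(a - b) for a, b in zip(x_delayed, x_shifted))
```

Here the 2 ms delay on a 50 Hz tone amounts to a 0.2π phase shift; doubling f0 would double the shift, which is exactly the signal dependence the TDTL exploits.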


Bone generation by autogenous cell transplantation in combination with a biodegradable scaffold is one of the most promising techniques being developed in craniofacial surgery. The objective of this combined in vitro and in vivo study was to evaluate the morphology and osteogenic differentiation of bone marrow-derived mesenchymal progenitor cells and calvarial osteoblasts in a two-dimensional (2-D) and three-dimensional (3-D) culture environment (Part I of this study) and their potential in combination with a biodegradable scaffold to reconstruct critical-size calvarial defects in an autologous animal model [Part II of this study; see Schantz, J.T., et al. Tissue Eng. 2003;9(Suppl. 1):S-127-S-139; this issue]. New Zealand White rabbits were used to isolate osteoblasts from calvarial bone chips and bone marrow stromal cells from iliac crest bone marrow aspirates. Multilineage differentiation potential was evaluated in a 2-D culture setting. After amplification, the cells were seeded within a fibrin matrix into a 3-D polycaprolactone (PCL) scaffold system. The constructs were cultured for up to 3 weeks in vitro and assayed for cell attachment and proliferation using phase-contrast light, confocal laser, and scanning electron microscopy and the MTS cell metabolic assay. Osteogenic differentiation was analyzed by determining the expression of alkaline phosphatase (ALP) and osteocalcin. The bone marrow-derived progenitor cells demonstrated the potential to be induced into the osteogenic, adipogenic, and chondrogenic pathways. In a 3-D environment, cell-seeded PCL scaffolds evaluated by confocal laser microscopy revealed continuous cell proliferation and homogeneous cell distribution within the scaffolds. On osteogenic induction, mesenchymal progenitor cells (12 U/L) produced significantly higher (p < 0.05) ALP activity than did osteoblasts (2 U/L); however, no significant differences were found in osteocalcin expression.
In conclusion, this study showed that the combination of a mechanically stable synthetic framework (PCL scaffolds) and a biomimetic hydrogel (fibrin glue) provides a potential matrix for bone tissue-engineering applications. Comparison of osteogenic differentiation between the two mesenchymal cell sources revealed a similar pattern.


Signal Processing (SP) is a subject of central importance in engineering and the applied sciences. Signals are information-bearing functions, and SP deals with the analysis and processing of signals (by dedicated systems) to extract or modify information. Signal processing is necessary because signals normally contain information that is not readily usable or understandable, or which might be disturbed by unwanted sources such as noise. Although many signals are non-electrical, it is common to convert them into electrical signals for processing. Most natural signals (such as acoustic and biomedical signals) are continuous functions of time, and these signals are referred to as analog signals. Prior to the onset of digital computers, Analog Signal Processing (ASP) and analog systems were the only tools for dealing with analog signals. Although ASP and analog systems are still widely used, Digital Signal Processing (DSP) and digital systems are attracting more attention, due in large part to the significant advantages of digital systems over their analog counterparts. These advantages include superiority in performance, speed, reliability, efficiency of storage, size and cost. In addition, DSP can solve problems that cannot be solved using ASP, such as the spectral analysis of multicomponent signals, adaptive filtering, and operations at very low frequencies. Following the developments in engineering which occurred in the 1980s and 1990s, DSP became one of the world's fastest growing industries. Since that time DSP has not only impacted on traditional areas of electrical engineering, but has had far-reaching effects on other domains that deal with information, such as economics, meteorology, seismology, bioengineering, oceanology, communications, astronomy, radar engineering, control engineering and various other applications. This book is based on the lecture notes of Associate Professor Zahir M. Hussain at RMIT University (Melbourne, 2001-2009), the research of Dr. Amin Z. Sadik (at QUT & RMIT, 2005-2008), and the notes of Professor Peter O'Shea at Queensland University of Technology. Part I of the book addresses the representation of analog and digital signals and systems in the time domain and in the frequency domain. The core topics covered are convolution, transforms (Fourier, Laplace, Z, Discrete-time Fourier, and Discrete Fourier), filters, and random signal analysis. There is also a treatment of some important applications of DSP, including signal detection in noise, radar range estimation, banking and financial applications, and audio effects production. Design and implementation of digital systems (such as integrators, differentiators, resonators and oscillators) are also considered, along with the design of conventional digital filters. Part I is suitable for an elementary course in DSP. Part II, which is suitable for an advanced signal processing course, considers selected signal processing systems and techniques. Core topics covered are the Hilbert transformer, binary signal transmission, phase-locked loops, sigma-delta modulation, noise shaping, quantization, adaptive filters, and non-stationary signal analysis. Part III presents some selected advanced DSP topics. We hope that this book will contribute to the advancement of engineering education and that it will serve as a general reference book on digital signal processing.
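As a taste of the first core topic listed for Part I, discrete linear convolution can be sketched in a few lines. This is a generic textbook sketch, not code drawn from the book:

```python
def convolve(x, h):
    """Discrete linear convolution: y[n] = sum_k x[k] * h[n - k]."""
    y = [0.0] * (len(x) + len(h) - 1)
    for n, xn in enumerate(x):
        for k, hk in enumerate(h):
            y[n + k] += xn * hk   # each input sample scales a shifted copy of h
    return y

# Filtering as convolution: a 3-point moving-average filter smoothing a ramp.
smoothed = convolve([1.0, 2.0, 3.0, 4.0], [1 / 3] * 3)
```

The same routine covers FIR filtering in general: choosing `h` as a filter's impulse response makes `convolve(x, h)` the filter's output.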


In this paper we construct a mathematical model for the genetic regulatory network of the lactose operon. This mathematical model contains transcription and translation of the lactose permease (LacY) and a reporter gene, GFP. The probability of transcription of LacY is determined by 14 binding states out of all 50 possible binding states of the lactose operon, based on the quasi-steady-state assumption for the binding reactions, while we calculate the probability of transcription for the reporter gene GFP based on 5 binding states out of 19 possible binding states, because the binding site O2 is missing for this reporter gene. We have tested different mechanisms for the transport of thio-methylgalactoside (TMG) and the effect of different Hill coefficients on the simulated LacY expression levels. Using this mathematical model we have reproduced one of the experimental results, with different LacY concentrations induced by different concentrations of TMG.
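The role a Hill coefficient plays in such an induction term can be illustrated with a toy version of this kind of model. All parameter values and the function names `hill` and `simulate_lacy` are hypothetical, chosen only to show the mechanism, not taken from the paper's model:

```python
def hill(tmg, K=30.0, n=2.0):
    """Fractional induction as a Hill function of external TMG.

    K is the half-saturation constant; n is the Hill coefficient,
    which controls how switch-like the induction curve is.
    """
    return tmg ** n / (K ** n + tmg ** n)

def simulate_lacy(tmg, alpha=1.0, gamma=0.1, dt=0.1, steps=1000):
    """Euler integration of a toy balance: dLacY/dt = alpha*hill(TMG) - gamma*LacY."""
    y = 0.0
    for _ in range(steps):
        y += dt * (alpha * hill(tmg) - gamma * y)
    return y
```

At steady state the toy model settles near `alpha * hill(tmg) / gamma`, so sweeping the TMG level (or the Hill coefficient `n`) traces out an induction curve, which is the qualitative experiment the abstract describes.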


Public transportation is an environment with great potential for applying innovative ubiquitous computing services to enhance user experiences. This paper provides the underpinning rationale for research that will look at how real-time passenger information systems deployed by transit authorities can provide a core platform to improve commuters’ user experiences during all stages of their journey. The proposal builds on this platform to inform the design and development of innovative social media, mobile computing and geospatial information applications, with the hope of creating fun and meaningful experiences for passengers during their everyday travel. Furthermore, we present the findings of a pilot study that aims to offer a better understanding of passengers’ activities and social interactions during their daily commute.


Australian law teachers are increasingly recognising that psychological distress is an issue for our students. This article describes how the Queensland University of Technology Law School is reforming its curriculum to promote student psychological well-being. Part I of the article examines the literature on law student psychological distress in Australia. It is suggested that cross-sectional and longitudinal studies undertaken in Australia provide us with different, but equally important, information with respect to law student psychological well-being. Part II describes a subject in the QUT Law School – Lawyering and Dispute Resolution – which has been specifically designed as one response to declines in law student psychological well-being. Part III then considers two key elements of the design of the subject: introducing students to the idea of a positive professional identity, and introducing students to non-adversarial lawyering and the positive role of lawyers in society as dispute resolvers. These two areas of focus specifically promote law student psychological well-being by encouraging students to engage with elements of positive psychology – in particular, hope and optimism.


Advances in algorithms for approximate sampling from a multivariable target function have led to solutions to challenging statistical inference problems that would otherwise not be considered by the applied scientist. Such sampling algorithms are particularly relevant to Bayesian statistics, since the target function is the posterior distribution of the unobservables given the observables. In this thesis we develop, adapt and apply Bayesian algorithms, whilst addressing substantive applied problems in biology and medicine as well as other applications. For an increasing number of high-impact research problems, the primary models of interest are often sufficiently complex that the likelihood function is computationally intractable. Rather than discard these models in favour of inferior alternatives, a class of Bayesian "likelihood-free" techniques (often termed approximate Bayesian computation (ABC)) has emerged in the last few years, which avoids direct likelihood computation by repeatedly sampling data from the model and comparing observed and simulated summary statistics. In Part I of this thesis we utilise sequential Monte Carlo (SMC) methodology to develop new algorithms for ABC that are more efficient in terms of the number of model simulations required and are almost black-box, since very little algorithmic tuning is required. In addition, we address the issue of deriving appropriate summary statistics to use within ABC via a goodness-of-fit statistic and indirect inference. Another important problem in statistics is the design of experiments: that is, how one should select the values of the controllable variables in order to achieve some design goal. The presence of parameter and/or model uncertainty is a computational obstacle when designing experiments, and can lead to inefficient designs if not accounted for correctly. The Bayesian framework accommodates such uncertainties in a coherent way.
If the amount of uncertainty is substantial, it can be of interest to perform adaptive designs in order to accrue information to make better decisions about future design points. This is of particular interest if the data can be collected sequentially. In a sense, the current posterior distribution becomes the new prior distribution for the next design decision. Part II of this thesis creates new algorithms for Bayesian sequential design to accommodate parameter and model uncertainty using SMC. The algorithms are substantially faster than previous approaches, allowing the simulation properties of various design utilities to be investigated in a more timely manner. Furthermore, the approach offers convenient estimation of Bayesian utilities and other quantities that are particularly relevant in the presence of model uncertainty. Finally, Part III of this thesis tackles a substantive medical problem. A neurological disorder known as motor neuron disease (MND) progressively causes motor neurons to lose the ability to innervate the muscle fibres, causing the muscles to eventually waste away. When this occurs the motor unit effectively ‘dies’. There is no cure for MND, and fatality often results from a lack of muscle strength to breathe. The prognosis for many forms of MND (particularly amyotrophic lateral sclerosis (ALS)) is particularly poor, with patients usually only surviving a small number of years after the initial onset of disease. Measuring the progress of diseases of the motor units, such as ALS, is a challenge for clinical neurologists. Motor unit number estimation (MUNE) is an attempt to directly assess underlying motor unit loss, rather than relying on indirect techniques such as muscle strength assessment, which is generally unable to detect progression due to the body’s natural attempts at compensation.
Part III of this thesis builds upon a previous Bayesian technique that developed a sophisticated statistical model taking into account physiological information about motor unit activation and various sources of uncertainty. More specifically, we develop a more reliable MUNE method by applying marginalisation over latent variables in order to improve the performance of a previously developed reversible jump Markov chain Monte Carlo sampler. We make other subtle changes to the model and algorithm to improve the robustness of the approach.
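The likelihood-free idea behind ABC can be conveyed by a bare-bones rejection sampler. This is an illustrative toy, not the SMC-ABC algorithms the thesis develops; the binomial example, tolerance, and function name are all made up:

```python
import random

def abc_rejection(obs_successes, n_trials, n_draws=20000, tol=1, seed=1):
    """Toy ABC: keep prior draws whose simulated summary is within tol of the data.

    We pretend the binomial likelihood is unavailable and instead simulate
    data from the model for each prior draw of theta, comparing the
    simulated summary statistic (success count) with the observed one.
    """
    rng = random.Random(seed)
    accepted = []
    for _ in range(n_draws):
        theta = rng.random()                       # draw from Uniform(0, 1) prior
        sim = sum(rng.random() < theta for _ in range(n_trials))
        if abs(sim - obs_successes) <= tol:        # summary-statistic comparison
            accepted.append(theta)
    return accepted

# Observed data: 7 successes in 10 trials.
posterior = abc_rejection(obs_successes=7, n_trials=10)
post_mean = sum(posterior) / len(posterior)
```

The accepted draws approximate the posterior; the thesis's SMC approach improves on exactly this scheme's weakness, the large number of wasted model simulations.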


Image representations derived from simplified models of the primary visual cortex (V1), such as HOG and SIFT, elicit good performance in a myriad of visual classification tasks including object recognition/detection, pedestrian detection and facial expression classification. A central question in the vision, learning and neuroscience communities regards why these architectures perform so well. In this paper, we offer a unique perspective to this question by subsuming the role of V1-inspired features directly within a linear support vector machine (SVM). We demonstrate that a specific class of such features in conjunction with a linear SVM can be reinterpreted as inducing a weighted margin on the Kronecker basis expansion of an image. This new viewpoint on the role of V1-inspired features allows us to answer fundamental questions on the uniqueness and redundancies of these features, and offer substantial improvements in terms of computational and storage efficiency.
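The reinterpretation described above rests on a simple identity: a linear feature map composed with a linear SVM is itself linear in the pixels, so the features only re-weight the margin in the input basis. A miniature version of that identity, using tiny made-up matrices rather than real V1-inspired filters or the paper's Kronecker expansion:

```python
# If the feature map is linear, phi(x) = B x (e.g. a bank of filters
# applied to an image vector x), then a linear SVM score w . phi(x)
# equals (B^T w) . x: the classifier can be read directly in pixel space.

def matvec(B, x):
    return [sum(b * xi for b, xi in zip(row, x)) for row in B]

def dot(a, b):
    return sum(ai * bi for ai, bi in zip(a, b))

def transpose(B):
    return [list(col) for col in zip(*B)]

B = [[1.0, -1.0, 0.0],   # hypothetical "filter" rows (edge-like differences)
     [0.0, 1.0, -1.0]]
w = [0.5, -2.0]          # linear SVM weights in feature space
x = [3.0, 1.0, 4.0]      # an "image" flattened to a vector

score_via_features = dot(w, matvec(B, x))          # w . (B x)
score_via_pixels = dot(matvec(transpose(B), w), x) # (B^T w) . x
```

The two scores agree exactly, which is the starting point for asking, as the paper does, what the feature matrix contributes beyond a re-weighting of the input basis.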


A one-time program is a hypothetical device by which a user may evaluate a circuit on exactly one input of his choice, before the device self-destructs. One-time programs cannot be achieved by software alone, as any software can be copied and re-run. However, it is known that every circuit can be compiled into a one-time program using a very basic hypothetical hardware device called a one-time memory. At first glance it may seem that quantum information, which cannot be copied, might also allow for one-time programs. But it is not hard to see that this intuition is false: one-time programs for classical or quantum circuits based solely on quantum information do not exist, even with computational assumptions. This observation raises the question, "what assumptions are required to achieve one-time programs for quantum circuits?" Our main result is that any quantum circuit can be compiled into a one-time program assuming only the same basic one-time memory devices used for classical circuits. Moreover, these quantum one-time programs achieve statistical universal composability (UC-security) against any malicious user. Our construction employs methods for computation on authenticated quantum data, and we present a new quantum authentication scheme called the trap scheme for this purpose. As a corollary, we establish UC-security of a recent protocol for delegated quantum computation.


Collisions between distinct road users (e.g. drivers and motorcyclists) make a substantial contribution to the road trauma burden. Although evidence suggests distinct road users interpret the same road situations differently, it is not clear how road users’ situation awareness differs, nor is it clear which differences might lead to conflicts. This article presents the findings from an on-road study which examined driver, cyclist, motorcyclist and pedestrian situation awareness at intersections. The findings suggest that situation awareness at intersections is markedly different across the four road user groups studied, and that some of these differences may create conflicts between the different road users. The findings also suggest that the causes of the differences identified relate to road design and road user experience. In closing, the key role of road design and training in supporting safe interactions between distinct road users is discussed.