100 results for Reproducing kernel


Relevance:

10.00%

Publisher:

Abstract:

This thesis deals with the problem of instantaneous frequency (IF) estimation of sinusoidal signals. This topic plays a significant role in signal processing and communications. Depending on the type of signal, two major approaches are considered. For IF estimation of single-tone or digitally-modulated sinusoidal signals (such as frequency-shift keying (FSK) signals) the approach of digital phase-locked loops (DPLLs) is considered, and this is Part-I of this thesis. For FM signals the approach of time-frequency analysis is considered, and this is Part-II of the thesis. In Part-I we have utilized sinusoidal DPLLs with a non-uniform sampling scheme, as this type is widely used in communication systems. The digital tanlock loop (DTL) introduced significant advantages over other existing DPLLs. In the last 10 years many efforts have been made to improve DTL performance. However, this loop and all of its modifications utilize a Hilbert transformer (HT) to produce a signal-independent 90-degree phase-shifted version of the input signal. A Hilbert transformer can be realized approximately using a finite impulse response (FIR) digital filter. This realization introduces further complexity in the loop, in addition to approximations and frequency limitations on the input signal. We have tried to avoid the practical difficulties associated with the conventional tanlock scheme while keeping its advantages. A time delay is utilized in the tanlock scheme of the DTL to produce a signal-dependent phase shift. This gave rise to the time-delay digital tanlock loop (TDTL). Fixed point theorems are used to analyze the behavior of the new loop. As such, TDTL combines the two major approaches in DPLLs: the non-linear approach of the sinusoidal DPLL based on fixed point analysis, and the linear tanlock approach based on arctan phase detection. TDTL preserves the main advantages of the DTL despite its reduced structure. An application of TDTL to FSK demodulation is also considered.
This idea of replacing the HT by a time delay may be of interest in other signal processing systems. Hence we have analyzed and compared the behaviors of the HT and the time delay in the presence of additive Gaussian noise. Based on the above analysis, the behavior of the first- and second-order TDTLs has been analyzed in additive Gaussian noise. Since DPLLs need time for locking, they are normally not efficient in tracking the continuously changing frequencies of non-stationary signals, i.e. signals with time-varying spectra. Non-stationary signals are of importance in synthetic and real-life applications. An example is the frequency-modulated (FM) signals widely used in communication systems. Part-II of this thesis is dedicated to the IF estimation of non-stationary signals. For such signals the classical spectral techniques break down, due to the time-varying nature of their spectra, and more advanced techniques should be utilized. For the purpose of instantaneous frequency estimation of non-stationary signals there are two major approaches: parametric and non-parametric. We chose the non-parametric approach, which is based on time-frequency analysis. This approach is computationally less expensive and more effective in dealing with multicomponent signals, which are the main focus of this part of the thesis. A time-frequency distribution (TFD) of a signal is a two-dimensional transformation of the signal to the time-frequency domain. Multicomponent signals can be identified by multiple energy peaks in the time-frequency domain. Many real-life and synthetic signals are of multicomponent nature and there is little in the literature concerning IF estimation of such signals. This is why we have concentrated on multicomponent signals in Part-II. An adaptive algorithm for IF estimation using the quadratic time-frequency distributions has been analyzed. A class of time-frequency distributions that are more suitable for this purpose has been proposed.
The kernels of this class are time-only or one-dimensional, rather than the usual time-lag (two-dimensional) kernels. Hence this class has been named the T-class. If the parameters of these TFDs are properly chosen, they are more efficient than the existing fixed-kernel TFDs in terms of resolution (energy concentration around the IF) and artifact reduction. The T-distributions have been used in the adaptive IF algorithm and proved to be efficient in tracking rapidly changing frequencies. They also enable direct amplitude estimation for the components of a multicomponent signal.
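The substitution at the heart of TDTL, a time delay standing in for a Hilbert transformer, can be illustrated for a single tone: delaying a sinusoid by a quarter of its period produces exactly the signal-dependent 90-degree shift that the HT produces signal-independently. A toy numpy check (not the thesis's loop equations; frequency and sampling are arbitrary choices):

```python
import numpy as np

f = 5.0                       # tone frequency in Hz (arbitrary)
t = np.linspace(0.0, 1.0, 1000, endpoint=False)
tau = 1.0 / (4.0 * f)         # quarter-period time delay

s = np.sin(2 * np.pi * f * t)
# Ideal Hilbert transform of sin(wt) is -cos(wt): a -90 degree shift at every frequency.
hilbert_s = -np.cos(2 * np.pi * f * t)
# A fixed time delay gives the same shift, but only at this particular frequency,
# which is why the resulting phase shift is called signal-dependent.
delayed_s = np.sin(2 * np.pi * f * (t - tau))

print(np.max(np.abs(hilbert_s - delayed_s)))  # ~0 for a single tone
```

For any other input frequency the same delay no longer yields 90 degrees, which is the trade-off the thesis analyzes with fixed point methods.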

Abstract:

Biodiesel is a renewable fuel that has been shown to reduce many exhaust emissions, except oxides of nitrogen (NOx), in diesel engine cars. This is of special concern in inner urban areas that are subject to strict environmental regulations, such as the EURO norms. Also, the use of pure biodiesel (B100) is inhibited because of its higher NOx emissions compared to petroleum diesel fuel. The aim of the present work is to investigate the effect of the iodine value and cetane number of various biodiesel fuels obtained from different feedstocks on the combustion and NOx emission characteristics of a direct injection (DI) diesel engine. The biodiesel fuels were chosen from various feedstocks such as coconut, palm kernel, mahua (Madhuca indica), Pongamia pinnata, Jatropha curcas, rice bran, and sesame seed oils. The experimental results show an approximately linear relationship between iodine value and NOx emissions. The biodiesels obtained from coconut and palm kernel showed lower NOx levels than diesel, but the other biodiesels showed an increase in NOx. It was observed that the nature of the fatty acids of the biodiesel fuels had a significant influence on the NOx emissions. Also, the cetane numbers of the biodiesel fuels affected both the premixed combustion and the combustion rate, which in turn affected the amount of NOx formed. It was concluded that NOx emissions are influenced by many parameters of biodiesel fuels, particularly the iodine value and cetane number.

Abstract:

This paper presents an extended study on the implementation of support vector machine (SVM) based speaker verification in systems that employ continuous progressive model adaptation using the weight-based factor analysis model. The weight-based factor analysis model compensates for session variations in unsupervised scenarios by incorporating trial confidence measures in the general statistics used in the inter-session variability modelling process. Employing weight-based factor analysis in Gaussian mixture models (GMM) was recently found to provide significant performance gains for unsupervised classification. Further improvements in performance were found through the integration of SVM-based classification in the system by means of GMM supervectors. This study focuses particularly on the way in which a client is represented in the SVM kernel space using single and multiple target supervectors. Experimental results indicate that training client SVMs using a single target supervector maximises performance while exhibiting a certain robustness to the inclusion of impostor training data in the model. Furthermore, the inclusion of low-scoring target trials in the adaptation process is investigated; these trials were found to significantly aid performance.

Abstract:

This paper derives from research-in-progress intending both Design Research (DR) and Design Science (DS) outputs: the former a management decision tool based on IS-Impact (Gable et al. 2008) kernel theory; the latter methodological learnings deriving from synthesis of the literature and reflection on the DR 'case study' experience. The paper introduces a generic, detailed and pragmatic DS 'Research Roadmap', or methodology, deriving at this stage primarily from synthesis and harmonization of relevant concepts identified through systematic archival analysis of related literature. The scope of the Roadmap has also been influenced by the parallel aim of the study to undertake DR applying, and further evolving, the Roadmap. The Roadmap is presented in response to the dearth of detailed guidance available to novice researchers in Design Science Research (DSR), and though preliminary, is expected to evolve and gradually be substantiated through experience of its application. A key distinction of the Roadmap from other DSR methods is its breadth of coverage of published DSR concepts and activities: its detail and scope. It represents a useful synthesis and integration of otherwise highly disparate DSR-related concepts.

Abstract:

Issues of equity and inequity have always been part of employment relations and are a fundamental part of the industrial landscape. For example, in most countries in the nineteenth century and a large part of the twentieth century women and members of ethnic groups (often a minority in the workforce) were barred from certain occupations, industries or work locations, and received less pay than the dominant male ethnic group for the same work. In recent decades attention has been focused on issues of equity between groups, predominantly women and different ethnic groups in the workforce. This has been embodied in industrial legislation, for example in equal pay for women and men, and frequently in specific equity legislation. In this way a whole new area of law and associated workplace practice has developed in many countries. Historically, employment relations and industrial relations research has not examined employment issues disaggregated by gender or ethnic group. Born out of concern with conflict and regulation at the workplace, studies tended to concentrate on white, male, unionized workers in manufacturing and heavy industry (Ackers, 2002, p. 4). The influential systems model crafted by Dunlop (1958) gave rise to "the discipline's preoccupation with the 'problem of order' [which] ensures the invisibility of women, not only because women have generally been less successful in mobilizing around their own needs and discontents, but more profoundly because this approach identifies the employment relationship as the ultimate source of power and conflict at work" (Forrest, 1993, p. 410). While 'the system approach does not deliberately exclude gender . . . by reproducing a very narrow research approach and understanding of issues of relevance for the research, gender is in general excluded or looked on as something of peripheral interest' (Hansen, 2002, p. 198).
However, long-lived patterns of gender segregation in occupations and industries, together with discriminatory access to work and social views about women and ethnic groups in the paid workforce, mean that the employment experience of women and ethnic groups is frequently quite different from that of men in the dominant ethnic group. Since the 1980s, research into women and employment has figured in the employment relations literature, but it is often relegated to a separate category in specific articles or book chapters, with women implicitly or explicitly seen as the atypical or exceptional worker (Hansen, 2002; Wajcman, 2000). The same conclusion can be reached for other groups with different labour force patterns and employment outcomes. This chapter proposes that awareness of equity issues is central to employment relations. As with industrial relations legislation and approaches, each country will have a unique set of equity policies and legislation, reflecting its history and culture. Yet while most books on employment and industrial relations deal with issues of equity in a separate chapter (most commonly on equity for women or, more recently, on 'diversity'), the reality in the workplace is that all types of legislation and policies which impact on wages and working conditions interact, and their impacts cannot be disentangled from one another. When discussing equity in workplaces in the twenty-first century we are now faced with a plethora of different terms in English. Terms used include discrimination, equity, equal opportunity, affirmative action and diversity with all its variants (workplace diversity, managing diversity, and so on). There is a lack of agreed definitions, particularly when the terms are used outside of a legislative context. This 'shifting linguistic terrain' (Kennedy-Dubourdieu, 2006b, p. 3) varies from country to country and changes over time even within the one country.
There is frequently a division made between equity and its related concepts and the range of expressions using the term 'diversity' (Wilson and Iles, 1999; Thomas and Ely, 1996). These present dilemmas for practitioners and researchers due to the amount and range of ideas prevalent, and the breadth of issues that are covered when we say 'equity and diversity in employment'. To add to these dilemmas, the literature on equity and diversity has become bifurcated: the literature on workplace diversity/management diversity appears largely in the business literature, while that on equity in employment appears frequently in legal and industrial relations journals. Workplaces of the twenty-first century differ from those of the nineteenth and twentieth centuries not only in the way they deal with individual and group differences but also in the way they interpret what are fair and equitable outcomes for different individuals and groups. These variations are the result of a range of social conditions, legislation and workplace constraints that have influenced the development of employment equity and the management of diversity. Attempts to achieve employment equity have primarily been dealt with through legislative means, and in the last fifty years this legislation has included elements of anti-discrimination, affirmative action, and equal employment opportunity in virtually all OECD countries (Mor Barak, 2005, pp. 17–52). Established on human rights and social justice principles, this legislation is based on the premise that systemic discrimination has existed and/or continues to exist in the labour force and that particular groups of citizens have less advantageous employment outcomes. It is based on group identity, and employment equity programmes in general apply across all workplaces and are mandatory. The more recent notions of diversity in the workplace are based on ideas coming principally from the USA in the 1980s which have spread widely in the Western world since the 1990s.
Broadly speaking, diversity ideas focus on individual differences either on their own or in concert with the idea of group differences. The diversity literature is based on a business case: that is, diversity is profitable in a variety of ways for business, and it generally lacks a social justice or human rights justification (Burgess et al., 2009, pp. 81–2). Managing diversity is represented at the organizational level as a voluntary and local programme. This chapter discusses some major models and theories of equity and diversity. It begins by charting the history of ideas about equity in employment and then briefly discusses what is meant by equality and equity. The chapter then analyses the major debates about the ways in which equity can be achieved. The more recent ideas about diversity are then discussed, including the history of these ideas and the principles which guide this concept. The following section discusses both major frameworks of equity and diversity. The chapter then raises some ways in which insights from the equity and diversity literature can inform employment relations. Finally, the future of equity and diversity ideas is discussed.

Abstract:

We propose new bounds on the error of learning algorithms in terms of a data-dependent notion of complexity. The estimates we establish give optimal rates and are based on a local and empirical version of Rademacher averages, in the sense that the Rademacher averages are computed from the data, on a subset of functions with small empirical error. We present some applications to classification and prediction with convex function classes, and with kernel classes in particular.
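For orientation, the global empirical Rademacher average that the local, data-dependent version restricts can be written as follows (standard textbook notation, assumed rather than taken from this abstract):

```latex
\hat{R}_n(\mathcal{F}) \;=\; \mathbb{E}_{\sigma}\!\left[\,\sup_{f \in \mathcal{F}}\; \frac{1}{n}\sum_{i=1}^{n} \sigma_i f(X_i)\right],
\qquad \sigma_1,\dots,\sigma_n \ \text{i.i.d. uniform on } \{-1,+1\}.
```

The local version computes the same quantity over a data-dependent subset such as $\{ f \in \mathcal{F} : P_n f^2 \le r \}$, i.e. functions with small empirical error, which is what makes the faster, optimal rates possible.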

Abstract:

One of the nice properties of kernel classifiers such as SVMs is that they often produce sparse solutions. However, the decision functions of these classifiers cannot always be used to estimate the conditional probability of the class label. We investigate the relationship between these two properties and show that these are intimately related: sparseness does not occur when the conditional probabilities can be unambiguously estimated. We consider a family of convex loss functions and derive sharp asymptotic results for the fraction of data that becomes support vectors. This enables us to characterize the exact trade-off between sparseness and the ability to estimate conditional probabilities for these loss functions.
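The sparseness side of this trade-off is easy to observe empirically. In the following scikit-learn toy (an illustration, not the paper's asymptotic analysis), a hinge-loss SVM trained on two overlapping Gaussian classes retains only a fraction of the training points as support vectors:

```python
import numpy as np
from sklearn.svm import SVC

# Two overlapping Gaussian classes: near the overlap the conditional
# probability P(y=1|x) is far from 0 or 1, i.e. genuinely ambiguous.
rng = np.random.default_rng(0)
n = 400
X = np.vstack([rng.normal(-1.0, 1.0, (n // 2, 2)),
               rng.normal(+1.0, 1.0, (n // 2, 2))])
y = np.array([0] * (n // 2) + [1] * (n // 2))

# Hinge loss: only points near or inside the margin become support vectors,
# so the decision function is sparse in the training data.
svm = SVC(kernel="rbf", C=1.0).fit(X, y)
frac_sv = svm.n_support_.sum() / n
print(f"support-vector fraction: {frac_sv:.2f}")
```

A loss that does allow unambiguous conditional probability estimation (e.g. the logistic loss) gives every training point a nonzero pull on the solution, which is the intuition behind the sharp trade-off the paper characterizes.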

Abstract:

This paper presents a method of spatial sampling based on stratification by Local Moran's I_i calculated using auxiliary information. The sampling technique is compared to other design-based approaches, including simple random sampling, systematic sampling on a regular grid, conditional Latin Hypercube sampling and stratified sampling based on auxiliary information, and is illustrated using two different spatial data sets. Each of the samples for the two data sets is interpolated using regression kriging to form a geostatistical map for its respective area. The proposed technique is shown to be competitive in reproducing specific areas of interest with high accuracy.
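The stratification statistic can be sketched in a few lines of numpy using the standard formulation I_i = z_i Σ_j w_ij z_j (standardized values z, row-standardized spatial weights w; the paper's actual weights scheme and auxiliary variables are not specified here, so this toy uses simple adjacency on a line of locations):

```python
import numpy as np

def local_morans_i(values, W):
    """Local Moran's I_i = z_i * sum_j w_ij * z_j, with z standardized values
    and W a row-standardized spatial weights matrix."""
    z = (values - values.mean()) / values.std()
    return z * (W @ z)

# Toy 1-D "map": 6 locations on a line; neighbours are the adjacent cells.
vals = np.array([1.0, 2.0, 1.5, 8.0, 9.0, 8.5])   # a high-value cluster at the end
W = np.zeros((6, 6))
for i in range(6):
    for j in (i - 1, i + 1):
        if 0 <= j < 6:
            W[i, j] = 1.0
W = W / W.sum(axis=1, keepdims=True)               # row-standardize

I = local_morans_i(vals, W)
print(I)   # large positive I_i where similar values cluster spatially
```

High-I_i locations flag local clusters (hot or cold spots), which is what makes the statistic usable as a stratification variable for targeted sampling.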

Abstract:

Background The majority of peptide bonds in proteins are found to occur in the trans conformation. However, for proline residues, a considerable fraction of prolyl peptide bonds adopt the cis form. Proline cis/trans isomerization is known to play a critical role in protein folding, splicing, cell signaling and transmembrane active transport. Accurate prediction of proline cis/trans isomerization in proteins would have many important applications towards the understanding of protein structure and function. Results In this paper, we propose a new approach to predict proline cis/trans isomerization in proteins using a support vector machine (SVM). The preliminary results indicated that using Radial Basis Function (RBF) kernels could lead to better prediction performance than polynomial and linear kernel functions. We used single sequence information with different local window sizes, amino acid compositions of different local sequences, multiple sequence alignments obtained from PSI-BLAST and the secondary structure information predicted by PSIPRED. We explored these different sequence encoding schemes in order to investigate their effects on the prediction performance. The training and testing of this approach was performed on a newly enlarged dataset of 2424 non-homologous proteins determined by the X-ray diffraction method, using 5-fold cross-validation. Selecting a window size of 11 provided the best performance for determining proline cis/trans isomerization based on the single amino acid sequence. It was found that using multiple sequence alignments in the form of PSI-BLAST profiles could significantly improve the prediction performance: prediction accuracy increased from 62.8% with the single sequence to 69.8%, and the Matthews Correlation Coefficient (MCC) improved from 0.26 with the single local sequence to 0.40.
Furthermore, when coupled with the secondary structure information predicted by PSIPRED, our method yielded a prediction accuracy of 71.5% and an MCC of 0.43, which are 9% and 0.17 higher, respectively, than the values achieved with the single sequence information. Conclusion A new method has been developed to predict proline cis/trans isomerization in proteins based on a support vector machine, which used the single amino acid sequence with different local window sizes, the amino acid compositions of the local sequences flanking the central proline residues, the position-specific scoring matrices (PSSMs) extracted by PSI-BLAST and the predicted secondary structures generated by PSIPRED. The successful application of the SVM approach in this study reinforces that SVM is a powerful tool for predicting proline cis/trans isomerization in proteins and for biological sequence analysis.
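The single-sequence baseline of such a pipeline (window of residues around each proline, one-hot encoded, RBF SVM, 5-fold cross-validation) can be sketched as follows. Everything below is a hypothetical stand-in: the windows and labels are synthetic, whereas the paper trained on 2424 non-homologous X-ray structures.

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score

AA = "ACDEFGHIKLMNPQRSTVWY"   # the 20 standard amino acids

def encode_window(window):
    """One-hot encode a residue window (e.g. length 11 centered on a proline)."""
    x = np.zeros(len(window) * len(AA))
    for i, aa in enumerate(window):
        x[i * len(AA) + AA.index(aa)] = 1.0
    return x

# Synthetic stand-in data: random windows and random cis(1)/trans(0) labels.
rng = np.random.default_rng(1)
windows = ["".join(rng.choice(list(AA), 11)) for _ in range(200)]
y = rng.integers(0, 2, 200)
X = np.array([encode_window(w) for w in windows])

# RBF-kernel SVM evaluated with 5-fold cross-validation, as in the paper.
scores = cross_val_score(SVC(kernel="rbf"), X, y, cv=5)
print(scores.mean())
```

Swapping the one-hot rows for PSSM columns from PSI-BLAST (plus PSIPRED secondary-structure states) is what lifted the reported accuracy from 62.8% to 71.5%.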

Abstract:

The conventional manual power line corridor inspection processes used by most energy utilities are labor-intensive, time-consuming and expensive. Remote sensing technologies represent an attractive and cost-effective alternative approach to these monitoring activities. This paper presents a comprehensive investigation into automated remote sensing based power line corridor monitoring, focusing on recent innovations in two areas: increased automation of fixed-wing platforms for aerial data collection, and automated data processing for object recognition using a feature fusion process. Airborne automation is achieved by using a novel approach that provides improved lateral control for tracking corridors and automatic real-time dynamic turning for flying between corridor segments; we call this approach PTAGS. Improved object recognition is achieved by fusing information from multi-sensor (LiDAR and imagery) data and multiple visual feature descriptors (color and texture). The results from our experiments and field survey illustrate the effectiveness of the proposed aircraft control and feature fusion approaches.

Abstract:

Two decades after its inception, Latent Semantic Analysis (LSA) has become part and parcel of every modern introduction to Information Retrieval. For any tool that matures so quickly, it is important to check its lore and limitations, or else stagnation will set in. We focus here on the three main aspects of LSA that are well accepted, the gist of which can be summarized as follows: (1) that LSA recovers latent semantic factors underlying the document space; (2) that this can be accomplished through lossy compression of the document space by eliminating lexical noise; and (3) that the latter can best be achieved by Singular Value Decomposition. For each aspect we performed experiments analogous to those reported in the LSA literature and compared the evidence brought to bear in each case. On the negative side, we show that the above claims about LSA are much more limited than commonly believed. Even a simple example may show that LSA does not recover the optimal semantic factors as intended in the pedagogical example used in many LSA publications. Additionally, and remarkably deviating from LSA lore, LSA does not scale up well: the larger the document space, the more unlikely that LSA recovers an optimal set of semantic factors. On the positive side, we describe new algorithms to replace LSA (and more recent alternatives such as pLSA, LDA, and kernel methods) by trading its l2 space for an l1 space, thereby guaranteeing an optimal set of semantic factors. These algorithms seem to salvage the spirit of LSA as we think it was initially conceived.
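The SVD step under scrutiny in claim (3) is a truncated SVD of the term-document matrix. A minimal numpy sketch (toy matrix, not from the paper) makes explicit the l2-optimality that the proposed l1 replacement trades away:

```python
import numpy as np

# Tiny term-document count matrix (rows = terms, columns = docs); illustrative only.
A = np.array([[1, 1, 0, 0],    # "kernel"
              [1, 0, 1, 0],    # "svm"
              [0, 0, 1, 1],    # "court"
              [0, 0, 0, 1]],   # "copyright"
             dtype=float)

U, s, Vt = np.linalg.svd(A, full_matrices=False)
k = 2                                          # keep the k largest singular values
A_k = U[:, :k] @ np.diag(s[:k]) @ Vt[:k, :]    # rank-k "semantic" reconstruction

# Eckart-Young: A_k is the best rank-k approximation of A in the l2 (Frobenius)
# sense, and the error equals the norm of the discarded singular values.
err = np.linalg.norm(A - A_k)
print(err)
```

LSA's lore reads the k retained dimensions as "semantic factors"; the paper's point is that l2-optimal compression is not the same thing as recovering optimal semantic factors, especially as the document space grows.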

Abstract:

Historically, determining the country of origin of a published work presented few challenges, because works were generally published physically, whether in print or otherwise, in a distinct location or a few locations. However, publishing opportunities presented by new technologies mean that we now live in a world of simultaneous publication: works that are first published online are published simultaneously in every country in the world in which there is Internet connectivity. While this is certainly advantageous for the dissemination and impact of information and creative works, it creates potential complications under the Berne Convention for the Protection of Literary and Artistic Works ("Berne Convention"), an international intellectual property agreement to which most countries in the world now subscribe. Under the Berne Convention's national treatment provisions, rights accorded to foreign copyright works may not be subject to any formality, such as registration requirements (although member countries are free to impose formalities in relation to domestic copyright works). In Kernel Records Oy v. Timothy Mosley p/k/a Timbaland, et al., however, the Florida Southern District Court of the United States ruled that first publication of a work on the Internet via an Australian website constituted "simultaneous publication all over the world," and therefore rendered the work a "United States work" under the definition in section 101 of the U.S. Copyright Act, subjecting the work to the registration formality under section 411. This ruling is in sharp contrast with an earlier decision delivered by the Delaware District Court in Håkan Moberg v. 33T LLC, et al., which arrived at the opposite conclusion. The conflicting rulings of the U.S. courts reveal the problems posed by new forms of publishing online and demonstrate a compelling need for further harmonization between the Berne Convention, domestic laws and the practical realities of digital publishing.
In this article, we argue that even if a work first published online can be considered to be simultaneously published all over the world it does not follow that any country can assert itself as the “country of origin” of the work for the purpose of imposing domestic copyright formalities. More specifically, we argue that the meaning of “United States work” under the U.S. Copyright Act should be interpreted in line with the presumption against extraterritorial application of domestic law to limit its application to only those works with a real and substantial connection to the United States. There are gaps in the Berne Convention’s articulation of “country of origin” which provide scope for judicial interpretation, at a national level, of the most pragmatic way forward in reconciling the goals of the Berne Convention with the practical requirements of domestic law. We believe that the uncertainties arising under the Berne Convention created by new forms of online publishing can be resolved at a national level by the sensible application of principles of statutory interpretation by the courts. While at the international level we may need a clearer consensus on what amounts to “simultaneous publication” in the digital age, state practice may mean that we do not yet need to explore textual changes to the Berne Convention.

Abstract:

Recent algorithms for monocular motion capture (MoCap) estimate weak-perspective camera matrices between images using a small subset of approximately-rigid points on the human body (i.e. the torso and hip). A problem with this approach, however, is that these points are often close to coplanar, causing canonical linear factorisation algorithms for rigid structure from motion (SFM) to become extremely sensitive to noise. In this paper, we propose an alternative solution to weak-perspective SFM based on a convex relaxation of graph rigidity. We demonstrate the success of our algorithm on both synthetic and real world data, allowing for much improved solutions to markerless MoCap problems on human bodies. Finally, we propose an approach to solve the two-fold ambiguity over bone direction using a k-nearest neighbour kernel density estimator.
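The final disambiguation step can be illustrated generically: score both candidate bone directions under a density estimate fitted to plausible directions and keep the likelier sign. The sketch below uses a fixed-bandwidth Gaussian KDE as a stand-in for the paper's k-nearest neighbour variant, with synthetic direction samples (both are assumptions, not the paper's setup):

```python
import numpy as np

def gaussian_kde_score(x, samples, h=0.3):
    """Density estimate at x from training samples (Gaussian kernel, bandwidth h)."""
    d = samples - x
    return np.mean(np.exp(-np.sum(d * d, axis=1) / (2 * h * h)))

# Synthetic training samples: unit bone-direction vectors clustered around +z.
rng = np.random.default_rng(0)
samples = rng.normal([0.0, 0.0, 1.0], 0.2, (500, 3))
samples /= np.linalg.norm(samples, axis=1, keepdims=True)

# Two-fold ambiguity: reconstruction yields the bone direction only up to sign.
candidate = np.array([0.1, 0.0, 0.99])
candidate /= np.linalg.norm(candidate)
scores = [gaussian_kde_score(c, samples) for c in (candidate, -candidate)]
best = candidate if scores[0] >= scores[1] else -candidate
print(best)   # the sign that lies in the high-density cluster wins
```

The same scoring idea carries over directly when the Gaussian KDE is replaced by a k-NN density estimate.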

Abstract:

Companies and their services are being increasingly exposed to global business networks and Internet-based on-demand services. Much of the focus is on flexible orchestration and consumption of services, beyond the ownership and operational boundaries of services. However, the ways in which third parties in the "global village" can seamlessly self-create new offers out of existing services remain open. This paper proposes a framework for service provisioning in global business networks that allows an open-ended set of techniques for extending services through a rich, multi-tooling environment. The Service Provisioning Management Framework, as such, supports different modeling techniques, through supportive tools, allowing different parts of services to be integrated into new contexts. Integration of service user interfaces, business processes, operational interfaces and business objects is supported. The integration specifications that arise from service extensions are uniformly reflected through a kernel technique, the Service Integration Technique. Thus, the framework preserves the coherence of service provisioning tasks without constraining the modeling techniques needed for extending different aspects of services.

Abstract:

The way in which private schools use rhetoric in their communications offers important insights into how these organizational sites persuade audiences and leverage marketplace advantage in the context of contemporary educational platforms. Through systematic analysis of the rhetorical strategies employed in 65 'elite' school prospectuses in Australia, this paper contributes to understandings of the ways schools' communications draw on broader cultural politics in order to shape meanings and interactions among organizational actors. We identify six strategies consistently used by schools to this end: identification, juxtapositioning, bolstering or self-promotion, partial reporting, self-expansion, and reframing or reversal. We argue that, in the context of marketization and privatization discourses in twenty-first-century western education, these strategies attempt to subvert potentially threatening discourses, in the process actively reproducing broader economic and social privilege and inequalities.