980 results for Uniformly Convex


Relevance:

10.00%

Publisher:

Abstract:

This thesis presents an original approach to parametric speech coding at rates below 1 kbits/sec, primarily for speech storage applications. The essential processes considered in this research are the efficient characterization of the evolving configuration of the vocal tract so that phonemic features are followed with high fidelity, the representation of speech excitation using minimal parameters with minor degradation in the naturalness of the synthesized speech, and finally the quantization of the resulting parameters at the nominated rates.

For encoding speech spectral features, a new method relying on Temporal Decomposition (TD) is developed which efficiently compresses spectral information through interpolation between the most steady points over the time trajectories of the spectral parameters, using a new basis function. The compression ratio provided by the method is independent of the updating rate of the feature vectors, and hence allows high resolution in tracking significant temporal variations of speech formants with no effect on the spectral data rate. Accordingly, regardless of the quantization technique employed, the method yields a high compression ratio without sacrificing speech intelligibility. Several new techniques for improving the performance of the interpolation of spectral parameters through phonetically-based analysis are proposed and implemented in this research, comprising event-approximated TD, near-optimal shaping of event-approximating functions, efficient speech parametrization for TD on the basis of an extensive investigation originally reported in this thesis, and a hierarchical error minimization algorithm for the decomposition of feature parameters which significantly reduces the complexity of the interpolation process.

Speech excitation in this work is characterized using a novel Multi-Band Excitation paradigm which accurately determines the harmonic structure in the LPC (linear predictive coding) residual spectra, within individual bands, using the concept of Instantaneous Frequency (IF) estimation in the frequency domain. The model yields an effective two-band approximation to the excitation and computes pitch and voicing with high accuracy as well. New methods for interpolative coding of the pitch and gain contours are also developed in this thesis. For pitch, relying on the correlation between phonetic evolution and pitch variations during voiced speech segments, TD is employed to interpolate the pitch contour between critical points introduced by event centroids. This compresses the pitch contour by a ratio of about 1/10 with negligible error. To approximate the gain contour, a set of uniformly-distributed Gaussian event-like functions is used, which reduces the amount of gain information to about 1/6 with acceptable accuracy.

The thesis also addresses a new quantization method applied to spectral features on the basis of the statistical properties and spectral sensitivity of the spectral parameters extracted from TD-based analysis. The experimental results show that good quality speech, comparable to that of conventional coders operating at rates over 2 kbits/sec, can be achieved at rates of 650-990 bits/sec.
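
As a rough illustration of the gain-contour idea described above, the sketch below fits a frame-level gain contour with uniformly spaced Gaussian event-like functions by least squares, keeping roughly one coefficient per six frames. It is a minimal Python sketch under assumed parameters (basis width, event count, function names), not the thesis's actual parametrization.

```python
import numpy as np

def gaussian_event_basis(num_frames, num_events, width):
    """Uniformly spaced Gaussian event-like functions over the frame axis."""
    t = np.arange(num_frames)
    centres = np.linspace(0, num_frames - 1, num_events)
    return np.exp(-0.5 * ((t[:, None] - centres[None, :]) / width) ** 2)

def approximate_gain(gain_contour, num_events=None, width=4.0):
    """Least-squares fit of a gain contour on the Gaussian basis (~6:1 reduction)."""
    n = len(gain_contour)
    if num_events is None:
        num_events = max(2, n // 6)      # roughly 1/6 of the original parameters
    B = gaussian_event_basis(n, num_events, width)
    coeffs, *_ = np.linalg.lstsq(B, gain_contour, rcond=None)
    return B @ coeffs, coeffs

# Toy usage with a synthetic log-gain contour standing in for real speech frames.
frames = np.arange(120)
gain = 0.5 * np.sin(frames / 9.0) + 0.01 * frames
approx, coeffs = approximate_gain(gain)
print(len(coeffs), "coefficients approximate", len(gain), "gain values")
```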

Relevance:

10.00%

Publisher:

Abstract:

This study is conducted within the IS-Impact Research Track at Queensland University of Technology (QUT). The goal of the IS-Impact Track is "to develop the most widely employed model for benchmarking information systems in organizations for the joint benefit of both research and practice" (Gable et al., 2006). IS-Impact is defined as "a measure at a point in time, of the stream of net benefits from the IS [Information System], to date and anticipated, as perceived by all key-user-groups" (Gable, Sedera and Chan, 2008). Track efforts have yielded the bicameral IS-Impact measurement model; the "impact" half includes the Organizational-Impact and Individual-Impact dimensions, while the "quality" half includes the System-Quality and Information-Quality dimensions. The IS-Impact model, by design, is intended to be robust, simple and generalisable, and to yield results that are comparable across time, stakeholders, different systems and system contexts. The model and measurement approach employ perceptual measures and an instrument that is relevant to key stakeholder groups, thereby enabling the combination or comparison of stakeholder perspectives. Such a validated and widely accepted IS-Impact measurement model has both academic and practical value. It facilitates systematic operationalisation of a main dependent variable in research (IS-Impact), which can also serve as an important independent variable. For IS management practice it provides a means to benchmark and track the performance of information systems in use.

From examination of the literature, the study proposes that IS-Impact is an Analytic Theory. Gregor (2006) defines Analytic Theory simply as theory that 'says what is': base theory that is foundational to all other types of theory. The overarching research question thus is "Does IS-Impact positively manifest the attributes of Analytic Theory?" In order to address this question, we must first answer the question "What are the attributes of Analytic Theory?" The study identifies the main attributes of Analytic Theory as: (1) Completeness, (2) Mutual Exclusivity, (3) Parsimony, (4) Appropriate Hierarchy, (5) Utility, and (6) Intuitiveness. The value of empirical research in Information Systems is often assessed along two main dimensions: rigor and relevance. The Analytic Theory attributes associated with the 'rigor' of the IS-Impact model, namely completeness, mutual exclusivity, parsimony and appropriate hierarchy, have been addressed in prior research (e.g. Gable et al., 2008). Though common tests of rigor are widely accepted and relatively uniformly applied (particularly in relation to positivist, quantitative research), relevance has seldom been given the same systematic attention. This study assumes a mainly practice perspective, and emphasises the methodical evaluation of the Analytic Theory 'relevance' attributes represented by the Utility and Intuitiveness of the IS-Impact model. Thus, the related research questions are: "Is the IS-Impact model intuitive to practitioners?" and "Is the IS-Impact model useful to practitioners?" March and Smith (1995) identify four outputs of Design Science: constructs, models, methods and instantiations (Design Science research may involve one or more of these). IS-Impact can be viewed as a Design Science model, composed of Design Science constructs (the four IS-Impact dimensions and the two model halves) and instantiations in the form of management information (IS-Impact data organised and presented for management decision making).

In addition to methodically evaluating the Utility and Intuitiveness of the IS-Impact model and its constituent constructs, the study also aims to evaluate the derived management information. Thus, further research questions are: "Is the IS-Impact derived management information intuitive to practitioners?" and "Is the IS-Impact derived management information useful to practitioners?" The study employs a longitudinal design entailing three surveys over 4 years (the 1st involving secondary data) of the Oracle Financials application at QUT, interspersed with focus groups involving senior financial managers. The study also entails a survey of Financials at four other Australian universities. The three focus groups respectively emphasise: (1) the IS-Impact model, (2) the 2nd survey at QUT (descriptive), and (3) comparison across surveys within QUT, and between QUT and the group of universities. Aligned with the Track goal of producing IS-Impact scores that are highly comparable, the study also addresses the more specific utility-related questions: "Is IS-Impact derived management information a useful comparator across time?" and "Is IS-Impact derived management information a useful comparator across universities?" The main contribution of the study is evidence of the utility and intuitiveness of IS-Impact to practice, thereby further substantiating the practical value of the IS-Impact approach, and thereby also motivating continuing and further research on the validity of IS-Impact, as well as research employing the IS-Impact constructs in descriptive, predictive and explanatory studies. The study also has value methodologically as an example of relatively rigorous attention to relevance. A further key contribution is the clarification and instantiation of the full set of Analytic Theory attributes.

Relevance:

10.00%

Publisher:

Abstract:

The present paper motivates the study of mind change complexity for learning minimal models of length-bounded logic programs. It establishes ordinal mind change complexity bounds for learnability of these classes both from positive facts and from positive and negative facts. Building on Angluin's notion of finite thickness and Wright's work on finite elasticity, Shinohara defined the property of bounded finite thickness to give a sufficient condition for learnability of indexed families of computable languages from positive data. This paper shows that an effective version of Shinohara's notion of bounded finite thickness gives sufficient conditions for learnability with an ordinal mind change bound, both in the context of learnability from positive data and for learnability from complete (both positive and negative) data.

Let Ω be a notation for the first limit ordinal. Then it is shown that if a language defining framework yields a uniformly decidable family of languages and has effective bounded finite thickness, then for each natural number m > 0, the class of languages defined by formal systems of length ≤ m:

• is identifiable in the limit from positive data with a mind change bound of Ω^m;
• is identifiable in the limit from both positive and negative data with an ordinal mind change bound of Ω × m.

The above sufficient conditions are employed to give an ordinal mind change bound for learnability of minimal models of various classes of length-bounded Prolog programs, including Shapiro's linear programs, Arimura and Shinohara's depth-bounded linearly covering programs, and Krishna Rao's depth-bounded linearly moded programs. It is also noted that the bound for learning from positive data is tight for the example classes considered.
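
A compact restatement of the two bounds, in assumed notation (TxtEx_α and InfEx_α are the usual identification classes for learning from positive data and from complete data with an ordinal mind change counter bounded by α; the class notation below is not taken from the paper):

```latex
% Assumed notation: \mathcal{L}_m is the class of languages defined by formal
% systems of length at most m in a framework with effective bounded finite
% thickness; \Omega is a notation for the first limit ordinal.
\begin{align*}
  \mathcal{L}_m &\in \mathrm{TxtEx}_{\Omega^{m}}
      && \text{(positive data, mind change bound } \Omega^{m}\text{)} \\
  \mathcal{L}_m &\in \mathrm{InfEx}_{\Omega \times m}
      && \text{(positive and negative data, mind change bound } \Omega \times m\text{)}
\end{align*}
```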

Relevance:

10.00%

Publisher:

Abstract:

Australian privacy law regulates how government agencies and private sector organisations collect, store and use personal information. A coherent conceptual basis of personal information is an integral requirement of information privacy law, as it determines what information is regulated. A 2004 report conducted on behalf of the UK's Information Commissioner (the 'Booth Report') concluded that there was no coherent definition of personal information in operation, because different data protection authorities throughout the world conceived of personal information in different ways. The authors adopt the models developed by the Booth Report to examine the conceptual basis of statutory definitions of personal information in Australian privacy laws. Research findings indicate that the definition of personal information is not construed uniformly in Australian privacy laws and that different definitions rely upon different classifications of personal information. A similar situation is evident in a review of relevant case law. Despite this, the authors conclude the article by asserting that a greater jurisprudential discourse, grounded in a coherent conceptual framework, is required to ensure the consistent development of Australian privacy law.

Relevance:

10.00%

Publisher:

Abstract:

This study investigated a novel drug delivery system (DDS), consisting of polycaprolactone (PCL) or polycaprolactone-20% tricalcium phosphate (PCL-TCP) biodegradable scaffolds, fibrin Tisseel sealant and recombinant human bone morphogenetic protein-2 (rhBMP-2), for bone regeneration. PCL- and PCL-TCP-fibrin composites displayed loading efficiencies of 70% and 43%, respectively. Fluorescence and scanning electron microscopy revealed sparse clumps of rhBMP-2 particles distributed non-uniformly on the rod surfaces of the PCL-fibrin composites. In contrast, individual rhBMP-2 particles were evident and uniformly distributed on the rod surfaces of the PCL-TCP-fibrin composites. PCL-fibrin composites loaded with 10 and 20 μg/ml rhBMP-2 demonstrated a triphasic release profile, as quantified by an enzyme-linked immunosorbent assay (ELISA); this consisted of burst releases at 2 h, and at days 7 and 16. A biphasic release profile was observed for PCL-TCP-fibrin composites loaded with 10 μg/ml rhBMP-2, consisting of burst releases at 2 h and day 14. PCL-TCP-fibrin composites loaded with 20 μg/ml rhBMP-2 showed a triphasic release profile, consisting of burst releases at 2 h, and at days 10 and 21. We conclude that the addition of TCP caused a delay in rhBMP-2 release. Sodium dodecyl sulphate polyacrylamide gel electrophoresis (SDS-PAGE) and an alkaline phosphatase assay verified the stability and bioactivity of the eluted rhBMP-2 at all time points.

Relevance:

10.00%

Publisher:

Abstract:

An investigation of cylindrical iron rods burning in pressurised oxygen under microgravity conditions is presented. It has been shown that, under similar experimental conditions, the melting rate of a burning cylindrical iron rod is higher in microgravity than in normal gravity by a factor of 1.8 ± 0.3. This paper presents microanalysis of quenched samples obtained in a microgravity environment in a 2.0 s duration drop tower facility in Brisbane, Australia. These images indicate that the solid/liquid interface is highly convex in reduced gravity, compared to the planar geometry typically observed in normal gravity, which increases the contact area between the liquid and solid phases by a factor of 1.7 ± 0.1. Thus, there is good agreement between the proportional increase in solid/liquid interface surface area and the increase in melting rate in microgravity. This indicates that the increased melting rate of cylindrical iron rods burning in microgravity is caused by the altered geometry of the solid/liquid interface.

Relevance:

10.00%

Publisher:

Abstract:

Engineered tissue grafts, which mimic the spatial variations of cell density and extracellular matrix present in native tissues, could facilitate more efficient tissue regeneration and integration. We previously demonstrated that cells could be uniformly seeded throughout a 3D scaffold having a random pore architecture using a perfusion bioreactor [2]. In this work, we aimed to generate 3D constructs with defined cell distributions based on rapid-prototyped scaffolds manufactured with a controlled gradient in porosity. Computational models were developed to assess the influence of fluid flow, associated with pore architecture and perfusion regime, on the resulting cell distribution.

Relevance:

10.00%

Publisher:

Abstract:

Stereotypes of salespeople are common currency in US media outlets, and research suggests that these stereotypes are uniformly negative. However, there is no reason to expect that stereotypes will be consistent across cultures. The present paper provides the first empirical examination of salesperson stereotypes in an Asian country, specifically Taiwan. Using accepted psychological methods, Taiwanese salesperson stereotypes are found to be twofold: a negative stereotype that is quite congruent with existing US stereotypes, and a positive stereotype that may be related to the specific culture of Taiwan.

Relevance:

10.00%

Publisher:

Abstract:

In the UK, Singapore, Canada, New Zealand and Australia, as in many other jurisdictions, charity law is rooted in the common law and anchored in the Statute of Charitable Uses 1601. The Pemsel classification of charitable purposes was uniformly accepted and, together with a shared and growing pool of judicial precedents and aided by the 'spirit and intendment' rule, has subsequently allowed the law to develop along much the same lines. In recent years, all of the above jurisdictions have embarked on law reform processes designed to strengthen regulatory processes and to statutorily define and encode common law concepts. The reform outcomes are now to be found in a batch of national charity statutes which reflect interesting differences in the extent to which their respective governments have been prepared to balance the modernising of charitable purposes and other common law concepts against the customary concern to tighten the regulatory framework.

Relevance:

10.00%

Publisher:

Abstract:

For fruit flies, fully ripe fruit is preferred for adult oviposition and is superior for offspring performance over unripe or ripening fruit. Because not all parts of a single fruit ripen simultaneously, the opportunity exists for adult fruit flies to selectively choose riper parts of a fruit for oviposition, and such selection, if it occurs, could positively influence offspring performance. Such fine-scale host variation is rarely considered in fruit fly ecology, however, especially for polyphagous species which are, by definition, considered to be generalist host users. Here we study the adult oviposition preference/larval performance relationship of the Oriental fruit fly, Bactrocera dorsalis (Hendel) (Diptera: Tephritidae), a highly polyphagous pest species, at the "within-fruit" level to see if such a host use pattern occurs. We recorded the number of oviposition attempts that female flies made into three fruit portions (top, middle and bottom), and larval behavior and development within different fruit portions, for ripening (color change) and fully-ripe mango, Mangifera indica L. (Anacardiaceae). Results indicate that female B. dorsalis do not oviposit uniformly across a mango fruit, but lay most often in the top (i.e., stalk end) of the fruit and least in the bottom portion, regardless of ripening stage. There was no evidence of larval feeding site preference or performance (development time, pupal weight, percent pupation) being influenced by fruit portion, within or across the fruit ripening stages. There was, however, a very significant effect on adult emergence rate from pupae: emergence from pupae derived from the bottom of ripening mango was approximately only 50% of that from the top of ripening fruit, or from both the top and bottom of fully-ripe fruit. Differences in mechanical (firmness) and chemical (total soluble solids, titratable acidity, total non-structural carbohydrates) traits between different fruit portions were correlated with adult fruit utilisation. Our results support a positive adult preference/offspring performance relationship at the within-fruit level for B. dorsalis. The fine level of host discrimination exhibited by B. dorsalis is at odds with the general perception that, as a polyphagous herbivore, the fly should show very little discrimination in its host use behavior.

Relevance:

10.00%

Publisher:

Abstract:

The success rate of carrier phase ambiguity resolution (AR) is the probability that the ambiguities are successfully fixed to their correct integer values. In existing works, an exact success rate formula for the integer bootstrapping estimator has been used as a sharp lower bound for the integer least squares (ILS) success rate. Rigorous computation of the success rate for the more general ILS solutions has been considered difficult because of the complexity of the ILS ambiguity pull-in region and the computational load of integrating the multivariate probability density function. The contributions of this work are twofold. First, the pull-in region, mathematically expressed as the vertices of a polyhedron, is represented by a multi-dimensional grid, at which the cumulative probability can be integrated with the multivariate normal cumulative density function (mvncdf) available in Matlab. The bivariate case is studied, where the pull-in region is usually defined as a hexagon and the probability is easily obtained using mvncdf at all the grid points within the convex polygon. Second, the paper compares the computed integer rounding and integer bootstrapping success rates, and the lower and upper bounds of the ILS success rate, to the actual ILS AR success rates obtained from a 24 h GPS data set for a 21 km baseline. The results demonstrate that the upper bound of the ILS AR probability given in the existing literature agrees well with the actual ILS success rate, while the success rate computed with the integer bootstrapping method is also a quite sharp approximation to the actual ILS success rate. The results also show that variations or uncertainty in the unit-weight variance estimates from epoch to epoch significantly affect the success rates computed with the different methods, and thus deserve more attention in order to obtain useful success probability predictions.
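
For reference, the integer bootstrapping lower bound mentioned above is commonly given in closed form as P = prod_i (2*Phi(1/(2*sigma_{i|I})) - 1), where the sigma_{i|I} are conditional standard deviations obtained from an L D L^T factorisation of the ambiguity covariance matrix. The Python sketch below is an illustration under assumed inputs (the function name, the toy covariance matrix and the omitted decorrelation step are not from the paper, which uses Matlab's mvncdf for the grid integration).

```python
import numpy as np
from scipy.stats import norm

def bootstrapping_success_rate(Qa):
    """Integer bootstrapping success rate P = prod(2*Phi(1/(2*sigma_i|I)) - 1).

    Qa : ambiguity covariance matrix (n x n, symmetric positive definite).
    The conditional variances sigma_i|I^2 are the diagonal entries of D in the
    L D L^T factorisation of Qa; here they are recovered from a Cholesky factor.
    """
    G = np.linalg.cholesky(Qa)          # Qa = G G^T, G lower triangular
    cond_std = np.diag(G)               # square roots of the conditional variances
    return float(np.prod(2.0 * norm.cdf(1.0 / (2.0 * cond_std)) - 1.0))

# Toy example: a mildly correlated 2 x 2 ambiguity covariance matrix (cycles^2).
Qa = np.array([[0.090, 0.045],
               [0.045, 0.070]])
print("bootstrapping success rate:", bootstrapping_success_rate(Qa))
# In practice the bound is evaluated after a decorrelating Z-transformation
# (e.g. LAMBDA), which makes it a much sharper lower bound for the ILS rate.
```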

Relevance:

10.00%

Publisher:

Abstract:

Detection of Region of Interest (ROI) in a video leads to more efficient utilization of bandwidth. This is because any ROIs in a given frame can be encoded in higher quality than the rest of that frame, with little or no degradation of quality from the perception of the viewers. Consequently, it is not necessary to uniformly encode the whole video in high quality. One approach to determine ROIs is to use saliency detectors to locate salient regions. This paper proposes a methodology for obtaining ground truth saliency maps to measure the effectiveness of ROI detection by considering the role of user experience during the labelling process of such maps. User perceptions can be captured and incorporated into the definition of salience in a particular video, taking advantage of human visual recall within a given context. Experiments with two state-of-the-art saliency detectors validate the effectiveness of this approach to validating visual saliency in video. This paper will provide the relevant datasets associated with the experiments.
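
As one illustration of how a detector's saliency map might be scored against a user-derived ground truth map, the sketch below computes the linear correlation coefficient (CC), a common saliency evaluation metric; the metric choice, function name and map shapes are assumptions for illustration, not details taken from the paper.

```python
import numpy as np

def correlation_coefficient(saliency_map, ground_truth_map):
    """Pearson correlation between a predicted saliency map and a ground truth map.

    Both inputs are 2D float arrays of the same shape; each map is standardised
    (zero mean, unit variance) before the correlation is computed.
    """
    s = (saliency_map - saliency_map.mean()) / (saliency_map.std() + 1e-12)
    g = (ground_truth_map - ground_truth_map.mean()) / (ground_truth_map.std() + 1e-12)
    return float((s * g).mean())

# Toy usage with random maps standing in for a detector output and a labelled map.
rng = np.random.default_rng(0)
pred = rng.random((90, 160))
truth = rng.random((90, 160))
print("CC score:", correlation_coefficient(pred, truth))
```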

Relevance:

10.00%

Publisher:

Abstract:

We study the regret of optimal strategies for online convex optimization games. Using von Neumann's minimax theorem, we show that the optimal regret in this adversarial setting is closely related to the behavior of the empirical minimization algorithm in a stochastic process setting: it is equal to the maximum, over joint distributions of the adversary's action sequence, of the difference between a sum of minimal expected losses and the minimal empirical loss. We show that the optimal regret has a natural geometric interpretation, since it can be viewed as the gap in Jensen's inequality for a concave functional (the minimum over the player's actions of the expected loss) defined on a set of probability distributions. We use this expression to obtain upper and lower bounds on the regret of an optimal strategy for a variety of online learning problems. Our method provides upper bounds without the need to construct a learning algorithm; the lower bounds provide explicit optimal strategies for the adversary.
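
One way to transcribe the identity described above, with assumed notation (A is the player's action set, the adversary's convex losses are ℓ_1, …, ℓ_T, and the supremum ranges over joint distributions P of the loss sequence):

```latex
% Minimax regret as described in the abstract; the notation is assumed, not quoted.
\[
  \mathcal{R}_T
  \;=\;
  \sup_{P}\;
  \mathbb{E}_{(\ell_1,\dots,\ell_T)\sim P}\!\left[
      \sum_{t=1}^{T} \min_{a \in A}
        \mathbb{E}\!\left[\ell_t(a)\,\middle|\,\ell_1,\dots,\ell_{t-1}\right]
      \;-\;
      \min_{a \in A} \sum_{t=1}^{T} \ell_t(a)
  \right]
\]
```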

Relevance:

10.00%

Publisher:

Abstract:

Many of the classification algorithms developed in the machine learning literature, including the support vector machine and boosting, can be viewed as minimum contrast methods that minimize a convex surrogate of the 0–1 loss function. The convexity makes these algorithms computationally efficient. The use of a surrogate, however, has statistical consequences that must be balanced against the computational virtues of convexity. To study these issues, we provide a general quantitative relationship between the risk as assessed using the 0–1 loss and the risk as assessed using any nonnegative surrogate loss function. We show that this relationship gives nontrivial upper bounds on excess risk under the weakest possible condition on the loss function—that it satisfies a pointwise form of Fisher consistency for classification. The relationship is based on a simple variational transformation of the loss function that is easy to compute in many applications. We also present a refined version of this result in the case of low noise, and show that in this case, strictly convex loss functions lead to faster rates of convergence of the risk than would be implied by standard uniform convergence arguments. Finally, we present applications of our results to the estimation of convergence rates in function classes that are scaled convex hulls of a finite-dimensional base class, with a variety of commonly used loss functions.
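
The central comparison described above is usually written as a single inequality; a hedged transcription (notation assumed: R(f) is the 0–1 risk, R* the Bayes risk, R_φ(f) the surrogate φ-risk, R_φ* its infimum, and ψ the variational transform of φ mentioned in the abstract):

```latex
% Excess-risk bound via the variational transform \psi of the surrogate loss \phi.
% For a classification-calibrated \phi, \psi is nonnegative and convex on [0,1],
% so a small surrogate excess risk forces a small 0-1 excess risk.
\[
  \psi\big( R(f) - R^{*} \big) \;\le\; R_{\phi}(f) - R_{\phi}^{*}
\]
% Example (assumed): for the hinge loss \phi(t) = \max(0, 1 - t), \psi(\theta) = \theta,
% giving R(f) - R^{*} \le R_{\phi}(f) - R_{\phi}^{*}.
```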

Relevance:

10.00%

Publisher:

Abstract:

Kernel-based learning algorithms work by embedding the data into a Euclidean space and then searching for linear relations among the embedded data points. The embedding is performed implicitly, by specifying the inner products between each pair of points in the embedding space. This information is contained in the so-called kernel matrix, a symmetric and positive semidefinite matrix that encodes the relative positions of all points. Specifying this matrix amounts to specifying the geometry of the embedding space and inducing a notion of similarity in the input space, which are classical model selection problems in machine learning. In this paper we show how the kernel matrix can be learned from data via semidefinite programming (SDP) techniques. When applied to a kernel matrix associated with both training and test data, this gives a powerful transductive algorithm: using the labeled part of the data, one can learn an embedding also for the unlabeled part. The similarity between test points is inferred from training points and their labels. Importantly, these learning problems are convex, so we obtain a method for learning both the model class and the function without local minima. Furthermore, this approach leads directly to a convex method for learning the 2-norm soft margin parameter in support vector machines, solving an important open problem.
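
A minimal sketch of the flavour of kernel learning described above, written with cvxpy and using kernel-target alignment as the objective rather than the paper's soft-margin criterion; the data, base kernels, variable names and trace normalisation are all assumptions for illustration only.

```python
import numpy as np
import cvxpy as cp

# Toy labeled training data (assumed for illustration).
rng = np.random.default_rng(0)
X = rng.standard_normal((40, 5))
y = np.sign(rng.standard_normal(40))

def rbf_kernel(X, gamma):
    """Gaussian RBF kernel matrix for the rows of X."""
    sq = ((X[:, None, :] - X[None, :, :]) ** 2).sum(axis=-1)
    return np.exp(-gamma * sq)

# A small family of base kernels; the learned kernel is a linear combination.
base_kernels = [rbf_kernel(X, g) for g in (0.1, 0.5, 2.0)]
target = np.outer(y, y)                      # "ideal" kernel built from the labels

mu = cp.Variable(len(base_kernels))          # combination weights (may be negative)
K = sum(mu[i] * base_kernels[i] for i in range(len(base_kernels)))

constraints = [K >> 0,                       # keep the learned kernel PSD (the SDP part)
               cp.trace(K) == len(y)]        # fix the overall scale
objective = cp.Maximize(cp.trace(K @ target))    # unnormalised kernel-target alignment

cp.Problem(objective, constraints).solve()
print("learned kernel weights:", np.round(mu.value, 3))
```

In the transductive setting described in the abstract, the same idea would be applied to base kernels computed over training and test points together, with the label-dependent objective restricted to the training block.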