950 results for Turner, Bradley


Relevance:

10.00%

Publisher:

Abstract:

This research aims to develop a capabilities-based conceptual framework to study the stage-specific innovation problems associated with the dynamic growth process of university spin-outs (hereafter referred to as USOs) in China. Based on the existing literature, pilot cases and five critical cases, this study explores the interconnections between entrepreneurial innovation problems and the configuration of innovative capabilities (those that acquire, mobilise and re-configure the key resources) across the four growth phases of the firm lifecycle. This paper aims to contribute to the literature in a holistic manner by providing a theoretical discussion of USOs' development, adding evidence from a rapidly growing emerging economy. To date, studies that investigate the development of USOs in China while recognising their heterogeneity in terms of capabilities remain sparse. Addressing this research gap will be of great interest to entrepreneurs, policy makers and venture investors. © Copyright 2010 Inderscience Enterprises Ltd.

Relevance:

10.00%

Publisher:

Abstract:

There are many methods for decomposing signals into a sum of amplitude- and frequency-modulated sinusoids. In this paper we take a new estimation-based approach. Identifying the problem as ill-posed, we show how to regularize the solution by imposing soft constraints on the amplitude and phase variables of the sinusoids. Estimation proceeds using a version of Kalman smoothing. We evaluate the method on synthetic and natural, clean and noisy signals, showing that it outperforms previous decompositions, but at a higher computational cost. © 2012 IEEE.
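A minimal sketch of this style of estimator is given below, under simplifying assumptions (a single sinusoid, fixed nominal frequency, known noise levels): the sinusoid is tracked with a two-dimensional latent state whose rotation rate sets the frequency, process noise acts as the soft constraint on amplitude and phase drift, and inference is a Kalman filter followed by a Rauch-Tung-Striebel smoother. The parameter values are illustrative, not those of the paper.

import numpy as np

def rotation(omega):
    c, s = np.cos(omega), np.sin(omega)
    return np.array([[c, -s], [s, c]])

def kalman_smooth_sinusoid(y, omega, lam=0.999, q=1e-4, r=1e-2):
    # Estimate a slowly varying amplitude and phase for one sinusoid of nominal
    # frequency `omega` (radians/sample) from the observed signal `y`.
    T = len(y)
    A = lam * rotation(omega)            # state transition: rotate, shrink slightly
    Q = q * np.eye(2)                    # process noise: soft constraint on drift
    H = np.array([[1.0, 0.0]])           # observe the cosine component only
    R = np.array([[r]])                  # observation noise variance

    m = np.zeros((T, 2)); P = np.zeros((T, 2, 2))
    m_preds = np.zeros((T, 2)); P_preds = np.zeros((T, 2, 2))
    m_pred, P_pred = np.zeros(2), np.eye(2)
    for t in range(T):                   # forward (filtering) pass
        m_preds[t], P_preds[t] = m_pred, P_pred
        S = H @ P_pred @ H.T + R
        K = P_pred @ H.T @ np.linalg.inv(S)
        m[t] = m_pred + K @ (y[t] - H @ m_pred)
        P[t] = P_pred - K @ H @ P_pred
        m_pred = A @ m[t]
        P_pred = A @ P[t] @ A.T + Q

    ms, Ps = m.copy(), P.copy()
    for t in range(T - 2, -1, -1):       # backward (RTS smoothing) pass
        G = P[t] @ A.T @ np.linalg.inv(P_preds[t + 1])
        ms[t] = m[t] + G @ (ms[t + 1] - m_preds[t + 1])
        Ps[t] = P[t] + G @ (Ps[t + 1] - P_preds[t + 1]) @ G.T

    amplitude = np.linalg.norm(ms, axis=1)
    phase = np.arctan2(ms[:, 1], ms[:, 0])
    return amplitude, phase

# Example: recover a slowly modulated 50 Hz tone sampled at 1 kHz.
fs, f0, T = 1000.0, 50.0, 2000
t = np.arange(T) / fs
signal = (1.0 + 0.5 * np.sin(2 * np.pi * 0.5 * t)) * np.cos(2 * np.pi * f0 * t)
amp, ph = kalman_smooth_sinusoid(signal + 0.05 * np.random.randn(T), 2 * np.pi * f0 / fs)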

Relevance:

10.00%

Publisher:

Abstract:

The airflow and thermal stratification produced by a localised heat source located at floor level in a closed room is of considerable practical interest and is commonly referred to as a 'filling box'. In rooms with low aspect ratios H/R ≲ 1 (room height H to characteristic horizontal dimension R) the thermal plume spreads laterally on reaching the ceiling and a descending horizontal 'front' forms, separating a stably stratified, warm upper region from cooler air below. The stratification is well predicted for H/R ≲ 1 by the original filling box model of Baines and Turner (J. Fluid Mech. 37 (1968) 51). This model represents a somewhat idealised situation of a plume rising from a point source of buoyancy alone; in particular, the momentum flux at the source is zero. In practical situations, real sources of heating and cooling in a ventilation system often include initial fluxes of both buoyancy and momentum, e.g. where a heating system vents warm air into a space. This paper describes laboratory experiments to determine the dependence of the 'front' formation and stratification on the source momentum and buoyancy fluxes of a single source, and on the location and relative strengths of two sources from which momentum and buoyancy fluxes were supplied separately. For a single source with a non-zero input of momentum, the rate of descent of the front is more rapid than for the case of zero source momentum flux, and increases with increasing momentum input. Increasing the source momentum flux effectively increases the height of the enclosure, and leads to enhanced overturning motions and finally to complete mixing for highly momentum-driven flows. Stratified flows may be maintained by reducing the aspect ratio of the enclosure. At these low aspect ratios, different long-time behaviour is observed depending on the nature of the heat input. A constant heat flux always produces a stratified interior at large times. On the other hand, a constant temperature supply ultimately produces a well-mixed space at the supply temperature. For separate sources of momentum and buoyancy, the developing stratification is shown to be strongly dependent on the separation of the sources and their relative strengths. Even at small separation distances the stratification initially exhibits horizontal inhomogeneity, with localised regions of warm fluid (from the buoyancy source) and cool fluid. This inhomogeneity is less pronounced as the strength of one source is increased relative to the other. Regardless of the strengths of the sources, a constant buoyancy flux source dominates after sufficiently large times, although the strength of the momentum source determines whether the enclosure is initially well mixed (strong momentum source) or stably stratified (weak momentum source). © 2001 Elsevier Science Ltd. All rights reserved.
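For reference, the classical model referred to above reduces to a single ordinary differential equation for the descending front. The sketch below assumes a pure plume with top-hat profiles and entrainment coefficient $\alpha$, rising from a point source of buoyancy flux $B$ into an enclosure of plan area $\pi R^2$; it is a standard textbook reduction of the Baines and Turner model, not a result quoted from this paper.

\[
Q(z) = C\,B^{1/3} z^{5/3},
\qquad
C = \frac{6\alpha}{5}\left(\frac{9\alpha}{10}\right)^{1/3}\pi^{2/3},
\qquad
\pi R^{2}\,\frac{\mathrm{d}z_f}{\mathrm{d}t} = -\,Q(z_f),
\]

where $Q(z)$ is the plume volume flux at height $z$ above the source and $z_f(t)$ is the height of the first front. For a pure plume the front descends ever more slowly and approaches the floor only asymptotically; adding source momentum flux increases the volume flux near the source, which is consistent with the more rapid front descent reported above.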

Relevance:

10.00%

Publisher:

Abstract:

The visual system must learn to infer the presence of objects and features in the world from the images it encounters, and as such it must, either implicitly or explicitly, model the way these elements interact to create the image. Do the response properties of cells in the mammalian visual system reflect this constraint? To address this question, we constructed a probabilistic model in which the identity and attributes of simple visual elements were represented explicitly and learnt the parameters of this model from unparsed, natural video sequences. After learning, the behaviour and grouping of variables in the probabilistic model corresponded closely to functional and anatomical properties of simple and complex cells in the primary visual cortex (V1). In particular, feature identity variables were activated in a way that resembled the activity of complex cells, while feature attribute variables responded much like simple cells. Furthermore, the grouping of the attributes within the model closely parallelled the reported anatomical grouping of simple cells in cat V1. Thus, this generative model makes explicit an interpretation of complex and simple cells as elements in the segmentation of a visual scene into basic independent features, along with a parametrisation of their moment-by-moment appearances. We speculate that such a segmentation may form the initial stage of a hierarchical system that progressively separates the identity and appearance of more articulated visual elements, culminating in view-invariant object recognition.
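The flavour of the identity/attribute factorisation described above can be conveyed with a toy generative sketch, given below. It is an illustration only, not the authors' model or learning algorithm: a binary variable says whether each oriented feature is present (identity), a phase variable gives its moment-by-moment appearance (attribute), and a quadrature-pair energy detector recovers the identity in a phase-invariant, complex-cell-like way. All parameter values are assumptions chosen for the example.

import numpy as np

def gabor(size=16, freq=0.25, theta=0.0, phase=0.0, sigma=3.0):
    # An oriented Gabor patch: the elementary visual feature used in this sketch.
    xs = np.arange(size) - size / 2
    X, Y = np.meshgrid(xs, xs)
    Xr = X * np.cos(theta) + Y * np.sin(theta)
    Yr = -X * np.sin(theta) + Y * np.cos(theta)
    envelope = np.exp(-(Xr ** 2 + Yr ** 2) / (2 * sigma ** 2))
    return envelope * np.cos(2 * np.pi * freq * Xr + phase)

rng = np.random.default_rng(0)
K, size = 4, 16
thetas = np.pi * np.arange(K) / K                 # K oriented features (assumed)

def sample_image(noise=0.05):
    present = rng.random(K) < 0.3                 # identity: which features appear
    phases = rng.uniform(0, 2 * np.pi, K)         # attribute: their current appearance
    img = np.zeros((size, size))
    for k in range(K):
        if present[k]:
            img += gabor(size, theta=thetas[k], phase=phases[k])
    return img + noise * rng.standard_normal((size, size)), present, phases

def energy(img, k):
    # A phase-invariant (complex-cell-like) detector for feature k pools a quadrature
    # pair; either filter on its own is phase-sensitive (simple-cell-like).
    r0 = np.sum(img * gabor(size, theta=thetas[k], phase=0.0))
    r1 = np.sum(img * gabor(size, theta=thetas[k], phase=np.pi / 2))
    return np.hypot(r0, r1)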

Relevance:

10.00%

Publisher:

Abstract:

The brain extracts useful features from a maelstrom of sensory information, and a fundamental goal of theoretical neuroscience is to work out how it does so. One proposed feature extraction strategy is motivated by the observation that the meaning of sensory data, such as the identity of a moving visual object, is often more persistent than the activation of any single sensory receptor. This notion is embodied in the slow feature analysis (SFA) algorithm, which uses “slowness” as a heuristic by which to extract semantic information from multi-dimensional time-series. Here, we develop a probabilistic interpretation of this algorithm, showing that inference and learning in the limiting case of a suitable probabilistic model yield exactly the results of SFA. Similar equivalences have proved useful in interpreting and extending comparable algorithms such as independent component analysis. For SFA, we use the equivalent probabilistic model as a conceptual springboard with which to motivate several novel extensions to the algorithm.
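For concreteness, here is a minimal sketch of the standard linear SFA algorithm that the probabilistic interpretation targets; it is the textbook eigenvalue formulation, not the probabilistic model or the extensions developed in the paper, and the example data are an illustrative assumption.

import numpy as np
from scipy.linalg import eigh

def linear_sfa(X, n_components=1):
    # X has shape (T, D): a multi-dimensional time series. Returns projection
    # weights, slowest direction first; slowness is measured by the variance of
    # the temporal difference of the projected signal.
    X = X - X.mean(axis=0)
    dX = np.diff(X, axis=0)
    C = X.T @ X / len(X)          # covariance of the signal
    Cdot = dX.T @ dX / len(dX)    # covariance of its temporal differences
    # Generalised eigenproblem Cdot w = lambda C w; small lambda means slow.
    eigvals, eigvecs = eigh(Cdot, C)
    return eigvecs[:, :n_components]

# Example (illustrative): mix a slow sinusoid with fast noise, then recover
# the slow direction from the mixture.
rng = np.random.default_rng(0)
t = np.linspace(0, 20 * np.pi, 5000)
sources = np.stack([np.sin(0.2 * t), rng.standard_normal(len(t))], axis=1)
mixing = np.array([[1.0, 0.7], [0.3, 1.0]])
X = sources @ mixing.T
W = linear_sfa(X, n_components=1)
slow_signal = (X - X.mean(axis=0)) @ W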

Relevance:

10.00%

Publisher:

Abstract:

Amplitude demodulation is an ill-posed problem and so it is natural to treat it from a Bayesian viewpoint, inferring the most likely carrier and envelope under probabilistic constraints. One such treatment is Probabilistic Amplitude Demodulation (PAD), which, whilst computationally more intensive than traditional approaches, offers several advantages. Here we provide methods for estimating the uncertainty in the PAD-derived envelopes and carriers, and for learning free parameters such as the time-scale of the envelope. We show how the probabilistic approach can naturally handle noisy and missing data. Finally, we indicate how to extend the model to signals which contain multiple modulators and carriers.
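To make the generative viewpoint concrete, below is a minimal MAP-style sketch of the modelling assumption behind this family of methods: the signal is a product of a positive, slowly varying envelope and a white Gaussian carrier, and the envelope is inferred by penalised optimisation. The smoothness and ridge weights (lam, gam) are illustrative stand-ins for the free parameters the paper learns, and the uncertainty estimates and missing-data handling described above are not shown.

import numpy as np
from scipy.optimize import minimize
from scipy.special import expit

def softplus(x):
    return np.logaddexp(0.0, x)

def map_envelope(y, lam=200.0, gam=1e-3):
    # MAP estimate of a positive, slowly varying envelope a_t = softplus(x_t),
    # assuming y_t ~ N(0, a_t^2); lam and gam are illustrative hyperparameters.
    def objective(x):
        a = softplus(x)
        nll = np.sum(0.5 * (y / a) ** 2 + np.log(a))        # -log p(y | a), up to constants
        grad = (-(y ** 2) / a ** 3 + 1.0 / a) * expit(x)     # chain rule through softplus
        d = np.diff(x)                                       # smoothness prior on x
        nll += 0.5 * lam * np.sum(d ** 2) + 0.5 * gam * np.sum(x ** 2)
        grad[:-1] -= lam * d
        grad[1:] += lam * d
        grad += gam * x
        return nll, grad
    x0 = np.log(np.expm1(np.abs(y) + 1e-3))                  # softplus^{-1}(|y|) start
    res = minimize(objective, x0, jac=True, method="L-BFGS-B")
    a = softplus(res.x)
    return a, y / a                                           # envelope and carrier

# Example: demodulate a noisy amplitude-modulated tone.
rng = np.random.default_rng(0)
t = np.linspace(0, 1, 2000)
y = (1 + 0.8 * np.sin(2 * np.pi * 3 * t)) * np.cos(2 * np.pi * 200 * t)
envelope, carrier = map_envelope(y + 0.05 * rng.standard_normal(len(t)))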

Relevance:

10.00%

Publisher:

Abstract:

We study unsupervised learning in a probabilistic generative model for occlusion. The model uses two types of latent variables: one indicates which objects are present in the image, and the other how they are ordered in depth. This depth order then determines how the positions and appearances of the objects present, specified in the model parameters, combine to form the image. We show that the object parameters can be learnt from an unlabelled set of images in which objects occlude one another. Exact maximum-likelihood learning is intractable. However, we show that tractable approximations to Expectation Maximization (EM) can be found if the training images each contain only a small number of objects on average. In numerical experiments it is shown that these approximations recover the correct set of object parameters. Experiments on a novel version of the bars test using colored bars, and experiments on more realistic data, show that the algorithm performs well in extracting the generating causes. Experiments based on the standard bars benchmark test for object learning show that the algorithm performs well in comparison to other recent component extraction approaches. The model and the learning algorithm thus connect research on occlusion with the research field of multiple-causes component extraction methods.
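The generative process has a simple sampling sketch, given below as an illustration of the latent-variable structure described above (presence variables plus a depth order), not of the authors' parameterisation or learning algorithm; the masks and appearances are random stand-ins for learned object parameters.

import numpy as np

rng = np.random.default_rng(1)
H, W, K = 8, 8, 4                             # image size and number of objects (assumed)

# Stand-ins for model parameters: per-object masks (where) and appearances (what).
masks = rng.random((K, H, W)) < 0.2
appearances = rng.uniform(0.2, 1.0, size=K)
background = 0.0

def sample_image(p_present=0.5):
    present = rng.random(K) < p_present        # which objects are present
    depth_order = rng.permutation(K)           # later entries are nearer to the viewer
    img = np.full((H, W), background)
    for k in depth_order:                      # back-to-front compositing
        if present[k]:
            img[masks[k]] = appearances[k]     # nearer objects occlude farther ones
    return img, present, depth_order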

Relevance:

10.00%

Publisher:

Abstract:

Natural sounds are structured on many time-scales. A typical segment of speech, for example, contains features that span four orders of magnitude: sentences ($\sim 1$ s); phonemes ($\sim 10^{-1}$ s); glottal pulses ($\sim 10^{-2}$ s); and formants ($\sim 10^{-3}$ s). The auditory system uses information from each of these time-scales to solve complicated tasks such as auditory scene analysis [1]. One route toward understanding how auditory processing accomplishes this analysis is to build neuroscience-inspired algorithms which solve similar tasks and to compare the properties of these algorithms with properties of auditory processing. There is, however, a discord: current machine-audition algorithms largely concentrate on the shorter time-scale structures in sounds, and the longer structures are ignored. The reason for this is twofold. Firstly, it is a difficult technical problem to construct an algorithm that utilises both sorts of information. Secondly, it is computationally demanding to simultaneously process data both at high resolution (to extract short temporal information) and for long duration (to extract long temporal information). The contribution of this work is to develop a new statistical model for natural sounds that captures structure across a wide range of time-scales, and to provide efficient learning and inference algorithms. We demonstrate the success of this approach on a missing data task.
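To make the notion of structure at several time-scales concrete, here is a toy synthesis sketch in which a fast carrier is multiplied by progressively slower positive modulators. It illustrates the kind of structure the model is designed to capture, not the statistical model itself; the sample rate, time-scales and softplus nonlinearity are assumptions chosen for the example.

import numpy as np

def smooth_noise(T, timescale, rng):
    # Gaussian noise low-pass filtered so it varies over roughly `timescale` samples.
    x = rng.standard_normal(T)
    half = int(3 * timescale)
    kernel = np.exp(-0.5 * (np.arange(-half, half + 1) / timescale) ** 2)
    return np.convolve(x, kernel / kernel.sum(), mode="same")

fs = 8000                                     # sample rate (illustrative)
T = fs                                        # one second of signal
rng = np.random.default_rng(0)

carrier = rng.standard_normal(T)              # fine, sub-millisecond structure
modulators = [np.log1p(np.exp(smooth_noise(T, int(fs * s), rng)))   # positive envelopes
              for s in (0.001, 0.01, 0.1)]    # formant-, glottal- and phoneme-like scales
sound = carrier.copy()
for m in modulators:
    sound *= m                                # each factor adds slower structure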

Relevance:

10.00%

Publisher:

Abstract:

Computational models of visual cortex, and in particular those based on sparse coding, have enjoyed much recent attention. Despite this currency, the question of how sparse or how over-complete a sparse representation should be has gone without a principled answer. Here, we use Bayesian model-selection methods to address these questions for a sparse-coding model based on a Student-t prior. Having validated our methods on toy data, we find that natural images are indeed best modelled by extremely sparse distributions, although for the Student-t prior the associated optimal basis size is only modestly over-complete.
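As a concrete reference point for the model class, here is a minimal sketch of MAP inference of the coefficients in a sparse-coding model with a Student-t prior; the random basis, noise level and degrees of freedom are illustrative assumptions, and the Bayesian model-selection machinery the paper uses to compare sparsity levels and basis sizes is not shown.

import numpy as np
from scipy.optimize import minimize

def map_coefficients(x, A, nu=2.5, sigma=0.1):
    # MAP estimate of coefficients s in x = A s + noise, with p(s_i) Student-t(nu).
    def objective(s):
        resid = x - A @ s
        nll = resid @ resid / (2 * sigma ** 2)                   # Gaussian likelihood
        nll += 0.5 * (nu + 1) * np.sum(np.log1p(s ** 2 / nu))    # Student-t prior
        grad = -A.T @ resid / sigma ** 2 + (nu + 1) * s / (nu + s ** 2)
        return nll, grad
    s0 = A.T @ x                                                 # simple initialisation
    return minimize(objective, s0, jac=True, method="L-BFGS-B").x

# Example with a random, 2x over-complete basis for 8x8 patches (illustrative).
rng = np.random.default_rng(0)
D, K = 64, 128
A = rng.standard_normal((D, K))
A /= np.linalg.norm(A, axis=0)
patch = rng.standard_normal(D)               # stand-in for a whitened image patch
s_hat = map_coefficients(patch, A)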

Relevance:

10.00%

Publisher:

Abstract:

Auditory scene analysis is extremely challenging. One approach, perhaps that adopted by the brain, is to base useful representations of sounds on prior knowledge about their statistical structure. For example, sounds with harmonic sections are common, and so time-frequency representations are efficient. Most current representations concentrate on the shorter components. Here, we propose representations for structures on longer time-scales, like the phonemes and sentences of speech. We decompose a sound into a product of processes, each with its own characteristic time-scale. This demodulation cascade relates to classical amplitude demodulation, but traditional algorithms fail to realise the representation fully. A new approach, probabilistic amplitude demodulation, is shown to outperform the established methods and to extend easily to the representation of a full demodulation cascade.
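The target representation can be made concrete with a simple classical sketch: repeatedly extract a slow positive envelope (here with a Hilbert envelope plus low-pass smoothing) and divide it out, leaving a faster residual to demodulate again. This is the kind of ad hoc recursion that, as noted above, fails to realise the representation fully; the cut-off frequencies and test signal are illustrative assumptions, and the probabilistic method of the paper is not shown.

import numpy as np
from scipy.signal import hilbert, butter, sosfiltfilt

def demodulate_once(x, fs, cutoff_hz):
    # Split x into a slow positive envelope and the residual carrier x / envelope.
    sos = butter(2, cutoff_hz, btype="low", fs=fs, output="sos")
    env = sosfiltfilt(sos, np.abs(hilbert(x)))
    env = np.maximum(env, 1e-6)                # keep the division well-defined
    return env, x / env

def demodulation_cascade(x, fs, cutoffs=(2.0, 20.0, 200.0)):
    # Return [slowest modulator, ..., fastest modulator, carrier].
    modulators, carrier = [], x
    for c in cutoffs:                          # extract progressively faster structure
        env, carrier = demodulate_once(carrier, fs, c)
        modulators.append(env)
    return modulators + [carrier]

# Example: a 1 s amplitude-modulated test tone at 8 kHz (illustrative).
fs = 8000
t = np.arange(fs) / fs
test_signal = (1 + 0.5 * np.sin(2 * np.pi * 1.5 * t)) * np.sin(2 * np.pi * 300 * t)
components = demodulation_cascade(test_signal, fs)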

Relevance:

10.00%

Publisher:

Abstract:

Variational methods are a key component of the approximate inference and learning toolbox. These methods fill an important middle ground, retaining distributional information about uncertainty in latent variables, unlike maximum a posteriori (MAP) methods, and yet generally requiring less computational time than Markov chain Monte Carlo methods. In particular, the variational Expectation Maximisation (vEM) and variational Bayes algorithms, both involving variational optimisation of a free energy, are widely used in time-series modelling. Here, we investigate the success of vEM in simple probabilistic time-series models. First we consider the inference step of vEM, and show that a consequence of the well-known compactness property of variational inference is a failure to propagate uncertainty in time, thus limiting the usefulness of the retained distributional information. In particular, the uncertainty may appear to be smallest precisely when the approximation is poorest. Second, we consider parameter learning and analytically reveal systematic biases in the parameters found by vEM. Surprisingly, simpler variational approximations (such as mean-field) can lead to less bias than more complicated structured approximations.
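The compactness property has a two-variable toy illustration, given below (an illustrative example, not an analysis from the paper): for a correlated Gaussian posterior, the mean-field variational approximation reports each variable's conditional variance rather than its marginal variance, so the stated uncertainty shrinks exactly as the true coupling, and hence the approximation error, grows.

import numpy as np

rho = 0.95                                      # correlation between the two latents (assumed)
Sigma = np.array([[1.0, rho], [rho, 1.0]])      # true posterior covariance
Lambda = np.linalg.inv(Sigma)                   # precision matrix

marginal_var = Sigma[0, 0]                      # true marginal variance of x1
meanfield_var = 1.0 / Lambda[0, 0]              # variance reported by mean-field q(x1)

print(f"true marginal variance : {marginal_var:.3f}")
print(f"mean-field variance    : {meanfield_var:.3f}")   # equals 1 - rho**2 here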

Relevance:

10.00%

Publisher:

Abstract:

Human listeners can identify vowels regardless of speaker size, although the sound waves for an adult and a child speaking the 'same' vowel would differ enormously. The differences are mainly due to differences in vocal tract length (VTL) and glottal pulse rate (GPR), which are both related to body size. Automatic speech recognition machines are notoriously bad at understanding children if they have been trained on the speech of an adult. In this paper, we propose that the auditory system adapts its analysis of speech sounds, dynamically and automatically, to the GPR and VTL of the speaker on a syllable-to-syllable basis. We illustrate how this rapid adaptation might be performed with the aid of a computational version of the auditory image model, and we propose that an auditory preprocessor of this form would improve the robustness of speech recognisers.
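A small numerical sketch of the VTL part of this normalisation problem is given below; the formant values are rough textbook figures for an adult /a/-like vowel and the VTL ratio is an assumed illustration, not data from the paper. The point is that a uniform VTL scaling shifts the whole formant pattern by a constant on a log-frequency axis, which is the kind of transformation a rapidly adapting preprocessor could remove.

import numpy as np

adult_formants = np.array([730.0, 1090.0, 2440.0])   # Hz, rough /a/ values (assumed)
vtl_ratio = 1.4                                      # adult VTL / child VTL (assumed)
child_formants = adult_formants * vtl_ratio          # shorter tract, scaled-up formants

log_shift = np.log(child_formants) - np.log(adult_formants)
# Every element of log_shift equals log(vtl_ratio): on a log-frequency axis the
# child's pattern is the adult's pattern translated by a single constant, so a
# one-parameter shift per syllable suffices to normalise for VTL.
print(log_shift, np.log(vtl_ratio))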

Relevance:

10.00%

Publisher:

Abstract:

This paper describes the University of Cambridge Engineering Design Centre's (EDC) case for inclusive design, based on 10 years of research, promotion and knowledge transfer. In summary, inclusive design applies an understanding of customer diversity to inform decisions throughout the development process, in order to better satisfy the needs of more people. Products that are more inclusive can reach a wider market, improve customer satisfaction and drive business success. The rapidly ageing population increases the importance of this approach. The case presented here has helped to convince BT, Nestlé and others to adopt an inclusive approach.

Relevance:

10.00%

Publisher:

Abstract:

Designers often assume that their users will have some prior experience with digital technology. We examined these levels of prior experience by surveying the frequency and ease of use of a range of technology products. 362 people participated as part of a larger UK nationwide survey of people's capabilities and characteristics, conducted to inform product design. We found that frequency and self-reported ease of use are indeed correlated for all of the products. Furthermore, both frequency and ease of use declined significantly with age for most of the products. In fact, 29% of the over-65s had never or rarely used any of the products, except for digital TV. We conclude that interfaces need to be designed carefully to avoid implicit assumptions about users' previous technology use.