973 results for statistical speaker models
Abstract:
Information systems have developed to the stage that there is plenty of data available in most organisations but there are still major problems in turning that data into information for management decision making. This thesis argues that the link between decision support information and transaction processing data should be through a common object model which reflects the real world of the organisation and encompasses the artefacts of the information system. The CORD (Collections, Objects, Roles and Domains) model is developed which is richer in appropriate modelling abstractions than current Object Models. A flexible Object Prototyping tool based on a Semantic Data Storage Manager has been developed which enables a variety of models to be stored and experimented with. A statistical summary table model COST (Collections of Objects Statistical Table) has been developed within CORD and is shown to be adequate to meet the modelling needs of Decision Support and Executive Information Systems. The COST model is supported by a statistical table creator and editor COSTed which is also built on top of the Object Prototyper and uses the CORD model to manage its metadata.
Abstract:
This paper uses a feminist post-structuralist approach to examine the gendered identities of a sample of British business leaders. While recent national surveys offer many material reasons why women are acutely under-represented as business leaders, the role of language is rarely addressed. This paper explores the ways in which ten senior women and men construct their sense of leadership identity through the medium of interview narratives. Drawing upon two post-structuralist models of analysis (Derrida's 1987 theory of deconstruction and Bakhtin's 1927/1981 concept of double-voiced discourse), the paper shows how both females and males are able to shift pragmatically between interwoven corporate discourses, which demand competing cultural allegiances from one moment to the next, allegiances constantly tested by the rapid change and uncertainty that characterise global business. While male leaders experience a relative freedom of movement between different cultural discourses, female leaders are circumscribed by negative and reductive representations of female speech and behaviour. In sum, senior women are required constantly to observe, review, police and repair their use of leadership language, which potentially undermines their confidence and authority as leaders.
Abstract:
This thesis describes the procedure and results from four years' research undertaken through the IHD (Interdisciplinary Higher Degrees) Scheme at Aston University in Birmingham, sponsored by the SERC (Science and Engineering Research Council) and Monk Dunstone Associates, Chartered Quantity Surveyors. A stochastic networking technique, VERT (Venture Evaluation and Review Technique), was used to model the pre-tender costs of public health, heating, ventilating, air-conditioning, fire protection, lifts and electrical installations within office developments. The model enabled the quantity surveyor to analyse, manipulate and explore complex scenarios which previously had defied ready mathematical analysis. The process involved the examination of historical material costs, labour factors and design performance data. Components and installation types were defined and formatted. Data was updated and adjusted using mechanical and electrical pre-tender cost indices and location, selection of contractor, contract sum, height and site condition factors. Ranges of cost, time and performance data were represented by probability density functions and defined by constant, uniform, normal and beta distributions. These variables and a network of the interrelationships between services components provided the framework for analysis. The VERT program, in this particular study, relied upon Monte Carlo simulation to model the uncertainties associated with pre-tender estimates of all possible installations. The computer generated output in the form of relative and cumulative frequency distributions of current element and total services costs, critical path analyses and details of statistical parameters. From this data, alternative design solutions were compared, the degree of risk associated with estimates was determined, heuristics were tested and redeveloped, and cost-significant items were isolated for closer examination.
The resultant models successfully combined cost, time and performance factors and provided the quantity surveyor with an appreciation of the cost ranges associated with the various engineering services design options.
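The Monte Carlo step described above can be sketched in miniature. The element names, cost ranges and distribution parameters below are invented for illustration only; they are not the thesis's data, and only the structure (sampling uniform, normal and beta cost distributions and summarising the resulting frequency distribution) mirrors the described method.

```python
import random
import statistics

random.seed(42)  # reproducible illustration

def sample_total_cost():
    """Draw one simulated total services cost (all figures hypothetical)."""
    heating = random.uniform(90_000, 120_000)           # uniform cost range
    electrical = random.normalvariate(150_000, 12_000)  # normal(mean, sd)
    lifts = 60_000 + 40_000 * random.betavariate(2, 5)  # scaled beta range
    return heating + electrical + lifts

# Repeated sampling yields the frequency distribution of total cost.
runs = [sample_total_cost() for _ in range(10_000)]
mean_cost = statistics.mean(runs)
p90 = sorted(runs)[int(0.9 * len(runs))]  # 90th-percentile estimate

print(f"mean simulated total cost: {mean_cost:,.0f}")
print(f"90% of simulated totals fall below: {p90:,.0f}")
```

From such a cumulative distribution the estimator can read off cost ranges and the risk attached to any single-figure estimate, which is the use the thesis describes.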
Abstract:
Common approaches to IP-traffic modelling have featured the use of stochastic models based on the Markov property, which can be classified into black box and white box models according to the approach used for modelling traffic. White box models are simple to understand and transparent, and a physical meaning can be attributed to each of the associated parameters. To exploit this key advantage, this thesis explores the use of simple classic continuous-time Markov models, based on a white box approach, to model not only the network traffic statistics but also the source behaviour with respect to the network and application. The thesis is divided into two parts. The first part focuses on the use of simple Markov and semi-Markov traffic models, starting from the simplest two-state model and moving up to n-state models with Poisson and non-Poisson statistics. The thesis then introduces the convenient-to-use, mathematically derived Gaussian Markov models, which are used to model the measured network IP traffic statistics. As one of its most significant contributions, the thesis establishes the significance of second-order density statistics, revealing that, in contrast to first-order density statistics, they carry much more unique information on traffic sources and behaviour. The thesis then exploits Gaussian Markov models to capture these unique features, and finally shows how simple classic Markov models coupled with second-order density statistics provide an excellent tool for capturing maximum traffic detail, which in itself is the essence of good traffic modelling. The second part of the thesis studies the ON-OFF characteristics of VoIP traffic with reference to accurate measurements of the ON and OFF periods, made from a large multilingual database of over 100 hours' worth of VoIP call recordings.
The impact of the language, prosodic structure and speech rate of the speaker on the statistics of the ON-OFF periods is analysed and relevant conclusions are presented. Finally, an ON-OFF VoIP source model with log-normal transitions is proposed as an ideal candidate for modelling VoIP traffic, and its results are compared with those of previously published work.
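A minimal sketch of such an ON-OFF source with log-normally distributed talkspurt and silence durations follows. The log-normal parameters are placeholder assumptions for illustration, not the fitted values from the thesis.

```python
import random

random.seed(7)  # reproducible illustration

def simulate_on_off(n_periods, mu_on=0.0, sigma_on=0.6,
                    mu_off=-0.3, sigma_off=0.8):
    """Return alternating (state, duration) pairs, starting in ON.

    Durations are drawn from log-normal distributions, as in the
    ON-OFF VoIP source model described above (parameters hypothetical).
    """
    periods = []
    for i in range(n_periods):
        if i % 2 == 0:
            periods.append(("ON", random.lognormvariate(mu_on, sigma_on)))
        else:
            periods.append(("OFF", random.lognormvariate(mu_off, sigma_off)))
    return periods

trace = simulate_on_off(1000)
on_total = sum(d for s, d in trace if s == "ON")
off_total = sum(d for s, d in trace if s == "OFF")
activity = on_total / (on_total + off_total)  # fraction of time talking
print(f"simulated voice activity factor: {activity:.2f}")
```

In a real fit, the mu and sigma parameters would be estimated from the measured ON and OFF period statistics of the call-recording database.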
Abstract:
Microfluidics has recently emerged as a new method of manufacturing liposomes, allowing reproducible mixing in milliseconds on the nanoliter scale. Here we investigate microfluidics-based manufacturing of liposomes. The aim of these studies was to assess the parameters of a microfluidic process by varying the total flow rate (TFR) and the flow rate ratio (FRR) of the solvent and aqueous phases. Design of experiments and multivariate data analysis were used to increase process understanding and to develop predictive and correlative models. A high FRR led to the bottom-up synthesis of liposomes and correlated strongly with vesicle size, demonstrating the ability to control liposome size in-process; liposomes of 50 nm were reproducibly manufactured. Furthermore, we demonstrate the potential for high-throughput manufacturing of liposomes using microfluidics, with a four-fold increase in the volumetric flow rate while maintaining liposome characteristics. The efficacy of these liposomes was demonstrated in transfection studies and captured in predictive models. Mathematical modelling identified FRR as the key variable in the microfluidic process, with the highest impact on liposome size, polydispersity and transfection efficiency. This study demonstrates microfluidics as a robust and high-throughput method for the scalable and highly reproducible manufacture of size-controlled liposomes. Furthermore, the application of statistically based process control increases understanding and allows for the generation of a design space for controlled particle characteristics.
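As a hedged illustration of the kind of correlative size-versus-FRR model described, the sketch below fits a straight line to invented (FRR, size) pairs. The data points merely mimic the reported trend of smaller vesicles at higher FRR; they are not measurements from the study.

```python
# Hypothetical (FRR, mean vesicle diameter) pairs for illustration only.
frr = [1, 2, 3, 4, 5]
size_nm = [120, 90, 70, 58, 50]

# Ordinary least squares for a single predictor:
# slope = S_xy / S_xx, intercept = mean(y) - slope * mean(x).
n = len(frr)
mean_x = sum(frr) / n
mean_y = sum(size_nm) / n
s_xy = sum((x - mean_x) * (y - mean_y) for x, y in zip(frr, size_nm))
s_xx = sum((x - mean_x) ** 2 for x in frr)
slope = s_xy / s_xx
intercept = mean_y - slope * mean_x

# Negative slope: size decreases as FRR increases.
print(f"fitted model: size ≈ {intercept:.1f} {slope:+.1f} * FRR")
```

The actual study used design-of-experiments and multivariate analysis over several responses (size, polydispersity, transfection efficiency); this single-response fit only shows the shape of a correlative model.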
Abstract:
Recently, the temporal and statistical properties of quasi-CW fiber lasers have attracted great attention. In particular, the properties of Raman fiber lasers (RFLs) have been studied both numerically and experimentally [1,2]. Experimental investigation is more challenging, as the full generation optical bandwidth (typically hundreds of GHz for RFLs) is much bigger than the real-time bandwidth of oscilloscopes (up to 60 GHz for the newest models). Experimentally measured time dynamics are therefore highly bandwidth-averaged and do not provide precise information about the overall statistical properties. To overcome this, one can use spectral filtering to study temporal and statistical properties within an optical bandwidth comparable with the measurement bandwidth [3], or use indirect measurements [4]. Ytterbium-doped fiber lasers (YDFLs) are more suitable for experimental investigation, as their generation spectrum is usually 10 times narrower. Moreover, ultra-narrow-band generation has recently been demonstrated in a YDFL [5], which in principle makes it possible to measure time dynamics and statistics in real time using conventional oscilloscopes. © 2013 IEEE.
Abstract:
Around 80% of the 63 million people in the UK live in urban areas, where demand for affordable housing is highest. The supply of new dwellings falls a long way short of demand, and with an average annual replacement rate of 0.5%, more than 80% of the existing residential housing stock will still be in use by 2050. A high proportion of owner-occupiers, a weak private rental sector and a lack of sustainable financing models render England's housing market one of the least responsive in the developed world. As exploratory research, the purpose of this paper is to examine the provision of social housing in the United Kingdom, with a particular focus on England, and to set out implications for housing associations delivering sustainable community development. The paper is based on an analysis of historical data series (Census data), current macro-economic data and population projections to 2033. It identifies a chronic undersupply of affordable housing in England, which is likely to be exacerbated by demographic development, changes in household composition and reduced availability of finance to develop new homes. Based on the housing market trends analysed in this paper, opportunities are identified for policy makers to remove barriers to the delivery of new affordable homes, and for social housing providers to evolve their business models by taking a wider role in sustainable community development.
Abstract:
The accurate identification of T-cell epitopes remains a principal goal of bioinformatics within immunology. As the immunogenicity of peptide epitopes is dependent on their binding to major histocompatibility complex (MHC) molecules, the prediction of binding affinity is a prerequisite to the reliable prediction of epitopes. The iterative self-consistent (ISC) partial-least-squares (PLS)-based additive method is a recently developed bioinformatic approach for predicting class II peptide-MHC binding affinity. The ISC-PLS method overcomes many of the conceptual difficulties inherent in the prediction of class II peptide-MHC affinity, such as the binding of a mixed population of peptide lengths due to the open-ended class II binding site. The method has applications in both the accurate prediction of class II epitopes and the manipulation of affinity for heteroclitic and competitor peptides. The method is applied here to six class II mouse alleles (I-Ab, I-Ad, I-Ak, I-As, I-Ed, and I-Ek) and included peptides up to 25 amino acids in length. A series of regression equations highlighting the quantitative contributions of individual amino acids at each peptide position was established. The initial model for each allele exhibited only moderate predictivity. Once the set of selected peptide subsequences had converged, the final models exhibited a satisfactory predictive power. Convergence was reached between the 4th and 17th iterations, and the leave-one-out cross-validation statistical terms (q2, SEP, and NC) ranged between 0.732 and 0.925, 0.418 and 0.816, and 1 and 6, respectively. The non-cross-validated statistical terms r2 and SEE ranged between 0.98 and 0.995 and 0.089 and 0.180, respectively. The peptides used in this study are available from the AntiJen database (http://www.jenner.ac.uk/AntiJen). The PLS method is available commercially in the SYBYL molecular modeling software package.
The resulting models, which can be used for accurate T-cell epitope prediction, will be made freely available online (http://www.jenner.ac.uk/MHCPred).
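The leave-one-out q2 statistic quoted above can be illustrated on synthetic data. The sketch below substitutes ordinary least squares for the PLS regression used in the study (an assumed simplification), since q2 is formed the same way in both cases: q2 = 1 - PRESS / (total sum of squares about the mean), where PRESS accumulates the squared error of predicting each left-out observation.

```python
import numpy as np

# Synthetic regression problem standing in for the peptide descriptors.
rng = np.random.default_rng(0)
X = rng.normal(size=(30, 3))
y = X @ np.array([1.5, -2.0, 0.5]) + rng.normal(scale=0.3, size=30)

# Leave-one-out cross-validation: refit without observation i,
# then predict it, accumulating the squared prediction errors (PRESS).
press = 0.0
for i in range(len(y)):
    mask = np.arange(len(y)) != i
    coef, *_ = np.linalg.lstsq(X[mask], y[mask], rcond=None)
    press += (y[i] - X[i] @ coef) ** 2

q2 = 1.0 - press / np.sum((y - y.mean()) ** 2)
print(f"leave-one-out q2 = {q2:.3f}")
```

A q2 near 1 indicates that the model predicts held-out observations well; the study's reported range of 0.732 to 0.925 across alleles is interpreted on the same scale.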
Abstract:
Mathematics Subject Classification 2010: 26A33, 33E99, 15A52, 62E15.
Abstract:
MSC 2010: 15A15, 15A52, 33C60, 33E12, 44A20, 62E15. Dedicated to Professor R. Gorenflo on the occasion of his 80th birthday.
Abstract:
2000 Mathematics Subject Classification: 62G08, 62P30.
Abstract:
We build the conditional least squares estimator of the parameter θ0 based on the observation of a single trajectory of {Zk, Ck}k, and give conditions ensuring its strong consistency. The particular case of general linear models, in which θ0 decomposes into a pair of sub-parameters, and among them regenerative processes, is studied in more detail. In this framework, we also prove the consistency of the estimator of the second sub-parameter, although it belongs to an asymptotically negligible part of the model, and the asymptotic law of the estimator may also be calculated.
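Since the abstract does not reproduce the full model, the following is a minimal sketch of the conditional least squares idea applied to a simple AR(1) process (an assumption for illustration, not the paper's model): for Z_{k+1} = θ Z_k + noise, minimising the sum of (Z_{k+1} - θ Z_k)^2 over θ gives the closed form θ̂ = Σ Z_k Z_{k+1} / Σ Z_k².

```python
import random

random.seed(1)  # reproducible illustration

# Simulate one trajectory of an AR(1) process with true theta = 0.6.
theta_true = 0.6
z = [0.0]
for _ in range(5000):
    z.append(theta_true * z[-1] + random.gauss(0, 1))

# Conditional least squares: minimise sum_k (Z_{k+1} - theta * Z_k)^2.
# Setting the derivative in theta to zero yields the estimator below.
num = sum(z[k] * z[k + 1] for k in range(len(z) - 1))
den = sum(z[k] ** 2 for k in range(len(z) - 1))
theta_hat = num / den

print(f"CLS estimate of theta: {theta_hat:.3f}")
```

The estimator uses only the single observed trajectory, mirroring the single-trajectory setting of the abstract; consistency here follows from the ergodicity of the stationary AR(1) process.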
Abstract:
2000 Mathematics Subject Classification: 62P10, 92C40
Abstract:
2000 Mathematics Subject Classification: 62P10, 92C20
Abstract:
Most financial models assume that markets are efficient. As a result, numerous scientific studies have focused on testing the efficient market hypothesis and on proving or disproving it. To date, these attempts have been unsuccessful. In these studies, analyses of the efficient market hypothesis were based on the price evolution of a given asset, through which returns were examined. In recent years, however, the research focus has shifted from price evolution to a more elementary factor, the limit order book: on order-driven markets it is ultimately the orders submitted to the limit order book that determine prices. Since a notable number of stock markets operate as order-driven markets, researchers considered it worthwhile to analyse the statistical properties of the limit order book instead, in the hope of getting closer to a proof or disproof of the efficient market hypothesis. The purpose of this study is to summarise the basic statistical properties of the limit order book on the basis of the scientific work published so far, and to highlight whether these studies have in fact contributed to the proof or disproof of the efficient market hypothesis.