Abstract:
Numerous tools and techniques have been developed to eliminate or reduce waste and implement lean concepts in the manufacturing environment. However, appropriate lean tools must be selected and implemented to fulfil a manufacturer's needs within its budgetary constraints. It is therefore important to identify those needs and implement only the tools that contribute the greatest benefit to them. In this research, a mathematical model is proposed for maximising the perceived value of manufacturer needs, and a step-by-step methodology is developed for selecting the best performance metrics along with appropriate lean strategies within the budgetary constraints. The proposed model and method are demonstrated with the help of a case study.
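The abstract does not give the model's exact formulation; under the assumption that it resembles value-maximising tool selection under a budget, a minimal 0/1-knapsack sketch (with invented tool names, values, and costs) could look like this:

```python
# Hypothetical illustration: choose lean tools to maximise total perceived
# value under a budget, cast as a 0/1 knapsack solved by dynamic programming.
# The tools, values, and costs below are invented, not from the paper.

def select_tools(tools, budget):
    """tools: list of (name, value, cost); budget: integer budget."""
    # best[b] = (total value, chosen tool names) achievable with budget b
    best = [(0, [])] * (budget + 1)
    for name, value, cost in tools:
        for b in range(budget, cost - 1, -1):  # descending: each tool used once
            candidate = (best[b - cost][0] + value,
                         best[b - cost][1] + [name])
            if candidate[0] > best[b][0]:
                best[b] = candidate
    return best[budget]

tools = [("5S", 30, 20), ("Kanban", 45, 35),
         ("TPM", 60, 50), ("Value stream mapping", 25, 15)]
print(select_tools(tools, budget=70))  # -> value 100 with three of the tools
```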
Abstract:
Performance comparisons between file signatures and inverted files for text retrieval have previously shown several significant shortcomings of file signatures relative to inverted files. The inverted file approach underpins most state-of-the-art search engine algorithms, such as Language and Probabilistic models. It has been widely accepted that traditional file signatures are inferior alternatives to inverted files. This paper describes TopSig, a new approach to the construction of file signatures. Many advances in semantic hashing and dimensionality reduction have been made in recent times, but they have not so far been linked to general-purpose, signature-file-based search engines. This paper introduces a different signature file approach that builds upon and extends these recent advances. We demonstrate significant improvements in the performance of signature-file-based indexing and retrieval, performance comparable to that of state-of-the-art inverted-file-based systems, including Language models and BM25. These findings suggest that file signatures offer a viable alternative to inverted files in suitable settings, and they position the file signature model within the class of Vector Space retrieval models.
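TopSig's construction is not reproduced here; as a minimal sketch of the signature-file family it extends, the following illustrates random-projection bit signatures compared by Hamming distance (the corpus, vocabulary handling, and signature width are all illustrative assumptions):

```python
import numpy as np

rng = np.random.default_rng(0)
docs = ["the quick brown fox", "probabilistic language models",
        "inverted file indexing", "signature file retrieval"]

# Build a tiny term vocabulary from the toy corpus.
vocab = sorted({w for d in docs for w in d.split()})
index = {w: i for i, w in enumerate(vocab)}

def tf_vector(text):
    v = np.zeros(len(vocab))
    for w in text.split():
        if w in index:
            v[index[w]] += 1.0
    return v

# Random projection to B bits: sign of projection onto random hyperplanes.
B = 64
planes = rng.standard_normal((B, len(vocab)))

def signature(text):
    return planes @ tf_vector(text) >= 0   # boolean array of B bits

sigs = np.array([signature(d) for d in docs])

def search(query, k=2):
    q = signature(query)
    dists = (sigs != q).sum(axis=1)        # Hamming distances to each doc
    return [(docs[i], int(dists[i])) for i in np.argsort(dists)[:k]]

print(search("signature indexing"))
```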
Abstract:
Circuit-breakers (CBs) are subject to electrical stresses when restrikes occur during capacitor bank operation. These stresses arise from the overvoltages across CBs, the interrupting currents and the rate of rise of recovery voltage (RRRV), and they also depend on the type of system grounding and the type of dielectric strength curve. The aim of this study is to demonstrate a restrike waveform predictive model for an SF₆ CB that considers both grounded and ungrounded systems, and to compare computational accuracy when applying the cold withstand dielectric strength curve versus the hot recovery dielectric strength curve, including point-on-wave (POW) recommendations for assessing extension of the CB's remaining life. Stresses on an SF₆ CB in a typical 400 kV system were simulated, and the results of the applications are presented. The simulated restrike waveforms, with features identified using the wavelet transform, can be used to develop a restrike diagnostic algorithm for locating a substation experiencing breaker restrikes. The study found that, in the restrike simulations, the hot withstand dielectric strength curve has a lower magnitude than the cold withstand curve. Computational accuracy improved with the hot withstand dielectric strength curve, and POW-controlled switching can extend the life of an SF₆ CB.
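As a rough, hedged illustration of the wavelet-based feature identification the abstract describes (not the paper's algorithm), one might localise a synthetic restrike transient by the energy of fine-scale detail coefficients; the signal parameters are invented and PyWavelets is assumed to be available:

```python
import numpy as np
import pywt  # PyWavelets, assumed available

fs = 100_000                     # 100 kHz sampling rate (hypothetical)
t = np.arange(0, 0.04, 1 / fs)   # two cycles of a 50 Hz system
v = np.sin(2 * np.pi * 50 * t)   # idealised recovery voltage

# Inject a synthetic restrike: a short, damped high-frequency burst
# (30 kHz, so it falls in the finest detail band, fs/4 to fs/2).
burst = (t > 0.021) & (t < 0.0215)
v[burst] += 0.8 * np.sin(2 * np.pi * 30_000 * t[burst]) * \
            np.exp(-(t[burst] - 0.021) / 1e-4)

# Multilevel wavelet decomposition; the finest details capture the burst.
coeffs = pywt.wavedec(v, "db4", level=5)
cD1 = coeffs[-1]                 # level-1 detail coefficients

# Localise the restrike-like transient by thresholding coefficient energy.
energy = cD1 ** 2
hits = np.where(energy > 10 * energy.mean())[0]
if hits.size:
    # Each level-1 coefficient spans roughly two samples of the signal.
    print(f"restrike-like transient near t = {hits[0] * 2 / fs:.5f} s")
```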
Abstract:
Given global demand for new infrastructure, governments face substantial challenges in funding new infrastructure while simultaneously delivering Value for Money (VfM). The paper begins with an update on a key development in a new early/first-order procurement decision-making model that deploys production cost/benefit theory and theories concerning transaction costs from the New Institutional Economics, in order to identify the procurement mode likely to deliver the best ratio of production and transaction costs to production benefits, and therefore superior VfM relative to alternative procurement modes. In doing so, the new procurement model is also able to address the uncertainty concerning the relative merits of Public-Private Partnerships (PPP) and non-PPP procurement approaches. The main aim of the paper is to develop competition as a dependent variable/proxy for VfM and a hypothesis (overarching proposition), as well as to develop a research method to test the new procurement model. Competition reflects both production costs and benefits (absolute level of competition) and transaction costs (level of realised competition), and is a key proxy for VfM. Using competition as a proxy for VfM, the overarching proposition is given as: when the actual procurement mode matches the predicted (theoretical) procurement mode (informed by the new procurement model), then actual competition is expected to match potential competition (based on actual capacity). To collect data to test this proposition, the research method developed in this paper combines survey and case study approaches. More specifically, data collection instruments for the surveys on actual procurement, actual competition and potential competition are outlined. Finally, plans for analysing the survey data are briefly mentioned, along with the planned use of analytical pattern matching to deploy the new procurement model and derive the predicted (theoretical) procurement mode.
Abstract:
A model for drug diffusion from a spherical polymeric drug delivery device is considered. The model contains two key features. The first is that solvent diffuses into the polymer, which then transitions from a glassy to a rubbery state. The interface between the two states of polymer is modelled as a moving boundary, whose speed is governed by a kinetic law; the same moving boundary problem arises in the one-phase limit of a Stefan problem with kinetic undercooling. The second feature is that drug diffuses only through the rubbery region, with a nonlinear diffusion coefficient that depends on the concentration of solvent. We analyse the model using both formal asymptotics and numerical computation, the latter by applying a front-fixing scheme with a finite volume method. Previous results are extended and comparisons are made with linear models that work well under certain parameter regimes. Finally, a model for a multi-layered drug delivery device is suggested, which allows for more flexible control of drug release.
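As a hedged sketch of the front-fixing idea on the one-phase Stefan problem with kinetic undercooling mentioned in the abstract (using finite differences for brevity rather than the paper's finite volume scheme, and with assumed boundary conditions, sign conventions, and parameter values):

```python
import numpy as np

# One-phase Stefan problem with kinetic undercooling on 0 < x < s(t):
#   u_t = u_xx,  u(0, t) = 1,
#   sdot = -u_x(s, t)       (Stefan condition),
#   u(s, t) = eps * sdot    (kinetic undercooling; eps = 0 is classical).
# The front-fixing (Landau) transform xi = x / s(t) maps the moving domain
# to [0, 1], giving  u_t = u_xixi / s^2 + xi * (sdot / s) * u_xi.
# All choices below are illustrative assumptions, not the paper's model.

N, eps = 40, 0.1
xi = np.linspace(0.0, 1.0, N + 1)
dxi = xi[1] - xi[0]

s = 0.2                      # small nonzero initial front position
u = 1.0 - xi                 # initial profile: u(0) = 1, u ~ 0 at the front
t, t_end = 0.0, 0.05
dt = 0.2 * (dxi * s) ** 2    # explicit diffusion stability limit

while t < t_end:
    # Front speed from the Stefan condition (one-sided difference at xi = 1).
    ux_front = (3 * u[N] - 4 * u[N - 1] + u[N - 2]) / (2 * dxi * s)
    sdot = -ux_front
    un = u.copy()
    for j in range(1, N):
        uxx = (un[j + 1] - 2 * un[j] + un[j - 1]) / dxi ** 2
        ux = (un[j + 1] - un[j - 1]) / (2 * dxi)
        u[j] = un[j] + dt * (uxx / s ** 2 + xi[j] * sdot / s * ux)
    s += dt * sdot
    u[0] = 1.0
    u[N] = eps * sdot          # kinetic law fixes the value at the front
    t += dt
    dt = 0.2 * (dxi * s) ** 2  # refresh the stable step as s evolves

print(f"t = {t:.3f}, front position s = {s:.4f}")
```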
Abstract:
Car-following models play a critical role in all microscopic traffic simulation models. Current microscopic simulation models are unable to mimic unsafe driver behaviour, as most are based on presumptions about the safe behaviour of drivers. The Gipps model is a widely used car-following model embedded in various micro-simulation models. This paper examines the Gipps car-following model to investigate ways of improving it for safety studies. The paper puts forward suggestions for modifying the Gipps model to improve its ability to simulate unsafe vehicle movements (vehicles with safety indicators below critical thresholds), a step towards assessing and predicting safety on motorways using microscopic simulation. NGSIM, a rich source of vehicle trajectory data for a motorway, is used to extract relatively risky events; short following headways and Time To Collision are used to identify critical safety events within the traffic flow. The results show that the proposed modified car-following model predicts the unsafe trajectories with smaller errors than the generic Gipps model.
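For reference, a minimal sketch of the generic Gipps (1981) speed update that the paper starts from, together with a time-to-collision indicator, is given below; the parameter values are typical defaults, not the paper's calibration:

```python
import math

def gipps_speed(v, v_lead, gap, a=1.7, b=-3.0, b_hat=-3.0,
                V=30.0, tau=0.7):
    """One Gipps (1981) speed update for the following vehicle.

    v, v_lead : current speeds of follower and leader (m/s)
    gap       : front-to-rear spacing x_lead - s_lead - x (m)
    a, b      : max acceleration and (negative) braking rate (m/s^2)
    b_hat     : follower's estimate of the leader's braking rate
    V, tau    : desired speed (m/s) and reaction time (s)
    """
    # Free-flow (acceleration-limited) branch.
    v_acc = v + 2.5 * a * tau * (1 - v / V) * math.sqrt(0.025 + v / V)
    # Safe-braking branch (note b < 0 in this convention).
    disc = (b * tau) ** 2 - b * (2 * gap - v * tau - v_lead ** 2 / b_hat)
    v_safe = b * tau + math.sqrt(max(disc, 0.0))
    return max(0.0, min(v_acc, v_safe))

def time_to_collision(v, v_lead, gap):
    """TTC in seconds; infinite when the follower is not closing in."""
    return gap / (v - v_lead) if v > v_lead else math.inf

# Example: follower at 25 m/s, leader at 20 m/s, 15 m gap.
v_next = gipps_speed(25.0, 20.0, 15.0)
ttc = time_to_collision(25.0, 20.0, 15.0)
print(f"next speed = {v_next:.2f} m/s, TTC = {ttc:.1f} s")
```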
Abstract:
This paper establishes practical stability results for an important range of approximate discrete-time filtering problems involving mismatch between the true system and the approximating filter model. Using a local consistency assumption, the practical stability established is in the sense of an asymptotic bound on the amount of bias introduced by the model approximation. Significantly, these practical stability results do not require the approximating model to be of the same model type as the true system. Our analysis applies to a wide range of estimation problems and justifies the common practice of approximating intractable infinite-dimensional nonlinear filters by simpler, computationally tractable filters.
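As a toy illustration of the kind of model mismatch analysed (far simpler than the paper's nonlinear, infinite-dimensional setting), a scalar Kalman filter built on a slightly wrong dynamics coefficient still tracks the true state with a bounded, biased error:

```python
import numpy as np

rng = np.random.default_rng(1)

# True scalar system: x' = a_true*x + w,  y = x + v.
a_true, a_model = 0.95, 0.90   # the filter assumes a slightly wrong 'a'
q, r = 0.1, 0.5                # process / measurement noise variances

x, xhat, P = 0.0, 0.0, 1.0
errs = []
for _ in range(5000):
    # Simulate the true system.
    x = a_true * x + rng.normal(0, np.sqrt(q))
    y = x + rng.normal(0, np.sqrt(r))
    # Kalman filter built on the approximating model a_model.
    xpred = a_model * xhat
    Ppred = a_model ** 2 * P + q
    K = Ppred / (Ppred + r)
    xhat = xpred + K * (y - xpred)
    P = (1 - K) * Ppred
    errs.append(xhat - x)

errs = np.asarray(errs)
print(f"mean error (bias) = {errs.mean():.4f}, "
      f"RMSE = {np.sqrt((errs ** 2).mean()):.4f}")
```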
Abstract:
This important work describes recent theoretical advances in the study of artificial neural networks. It explores probabilistic models of supervised learning problems, and addresses the key statistical and computational questions. Chapters survey research on pattern classification with binary-output networks, including a discussion of the relevance of the Vapnik-Chervonenkis dimension, and of estimates of the dimension for several neural network models. In addition, Anthony and Bartlett develop a model of classification by real-output networks, and demonstrate the usefulness of classification with a "large margin." The authors explain the role of scale-sensitive versions of the Vapnik-Chervonenkis dimension in large margin classification, and in real-valued prediction. Key chapters also discuss the computational complexity of neural network learning, describing a variety of hardness results, and outlining two efficient, constructive learning algorithms. The book is self-contained and accessible to researchers and graduate students in computer science, engineering, and mathematics.
Abstract:
We study model selection strategies based on penalized empirical loss minimization. We point out a tight relationship between error estimation and data-based complexity penalization: any good error estimate may be converted into a data-based penalty function and the performance of the estimate is governed by the quality of the error estimate. We consider several penalty functions, involving error estimates on independent test data, empirical VC dimension, empirical VC entropy, and margin-based quantities. We also consider the maximal difference between the error on the first half of the training data and the second half, and the expected maximal discrepancy, a closely related capacity estimate that can be calculated by Monte Carlo integration. Maximal discrepancy penalty functions are appealing for pattern classification problems, since their computation is equivalent to empirical risk minimization over the training data with some labels flipped.
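To make the last sentence concrete: for ±1 labels and zero-one loss, err₁(f) − err₂(f) = 1 − 2·êflip(f) for every f, where êflip is the empirical error on the sample with the first half's labels flipped; maximising the discrepancy over the class is therefore a single empirical risk minimisation on relabelled data. A rough sketch, using a depth-limited decision tree as a stand-in for an exact ERM oracle (which practical learners only approximate):

```python
import numpy as np
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(0)

# Toy sample: 2-D points with noisy linear labels in {-1, +1}.
n = 400
X = rng.standard_normal((n, 2))
y = np.sign(X[:, 0] + 0.3 * rng.standard_normal(n)).astype(int)

def max_discrepancy_penalty(X, y, depth):
    """Maximal discrepancy for the depth-limited tree class.

    Uses the identity max_f [err1(f) - err2(f)] = 1 - 2*min_f errflip(f),
    where errflip is the empirical error after flipping the first half's
    labels, so the penalty costs one (approximate) ERM call.
    """
    y_flip = y.copy()
    y_flip[: len(y) // 2] *= -1
    clf = DecisionTreeClassifier(max_depth=depth, random_state=0).fit(X, y_flip)
    err_flip = np.mean(clf.predict(X) != y_flip)
    return 1.0 - 2.0 * err_flip

# Penalized model selection over tree depths: richer classes fit the
# flipped labels better, so they incur a larger discrepancy penalty.
for depth in (1, 3, 5, 8):
    clf = DecisionTreeClassifier(max_depth=depth, random_state=0).fit(X, y)
    emp_risk = np.mean(clf.predict(X) != y)
    pen = max_discrepancy_penalty(X, y, depth)
    print(f"depth {depth}: empirical risk {emp_risk:.3f} "
          f"+ penalty {pen:.3f} = {emp_risk + pen:.3f}")
```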
Abstract:
Chlamydia trachomatis is a major cause of sexually transmitted diseases worldwide. There currently is no vaccine to protect against chlamydial infection of the female reproductive tract. Vaccine development has predominantly involved the murine model; however, infection of female guinea pigs with Chlamydia caviae more closely resembles chlamydial infection of the human female reproductive tract, and presents a better model to assess potential human chlamydial vaccines. We immunised female guinea pigs intranasally with recombinant major outer membrane protein (r-MOMP) combined with CpG-10109 and cholera toxin adjuvants. Both systemic and mucosal immune responses were elicited in immunised animals. MOMP-specific IgG and IgA were present in the vaginal mucosae, and high levels of MOMP-specific IgG were detected in the serum of immunised animals. Antibodies from the vaginal mucosae were also shown to be capable of neutralising C. caviae in vitro. Following immunisation, animals were challenged intravaginally with a live C. caviae infection of 10² inclusion-forming units. We observed a decrease in duration of infection and a significant (p<0.025) reduction in infection load in r-MOMP immunised animals, compared to animals immunised with adjuvant only. Importantly, we also observed a marked reduction in upper reproductive tract (URT) pathology in r-MOMP immunised animals. Intranasal immunisation of female guinea pigs with r-MOMP was able to provide partial protection against C. caviae infection, not only by reducing chlamydial burden but also URT pathology. These data demonstrate the value of using the guinea pig model to evaluate potential chlamydial vaccines for protection against infection and disease pathology caused by C. trachomatis in the female reproductive tract.
Abstract:
We consider complexity penalization methods for model selection. These methods aim to choose a model to optimally trade off estimation and approximation errors by minimizing the sum of an empirical risk term and a complexity penalty. It is well known that if we use a bound on the maximal deviation between empirical and true risks as a complexity penalty, then the risk of our choice is no more than the approximation error plus twice the complexity penalty. There are many cases, however, where complexity penalties like this give loose upper bounds on the estimation error. In particular, if we choose a function from a suitably simple convex function class with a strictly convex loss function, then the estimation error (the difference between the risk of the empirical risk minimizer and the minimal risk in the class) approaches zero at a faster rate than the maximal deviation between empirical and true risks. In this paper, we address the question of whether it is possible to design a complexity penalized model selection method for these situations. We show that, provided the sequence of models is ordered by inclusion, in these cases we can use tight upper bounds on estimation error as a complexity penalty. Surprisingly, this is the case even in situations when the difference between the empirical risk and true risk (and indeed the error of any estimate of the approximation error) decreases much more slowly than the complexity penalty. We give an oracle inequality showing that the resulting model selection method chooses a function with risk no more than the approximation error plus a constant times the complexity penalty.
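Schematically, the oracle inequality described in the last sentence has the following shape (the notation and constant are illustrative, not the paper's exact statement):

```latex
% Schematic oracle inequality for penalized model selection over a
% nested sequence of models F_1 \subseteq F_2 \subseteq ...
\[
  \mathbb{E}\, L(\hat{f})
  \;\le\;
  \min_{k} \Bigl( \inf_{f \in \mathcal{F}_k} L(f)
                  \;+\; C \, \mathrm{pen}_n(k) \Bigr),
\]
% where L is the true risk, \hat{f} the selected function, and
% pen_n(k) the tight estimation-error bound used as the penalty.
```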
Abstract:
This paper establishes a practical stability result for discrete-time output feedback control involving mismatch between the exact system to be stabilised and the approximating system used to design the controller. The practical stability is in the sense of an asymptotic bound on the amount of error bias introduced by the model approximation, and is established using local consistency properties of the systems. Importantly, the practical stability established here does not require the approximating system to be of the same model type as the exact system. Examples are presented to illustrate the nature of our practical stability result.
Abstract:
Raman spectroscopy has been used to study vanadates in the solid state. The molecular structures of the vanadate minerals vésigniéite [BaCu₃(VO₄)₂(OH)₂] and volborthite [Cu₃V₂O₇(OH)₂·2H₂O] have been studied by Raman spectroscopy and infrared spectroscopy, and the spectra are related to the structures of the two minerals. The Raman spectrum of vésigniéite is characterized by two intense bands at 821 and 856 cm⁻¹, assigned to ν₁ (VO₄)³⁻ symmetric stretching modes. A series of infrared bands at 755, 787 and 899 cm⁻¹ are assigned to the ν₃ (VO₄)³⁻ antisymmetric stretching vibrational mode. Raman bands at 307 and 332 cm⁻¹ and at 466 and 511 cm⁻¹ are assigned to the ν₂ and ν₄ (VO₄)³⁻ bending modes. The Raman spectrum of volborthite is characterized by a strong band at 888 cm⁻¹, assigned to the ν₁ (VO₃) symmetric stretching vibrations. Raman bands at 858 and 749 cm⁻¹ are assigned to the ν₃ (VO₃) antisymmetric stretching vibrations; the band at 814 cm⁻¹ to the ν₃ (VOV) antisymmetric vibrations; that at 508 cm⁻¹ to the ν₁ (VOV) symmetric stretching vibration; and those at 442 and 476 cm⁻¹ and at 347 and 308 cm⁻¹ to the ν₄ (VO₃) and ν₂ (VO₃) bending vibrations, respectively. The spectra of vésigniéite and volborthite are similar, especially in the region of skeletal vibrations, even though their crystal structures differ.