15 results for Optimal Sampling Time
in Aston University Research Archive
Abstract:
Implementation of a Monte Carlo simulation for the solution of population balance equations (PBEs) requires a choice of initial sample number (N₀), number of replicates (M), and number of bins for probability distribution reconstruction (n). It is found that the squared Hellinger distance, H², is a useful measure of the accuracy of Monte Carlo (MC) simulation, and can be related directly to N₀, M, and n. Asymptotic approximations of H² are deduced and tested for both one-dimensional (1-D) and 2-D PBEs with coalescence. The central processing unit (CPU) cost, C, is found to follow a power-law relationship, C = aMN₀^b, with the CPU cost index, b, indicating the weighting of N₀ in the total CPU cost. n must be chosen to balance accuracy and resolution. For fixed n, M × N₀ determines the accuracy of the MC prediction; if b > 1, the optimal strategy uses multiple replicates and a small sample size. Conversely, if 0 < b < 1, one replicate and a large initial sample size is preferred. © 2015 American Institute of Chemical Engineers AIChE J, 61: 2394–2402, 2015
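A minimal sketch (our own illustration, not taken from the paper) of how the fitted cost index b could drive the choice of replication strategy; the timing numbers, the fitting routine and the use of the M × N₀ budget are all made up:

import numpy as np

# Hypothetical single-replicate (M = 1) timings: CPU seconds versus initial sample size
N0 = np.array([1e3, 2e3, 5e3, 1e4, 2e4])
C = np.array([0.8, 1.9, 6.1, 14.5, 33.0])

# Fit the power law C = a * M * N0**b by linear regression in log space (M = 1 here)
b, log_a = np.polyfit(np.log(N0), np.log(C), 1)
print(f"fitted a = {np.exp(log_a):.3g}, CPU cost index b = {b:.2f}")

# For fixed n the accuracy is governed by the product M * N0, so for a given
# accuracy budget choose the cheaper combination according to the sign of b - 1.
if b > 1:
    print("b > 1: prefer many replicates with a small initial sample size")
else:
    print("0 < b < 1: prefer a single replicate with a large initial sample size")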
Abstract:
We propose a high-resolution optical time domain reflectometry (OTDR) scheme based on an all-fiber supercontinuum source. The source simply consists of a laser with moderate power and a section of fiber whose zero-dispersion wavelength lies near the laser's central wavelength. The spectral and time-domain properties of the source are investigated, showing that the source is well suited to nonlinear-optics applications such as correlation OTDR. We analyze one of the key factors limiting the operational range of such an OTDR, namely the sampling time. Finally, we experimentally demonstrate a correlation OTDR with a 25 km sensing range and 5.3 cm spatial resolution, as a verification of the theoretical analysis.
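As a rough back-of-the-envelope check (ours, not from the paper), OTDR spatial resolution scales with the time resolution Δt as Δz = c·Δt/(2·n_g). Assuming a fibre group index n_g ≈ 1.47, the reported 5.3 cm resolution corresponds to Δt = 2·n_g·Δz/c ≈ 2 × 1.47 × 0.053 m / (3 × 10⁸ m/s) ≈ 0.52 ns, i.e. a sampling or code-bit rate on the order of 2 GSa/s; covering 25 km at that granularity then requires roughly 25 km / 5.3 cm ≈ 5 × 10⁵ samples per trace.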
Abstract:
Although muscle atrophy is common to a number of disease states, there is incomplete knowledge of the cellular mechanisms involved. In this study murine myotubes were treated with the phorbol ester 12-O-tetradecanoylphorbol-13-acetate (TPA) to evaluate the role of protein kinase C (PKC) as an upstream intermediate in protein degradation. TPA showed a parabolic dose-response curve for the induction of total protein degradation, with an optimal effect at a concentration of 25 nM and an optimal incubation time of 3 h. Protein degradation was attenuated by co-incubation with the proteasome inhibitor lactacystin (5 μM), suggesting that it was mediated through the ubiquitin-proteasome proteolytic pathway. TPA induced increased expression and activity of the ubiquitin-proteasome pathway, as evidenced by increased functional activity and increased expression of the 20S proteasome α-subunits, the 19S subunits MSS1 and p42, and the ubiquitin-conjugating enzyme E2₁₄k, again with a maximal effect at a concentration of 25 nM and a 3 h incubation time. There was also a reciprocal decrease in the cellular content of the myofibrillar protein myosin. TPA activated PKC maximally at a concentration of 25 nM, and this effect was attenuated by the PKC inhibitor calphostin C (300 nM), as was total protein degradation. These results suggest that stimulation of PKC in muscle cells initiates protein degradation through the ubiquitin-proteasome pathway. TPA also induced degradation of the inhibitory protein I-κBα and increased nuclear accumulation of nuclear factor-κB (NF-κB) at the same time and concentrations as those inducing proteasome expression. In addition, inhibition of NF-κB activation by resveratrol (30 μM) attenuated protein degradation induced by TPA. These results suggest that the induction of proteasome expression by TPA may involve the transcription factor NF-κB. © 2005 Elsevier Inc. All rights reserved.
Abstract:
This study examines the effects of morningness-eveningness orientation and time-of-day on attitude change, and tests the hypothesis that people will be more persuaded when tested at their optimal time-of-day (i.e., morning for M-types and evening for E-types) than at their non-optimal time-of-day (i.e., evening for M-types and morning for E-types). Two hundred and twenty participants read a message containing either strong- or weak-quality counter-attitudinal arguments (anti-voluntary euthanasia) in the morning (9.00 a.m.) or in the evening (7.00 p.m.). When tested at their respective optimal time-of-day, both M- and E-types showed a reliable difference in attitude change between the strong and weak messages (indicating that message processing had occurred), whereas there was no difference between strong and weak messages when participants were tested at their non-optimal time-of-day. In addition, the amount of message-congruent thinking mediated the attitude change. The results show that M- and E-types pay greater attention to, and elaborate more on, a persuasive message at their optimal time-of-day, and this leads to increased attitude change compared with those tested at their non-optimal time-of-day. © 2012.
Abstract:
PURPOSE: To examine the optimum time at which the fluorescein patterns of gas permeable lenses (GPs) should be evaluated. METHODS: Aligned, 0.2 mm steep and 0.2 mm flat GPs were fitted to 17 patients (aged 20.6 ± 1.1 years, 10 male). Fluorescein was applied to the upper temporal bulbar conjunctiva with a moistened fluorescein strip. Digital slit lamp images (CSO, Italy) of the fluorescein pattern, viewed at 10× magnification with blue light through a yellow filter, were captured every 15 s. Fluorescein intensity in the central, mid-peripheral and edge regions of the superior, inferior, temporal and nasal quadrants of the lens was graded subjectively using a +2 to -2 scale and objectively using ImageJ software on the simultaneously captured images. RESULTS: Subjectively graded and objectively image-analysed fluorescein intensity changed with time (p<0.001) and lens region (centre, mid-periphery and edge: p<0.05), and there was an interaction between lens region and lens fit (p<0.001). For edge band width, there was a significant effect of time (F=118.503, p<0.001) and lens fit (F=5.1249, p=0.012). The expected alignment, flat and steep fitting patterns could be seen from approximately 30 to 180 s subjectively and from 15 to 105 s in captured images. CONCLUSION: Although the stability of fluorescein intensity can start to decline in as little as 45 s post instillation, the diagnostic pattern of an alignment, steep or flat fit is seen in each meridian by subjective observation from about 30 s to 3 min, indicating that this is the most appropriate time window in which to evaluate GP lenses in clinical practice.
Abstract:
We present a framework for calculating globally optimal parameters, within a given time frame, for on-line learning in multilayer neural networks. We demonstrate the capability of this method by computing optimal learning rates in typical learning scenarios. A similar treatment allows one to determine the relevance of related training algorithms based on modifications to the basic gradient descent rule as well as to compare different training methods.
Abstract:
A method for calculating the globally optimal learning rate in on-line gradient-descent training of multilayer neural networks is presented. The method is based on a variational approach which maximizes the decrease in generalization error over a given time frame. We demonstrate the method by computing optimal learning rates in typical learning scenarios. The method can also be employed when different learning rates are allowed for different parameter vectors as well as to determine the relevance of related training algorithms based on modifications to the basic gradient descent rule.
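For readers unfamiliar with the setting of these two abstracts, the toy Python sketch below runs on-line gradient descent for a linear student learning a linear teacher and compares a fixed learning rate with a simple decaying schedule over a fixed time frame. It only illustrates the kind of scenario considered; it does not implement the variational optimization of the papers, and the dimensions, rates and step counts are arbitrary.

import numpy as np

rng = np.random.default_rng(0)
N = 100                                          # input dimension
w_star = rng.standard_normal(N) / np.sqrt(N)     # teacher weights

def run(schedule, steps=20000):
    """On-line (one example per step) gradient descent with learning rate schedule(t)."""
    w = np.zeros(N)                              # student weights
    for t in range(1, steps + 1):
        x = rng.standard_normal(N)               # fresh random example
        err = w @ x - w_star @ x                 # per-example error (noise-free teacher)
        w -= schedule(t) * err * x / N           # gradient step, learning rate scaled by 1/N
    return 0.5 * np.sum((w - w_star) ** 2)       # generalization error for Gaussian inputs

print("fixed eta = 0.5 :", run(lambda t: 0.5))
print("decaying eta(t) :", run(lambda t: 1.0 / (1.0 + t / 2000)))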
Abstract:
It is indisputable that printed circuit boards (PCBs) play a vital role in our daily lives. With the ever-increasing applications of PCBs, one of the crucial ways to increase a PCB manufacturer's competitiveness in terms of operational efficiency is to minimize production time so that products can be introduced to the market sooner. Optimal Production Planning for PCB Assembly is the first book to focus on optimizing the efficiency of PCB assembly lines. This is done by:
• integrating the component sequencing and feeder arrangement problems for both the pick-and-place machine and the chip shooter machine;
• constructing mathematical models and developing an efficient and effective heuristic solution approach for the integrated problems for both types of placement machines, the line assignment problem, and the component allocation problem; and
• developing a prototype of the PCB assembly planning system.
The techniques proposed in Optimal Production Planning for PCB Assembly will enable process planners in the electronics manufacturing industry to improve the efficiency of the assembly lines in their companies. Graduate students in operations research can familiarise themselves with the techniques and the applications of mathematical modeling after reading this advanced introduction to optimal production planning for PCB assembly.
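As a toy illustration of one of the sub-problems listed above, the sketch below treats component placement sequencing as a travelling-salesman-style tour and applies a greedy nearest-neighbour heuristic. This is a generic textbook device with made-up coordinates, not the integrated models or heuristics developed in the book.

import math

placements = [(10, 5), (2, 8), (7, 1), (4, 4), (9, 9)]   # hypothetical board coordinates (mm)

def nearest_neighbour(points, start=0):
    """Greedy placement sequence: always move to the closest unplaced component."""
    unvisited = set(range(len(points)))
    order = [start]
    unvisited.remove(start)
    while unvisited:
        last = points[order[-1]]
        nxt = min(unvisited, key=lambda i: math.dist(last, points[i]))
        order.append(nxt)
        unvisited.remove(nxt)
    return order

order = nearest_neighbour(placements)
travel = sum(math.dist(placements[a], placements[b]) for a, b in zip(order, order[1:]))
print("placement order:", order, "| head travel:", round(travel, 2), "mm")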
Abstract:
This work reports the development of a mathematical model and of distributed, multivariable computer control for a pilot-plant double-effect climbing-film evaporator. A distributed-parameter model of the plant has been developed and the time-domain model transformed into the Laplace domain. The model has been further transformed into an integral domain conforming to an algebraic ring of polynomials, to eliminate the transcendental terms which arise in the Laplace domain due to the distributed nature of the plant model. This has made possible the application of linear control theories to a set of linear partial differential equations. The models obtained track the experimental results of the plant well. A distributed computer network has been interfaced with the plant to implement digital controllers in a hierarchical structure. A modern multivariable Wiener-Hopf controller has been applied to the plant model. The application has revealed a limiting condition: the plant matrix should be positive-definite along the infinite frequency axis. A new multivariable control theory has emerged from this study which avoids the above limitation. The controller has the structure of the modern Wiener-Hopf controller, but with a unique feature enabling a designer to specify the closed-loop poles in advance and to shape the sensitivity matrix as required. In this way, the method treats directly the interaction problems found in chemical processes, with good tracking and regulation performance. The ability of analytical design methods to determine once and for all whether a given set of specifications can be met is one of their chief advantages over conventional trial-and-error design procedures. One disadvantage that offsets these advantages to some degree, however, is the relatively complicated algebra that must be employed in working out all but the simplest problems. Mathematical algorithms and computer software have been developed to treat some of the mathematical operations defined over the integral domain, such as matrix fraction description, spectral factorization, the Bezout identity, and the general manipulation of polynomial matrices. Hence, the design problems of Wiener-Hopf-type controllers and of other similar algebraic design methods can be solved easily.
Abstract:
Interpolated data are an important part of environmental information exchange, as many variables can only be measured at discrete sampling locations. Spatial interpolation is a complex operation that has traditionally required expert treatment, making automation a serious challenge. This paper presents a few lessons learnt from INTAMAP, a project that is developing an interoperable web processing service (WPS) for the automatic interpolation of environmental data using advanced geostatistics, adopting a Service Oriented Architecture (SOA). The "rainbow box" approach we followed provides access to the functionality at a whole range of different levels. We show here how the integration of open standards, open source software and powerful statistical processing capabilities allows us to automate a complex process while offering users a level of access and control that best suits their requirements. This facilitates benchmarking exercises as well as the regular reporting of environmental information, without requiring remote users to have specialized skills in geostatistics.
Abstract:
The ERS-1 satellite was launched in July 1991 by the European Space Agency into a polar orbit at about 800 km, carrying a C-band scatterometer. A scatterometer measures the amount of backscattered microwave radiation reflected by small ripples on the ocean surface induced by sea-surface winds, and so provides instantaneous snapshots of wind flow over large areas of the ocean surface, known as wind fields. Inherent in the physics of the observation process is an ambiguity in wind direction: the scatterometer cannot distinguish whether the wind is blowing toward or away from the sensor. This ambiguity implies that there is a one-to-many mapping between scatterometer data and wind direction. Current operational methods for wind field retrieval are based on the retrieval of wind vectors from satellite scatterometer data, followed by a disambiguation and filtering process that relies on numerical weather prediction models. The wind vectors are retrieved by the local inversion of a forward model mapping scatterometer observations to wind vectors, minimising a cost function in scatterometer measurement space. This thesis applies a pragmatic Bayesian solution to the problem. The likelihood is a combination of conditional probability distributions for the local wind vectors given the scatterometer data. The prior distribution is a vector Gaussian process that provides the geophysical consistency for the wind field. The wind vectors are retrieved directly from the scatterometer data by using mixture density networks, a principled method for modelling multi-modal conditional probability density functions. The complexity of the mapping and the structure of the conditional probability density function are investigated. A hybrid mixture density network, which incorporates the knowledge that the conditional probability distribution of the observation process is predominantly bi-modal, is developed. The optimal model, which generalises across a swathe of scatterometer readings, outperforms the current operational model on key performance measures. Wind field retrieval is approached from three perspectives. The first is a non-autonomous method that confirms the validity of the model by retrieving the correct wind field 99% of the time from a test set of 575 wind fields. The second takes as the prediction the maximum a posteriori (MAP) wind field of the posterior distribution. For the third, Markov chain Monte Carlo (MCMC) techniques were employed to estimate the mass associated with significant modes of the posterior distribution, and predictions were made from the mode with the greatest mass. General methods for sampling from multi-modal distributions were benchmarked against a specific MCMC transition kernel designed for this problem; the general methods proved unsuitable for this application due to computational expense. On a test set of 100 wind fields the MAP estimate correctly retrieved 72 wind fields, whilst the sampling method correctly retrieved 73.
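The sketch below shows the core idea of a mixture density network in a few lines of Python: a network maps an input to the mixing coefficients, means and variances of a Gaussian mixture, so the conditional density p(t | x) can be multi-modal. The weights are random and untrained, the output is one-dimensional, and the hybrid bi-modal architecture and wind-vector specifics of the thesis are not reproduced.

import numpy as np

rng = np.random.default_rng(1)
D_in, H, K = 4, 16, 2                        # input size, hidden units, mixture components

W1 = 0.5 * rng.standard_normal((H, D_in))
W2 = 0.5 * rng.standard_normal((3 * K, H))   # K mixing weights + K means + K log-variances

def mdn_density(x, t):
    """Conditional density p(t | x) of a Gaussian mixture parameterised by the network."""
    h = np.tanh(W1 @ x)                      # hidden layer
    z = W2 @ h
    pi = np.exp(z[:K] - np.max(z[:K]))       # softmax mixing coefficients
    pi /= pi.sum()
    mu = z[K:2 * K]                          # component means
    var = np.exp(z[2 * K:])                  # positive variances via exponential
    comps = np.exp(-0.5 * (t - mu) ** 2 / var) / np.sqrt(2 * np.pi * var)
    return float(pi @ comps)

x = rng.standard_normal(D_in)
print([round(mdn_density(x, t), 4) for t in (-2.0, 0.0, 2.0)])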
Abstract:
This thesis makes a contribution to the Change Data Capture (CDC) field by providing an empirical evaluation of the performance of CDC architectures in the context of real-time data warehousing. CDC is a mechanism for providing data warehouse architectures with fresh data from Online Transaction Processing (OLTP) databases. There are two types of CDC architectures: pull architectures and push architectures. There is exiguous data on the performance of CDC architectures in a real-time environment, yet performance data is required to determine the real-time viability of the two architectures. We propose that push CDC architectures are optimal for real-time CDC. However, push CDC architectures are seldom implemented because they are highly intrusive towards existing systems and arduous to maintain. As part of our contribution, we pragmatically develop a service-based push CDC solution, which addresses the issues of intrusiveness and maintainability. Our solution uses Data Access Services (DAS) to decouple CDC logic from the applications. A requirement for the DAS is to place minimal overhead on a transaction in an OLTP environment. We synthesize the DAS literature and pragmatically develop DAS that efficiently execute transactions in an OLTP environment. Essentially, we develop efficient RESTful DAS, which expose Transactions As A Resource (TAAR). We evaluate the TAAR solution and three pull CDC mechanisms in a real-time environment, using the industry-recognised TPC-C benchmark. The optimal CDC mechanism in a real-time environment will capture change data with minimal latency and have a negligible effect on the database's transactional throughput. Capture latency is the time it takes a CDC mechanism to capture a data change that has been applied to an OLTP database. A standard definition of capture latency, and of how to measure it, does not exist in the field; we create this definition and extend the TPC-C benchmark to make the capture-latency measurement. The results of our evaluation show that pull CDC is capable of real-time CDC at low levels of user concurrency. However, as the level of user concurrency scales upwards, pull CDC has a significant impact on the database's transaction rate, which affirms the theory that pull CDC architectures are not viable in a real-time architecture. TAAR CDC, on the other hand, is capable of real-time CDC and places a minimal overhead on the transaction rate, although this performance comes at the expense of CPU resources.
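A schematic of the capture-latency measurement described above (our own sketch, not the extended TPC-C harness): latency is simply the interval between the commit of a change in the OLTP database and the moment the CDC mechanism captures it. The two hook functions are hypothetical stand-ins for the database commit and the CDC capture event.

import time

def on_commit(change):
    change["commit_ts"] = time.monotonic()    # stamped when the OLTP commit succeeds

def on_capture(change):
    change["capture_ts"] = time.monotonic()   # stamped when the CDC mechanism emits the change

change = {"key": 42, "op": "UPDATE"}
on_commit(change)
time.sleep(0.02)                              # stand-in for propagation / polling delay
on_capture(change)

latency_ms = (change["capture_ts"] - change["commit_ts"]) * 1000
print(f"capture latency: {latency_ms:.1f} ms")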
Abstract:
We have investigated how optimal coding for neural systems changes with the time available for decoding. Optimization was in terms of maximizing information transmission. We have estimated the parameters of Poisson neurons that optimize Shannon transinformation under the assumption of rate coding. We observed a hierarchy of phase transitions from binary coding, for small decoding times, toward discrete (M-ary) coding with two, three and more quantization levels for larger decoding times. We postulate that the presence of subpopulations with specific neural characteristics could be a signature of an optimal population coding scheme, and we use the mammalian auditory system as an example.
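A hedged numerical illustration of the trade-off (our construction, not the paper's optimization): the Shannon information carried by a Poisson spike count in a decoding window T, for equiprobable binary {0, r} versus ternary {0, r/2, r} rate codes with the same peak rate. For short windows the binary code carries more information, while for sufficiently long windows the extra level pays off, echoing the binary-to-M-ary hierarchy described above.

import numpy as np
from scipy.stats import poisson

def plogp(p):
    out = np.zeros_like(p)
    mask = p > 0
    out[mask] = p[mask] * np.log2(p[mask])
    return out

def transinformation(rates, T, n_max=200):
    """I(X; N) in bits for a Poisson spike count N observed over a window T."""
    n = np.arange(n_max)
    p_n_given_x = np.array([poisson.pmf(n, r * T) for r in rates])   # one row per stimulus
    p_x = np.full(len(rates), 1.0 / len(rates))                      # equiprobable stimuli
    p_n = p_x @ p_n_given_x                                          # marginal count distribution
    return -plogp(p_n).sum() + (p_x[:, None] * plogp(p_n_given_x)).sum()   # H(N) - H(N|X)

r = 50.0                                     # illustrative peak rate (spikes/s)
for T in (0.005, 0.02, 0.1, 0.5):
    I2 = transinformation([0.0, r], T)
    I3 = transinformation([0.0, r / 2, r], T)
    print(f"T = {T:5.3f} s   binary: {I2:.3f} bits   ternary: {I3:.3f} bits")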
Abstract:
The popularity of online social media platforms provides an unprecedented opportunity to study real-world complex networks of interactions. However, releasing this data to researchers and the public comes at the cost of potentially exposing private and sensitive user information. It has been shown that a naive anonymization of a network by removing the identity of the nodes is not sufficient to preserve users' privacy. In order to deal with malicious attacks, k-anonymity solutions have been proposed to partially obfuscate topological information that can be used to infer nodes' identity. In this paper, we study the problem of ensuring k-anonymity in time-varying graphs, i.e., graphs with a structure that changes over time, and multi-layer graphs, i.e., graphs with multiple types of links. More specifically, we examine the case in which the attacker has access to the degree of the nodes. The goal is to generate a new graph where, given the degree of a node in each (temporal) layer of the graph, such a node remains indistinguishable from at least k−1 other nodes in the graph. In order to achieve this, we find the optimal partitioning of the graph nodes such that the cost of anonymizing the degree information within each group is minimum. We show that this reduces to a special case of the Generalized Assignment Problem, and we propose a simple yet effective algorithm to solve it. Finally, we introduce an iterated linear programming approach to enforce the realizability of the anonymized degree sequences. The efficacy of the method is assessed through an extensive set of experiments on synthetic and real-world graphs.
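As a toy, single-layer version of the degree-anonymization step (not the paper's multi-layer Generalized Assignment formulation or its iterated linear programming), the sketch below sorts a degree sequence, splits it into contiguous groups of at least k nodes, raises every degree in a group to the group maximum, and uses dynamic programming to minimise the total increase:

def k_anonymize_degrees(degrees, k):
    """Group a sorted degree sequence into blocks of >= k and raise each block to its maximum."""
    d = sorted(degrees, reverse=True)
    n = len(d)
    INF = float("inf")
    cost = [INF] * (n + 1)              # cost[i]: cheapest anonymization of the prefix d[:i]
    cut = [0] * (n + 1)                 # cut[i]: start index of the last group in that solution
    cost[0] = 0
    for i in range(k, n + 1):
        # the last group d[j:i] has between k and 2k-1 members (larger groups can be split)
        for j in range(max(0, i - 2 * k + 1), i - k + 1):
            if cost[j] == INF:
                continue
            group_cost = sum(d[j] - x for x in d[j:i])     # raise the group to its maximum d[j]
            if cost[j] + group_cost < cost[i]:
                cost[i], cut[i] = cost[j] + group_cost, j
    anon, i = [0] * n, n                # reconstruct the anonymized degree sequence
    while i > 0:
        j = cut[i]
        anon[j:i] = [d[j]] * (i - j)
        i = j
    return cost[n], anon

print(k_anonymize_degrees([7, 7, 6, 4, 4, 3, 2, 2, 1], k=3))   # -> (3, [7, 7, 7, 4, 4, 4, 2, 2, 2])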
Abstract:
In this paper, we focus on the design of bivariate EDAs for discrete optimization problems and propose a new approach named HSMIEC. Whereas current EDAs require much time in the statistical learning process when the relationships among the variables are complicated, we employ the Selfish Gene theory (SG) in this approach, together with a Mutual Information and Entropy based Cluster (MIEC) model, to optimize the probability distribution of the virtual population. This model uses a hybrid sampling method that considers both clustering accuracy and clustering diversity, and an incremental learning and resampling scheme is used to optimize the parameters of the correlations of the variables. On several benchmark problems, our experimental results demonstrate that HSMIEC often performs better than other EDAs such as BMDA, COMIT, MIMIC and ECGA. © 2009 Elsevier B.V. All rights reserved.
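For readers unfamiliar with the EDA template that HSMIEC extends, here is a minimal univariate EDA (UMDA/PBIL style) on the OneMax problem; the bivariate modelling, Selfish Gene bookkeeping and MIEC clustering of the paper are not reproduced, and all parameter values are arbitrary:

import numpy as np

rng = np.random.default_rng(0)
n_bits, pop_size, n_select, n_gen = 40, 200, 60, 50

p = np.full(n_bits, 0.5)                                    # univariate probability model
for gen in range(n_gen):
    pop = (rng.random((pop_size, n_bits)) < p).astype(int)  # sample the virtual population
    fitness = pop.sum(axis=1)                               # OneMax: count the ones
    elite = pop[np.argsort(fitness)[-n_select:]]            # truncation selection
    p = 0.9 * p + 0.1 * elite.mean(axis=0)                  # incremental model update
    p = np.clip(p, 0.02, 0.98)                              # keep some sampling diversity
print("best fitness:", int(fitness.max()), "of", n_bits)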