36 results for chains with unbounded variable length memory
in Aston University Research Archive
Abstract:
We investigate a mixed problem with variable lateral conditions for the heat equation that arises in modelling exocytosis, i.e. the opening of a cell boundary in certain biological species for the release of molecules to the exterior of the cell. A Dirichlet condition is imposed on a surface patch of the boundary, and this patch occupies a larger part of the boundary as time increases, modelling where the cell is opening (the fusion pore); on the remaining part, a zero Neumann condition is imposed (no molecules can cross this boundary). Uniform concentration is assumed at the initial time. We introduce a weak formulation of this problem and show that it has a unique weak solution. Moreover, we give an asymptotic expansion for the behaviour of the solution near the opening point and for small times. We also give an integral equation for the numerical construction of the leading term of this expansion.
Abstract:
Relationships between clustering, description length, and regularisation are pointed out, motivating the introduction of a cost function with a description length interpretation and the unusual and useful property of having its minimum approximated by the densest mode of a distribution. A simple inverse kinematics example is used to demonstrate that this property can be used to select and learn one branch of a multi-valued mapping. This property is also used to develop a method for setting regularisation parameters according to the scale on which structure is exhibited in the training data. The regularisation technique is demonstrated on two real data sets, a classification problem and a regression problem.
Abstract:
Purpose. To use anterior segment optical coherence tomography (AS-OCT) to analyze ciliary muscle morphology and its changes with accommodation and axial ametropia. Methods. Fifty prepresbyopic volunteers, aged 19 to 34 years, were recruited. High-resolution images were acquired of the nasal and temporal ciliary muscles in the relaxed state and at stimulus vergence levels of -4 and -8 D. Objective accommodative responses and axial lengths were also recorded. Two-way, mixed-factor analyses of variance (ANOVAs) were used to assess the changes in ciliary muscle parameters with accommodation and to determine whether these changes depend on the nasal–temporal aspect or axial length, whereas linear regression analysis was used to analyze the relationship between axial length and ciliary muscle length. Results. The ciliary muscle was longer (r = 0.34, P = 0.02), but not significantly thicker (F = 2.84, P = 0.06), in eyes with greater axial length. With accommodation, the ciliary muscle showed a contractile shortening (F = 42.9, P < 0.001), particularly anteriorly (F = 177.2, P < 0.001), and a thickening of the anterior portion (F = 46.2, P < 0.001). The ciliary muscle was thicker (F = 17.8, P < 0.001) and showed a greater contractile response on the temporal side. Conclusions. The accommodative changes observed support an anterior, as well as centripetal, contractile shift of ciliary muscle mass.
Abstract:
Conventional feed-forward neural networks have used the sum-of-squares cost function for training. A new cost function is presented here with a description length interpretation based on Rissanen's Minimum Description Length principle. It is a heuristic with a rough interpretation as the number of data points fitted by the model. Not being concerned with finding optimal descriptions, the cost function forms minimum descriptions in a naive way for computational convenience; it is therefore called the Naive Description Length cost function. Finding minimum description models is shown to be closely related to the identification of clusters in the data. As a consequence, the minimum of this cost function approximates the most probable mode of the data, in contrast to the sum-of-squares cost function, which approximates the mean. The new cost function is shown to provide information about the structure of the data. This is done by inspecting the dependence of the error on the amount of regularisation. This structure provides a method of selecting regularisation parameters as an alternative or supplement to Bayesian methods. The new cost function is tested on a number of multi-valued problems, such as a simple inverse kinematics problem, and on a number of classification and regression problems. The mode-seeking property of this cost function is shown to improve prediction in time series problems. Description length principles are used in a similar fashion to derive a regulariser to control network complexity.
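The mode-versus-mean contrast described above can be illustrated with a small sketch. This is a plain kernel-density mode estimate standing in for the Naive Description Length minimisation; all names and values are assumptions for illustration, not from the thesis:

```python
import math
import random

# Illustrative sketch (assumed values; not the paper's NDL cost function):
# for a one-to-many target, the sum-of-squares optimum is the mean over
# all branches, while a mode-seeking cost picks the densest branch.
random.seed(0)
# Two branches of a multi-valued mapping: 80 points near y=1, 20 near y=-1.
y = ([random.gauss(1.0, 0.05) for _ in range(80)]
     + [random.gauss(-1.0, 0.05) for _ in range(20)])

mean_fit = sum(y) / len(y)  # sum-of-squares minimiser: between the branches

# Crude mode estimate via a kernel density evaluated on a grid, standing
# in for the description-length minimisation described in the abstract.
h = 0.1  # kernel bandwidth (assumed)
grid = [g / 100 for g in range(-200, 201)]
density = [sum(math.exp(-0.5 * ((g - yi) / h) ** 2) for yi in y) for g in grid]
mode_fit = grid[density.index(max(density))]  # lands on the denser branch

print(mean_fit, mode_fit)
```

The mean-seeking fit lands between the two branches (near 0.6 here), which is a valid output for neither branch of the multi-valued mapping, while the mode estimate selects the denser branch near y = 1.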
Abstract:
A local area network that can support both voice and data packets offers economic advantages: only a single network is needed for both types of traffic, there is greater flexibility to respond to changing user demands, and efficient use can be made of the transmission capacity. The latter aspect is very important in local broadcast networks where capacity is a scarce resource, for example mobile radio. This research has examined two types of local broadcast network: the Ethernet-type bus local area network and a mobile radio network with a central base station. In such contention networks, medium access control (MAC) protocols are required to gain access to the channel. MAC protocols must provide efficient scheduling of the channel among the distributed population of stations that want to transmit. No access scheme can exceed the performance of a single-server queue, because of the spatial distribution of the stations: stations cannot in general form a queue without using part of the channel capacity to exchange protocol information. In this research, several medium access protocols have been examined and developed in order to increase the channel throughput compared to existing protocols. However, the established performance measures of average packet time delay and throughput cannot adequately characterise protocol performance for packet voice. Rather, the percentage of bits delivered within a given time bound becomes the relevant performance measure. Performance evaluation of the protocols has been carried out using discrete event simulation and, in some cases, also by mathematical modelling. All the protocols use either implicit or explicit reservation schemes, with their efficiency dependent on the fact that many voice packets are generated periodically within a talkspurt. Two of the protocols are based on the existing 'Reservation Virtual Time CSMA/CD' protocol, which forms a distributed queue through implicit reservations.
This protocol has been improved firstly by utilising two channels, a packet transmission channel and a packet contention channel; packet contention is then performed in parallel with packet transmission to increase throughput. The second protocol uses variable length packets to reduce the contention time between transmissions on a single channel. A third protocol, based on contention for explicit reservations, was also developed. Once a station has achieved a reservation, it maintains this effective queue position for the remainder of the talkspurt and transmits after it has sensed the transmission from the preceding station in the queue. In the mobile radio environment, adaptations to the protocols were necessary so that their operation was robust to signal fading. This was achieved through centralised control at a base station, unlike the local area network versions, where control was distributed among the stations. The results show an improvement in throughput compared to some previous protocols. Further work includes subjective testing to validate the protocols' effectiveness.
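The voice-oriented performance measure used above, the percentage of bits delivered within a given time bound rather than average delay, can be computed as in this minimal sketch; the function name and example packets are assumptions, not from the thesis:

```python
# Minimal sketch (assumed names and values) of the voice-oriented
# performance measure: the percentage of bits delivered within a time bound.
def bits_within_bound(packets, bound):
    """packets: iterable of (delay_seconds, size_bits) pairs."""
    total = sum(bits for _, bits in packets)
    on_time = sum(bits for delay, bits in packets if delay <= bound)
    return 100.0 * on_time / total

# Four equal-sized voice packets; one arrives after the 100 ms bound.
packets = [(0.01, 512), (0.05, 512), (0.20, 512), (0.03, 512)]
print(bits_within_bound(packets, bound=0.1))  # 75.0
```

Unlike average delay, this measure directly penalises late packets, which are useless for real-time voice playback however small the mean delay is.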
Abstract:
We study memory effects in a kinetic roughening model. For d = 1, a different dynamic scaling is uncovered in the memory-dominated phases; the Kardar-Parisi-Zhang scaling is restored in the absence of noise. d_c = 2 represents the critical dimension, where memory is shown to smoothen the roughening front (α = 0). Studies on a discrete atomistic model in the same universality class reconfirm the analytical results in the large-time limit, while a different scaling behaviour shows up for t
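For context, the Kardar-Parisi-Zhang equation whose scaling the abstract refers to is conventionally written as follows, together with the usual Family-Vicsek scaling form of the interface width (standard textbook forms, not taken from the paper itself):

```latex
\partial_t h(x,t) \;=\; \nu\,\nabla^2 h \;+\; \frac{\lambda}{2}\,(\nabla h)^2 \;+\; \eta(x,t),
\qquad
W(L,t) \;\sim\; L^{\alpha}\, f\!\left(t/L^{z}\right)
```

Here h is the interface height, η is the noise term whose absence restores KPZ scaling in the model above, and α and z are the roughness and dynamic exponents that characterise the dynamic scaling; a smooth front corresponds to a vanishing roughness exponent.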
Abstract:
Background - Modelling the interaction between potentially antigenic peptides and Major Histocompatibility Complex (MHC) molecules is a key step in identifying potential T-cell epitopes. For Class II MHC alleles, the binding groove is open at both ends, causing ambiguity in the positional alignment between the groove and the peptide, as well as creating uncertainty as to which parts of the peptide interact with the MHC. Moreover, the antigenic peptides have variable lengths, making naive modelling methods difficult to apply. This paper introduces a kernel method that can handle variable length peptides effectively by quantifying similarities between peptide sequences and integrating these into the kernel. Results - The kernel approach presented here shows increased prediction accuracy, with a significantly higher number of true positives and negatives, on multiple MHC class II alleles when tested on data sets from MHCPEP [1], MHCBN [2], and MHCBench [3]. Evaluation by cross-validation, when segregating binders and non-binders, produced an average AROC of 0.824 for the MHCBench data sets (up from 0.756), and an average AROC of 0.96 for multiple alleles of the MHCPEP database. Conclusion - The method improves on existing state-of-the-art methods of MHC class II peptide binding prediction by using a custom, knowledge-based representation of peptides. Similarity scores, in contrast to a fixed-length, pocket-specific representation of amino acids, provide a flexible and powerful way of modelling MHC binding, and can easily be applied to other dynamic sequence problems.
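The core idea, a kernel built from pairwise similarity scores between variable-length sequences, can be sketched as follows. The scoring function here (best un-gapped overlap of identical residues) and the example peptides are illustrative assumptions, not the paper's actual similarity quantification:

```python
# Hypothetical sketch of a similarity-score kernel over variable-length
# peptides; the scoring (best un-gapped overlap of identical residues)
# and the example sequences are assumptions, not the paper's method.
def similarity(a, b):
    """Best count of matching residues over all un-gapped alignments."""
    best = 0
    for shift in range(-len(b) + 1, len(a)):
        score = sum(1 for i, ch in enumerate(b)
                    if 0 <= i + shift < len(a) and a[i + shift] == ch)
        best = max(best, score)
    return best

def kernel(seqs):
    """Symmetric matrix of length-normalised pairwise similarities."""
    return [[similarity(s, t) / max(len(s), len(t)) for t in seqs]
            for s in seqs]

peptides = ["AKFVAAWTLKAAA", "KFVAAWTLK", "GILGFVFTL"]  # variable lengths
K = kernel(peptides)  # K[i][i] == 1.0; off-diagonals lie in [0, 1]
```

Because the score is computed over all relative offsets, sequences of different lengths are compared without padding or truncation, which is the property that makes a similarity-based kernel attractive for the open-ended Class II groove.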
Abstract:
We examined methods of controlling the pulse duration, spectral width and wavelength of the output from an all-fiber Yb laser mode-locked by carbon nanotubes. It is shown that a segment of polarization maintaining (PM) fiber inserted into a standard single mode fiber based laser cavity can function as a spectrally selective filter. Adjusting the length of the PM fiber from 1 to 2 m led to a corresponding variation in the pulse duration from 2 to 3.8 ps, while the spectral bandwidth of the laser output changed from 0.15 to 1.26 nm. Laser output wavelength detuning of up to 5 nm was demonstrated with a fixed length of the PM fiber by adjusting the polarization controller. © 2012 Optical Society of America.
Abstract:
This research focuses on automatically adapting a search engine's size in response to fluctuations in query workload. Deploying a search engine in an Infrastructure as a Service (IaaS) cloud facilitates allocating or deallocating computer resources to or from the engine. Our solution is to contribute an adaptive search engine that repeatedly re-evaluates its load and, when appropriate, switches over to a different number of active processors. We focus on three aspects, broken out into three sub-problems: Continually determining the Number of Processors (CNP), the New Grouping Problem (NGP) and the Regrouping Order Problem (ROP). CNP is the problem of determining, in the light of changes in the query workload, the ideal number of processors p active at any given time in the search engine. NGP arises once a change in the number of processors has been determined: it must then be decided which groups of search data will be distributed across the processors. ROP is the problem of how to redistribute this data onto the processors while keeping the engine responsive and while also minimising the switchover time and the incurred network load. We propose solutions for these sub-problems. For NGP we propose an algorithm for incrementally adjusting the index to fit the varying number of virtual machines. For ROP we present an efficient method for redistributing data among processors while keeping the search engine responsive. For CNP, we propose an algorithm that determines the new size of the search engine by re-evaluating its load. We tested the solution's performance using a custom-built prototype search engine deployed in the Amazon EC2 cloud. Our experiments show that, compared with computing the index from scratch, the incremental NGP algorithm speeds up the index computation 2 to 10 times while maintaining similar search performance.
The chosen redistribution method is 25% to 50% faster than other methods and reduces the network load by around 30%. For CNP we present a deterministic algorithm that shows a good ability to determine a new size for the search engine. When combined, these algorithms give an adaptive algorithm that is able to adjust the search engine size under a variable workload.
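A deterministic sizing rule in the spirit of the CNP sub-problem can be sketched as follows; the thresholds, bounds and function names are assumptions for illustration, not the thesis's actual algorithm:

```python
# Illustrative sketch (assumed thresholds and names; not the thesis's
# actual CNP algorithm): a deterministic rule that re-evaluates load and
# returns a new number of active processors for the search engine.
def next_size(current, utilisation, low=0.3, high=0.7, p_min=1, p_max=32):
    """Grow when per-processor utilisation is high, shrink when it is low."""
    if utilisation > high and current < p_max:
        return current + 1   # overloaded: allocate one more processor
    if utilisation < low and current > p_min:
        return current - 1   # underloaded: release one processor
    return current           # within bounds: keep the engine size stable

print(next_size(4, 0.85))  # 5
print(next_size(4, 0.10))  # 3
print(next_size(4, 0.50))  # 4
```

The dead band between the two thresholds prevents the engine from oscillating between sizes under small workload fluctuations, which matters because every size change triggers the NGP and ROP regrouping steps.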
Abstract:
Sequence specificity of antibodies to UV-damaged DNA has not been described previously. The antisera investigated here were specific for UV-modified DNA and were absolutely dependent upon the presence of thymine residues. Using a series of oligonucleotides in competition ELISA, increased inhibition was observed with increasing chain length of UV-polythymidylate. A minimum of three adjacent thymines was required for effective inhibition; alone, dimers of thymine were poor antigens. Although UV-irradiated poly(dC) was not antigenic, cytosines could partially replace thymines within the smallest effective epitope (T-T-T) with a high degree of sequence specificity, not previously described. The main epitope induced by UV was formed from adjacent thymines and either a 3' or a 5' pyrimidine.
Abstract:
Loss of coolant accidents (LOCA) in the primary cooling circuit of a nuclear reactor may result in damage to insulation materials that are located near the leak. The insulation materials released may compromise the operation of the emergency core cooling system (ECCS). Insulation material in the form of mineral wool fibre agglomerates (MWFA) may be transported to the containment sump strainers mounted at the inlet of the emergency cooling pumps, where the insulation fibres may block or penetrate the strainers. In addition to the impact of MWFA on the pressure drop across the strainers, corrosion products formed over time may also accumulate in the fibre cakes on the strainers, which can lead to a significant increase in the strainer pressure drop and result in cavitation in the ECCS. Thus, knowledge of the transport characteristics of the damaged insulation materials in various scenarios is required to help plan for the long-term operability of nuclear reactors that undergo a LOCA. An experimental and theoretical study performed by the Helmholtz-Zentrum Dresden-Rossendorf and the Hochschule Zittau/Görlitz is investigating the phenomena that may be observed in the containment vessel during a LOCA. The study entails the generation of fibre agglomerates, the determination of their transport properties in single- and multi-effect experiments, and the long-term effect that corrosion of the containment internals by the coolant has on the strainer pressure drop. The focus of this presentation is on the experiments performed to characterize the horizontal transport of MWFA, whereas the corresponding CFD simulations are described in an accompanying contribution (see abstract of Cartland Glover et al.). The experiments were performed in a racetrack-type channel that provided a near-uniform horizontal flow. The channel is 0.1 m wide by 1.2 m high with a straight length of 5 m and two bends of 0.5 m.
The measurement techniques include particle imaging (both wide-angle and macro lens), concurrent particle image velocimetry, ultrasonic velocimetry, laser detection sensors to sense the presence or absence of MWFA, and pertinent measurements of the MWFA concentration and quiescent settling characteristics. The transport of the MWFA was observed at velocities of 0.1 and 0.25 m s-1 to verify numerical model behaviour in and just beyond the velocities expected in the containment sump of a nuclear reactor.
Abstract:
In vivo, neurons of the globus pallidus (GP) and subthalamic nucleus (STN) resonate independently around 70 Hz. However, on the loss of dopamine as in Parkinson's disease, there is a switch to a lower frequency of firing with increased bursting and synchronization of activity. In vitro, type A neurons of the GP, identified by the presence of Ih and rebound depolarizations, fire at frequencies (≤80 Hz) in response to glutamate pressure ejection, designed to mimic STN input. The profile of this frequency response was unaltered by bath application of the GABAA antagonist bicuculline (10 μM), indicating the lack of involvement of a local GABA neuronal network, while cross-correlations of neuronal pairs revealed uncorrelated activity or phase-locked activity with a variable phase delay, consistent with each GP neuron acting as an independent oscillator. This autonomy of firing appears to arise due to the presence of intrinsic voltage- and sodium-dependent subthreshold membrane oscillations. GABAA inhibitory postsynaptic potentials are able to disrupt this tonic activity while promoting a rebound depolarization and action potential firing. This rebound is able to reset the phase of the intrinsic oscillation and provides a mechanism for promoting coherent firing activity in ensembles of GP neurons that may ultimately lead to abnormal and pathological disorders of movement.
Abstract:
Whilst research on work group diversity has proliferated in recent years, relatively little attention has been paid to the precise definition of diversity or its measurement. One of the few studies to do so is Harrison and Klein's (2007) typology, which defined three types of diversity – separation, variety and disparity – and suggested possible indices with which they should be measured. However, their typology is limited by its association of diversity types with variable measurement, by a lack of clarity over the meaning of variety, and by the absence of clear guidance about which diversity index should be employed. In this thesis I develop an extended version of the typology, including four diversity types (separation, range, spread and disparity), and propose specific indices to be used for each type of diversity with each variable type (ratio, interval, ordinal and nominal). Indices are chosen or derived from first principles based on the precise definition of the diversity type. I then test the usefulness of these indices in predicting outcomes of diversity compared with other indices, using both an extensive simulated data set (to estimate the effects of mis-specification of the diversity type or index) and eight real data sets (to examine whether the proposed indices produce the strongest relationships with hypothesised outcomes). The analyses lead to the conclusion that the indices proposed in the typology are at least as good as, and usually better than, other indices in terms of both measured effect sizes and power to find significant results, and thus provide evidence to support the typology. Implications for theory and methodology are discussed.
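For concreteness, Harrison and Klein's three diversity types are commonly operationalised with indices like the following. This is an illustrative sketch with made-up example data; the thesis proposes its own extended set of indices, which this code does not reproduce:

```python
import math

# Illustrative computations of indices commonly matched to Harrison and
# Klein's three diversity types (the thesis proposes its own extended
# indices, which this sketch does not reproduce).
def separation(values):
    """Standard deviation: differences of position on an attitude scale."""
    m = sum(values) / len(values)
    return math.sqrt(sum((v - m) ** 2 for v in values) / len(values))

def variety(categories):
    """Blau's index 1 - sum(p_k^2): differences in kind or category."""
    n = len(categories)
    return 1 - sum((categories.count(c) / n) ** 2 for c in set(categories))

def disparity(values):
    """Coefficient of variation: concentration of a valued resource."""
    m = sum(values) / len(values)
    return separation(values) / m

print(separation([1, 1, 7, 7]))                # 3.0: polarised opinions
print(variety(["eng", "eng", "sales", "hr"]))  # 0.625: mixed functions
print(disparity([30, 30, 30, 120]))            # one high earner dominates
```

The point of keeping the three computations distinct is the one the typology makes: applying an index designed for one diversity type to a variable of another type (for example, Blau's index to an attitude scale) mis-specifies the construct being measured.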
Abstract:
In this chapter, we discuss the interviewing of adult witnesses and victims with reference to how the extant psychological and linguistic literature has contributed to understanding and informing interview practice over the past 20 years and how it continues to support practical and procedural improvements. We have only scratched the surface of this important and complex topic, but throughout this chapter we have directed readers to many in-depth reviews and some of the most contemporary research literature currently available in this domain. We have introduced the PEACE model and described the Cognitive Interview procedure and its development. We have also discussed rapport building, question types and communication style, all with reference to witness memory and practical interviewing. Finally, we highlight areas that would benefit from research, for example conducting interviews with interpreters, and how new training initiatives are seeking to improve interview procedures and interviewer practice.
Abstract:
By transforming the optical fiber span into an ultralong cavity laser, we experimentally demonstrate quasilossless transmission over long (up to 75 km) distances and virtually zero signal power variation over shorter (up to 20 km) spans, opening the way for the practical implementation of integrable nonlinear systems in optical fiber. As a by-product of our technique, the longest ever laser (to the best of our knowledge) has been implemented, with a cavity length of 75 km. A simple theory of the lossless fiber span, in excellent agreement with the observed results, is presented.