957 results for chains with unbounded variable length memory
Abstract:
The role of the therapeutic drug monitoring laboratory in support of immunosuppressant drug therapy is well established, and the introduction of sirolimus (SRL) is a new direction in this field. The lack of an immunoassay for several years has restricted the availability of SRL assay services. The recent availability of a CEDIA® SRL assay has the potential to improve this situation. The present communication compared the CEDIA® SRL method with 2 established chromatographic methods, HPLC-UV and HPLC-MS/MS. The CEDIA® method, run on a Hitachi 917 analyzer, showed acceptable validation criteria with within-assay precision of 9.1% and 3.3%, and bias of 17.1% and 5.8%, at SRL concentrations of 5.0 µg/L and 20 µg/L, respectively. The corresponding between-run precision values were 11.5% and 3.3%, with bias of 7.1% and 2.9%, at 5.0 µg/L and 20 µg/L, respectively. The lower limit of quantification was found to be 3.0 µg/L. A series of 96 EDTA whole-blood samples, predominantly from renal transplant recipients, were assayed by the 3 methods for comparison. The CEDIA® method showed a Deming regression line of CEDIA = 1.20 × HPLC-MS/MS − 0.07 (r = 0.934, SEE = 1.47), with a mean bias of 20.4%. Serial blood samples from 8 patients included in this evaluation showed that the CEDIA® method reflected the clinical fluctuations seen with the chromatographic methods, albeit with the variable bias noted. The CEDIA® method on the Hitachi 917 analyzer is therefore a useful adjunct to SRL dosage individualization in renal transplant recipients.
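As a rough illustration of how a Deming regression line and mean bias between two assay methods can be computed, the sketch below assumes equal error variances for the two methods (lambda = 1); the arrays hplc_msms and cedia are made-up placeholder values, not the study's data.

```python
import numpy as np

def deming_regression(x, y, lam=1.0):
    """Deming regression of y on x, assuming an error-variance ratio lam."""
    x, y = np.asarray(x, float), np.asarray(y, float)
    sxx, syy = np.var(x, ddof=1), np.var(y, ddof=1)
    sxy = np.cov(x, y, ddof=1)[0, 1]
    d = syy - lam * sxx
    slope = (d + np.sqrt(d**2 + 4.0 * lam * sxy**2)) / (2.0 * sxy)
    intercept = y.mean() - slope * x.mean()
    return slope, intercept

# Illustrative paired concentrations (µg/L); not the study's data.
hplc_msms = np.array([4.2, 6.8, 9.5, 12.1, 15.7, 20.3])
cedia     = np.array([5.0, 8.1, 11.6, 14.3, 18.9, 24.0])

slope, intercept = deming_regression(hplc_msms, cedia)
mean_bias_pct = np.mean((cedia - hplc_msms) / hplc_msms) * 100.0
print(f"CEDIA = {slope:.2f} x HPLC-MS/MS {intercept:+.2f}; mean bias {mean_bias_pct:.1f}%")
```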
Abstract:
Fine-scale spatial genetic structure (SGS) in natural tree populations is largely a result of restricted pollen and seed dispersal. Understanding the link between limitations to dispersal in gene vectors and SGS is of key interest to biologists and the availability of highly variable molecular markers has facilitated fine-scale analysis of populations. However, estimation of SGS may depend strongly on the type of genetic marker and sampling strategy (of both loci and individuals). To explore sampling limits, we created a model population with simulated distributions of dominant and codominant alleles, resulting from natural regeneration with restricted gene flow. SGS estimates from subsamples (simulating collection and analysis with amplified fragment length polymorphism (AFLP) and microsatellite markers) were correlated with the 'real' estimate (from the full model population). For both marker types, sampling ranges were evident, with lower limits below which estimation was poorly correlated and upper limits above which sampling became inefficient. Lower limits (correlation of 0.9) were 100 individuals, 10 loci for microsatellites and 150 individuals, 100 loci for AFLPs. Upper limits were 200 individuals, five loci for microsatellites and 200 individuals, 100 loci for AFLPs. The limits indicated by simulation were compared with data sets from real species. Instances where sampling effort had been either insufficient or inefficient were identified. The model results should form practical boundaries for studies aiming to detect SGS. However, greater sample sizes will be required in cases where SGS is weaker than for our simulated population, for example, in species with effective pollen/seed dispersal mechanisms.
Abstract:
The presence and location of intramolecular disulphide bonds (IDSBs) are a key determinant of the structure and function of proteins. Intramolecular disulphide bonds in proteins have previously been analyzed under the assumption that there is no clear relationship between disulphide arrangement and disulphide concentration. To investigate this, a set of sequence-nonhomologous protein chains containing one or more intramolecular disulphide bonds was extracted from the Protein Data Bank (PDB), and the arrangements of the bonds, the PDB header, and the Structural Classification of Proteins (SCOP) fold were analyzed as a function of intramolecular disulphide bond concentration. Two populations of intramolecular disulphide bond-containing proteins were identified, with a naturally occurring partition at 25 residues per bond. These populations were named intramolecular disulphide bond-rich and -poor. Benefits of partitioning were illustrated by three results: (1) rich chains most frequently contained three disulphides, explaining the plateaux in extant disulphide frequency distributions; (2) a positive relationship between median chain length and the number of disulphides, only seen when the data were partitioned; and (3) the most common bonding pattern for chains with three disulphide bonds was based on the most common for two, only when the data were partitioned. The two populations had different headers, folds, bond arrangements, and chain lengths. Associations between IDSB concentration, IDSB bonding pattern, loop sizes, SCOP fold, and PDB header were also found. From this, we found that intramolecular disulphide bond-rich and -poor proteins follow different bonding rules and must be considered separately to generate meaningful models of bond formation.
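A minimal sketch of the partition described above: a chain is classified as IDSB-rich or IDSB-poor from its length and its number of intramolecular disulphide bonds, using the 25-residues-per-bond threshold reported in the abstract. The function name and example chains are illustrative.

```python
def classify_idsb_density(chain_length, n_disulphides, threshold=25.0):
    """Classify a chain as IDSB-rich or IDSB-poor by residues per
    intramolecular disulphide bond (partition at 25 residues/bond)."""
    if n_disulphides == 0:
        raise ValueError("chain has no intramolecular disulphide bonds")
    residues_per_bond = chain_length / n_disulphides
    return "rich" if residues_per_bond <= threshold else "poor"

# Example: a 58-residue chain with 3 disulphides -> ~19.3 residues/bond -> rich
print(classify_idsb_density(58, 3))   # rich
print(classify_idsb_density(250, 2))  # poor
```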
Abstract:
There is considerable evidence that working memory impairment is a common feature of schizophrenia. The present study assessed working memory and executive function in 54 participants with schizophrenia, and a group of 54 normal controls matched to the patients on age, gender and estimated premorbid IQ, using traditional and newer measures of executive function and two dual tasks: Telephone Search with Counting, and the Memory Span and Tracking Task. Results indicated that participants with schizophrenia were significantly impaired on all standardised measures of executive function, with the exception of a composite measure of the Trail Making Test. Results for the dual-task measures demonstrated that while the participants with schizophrenia were unimpaired on immediate digit span recall over a 2-min period, they recalled fewer digit strings and performed more poorly on a tracking task (box-crossing task) compared with controls. In addition, participants with schizophrenia performed more poorly on the tracking task when they were required to simultaneously recall digit strings than when they performed this task alone. Contrary to expectation, results of the telephone search task under dual conditions were not significantly different between groups. These results may reflect the insufficient complexity of the tone-counting task as an interference task. Overall, the present study showed that participants with schizophrenia appear to have a restricted impairment of their working memory system that is evident in tasks in which the visuospatial sketchpad slave system requires central executive control. © 2005 Elsevier Ireland Ltd. All rights reserved.
Abstract:
Markov chain Monte Carlo (MCMC) is a methodology that is gaining widespread use in the phylogenetics community and is central to phylogenetic software packages such as MrBayes. An important issue for users of MCMC methods is how to select appropriate values for adjustable parameters such as the length of the Markov chain or chains, the sampling density, the proposal mechanism, and, if Metropolis-coupled MCMC is being used, the number of heated chains and their temperatures. Although some parameter settings have been examined in detail in the literature, others are frequently chosen with more regard to computational time or personal experience with other data sets. Such choices may lead to inadequate sampling of tree space or an inefficient use of computational resources. We performed a detailed study of convergence and mixing for 70 randomly selected, putatively orthologous protein sets with different sizes and taxonomic compositions. Replicated runs from multiple random starting points permit a more rigorous assessment of convergence, and we developed two novel statistics, delta and epsilon, for this purpose. Although likelihood values invariably stabilized quickly, adequate sampling of the posterior distribution of tree topologies took considerably longer. Our results suggest that multimodality is common for data sets with 30 or more taxa and that this results in slow convergence and mixing. However, we also found that the pragmatic approach of combining data from several short, replicated runs into a metachain to estimate bipartition posterior probabilities provided good approximations, and that such estimates were no worse in approximating a reference posterior distribution than those obtained using a single long run of the same length as the metachain. Precision appears to be best when heated Markov chains have low temperatures, whereas chains with high temperatures appear to sample trees with high posterior probabilities only rarely. [Bayesian phylogenetic inference; heating parameter; Markov chain Monte Carlo; replicated chains.]
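The delta and epsilon statistics mentioned above are not defined in the abstract, so the sketch below only illustrates the general idea it describes: pooling post-burn-in samples from several replicated runs into a metachain to estimate bipartition (split) posterior probabilities, plus a simple between-run split-frequency comparison as a convergence check. The data structures and function names are assumptions.

```python
from collections import Counter
from itertools import chain

def bipartition_posteriors(runs, burnin_frac=0.25):
    """Estimate bipartition posterior probabilities by pooling post-burn-in
    samples from replicated MCMC runs into a 'metachain'.

    runs: list of runs; each run is a list of sampled trees, where a tree is
    represented as a frozenset of bipartitions (each bipartition itself a
    frozenset of taxon labels)."""
    pooled = []
    for run in runs:
        start = int(len(run) * burnin_frac)
        pooled.extend(run[start:])
    counts = Counter(chain.from_iterable(pooled))
    n = len(pooled)
    return {split: c / n for split, c in counts.items()}

def max_split_frequency_difference(run_a, run_b, burnin_frac=0.25):
    """A simple convergence check: the largest absolute difference in split
    frequencies between two independent runs (not the paper's delta/epsilon)."""
    pa = bipartition_posteriors([run_a], burnin_frac)
    pb = bipartition_posteriors([run_b], burnin_frac)
    splits = set(pa) | set(pb)
    return max(abs(pa.get(s, 0.0) - pb.get(s, 0.0)) for s in splits)
```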
Abstract:
Tetrapeptide analogue H-[Glu-Ser-Lys(Thz)]-OH, containing a turn-inducing thiazole constraint, was used as a template to produce a 21-membered structurally characterized loop by linking Glu and Lys side chains with a Val-Ile dipeptide. This template was oligomerized in one pot to a library (cyclo-[1](n), n = 2-10) of giant symmetrical macrocycles (up to 120-membered rings), fused to 2-10 appended loops that were carried intact through multiple oligomerization (chain extension) and cyclization (chain terminating) reactions of the template. A three-dimensional solution structure for cyclo-[1](3) shows all three appended loops projecting from the same face of the macrocycle. This is a promising approach to separating peptide motifs over large distances.
Abstract:
The physical implementation of quantum information processing is one of the major challenges of current research. In the last few years, several theoretical proposals and experimental demonstrations on a small number of qubits have been carried out, but a quantum computing architecture that is straightforwardly scalable, universal, and realizable with state-of-the-art technology is still lacking. In particular, a major ultimate objective is the construction of quantum simulators, yielding massively increased computational power in simulating quantum systems. Here we investigate promising routes towards the actual realization of a quantum computer, based on spin systems. The first one employs molecular nanomagnets with a doublet ground state to encode each qubit and exploits the wide chemical tunability of these systems to obtain the proper topology of inter-qubit interactions. Indeed, recent advances in coordination chemistry allow us to arrange these qubits in chains, with tailored interactions mediated by magnetic linkers. These act as switches of the effective qubit-qubit coupling, thus enabling the implementation of one- and two-qubit gates. Molecular qubits can be controlled either by uniform magnetic pulses or by local electric fields. We introduce here two different schemes for quantum information processing with either global or local control of the inter-qubit interaction and demonstrate the high performance of these platforms by simulating the system time evolution with state-of-the-art parameters. The second architecture we propose is based on a hybrid spin-photon qubit encoding, which exploits the best characteristics of photons, whose mobility is used to efficiently establish long-range entanglement, and of spin systems, which ensure long coherence times. The setup consists of spin ensembles coherently coupled to single photons within superconducting coplanar waveguide resonators. The tunability of the resonators' frequency is exploited as the only manipulation tool to implement a universal set of quantum gates, by bringing the photons into and out of resonance with the spin transition. The time evolution of the system subject to the pulse sequence used to implement complex quantum algorithms has been simulated by numerically integrating the master equation for the system density matrix, thus including the harmful effects of decoherence. Finally, a scheme to overcome the leakage of information due to inhomogeneous broadening of the spin ensemble is pointed out. Both of the proposed setups are based on state-of-the-art technological achievements. By extensive numerical experiments we show that their performance is remarkably good, even for the implementation of long sequences of gates used to simulate interesting physical models. The systems examined here are therefore promising building blocks of future scalable architectures and can be used for proof-of-principle experiments of quantum information processing and quantum simulation.
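As a minimal illustration of numerically integrating a master equation for a density matrix with decoherence, the sketch below evolves a single dephasing qubit with fourth-order Runge-Kutta. The Hamiltonian, dephasing rate, and time step are assumed toy parameters; this is not the spin-photon platform described above.

```python
import numpy as np

# Pauli matrix and a single-qubit toy model (illustrative parameters only).
sz = np.array([[1, 0], [0, -1]], dtype=complex)

omega = 2 * np.pi * 1.0        # qubit splitting (arbitrary units)
gamma = 0.05                   # pure-dephasing rate (assumed)
H = 0.5 * omega * sz
L = np.sqrt(gamma) * sz        # dephasing Lindblad operator

def lindblad_rhs(rho):
    """d(rho)/dt = -i[H, rho] + L rho L^dag - 1/2 {L^dag L, rho}."""
    comm = H @ rho - rho @ H
    Ld = L.conj().T
    diss = L @ rho @ Ld - 0.5 * (Ld @ L @ rho + rho @ Ld @ L)
    return -1j * comm + diss

# Start in the superposition |+> and integrate with 4th-order Runge-Kutta.
plus = np.array([[1], [1]], dtype=complex) / np.sqrt(2)
rho = plus @ plus.conj().T
dt, steps = 0.001, 5000
for _ in range(steps):
    k1 = lindblad_rhs(rho)
    k2 = lindblad_rhs(rho + 0.5 * dt * k1)
    k3 = lindblad_rhs(rho + 0.5 * dt * k2)
    k4 = lindblad_rhs(rho + dt * k3)
    rho = rho + (dt / 6.0) * (k1 + 2 * k2 + 2 * k3 + k4)

coherence = abs(rho[0, 1])     # off-diagonal element decays under dephasing
print(f"|rho_01| after t = {dt*steps:.1f}: {coherence:.3f}")
```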
Abstract:
Relationships between clustering, description length, and regularisation are pointed out, motivating the introduction of a cost function with a description length interpretation and the unusual and useful property of having its minimum approximated by the densest mode of a distribution. A simple inverse kinematics example is used to demonstrate that this property can be used to select and learn one branch of a multi-valued mapping. This property is also used to develop a method for setting regularisation parameters according to the scale on which structure is exhibited in the training data. The regularisation technique is demonstrated on two real data sets, a classification problem and a regression problem.
Abstract:
Purpose. To use anterior segment optical coherence tomography (AS-OCT) to analyze ciliary muscle morphology and changes with accommodation and axial ametropia. Methods. Fifty prepresbyopic volunteers, aged 19 to 34 years, were recruited. High-resolution images were acquired of nasal and temporal ciliary muscles in the relaxed state and at stimulus vergence levels of -4 and -8 D. Objective accommodative responses and axial lengths were also recorded. Two-way, mixed-factor analyses of variance (ANOVAs) were used to assess the changes in ciliary muscle parameters with accommodation and determine whether these changes depend on the nasal–temporal aspect or axial length, whereas linear regression analysis was used to analyze the relationship between axial length and ciliary muscle length. Results. The ciliary muscle was longer (r = 0.34, P = 0.02), but not significantly thicker (F = 2.84, P = 0.06), in eyes with greater axial length. With accommodation, the ciliary muscle showed a contractile shortening (F = 42.9, P < 0.001), particularly anteriorly (F = 177.2, P < 0.001), and a thickening of the anterior portion (F = 46.2, P < 0.001). The ciliary muscle was thicker (F = 17.8, P < 0.001) and showed a greater contractile response on the temporal side. Conclusions. The accommodative changes observed support an anterior, as well as centripetal, contractile shift of ciliary muscle mass.
Abstract:
Conventional feedforward neural networks have used the sum-of-squares cost function for training. A new cost function is presented here with a description length interpretation based on Rissanen's Minimum Description Length principle. It is a heuristic with a rough interpretation as the number of data points fit by the model. Rather than seeking optimal descriptions, the cost function forms minimum descriptions in a naive way for computational convenience, and is therefore called the Naive Description Length cost function. Finding minimum description models is shown to be closely related to the identification of clusters in the data. As a consequence, the minimum of this cost function approximates the most probable mode of the data, whereas the sum-of-squares cost function approximates the mean. The new cost function is shown to provide information about the structure of the data. This is done by inspecting the dependence of the error on the amount of regularisation. This structure provides a method of selecting regularisation parameters as an alternative or supplement to Bayesian methods. The new cost function is tested on a number of multi-valued problems, such as a simple inverse kinematics problem, as well as on a number of classification and regression problems. The mode-seeking property of this cost function is shown to improve prediction in time series problems. Description length principles are also used in a similar fashion to derive a regulariser to control network complexity.
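The exact form of the Naive Description Length cost is not given in the abstract, so the sketch below uses a stand-in Gaussian-kernel cost purely to illustrate the mode-seeking property it describes: on bimodal data the stand-in cost's minimiser sits near the densest mode, whereas the sum-of-squares minimiser approximates the mean. All parameters are toy assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
# Bimodal data: a dense mode near 0 and a sparser mode near 5.
data = np.concatenate([rng.normal(0.0, 0.3, 300), rng.normal(5.0, 0.3, 100)])

grid = np.linspace(data.min(), data.max(), 2000)

# Sum-of-squares cost: minimised by the mean, which falls between the modes.
sse = np.array([np.sum((data - c) ** 2) for c in grid])
mean_fit = grid[np.argmin(sse)]

# Stand-in mode-seeking cost: negative Gaussian-kernel density at c
# (not the NDL cost itself, just an illustration of mode seeking).
h = 0.3
kernel_cost = np.array([-np.sum(np.exp(-((data - c) ** 2) / (2 * h ** 2)))
                        for c in grid])
mode_fit = grid[np.argmin(kernel_cost)]

print(f"sum-of-squares fit (mean): {mean_fit:.2f}")        # about 1.25
print(f"mode-seeking fit (densest mode): {mode_fit:.2f}")  # about 0.0
```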
Abstract:
A local area network that can support both voice and data packets offers economic advantages from using only a single network for both types of traffic, greater flexibility in responding to changing user demands, and more efficient use of the transmission capacity. The latter aspect is very important in local broadcast networks where capacity is a scarce resource, for example mobile radio. This research has examined two types of local broadcast network: the Ethernet-type bus local area network and a mobile radio network with a central base station. With such contention networks, medium access control (MAC) protocols are required to gain access to the channel. MAC protocols must provide efficient scheduling on the channel among the distributed population of stations that want to transmit. No access scheme can exceed the performance of a single-server queue, because of the spatial distribution of the stations: stations cannot in general form a queue without using part of the channel capacity to exchange protocol information. In this research, several medium access protocols have been examined and developed in order to increase the channel throughput compared to existing protocols. However, the established performance measures of average packet time delay and throughput cannot adequately characterise protocol performance for packet voice; rather, the percentage of bits delivered within a given time bound becomes the relevant performance measure. Performance of the protocols has been evaluated using discrete event simulation and, in some cases, also by mathematical modelling. All the protocols use either implicit or explicit reservation schemes, with their efficiency depending on the fact that many voice packets are generated periodically within a talkspurt. Two of the protocols are based on the existing 'Reservation Virtual Time CSMA/CD' protocol, which forms a distributed queue through implicit reservations. This protocol has been improved firstly by utilising two channels, a packet transmission channel and a packet contention channel: packet contention is then performed in parallel with packet transmission to increase throughput. The second protocol uses variable length packets to reduce the contention time between transmissions on a single channel. A third protocol that was developed is based on contention for explicit reservations: once a station has achieved a reservation, it maintains this effective queue position for the remainder of the talkspurt and transmits after it has sensed the transmission from the preceding station in the queue. In the mobile radio environment, adaptations to the protocols were necessary so that their operation was robust to signal fading. This was achieved through centralised control at a base station, unlike the local area network versions, where control was distributed at the stations. The results show an improvement in throughput compared to some previous protocols. Further work includes subjective testing to validate the protocols' effectiveness.
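A minimal sketch of the performance measure highlighted above, the percentage of packets delivered within a delay bound, computed for a toy single-server FIFO channel with Poisson traffic. The traffic parameters and deadline are assumptions for illustration, and the model is not one of the protocols developed in the work.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy single-server FIFO channel with Poisson packet arrivals.
n_packets = 50000
arrivals = np.cumsum(rng.exponential(0.020, n_packets))   # mean 20 ms spacing
service = rng.exponential(0.015, n_packets)                # mean 15 ms service

departures = np.empty(n_packets)
busy_until = 0.0
for i in range(n_packets):
    start = max(arrivals[i], busy_until)   # wait while the channel is busy
    busy_until = start + service[i]
    departures[i] = busy_until

delays = departures - arrivals
bound = 0.050                              # 50 ms delivery deadline (assumed)
pct_within_bound = 100.0 * np.mean(delays <= bound)
print(f"{pct_within_bound:.1f}% of packets delivered within {bound*1e3:.0f} ms")
```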
Abstract:
We study memory effects in a kinetic roughening model. For d = 1, a different dynamic scaling is uncovered in the memory-dominated phases; the Kardar-Parisi-Zhang scaling is restored in the absence of noise. d_c = 2 represents the critical dimension where memory is shown to smoothen the roughening front (a = 0). Studies on a discrete atomistic model in the same universality class reconfirm the analytical results in the large-time limit, while a different scaling behavior shows up for t
Abstract:
Background - Modelling the interaction between potentially antigenic peptides and Major Histocompatibility Complex (MHC) molecules is a key step in identifying potential T-cell epitopes. For Class II MHC alleles, the binding groove is open at both ends, causing ambiguity in the positional alignment between the groove and the peptide, as well as creating uncertainty as to what parts of the peptide interact with the MHC. Moreover, the antigenic peptides have variable lengths, making naive modelling methods difficult to apply. This paper introduces a kernel method that can handle variable length peptides effectively by quantifying similarities between peptide sequences and integrating these into the kernel. Results - The kernel approach presented here shows increased prediction accuracy, with a significantly higher number of true positives and negatives, on multiple MHC class II alleles when testing data sets from MHCPEP [1], MHCBN [2], and MHCBench [3]. Evaluation by cross validation, when segregating binders and non-binders, produced an average AROC of 0.824 for the MHCBench data sets (up from 0.756), and an average AROC of 0.96 for multiple alleles of the MHCPEP database. Conclusion - The method improves performance over existing state-of-the-art methods of MHC class II peptide binding prediction by using a custom, knowledge-based representation of peptides. Similarity scores, in contrast to a fixed-length, pocket-specific representation of amino acids, provide a flexible and powerful way of modelling MHC binding, and can easily be applied to other dynamic sequence problems.
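The paper's knowledge-based similarity scores are not specified in the abstract, so the sketch below shows a generic k-mer spectrum kernel as one way to compare variable-length peptide sequences without padding or truncation; the function names and example peptides are illustrative, not the published method.

```python
from collections import Counter
from math import sqrt

def kmer_counts(seq, k=3):
    """Count overlapping k-mers in a peptide sequence."""
    return Counter(seq[i:i + k] for i in range(len(seq) - k + 1))

def spectrum_kernel(seq_a, seq_b, k=3, normalise=True):
    """Generic k-mer spectrum kernel for variable-length sequences
    (an illustration, not the paper's knowledge-based kernel)."""
    ca, cb = kmer_counts(seq_a, k), kmer_counts(seq_b, k)
    dot = sum(ca[m] * cb[m] for m in ca.keys() & cb.keys())
    if not normalise:
        return float(dot)
    na = sqrt(sum(v * v for v in ca.values()))
    nb = sqrt(sum(v * v for v in cb.values()))
    return dot / (na * nb) if na and nb else 0.0

# Variable-length peptides are handled directly.
print(spectrum_kernel("GELIGILNAAKVPAD", "LIGILNAAKV"))  # shared k-mers -> > 0
print(spectrum_kernel("GELIGILNAAKVPAD", "AAAAAAAA"))    # no shared k-mers -> 0.0
```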
Abstract:
Basic concepts for an interval arithmetic standard are discussed in the paper. Interval arithmetic deals with closed and connected sets of real numbers. Unlike floating-point arithmetic, it is free of exceptions. A complete set of formulas to approximate real interval arithmetic on the computer is displayed in section 3 of the paper. The essential comparison relations and lattice operations are discussed in section 6. Evaluation of functions for interval arguments is studied in section 7. The desirability of variable-length interval arithmetic is also discussed in the paper. The requirement to adapt the digital computer to the needs of interval arithmetic is as old as interval arithmetic itself. An obvious, simple possible solution is shown in section 8.
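A minimal sketch of closed-interval arithmetic of the kind discussed above, with addition, subtraction, multiplication, and an intersection test. It uses ordinary float arithmetic for the bounds, whereas a standard-conforming implementation would apply outward (directed) rounding.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Interval:
    """A closed real interval [lo, hi]. Sketch only: a real implementation
    would round the lower bound down and the upper bound up."""
    lo: float
    hi: float

    def __add__(self, other):
        return Interval(self.lo + other.lo, self.hi + other.hi)

    def __sub__(self, other):
        return Interval(self.lo - other.hi, self.hi - other.lo)

    def __mul__(self, other):
        products = (self.lo * other.lo, self.lo * other.hi,
                    self.hi * other.lo, self.hi * other.hi)
        return Interval(min(products), max(products))

    def intersects(self, other):
        return self.lo <= other.hi and other.lo <= self.hi

x = Interval(1.0, 2.0)
y = Interval(-3.0, 0.5)
print(x + y)            # Interval(lo=-2.0, hi=2.5)
print(x * y)            # Interval(lo=-6.0, hi=1.0)
print(x.intersects(y))  # False
```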
Abstract:
We examined methods of controlling the pulse duration, spectral width and wavelength of the output from an all-fiber Yb laser mode-locked by carbon nanotubes. It is shown that a segment of polarization-maintaining (PM) fiber inserted into a standard single-mode-fiber-based laser cavity can function as a spectrally selective filter. Adjustment of the length of the PM fiber from 1 to 2 m led to a corresponding variation in the pulse duration from 2 to 3.8 ps and a change in the spectral bandwidth of the laser output from 0.15 to 1.26 nm. Laser output wavelength detuning over a range of up to 5 nm was demonstrated with a fixed length of the PM fiber by adjustment of the polarization controller. © 2012 Optical Society of America.