975 results for Binary Cyclically Permutable Constant Weight Codes


Relevance:

30.00%

Publisher:

Abstract:

Oral liquid formulations are ideal dosage forms for paediatric and geriatric patients and for patients with dysphagia. Dysphagia is prominent among patients suffering from stroke, motor neurone disease, advanced Alzheimer’s and Parkinson’s disease. However, oral liquid preparations are particularly difficult to formulate for hydrophobic and unstable drugs, and current methods of addressing this issue therefore include the use of ‘specials’ or extemporaneous preparations. To challenge this, the government has encouraged research into the field of oral liquid formulations, with the EMEA and MHRA publishing lists of drugs of interest. The current work investigates strategic formulation development and characterisation of selected APIs (captopril, gliclazide, melatonin, L-arginine and lansoprazole), each with unique obstacles to overcome during solubilisation, stabilisation and the development of a palatable dosage form. With a validated calibration protocol prepared for each drug candidate, the oral liquid formulations were assessed for stability according to the ICH guidelines, along with thorough physicochemical characterisation. The results showed that the pH and polarity of the solvent had the greatest influence on the extent of drug solubilisation, with the inclusion of antioxidants and molecular steric hindrance influencing the extent of drug stability. Captopril, a hydrophilic ACE inhibitor (160 mg.mL-1), undergoes dimerisation with another captopril molecule. It was found that with the addition of EDTA and HP-β-CD, the drug molecule was stabilised and prevented from initiating a thiol-induced, first-order free-radical oxidation. The cyclodextrin provided further steric hindrance (1:1 molar ratio), resulting in complete reduction of the intensity of the sulphur-like smell associated with captopril. Palatability is a crucial factor in patient compliance, particularly when developing a dosage form targeted towards paediatrics. L-arginine is extremely bitter in solution (148.7 g.L-1); the addition of tartaric acid to the 100 mg.mL-1 formulation was sufficient to mask the bitterness associated with its guanidinium ions. The hydrophobicity of gliclazide (55 mg.L-1) was strategically challenged using a binary system of a co-solvent and a surfactant to reduce the polarity of the medium and ultimately increase the solubility of the drug; a second, simpler method was developed using pH modification with L-arginine. Melatonin has two major obstacles in formulation, solubility (100 μg.mL-1) and photosensitivity, both of which were overcome by lowering the dielectric constant of the medium and by reversibly binding the drug within the cyclodextrin cup (1:1 ratio); the cyclodextrin acts by preventing UV rays from reaching the drug molecule and initiating the degradation pathway. Lansoprazole is an acid-labile drug that could only be delivered orally via a delivery vehicle; in oral liquid preparations this involved nanoparticulate vesicles. The extent of drug loading was found to be influenced by the type, concentration and molecular weight of the polymer. All of the formulations achieved relatively long shelf-lives with good preservative efficacy.

Relevance:

30.00%

Publisher:

Abstract:

Typical performance of low-density parity-check (LDPC) codes over a general binary-input output-symmetric memoryless channel is investigated using methods of statistical mechanics. The relationship between the free energy in the statistical-mechanics approach and the mutual information used in the information-theory literature is established within a general framework; Gallager and MacKay-Neal codes are studied as specific examples of LDPC codes. It is shown that basic properties of these codes known for particular channels, including their potential to saturate Shannon's bound, hold for general symmetric channels. The binary-input additive white Gaussian noise channel and the binary-input Laplace channel are considered as specific channel models.
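
As a point of reference for the Shannon bound mentioned above, the following is a minimal sketch (not taken from the paper) that numerically estimates the mutual information of the binary-input additive white Gaussian noise channel with equiprobable inputs; the noise levels and integration grid are illustrative assumptions.

    # Numerical estimate of I(X;Y) for the binary-input AWGN channel with
    # inputs x = +/-1 and noise standard deviation sigma.
    import numpy as np

    def biawgn_mutual_information(sigma, y_points=20001, y_range=12.0):
        y = np.linspace(-y_range, y_range, y_points)
        dy = y[1] - y[0]
        # Conditional output densities p(y | x = +1) and p(y | x = -1)
        p_pos = np.exp(-(y - 1.0) ** 2 / (2 * sigma ** 2)) / np.sqrt(2 * np.pi * sigma ** 2)
        p_neg = np.exp(-(y + 1.0) ** 2 / (2 * sigma ** 2)) / np.sqrt(2 * np.pi * sigma ** 2)
        p_y = 0.5 * (p_pos + p_neg)                 # marginal output density
        # I(X;Y) = sum_x 0.5 * integral p(y|x) log2( p(y|x) / p(y) ) dy
        integrand = 0.5 * (p_pos * np.log2(p_pos / p_y) + p_neg * np.log2(p_neg / p_y))
        return float(np.sum(integrand) * dy)

    if __name__ == "__main__":
        for sigma in (0.5, 0.8, 1.0):               # assumed noise levels
            print(f"sigma = {sigma}: I(X;Y) ~ {biawgn_mutual_information(sigma):.4f} bits/channel use")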

Relevance:

30.00%

Publisher:

Abstract:

In 1965 Levenshtein introduced deletion correcting codes and found an asymptotically optimal family of 1-deletion correcting codes. Over the years there has been little or no research on t-deletion correcting codes for larger values of t. In this paper, we consider the problem of finding the maximal cardinality L2(n; t) of a binary t-deletion correcting code of length n. We construct an infinite family of binary t-deletion correcting codes. By computer search, we construct t-deletion codes for t = 2, 3, 4, 5 with lengths n ≤ 30. Some of these codes improve on earlier results by Hirschberg-Ferreira and Swart-Ferreira. Finally, we prove a recursive upper bound on L2(n; t) which is asymptotically worse than the best known bounds, but gives better estimates for small values of n.
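
For context on the single-deletion case referenced above, here is a minimal sketch (not from the paper) of the Varshamov-Tenengolts construction, which Levenshtein showed to be single-deletion correcting and asymptotically optimal, together with a brute-force check of the deletion-correcting property for small n.

    # VT_a(n) = { x in {0,1}^n : sum_i i * x_i = a (mod n+1) }, i = 1..n.
    # Brute-force verification is feasible only for small n.
    from itertools import product

    def vt_code(n, a=0):
        """All length-n binary words whose weighted sum is a modulo n+1."""
        return [x for x in product((0, 1), repeat=n)
                if sum(i * xi for i, xi in enumerate(x, start=1)) % (n + 1) == a]

    def deletion_ball(x):
        """All words obtainable from x by deleting exactly one symbol."""
        return {x[:i] + x[i + 1:] for i in range(len(x))}

    def is_single_deletion_correcting(code):
        """True iff no two codewords share a word in their deletion balls."""
        seen = {}
        for c in code:
            for y in deletion_ball(c):
                if y in seen and seen[y] != c:
                    return False
                seen[y] = c
        return True

    if __name__ == "__main__":
        n = 8
        code = vt_code(n, a=0)
        print(f"|VT_0({n})| = {len(code)}, "
              f"1-deletion correcting: {is_single_deletion_correcting(code)}")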

Relevance:

30.00%

Publisher:

Abstract:

Like classic Signal Detection Theory (SDT), the recent optimal Binary Signal Detection Theory (BSDT) and the Neural Network Assembly Memory Model (NNAMM) built on it can successfully reproduce Receiver Operating Characteristic (ROC) curves, although the BSDT/NNAMM parameters (cue intensity and neuron threshold) and the classic SDT parameters (perception distance and response bias) are essentially different. In the present work, the BSDT/NNAMM optimal likelihood and posterior probabilities are analysed analytically and used to generate ROCs and modified (posterior) mROCs, as well as the optimal overall likelihood and posterior. It is shown that, for the description of basic discrimination experiments in psychophysics within the BSDT, a ‘neural space’ can be introduced in which sensory stimuli are represented as neural codes and decision processes are defined; that the BSDT’s isobias curves can simultaneously be interpreted as universal psychometric functions satisfying the Neyman-Pearson objective; that the just noticeable difference (jnd) can be defined and interpreted as an atom of experience; and that near-neutral values of biases are observers’ natural choice. The uniformity (no-priming) hypothesis, concerning the ‘in-mind’ distribution of false-alarm probabilities during ROC or overall probability estimations, is introduced. The BSDT’s and classic SDT’s sensitivity, bias, and their ROC and decision spaces are compared.
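
For comparison with the BSDT quantities discussed above, a minimal sketch of the classic SDT side (equal-variance Gaussian model) follows: an ROC point is generated from the sensitivity d' and a sweep of the response criterion. The d' and criterion values are illustrative assumptions, not parameters from the paper.

    # Classic equal-variance Gaussian SDT: hit and false-alarm rates for a
    # given sensitivity d' and criterion c (signal mean +d'/2, noise mean -d'/2).
    from math import erf, sqrt

    def phi(z):
        """Standard normal cumulative distribution function."""
        return 0.5 * (1.0 + erf(z / sqrt(2.0)))

    def roc_point(d_prime, criterion):
        """One (false-alarm rate, hit rate) pair for one criterion placement."""
        hit_rate = 1.0 - phi(criterion - d_prime / 2.0)
        fa_rate = 1.0 - phi(criterion + d_prime / 2.0)
        return fa_rate, hit_rate

    if __name__ == "__main__":
        d_prime = 1.5                              # assumed sensitivity
        for c in (-1.0, -0.5, 0.0, 0.5, 1.0):      # sweep of bias/criterion values
            fa, hit = roc_point(d_prime, c)
            print(f"c = {c:+.1f}: P(FA) = {fa:.3f}, P(hit) = {hit:.3f}")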

Relevance:

30.00%

Publisher:

Abstract:

Mathematics Subject Classification: 26D10.

Relevance:

30.00%

Publisher:

Abstract:

Rising concentrations of atmospheric CO2 are changing the carbonate chemistry of the oceans, a process known as ocean acidification (OA). Absorption of this CO2 by the surface oceans is increasing the amount of total dissolved inorganic carbon (DIC) and bicarbonate ion (HCO3-) available for marine calcification, yet is simultaneously lowering the seawater pH and carbonate ion concentration ([CO32-]), and thus the saturation state of seawater with respect to aragonite (Ωar). We investigated the relative importance of [HCO3-] versus [CO32-] for early calcification by new recruits (primary polyps settled from zooxanthellate larvae) of two tropical coral species, Favia fragum and Porites astreoides. The polyps were reared over a range of Ωar values, which were manipulated both by acid addition at constant pCO2 (decreased total [HCO3-] and [CO32-]) and by pCO2 elevation at constant alkalinity (increased [HCO3-], decreased [CO32-]). Calcification after 2 weeks was quantified by weighing the complete skeleton (corallite) accreted by each polyp over the course of the experiment. Both species exhibited the same negative response to decreasing [CO32-] whether Ωar was lowered by acid addition or by pCO2 elevation; calcification did not follow total DIC or [HCO3-]. Nevertheless, the calcification response to decreasing [CO32-] was nonlinear. A statistically significant decrease in calcification was only detected between Ωar ≤ 2.5 and Ωar = 1.1-1.5, where calcification of new recruits was reduced by 22-37% per 1.0 decrease in Ωar. Our results differ from many previous studies that report a linear coral calcification response to OA, and from those showing that calcification increases with increasing [HCO3-]. Clearly, the coral calcification response to OA is variable and complex. A deeper understanding of the biomineralization mechanisms and environmental conditions underlying these variable responses is needed to support informed predictions about future OA impacts on corals and coral reefs.

Relevance:

30.00%

Publisher:

Abstract:

Performing experiments on small-scale quantum computers is certainly a challenging endeavor. Many parameters need to be optimized to achieve high-fidelity operations. This can be done efficiently for operations acting on single qubits, as errors can be fully characterized. For multiqubit operations, though, this is no longer the case, because analyzing the effect of the operation on the system in full generality requires state tomography, for which the required resources scale exponentially with the system size. Furthermore, in recent experiments, additional electronic levels beyond the two-level system encoding the qubit have been used to enhance the capabilities of quantum-information processors, which further increases the number of parameters that need to be controlled. To optimize the experimental system for a given task (e.g., a quantum algorithm), one has to find a satisfactory error model and efficient observables for estimating the parameters of that model. In this manuscript, we demonstrate a method to optimize the encoding procedure for a small quantum error correction code in the presence of unknown but constant phase shifts. The method, which we implement here on a small-scale linear ion-trap quantum computer, is readily applicable to other AMO platforms for quantum-information processing.
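
As a toy illustration of estimating an unknown but constant phase shift from a small set of observables (this is not the paper's protocol; the shot counts and the true phase below are assumptions), a single-qubit phase can be reconstructed from the measured expectation values of X and Y:

    # For the state (|0> + e^{i phi}|1>)/sqrt(2), <X> = cos(phi) and <Y> = sin(phi),
    # so phi can be recovered with atan2 from finite-shot estimates of both.
    import numpy as np

    rng = np.random.default_rng(seed=1)

    def simulate_expectation(true_phi, basis, shots):
        """Simulate +/-1 measurement outcomes for X or Y and return their mean."""
        p_plus = 0.5 * (1.0 + (np.cos(true_phi) if basis == "X" else np.sin(true_phi)))
        outcomes = rng.choice([1.0, -1.0], size=shots, p=[p_plus, 1.0 - p_plus])
        return outcomes.mean()

    def estimate_phase(true_phi, shots=2000):
        ex = simulate_expectation(true_phi, "X", shots)
        ey = simulate_expectation(true_phi, "Y", shots)
        return np.arctan2(ey, ex)

    if __name__ == "__main__":
        true_phi = 0.7                              # assumed constant phase shift (radians)
        print(f"true phi = {true_phi:.3f}, estimated phi = {estimate_phase(true_phi):.3f}")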

Relevance:

30.00%

Publisher:

Abstract:

The presence of high phase noise in addition to additive white Gaussian noise in coherent optical systems affects the performance of forward error correction (FEC) schemes. In this paper, we propose a simple scheme for such systems, using block interleavers and binary Bose–Chaudhuri–Hocquenghem (BCH) codes. The block interleavers are specifically optimized for differential quadrature phase shift keying modulation. We propose a method for selecting BCH codes that, together with the interleavers, achieve a target post-FEC bit error rate (BER). This combination of interleavers and BCH codes has very low implementation complexity. In addition, our approach is straightforward, requiring only short pre-FEC simulations to parameterize a model, based on which we select codes analytically. We aim to correct a pre-FEC BER of around (Formula presented.). We evaluate the accuracy of our approach using numerical simulations. For a target post-FEC BER of (Formula presented.), codes selected using our method result in BERs around 3(Formula presented.) target and achieve the target with around 0.2 dB extra signal-to-noise ratio.
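
As background on the block-interleaving step described above, here is a minimal sketch of a generic row/column block interleaver (the dimensions are illustrative, not the optimized parameters from the paper); it spreads a burst of channel errors, such as those caused by a phase slip, across many BCH codewords:

    # Bits are written row by row into a rows x cols array and read out column
    # by column; deinterleaving inverts the permutation at the receiver.
    def interleave(bits, rows, cols):
        assert len(bits) == rows * cols
        matrix = [bits[r * cols:(r + 1) * cols] for r in range(rows)]    # write rows
        return [matrix[r][c] for c in range(cols) for r in range(rows)]  # read columns

    def deinterleave(bits, rows, cols):
        assert len(bits) == rows * cols
        matrix = [bits[c * rows:(c + 1) * rows] for c in range(cols)]    # write columns
        return [matrix[c][r] for r in range(rows) for c in range(cols)]  # read rows

    if __name__ == "__main__":
        data = list(range(12))                      # stand-in for coded bits
        tx = interleave(data, rows=3, cols=4)
        assert deinterleave(tx, rows=3, cols=4) == data
        print(tx)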

Relevance:

30.00%

Publisher:

Abstract:

Modelling of massive stars and supernovae (SNe) plays a crucial role in understanding galaxies. From this modelling we can derive fundamental constraints on stellar evolution, mass-loss processes, mixing, and the products of nucleosynthesis. Proper account must be taken of all important processes that populate and depopulate the levels (collisional excitation, de-excitation, ionization, recombination, photoionization, and bound–bound processes). For the analysis of Type Ia SNe and core-collapse SNe (Types Ib, Ic and II), Fe-group elements are particularly important. Unfortunately, little data are currently available, and most noticeably absent are the photoionization cross-sections for the Fe-peak elements, which have high abundances in SNe. Important interactions for both photoionization and electron-impact excitation are calculated using the relativistic Dirac atomic R-matrix codes (DARC) for low-ionization stages of cobalt. All results are calculated up to photon energies of 45 eV and electron energies up to 20 eV. The wavefunction representation of Co III has been generated using GRASP0 by including the dominant 3d7, 3d6[4s, 4p], 3p43d9 and 3p63d9 configurations, resulting in 292 fine-structure levels. Electron-impact collision strengths and Maxwellian-averaged effective collision strengths across a wide range of astrophysically relevant temperatures are computed for Co III. In addition, statistically weighted, level-resolved ground and metastable photoionization cross-sections are presented for Co II and compared directly with existing work.
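
As a side note on the Maxwellian averaging mentioned above, the effective collision strength is the average Upsilon(T) = integral from 0 to infinity of Omega(E) exp(-E/kT) d(E/kT). The sketch below evaluates this average numerically for an assumed, purely illustrative collision strength; it is not DARC output.

    # Numerical Maxwellian average of a toy collision strength Omega(E).
    import numpy as np

    def effective_collision_strength(omega, kT, e_max, n=200000):
        """Average omega(E) over a Maxwellian of temperature kT (E and kT in the same units)."""
        energy = np.linspace(0.0, e_max, n)
        d_e = energy[1] - energy[0]
        weight = np.exp(-energy / kT) / kT          # the measure is d(E/kT) = dE / kT
        return float(np.sum(omega(energy) * weight) * d_e)

    if __name__ == "__main__":
        toy_omega = lambda e: 0.5 + 0.1 * e         # assumed, slowly varying collision strength
        for kT in (0.5, 1.0, 2.0):                  # temperatures in the same energy units (e.g. eV)
            ups = effective_collision_strength(toy_omega, kT, e_max=20.0)
            print(f"kT = {kT}: effective collision strength ~ {ups:.3f}")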

Relevance:

30.00%

Publisher:

Abstract:

This paper revisits strongly-MDS convolutional codes with maximum distance profile (MDP). These are (non-binary) convolutional codes that have an optimum sequence of column distances and attain the generalized Singleton bound at the earliest possible time frame. These properties make such convolutional codes applicable over the erasure channel, since they are able to correct a large number of erasures per time interval. The existence of these codes had previously been shown only for some specific cases. This paper shows, by construction, the existence of convolutional codes that are both strongly-MDS and MDP for all choices of parameters.
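
To make the column-distance notion above concrete, here is a small brute-force sketch for a binary rate-1/2 example (the paper's constructions are non-binary; the generator polynomials here are a standard textbook choice used only for illustration):

    # d_j is the minimum Hamming weight of the first (j+1) output blocks over
    # all input sequences whose first input block is nonzero.
    from itertools import product

    # Rate-1/2 encoder with generators 1 + D + D^2 and 1 + D^2 (memory 2).
    G = [(1, 1, 1), (1, 0, 1)]

    def encode(u):
        """Encode input bits u (no termination) into interleaved output bits."""
        out = []
        for t in range(len(u)):
            for g in G:
                out.append(sum(g[k] * (u[t - k] if t - k >= 0 else 0)
                               for k in range(len(g))) % 2)
        return out

    def column_distance(j):
        """Minimum weight of the first j+1 output blocks, first input bit forced to 1."""
        best = None
        for tail in product((0, 1), repeat=j):
            w = sum(encode((1,) + tail))
            best = w if best is None else min(best, w)
        return best

    if __name__ == "__main__":
        print([column_distance(j) for j in range(6)])   # column distance profile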

Relevance:

30.00%

Publisher:

Abstract:

Background: Body composition is affected by diseases and affects responses to medical treatments, dosages of medicines, etc., while an abnormal body composition contributes to the causation of many chronic diseases. While we have reliable biochemical tests for certain nutritional parameters of body composition, such as iron or iodine status, and we have harnessed nuclear physics to estimate the body’s content of trace elements, the very basic quantification of body fat content and muscle mass remains highly problematic. Both body fat and muscle mass are vitally important, as they have opposing influences on chronic disease, but they have seldom been estimated as part of population health surveillance. Instead, most national surveys have merely reported BMI and waist circumference, or sometimes the waist/hip ratio; these indices are convenient but do not have any specific biological meaning. Anthropometry offers a practical and inexpensive method for muscle and fat estimation in clinical and epidemiological settings; however, its use is imperfect due to many limitations, such as a shortage of reference data, misuse of terminology, unclear assumptions, and the absence of properly validated anthropometric equations. To date, anthropometric methods are not sensitive enough to detect muscle and fat loss.

Aims: The aim of this thesis is to estimate adipose/fat and muscle mass in health, in disease and during weight loss by: 1. evaluating and critiquing the literature to identify the best published prediction equations for adipose/fat and muscle mass estimation; 2. deriving and validating adipose tissue and muscle mass prediction equations; and 3. evaluating the prediction equations, along with anthropometric indices and the best equations retrieved from the literature, in health, in metabolic illness and during weight loss.

Methods: A systematic review following the Cochrane Review method was used to review muscle mass estimation papers that used MRI as the reference method; fat mass estimation papers were critically reviewed. Data from subjects of mixed ethnicity, age and body mass who underwent whole-body magnetic resonance imaging to quantify adipose tissue and muscle mass (dependent variables) and anthropometry (independent variables) were used in the derivation/validation analysis. Multiple regression and Bland-Altman plots were applied to evaluate the prediction equations. To determine how well the equations identify metabolic illness, the English and Scottish health surveys were studied: multiple regression and binary logistic regression were applied to assess model fit and associations, the populations were divided into quintiles, and relative risk was analysed. Finally, the prediction equations were evaluated by applying them to a pilot study of 10 subjects who underwent whole-body MRI, anthropometric measurements and muscle strength testing before and after weight loss, to determine how well the equations identify changes in adipose/fat mass and muscle mass.

Results: The estimation of fat mass has serious problems. Despite advances in technology and science, prediction equations for the estimation of fat mass depend on limited historical reference data and remain dependent upon assumptions that have not yet been properly validated for different population groups. Muscle mass does not have the same conceptual problems; however, its measurement is still problematic and reference data are scarce. The derivation and validation analysis in this thesis was satisfactory; compared with prediction equations in the literature, the new equations performed similarly or better. Applying the prediction equations to metabolic illness and weight loss showed how well they identify metabolic illness, with significant associations with diabetes, hypertension, HbA1c and blood pressure, and moderate to high correlations with MRI-measured adipose tissue and muscle mass before and after weight loss.

Conclusion: Adipose tissue mass and, to an extent, muscle mass can now be estimated for many purposes as population or group means. However, these equations must not be used for assessing fatness or categorising individuals. Further exploration in different populations and health surveys would be valuable.
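
As a small illustration of the Bland-Altman agreement analysis mentioned in the Methods (the numbers below are made up purely to show the calculation; they are not data from the thesis):

    # Bias and 95% limits of agreement between a predicted and a reference
    # (e.g. MRI-measured) quantity.
    import numpy as np

    def bland_altman(reference, predicted):
        """Return the mean difference (bias) and the 95% limits of agreement."""
        diff = np.asarray(predicted, float) - np.asarray(reference, float)
        bias = diff.mean()
        sd = diff.std(ddof=1)
        return bias, (bias - 1.96 * sd, bias + 1.96 * sd)

    if __name__ == "__main__":
        mri_muscle_kg = [22.1, 30.4, 25.8, 28.0, 35.2, 19.7]   # hypothetical reference values
        predicted_kg = [23.0, 29.5, 26.9, 27.1, 36.0, 21.0]    # hypothetical equation output
        bias, (lo, hi) = bland_altman(mri_muscle_kg, predicted_kg)
        print(f"bias = {bias:.2f} kg, 95% limits of agreement = ({lo:.2f}, {hi:.2f}) kg")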

Relevance:

30.00%

Publisher:

Abstract:

The size of online image datasets is constantly increasing. For an image dataset with millions of images, retrieval becomes a seemingly intractable problem for exhaustive similarity search algorithms. Hashing methods, which encode high-dimensional descriptors into compact binary strings, have become very popular because of their high efficiency in both search and storage. In the first part, we propose a multimodal retrieval method based on latent feature models. The procedure consists of a nonparametric Bayesian framework for learning underlying, semantically meaningful abstract features in a multimodal dataset, a probabilistic retrieval model that allows cross-modal queries, and an extension model for relevance feedback. In the second part, we focus on supervised hashing with kernels. We describe a flexible hashing procedure that treats binary codes and pairwise semantic similarity as latent and observed variables, respectively, in a probabilistic model based on Gaussian processes for binary classification. We present a scalable inference algorithm using the sparse pseudo-input Gaussian process (SPGP) model and distributed computing. In the last part, we define an incremental hashing strategy for dynamic databases where new images are added frequently. The method is based on a two-stage classification framework using binary and multi-class SVMs. The proposed method also enforces balance in the binary codes through an imbalance penalty, in order to obtain higher-quality binary codes. We learn hash functions with an efficient algorithm in which the NP-hard problem of finding optimal binary codes is solved via cyclic coordinate descent and the SVMs are trained in a parallelized, incremental manner. For modifications such as adding images from an unseen class, we propose an incremental procedure for effective and efficient updates to the previous hash functions. Experiments on three large-scale image datasets demonstrate that the incremental strategy is capable of efficiently updating hash functions to the same retrieval performance as hashing from scratch.
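
To illustrate the basic retrieval mechanism that all of these methods share, here is a minimal sketch in which descriptors are mapped to short binary codes and a query is answered by ranking database codes by Hamming distance. The random-hyperplane hash functions below are a deliberately simple stand-in for the learned hash functions described in the abstract, and all data are synthetic.

    import numpy as np

    rng = np.random.default_rng(0)

    def train_hash(dim, n_bits):
        """Random hyperplane hash functions (illustrative stand-in for a learned hash)."""
        return rng.standard_normal((n_bits, dim))

    def encode(hyperplanes, x):
        """Map a descriptor to a binary code, one bit per hyperplane."""
        return (hyperplanes @ x > 0).astype(np.uint8)

    def hamming_search(db_codes, query_code, k=3):
        """Indices of the k database items closest to the query in Hamming distance."""
        dists = np.count_nonzero(db_codes != query_code, axis=1)
        return np.argsort(dists)[:k], np.sort(dists)[:k]

    if __name__ == "__main__":
        dim, n_bits = 128, 32
        hp = train_hash(dim, n_bits)
        database = rng.standard_normal((1000, dim))             # stand-in image descriptors
        db_codes = np.stack([encode(hp, x) for x in database])
        query = database[42] + 0.05 * rng.standard_normal(dim)  # noisy copy of item 42
        idx, d = hamming_search(db_codes, encode(hp, query))
        print("nearest items:", idx, "Hamming distances:", d)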

Relevance:

30.00%

Publisher:

Abstract:

Image quality in 18F-FDG PET/CT scans of overweight patients is commonly degraded. This study retrospectively evaluates the relation between signal-to-noise ratio (SNR), body weight and injected dose in 65 patients, weighing 35 to 120 kg, scanned on a Biograph mCT with a standardized protocol in the Nuclear Medicine Department at Radboud University Medical Centre in Nijmegen, The Netherlands. Five ROIs were drawn in the liver, assumed to be an organ of homogeneous metabolism, at the same location in five consecutive slices of the PET/CT scans, to obtain the mean uptake (signal) and its standard deviation (noise); the ratio of the two gave the SNR in the liver. Weight, height, SNR and body mass index (BMI) were tabulated in a spreadsheet, and graphs were produced to examine the relations between these factors. The graphs showed that SNR decreased as body weight and/or BMI increased, even though the injected dose also increased: heavier patients received higher doses and yet, as reported, still had lower SNR. These findings indicate that image quality, as measured by SNR, is worse in heavier patients than in thinner patients, even though higher FDG doses are given. Taking this into consideration, a new dosing formula was required that yields a good and approximately constant SNR for every patient. Through mathematical derivation, two new dosing equations (a power law and an exponential) were obtained that yield, independently of body mass, the SNR of a scan made at a chosen reference weight (86 kg was used). With these formulas, patients heavier than the reference weight receive higher doses and lighter patients receive lower doses. With the median weight being 86 kg, the new dose and the resulting SNR were calculated, and it was concluded that image quality remains almost constant as weight increases while the quantity of FDG required remains almost the same, without increasing the cost of the total amount of FDG used across these patients.
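
As a numerical illustration of the two quantities discussed above (the abstract does not give the thesis's actual power and exponential dose equations, so the power-law form, reference dose and exponent below are assumptions used only to show the shape of a weight-based rescaling relative to a reference weight):

    # Liver SNR from ROI values, and a hypothetical weight-based dose rescaling.
    import numpy as np

    def liver_snr(roi_values):
        """SNR from liver ROI uptake values: mean uptake divided by its standard deviation."""
        roi = np.asarray(roi_values, float)
        return roi.mean() / roi.std(ddof=1)

    def scaled_dose(weight_kg, ref_dose_mbq=250.0, ref_weight_kg=86.0, exponent=1.5):
        """Hypothetical rescaling: dose grows as (weight / reference weight) ** exponent."""
        return ref_dose_mbq * (weight_kg / ref_weight_kg) ** exponent

    if __name__ == "__main__":
        print(f"liver SNR example: {liver_snr([5.1, 4.8, 5.3, 5.0, 4.9]):.1f}")
        for w in (60, 86, 110):
            print(f"{w} kg -> {scaled_dose(w):.0f} MBq (assumed parameters)")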