978 results for Optimal Linear Codes
Abstract:
Lynch's (1980a) optimal-body-size model is designed to explain some major trends in cladoceran life histories; in particular the fact that large and littoral species seem to be bang-bang strategists (they grow first and then reproduce) whereas smaller planktonic species seem to be intermediate strategists (they grow and reproduce simultaneously). Predation is assumed to be an important selective pressure for these trends. Simocephalus vetulus (Müller) does not fit this pattern, being a littoral and relatively large species but an intermediate strategist. As shown by computer simulations, this species would reduce its per capita rate of increase by adopting the strategy predicted by the optimal-body-size model. Two aspects of the model are criticized: (1) the optimization criterion is shown to be incorrect and (2) the prediction of an intermediate strategy is not justified. Structural constraints are suggested to be responsible for the intermediate strategy of S. vetulus. Biotic interactions seem to have little effect on the observed life-history patterns of this species.
Abstract:
In addition to the importance of sample preparation and extract separation, MS detection is a key factor in the sensitive quantification of large undigested peptides. In this article, a linear ion trap MS (LIT-MS) and a triple quadrupole MS (TQ-MS) were compared for the detection of large peptides at subnanomolar concentrations. Natural brain natriuretic peptide, C-peptide, substance P and the D-JNK inhibitor peptide, a full D-amino acid therapeutic peptide, were chosen. They were detected by ESI and simultaneous MS(1) and MS(2) acquisitions. With direct peptide infusion, MS(2) spectra revealed that fragmentation was peptide dependent: it was milder on the LIT-MS, whereas high collision energies were required on the TQ-MS to obtain high-intensity product ions. Peptide adsorption on surfaces was overcome, and peptide dilutions ranging from 0.1 to 25 nM were injected onto an ultra-high-pressure LC system with a 1 mm i.d. analytical column coupled to the MS instruments. No difference was observed between the two instruments in LC-MS(1) acquisitions. However, in LC-MS(2) acquisitions, better sensitivity in the detection of large peptides was observed with the LIT-MS. Indeed, with the three longer peptides, the typical fragmentation in the TQ-MS resulted in a dramatic loss of sensitivity (≥10-fold).
Abstract:
Background: Two or three DNA primes have been used in previous smaller clinical trials, but the number required for optimal priming of viral vectors has never been assessed in adequately powered clinical trials. The EV03/ANRS Vac20 phase I/II trial investigated this issue using the DNA prime/poxvirus NYVAC boost combination, both expressing a common HIV-1 clade C immunogen consisting of Env and a Gag-Pol-Nef polypeptide. Methods: 147 healthy volunteers were randomly allocated through 8 European centres to either 3xDNA plus 1xNYVAC (weeks 0, 4, 8 plus 24; n=74) or 2xDNA plus 2xNYVAC (weeks 0, 4 plus 20, 24; n=73), stratified by geographical region and sex. T cell responses were quantified using the interferon-γ Elispot assay and 8 peptide pools; samples from weeks 0, 26 and 28 (time points for the primary immunogenicity endpoint), 48 and 72 were considered for this analysis. Results: 140 of 147 participants were evaluable at weeks 26 and/or 28. 64/70 (91%) in the 3xDNA arm compared to 56/70 (80%) in the 2xDNA arm developed a T cell response (P=0.053). 26 (37%) participants in the 3xDNA arm developed a broader T cell response (Env plus at least one of the Gag, Pol, Nef peptide pools) versus 15 (22%) in the 2xDNA arm (P=0.047). At week 26, the overall magnitude of responses was also higher in the 3xDNA than in the 2xDNA arm (similar at week 28), with a median of 545 versus 328 SFUs/10⁶ cells at week 26 (P<0.001). Preliminary overall evaluation showed that participants still had T-cell responses at weeks 48 (78%, n=67) and 72 (70%, n=66). Conclusion: This large clinical trial demonstrates that optimal priming of poxvirus-based vaccine regimens requires 3 DNA primes, and further confirms that the DNA/NYVAC prime-boost vaccine combination is highly immunogenic and induces durable T-cell responses.
Abstract:
This report describes a new approach to the problem of scheduling highway-construction-type projects. The technique can accurately model linear activities and identify the controlling activity path on a linear schedule. Current scheduling practices are unable to accomplish these two tasks with any accuracy for linear activities, leaving planners and managers suspicious of the information they provide. Basic linear scheduling is not a new technique, and many attempts have been made to apply it to various types of work in the past. However, the technique has never been widely used because it lacked an analytical approach to activity relationships and to determining controlling activities. The Linear Scheduling Model (LSM) developed in this report completes the linear scheduling technique by adding to it all of the analytical capabilities, including computer applications, present in CPM scheduling today. The LSM has tremendous potential and will likely have a significant impact on the way linear construction is scheduled in the future.
Abstract:
PURPOSE: The longitudinal relaxation rate (R1) measured in vivo depends on the local microstructural properties of the tissue, such as macromolecular, iron, and water content. Here, we use whole-brain multiparametric in vivo data and a general linear relaxometry model to describe the dependence of R1 on these components. We explore (a) the validity of having a single fixed set of model coefficients for the whole brain and (b) the stability of the model coefficients in a large cohort. METHODS: Maps of magnetization transfer (MT) and effective transverse relaxation rate (R2*) were used as surrogates for macromolecular and iron content, respectively. Spatial variations in these parameters reflected variations in underlying tissue microstructure. A linear model was applied to the whole brain, including gray/white matter and deep brain structures, to determine the global model coefficients. Synthetic R1 values were then calculated using these coefficients and compared with the measured R1 maps. RESULTS: The model's validity was demonstrated by correspondence between the synthetic and measured R1 values and by high stability of the model coefficients across a large cohort. CONCLUSION: A single set of global coefficients can be used to relate R1, MT, and R2* across the whole brain. Our population study demonstrates the robustness and stability of the model. © 2014 The Authors. Magnetic Resonance in Medicine published by Wiley Periodicals, Inc. Magn Reson Med 73:1309-1314, 2015.
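A minimal sketch of what such a linear relaxometry model looks like, assuming the form R1 ≈ b0 + b1·MT + b2·R2* (the coefficient values and the three synthetic "voxels" below are invented for illustration, not values from the study):

```python
# Fit R1 ≈ b0 + b1*MT + b2*R2* by solving the linear system from three
# synthetic voxels. Coefficients and voxel values are hypothetical.

def solve_linear(A, y):
    """Solve the square system A b = y by Gaussian elimination."""
    n = len(y)
    M = [row[:] + [y[i]] for i, row in enumerate(A)]
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(M[r][col]))
        M[col], M[piv] = M[piv], M[col]
        for r in range(col + 1, n):
            f = M[r][col] / M[col][col]
            for c in range(col, n + 1):
                M[r][c] -= f * M[col][c]
    b = [0.0] * n
    for r in range(n - 1, -1, -1):
        b[r] = (M[r][n] - sum(M[r][c] * b[c] for c in range(r + 1, n))) / M[r][r]
    return b

b_true = (0.30, 1.10, 0.030)                       # hypothetical b0, b1, b2
voxels = [(1.0, 10.0), (2.0, 15.0), (1.5, 25.0)]   # (MT, R2*) per voxel
A = [[1.0, mt, r2s] for mt, r2s in voxels]
r1 = [b_true[0] + b_true[1] * mt + b_true[2] * r2s for mt, r2s in voxels]
b_fit = solve_linear(A, r1)
print([round(v, 3) for v in b_fit])  # recovers [0.3, 1.1, 0.03]
```

In the study, the coefficients are estimated over many voxels at once; the same normal-equation machinery applies, only with an overdetermined least-squares fit instead of an exact solve.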
Abstract:
OBJECTIVES: We sought to develop an automated methodology for continuously updating the optimal cerebral perfusion pressure (CPPopt) for patients after severe traumatic head injury, using continuous monitoring of cerebrovascular pressure reactivity. We then validated the CPPopt algorithm by determining the association between outcome and the deviation of actual CPP from CPPopt. DESIGN: Retrospective analysis of prospectively collected data. SETTING: Neurosciences critical care unit of a university hospital. PATIENTS: A total of 327 traumatic head-injury patients admitted between 2003 and 2009 with continuous monitoring of arterial blood pressure and intracranial pressure. MEASUREMENTS AND MAIN RESULTS: Arterial blood pressure, intracranial pressure, and CPP were continuously recorded, and the pressure reactivity index was calculated online. Outcome was assessed at 6 months. An automated curve-fitting method was applied to determine the CPP at which the pressure reactivity index is minimal (CPPopt). A time trend of CPPopt was created using a moving 4-hr window, updated every minute. Identification of CPPopt was, on average, feasible during 55% of the whole recording period. Patient outcome correlated with the continuously updated difference between median CPP and CPPopt (chi-square=45, p<.001; outcome dichotomized into fatal and nonfatal). Mortality was associated with relative "hypoperfusion" (CPP<CPPopt), severe disability with "hyperperfusion" (CPP>CPPopt), and favorable outcome was associated with smaller deviations of CPP from the individualized CPPopt. While deviations from global target CPP values of 60 mm Hg and 70 mm Hg were also related to outcome, these relationships were less robust. CONCLUSIONS: Real-time CPPopt could be identified during the recording time of the majority of the patients. Patients with a median CPP close to CPPopt were more likely to have a favorable outcome than those whose median CPP was widely different from CPPopt. Deviations from individualized CPPopt were more predictive of outcome than deviations from a common target CPP. CPP management to optimize cerebrovascular pressure reactivity should be the subject of a future clinical trial in severe traumatic head-injury patients.
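The curve-fitting step can be sketched as follows, assuming a binned PRx-versus-CPP curve and a parabolic fit around its minimum (the U-shaped PRx curve and the 5 mm Hg bins are invented for illustration; the study's automated method works on 4-hr windows of real monitoring data):

```python
# Fit a parabola to the three bins around the minimum of a binned
# PRx-vs-CPP curve; its vertex -b/(2a) is the estimated CPPopt.

def prx_model(cpp):
    # hypothetical pressure-reactivity curve with its optimum at 75 mmHg
    return 0.002 * (cpp - 75.0) ** 2 - 0.2

# binned PRx curve: (bin centre in mmHg, mean PRx in that bin)
curve = [(57.5 + 5.0 * k, prx_model(57.5 + 5.0 * k)) for k in range(8)]

# locate the discrete minimum, then fit a parabola through its neighbours
i = min(range(1, len(curve) - 1), key=lambda k: curve[k][1])
(x1, y1), (x2, y2), (x3, y3) = curve[i - 1], curve[i], curve[i + 1]
d1 = (y2 - y1) / (x2 - x1)
d2 = (y3 - y2) / (x3 - x2)
a = (d2 - d1) / (x3 - x1)          # quadratic coefficient
b = d1 - a * (x1 + x2)             # linear coefficient
cpp_opt = -b / (2.0 * a)           # vertex of the parabola
print(round(cpp_opt, 1))  # 75.0
```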
Abstract:
Summary: The effect of policy changes on the optimal feeding and slaughter timing of beef cattle
Abstract:
Constituting one of the first "genres" in the history of cinema (whose founding role in the standardization of institutionalized editing procedures has been shown by Burch and Gaudreault), the films staging the Life and Passion of Christ fixed their norms by appropriating pre-established iconographic codes. In this article, Valentine Robert sets out to unfold the "palimpsest" of these early Passion films, to untangle the "cultural series" involved, to trace the borrowings from one film to the next, and to situate some of these referential games within their aim of legitimizing (or should one say "canonizing"?) the cinematic medium.
Abstract:
This study aimed to use a plantar pressure insole to estimate the three-dimensional ground reaction force (GRF) as well as the frictional torque (T(F)) during walking. Eleven subjects, six healthy and five patients with ankle disease, participated in the study, wearing pressure insoles during several walking trials on a force-plate. The plantar pressure distribution was analyzed, and 10 principal components of 24 regional pressure values, together with the stance time percentage (STP), were considered for GRF and T(F) estimation. Both linear and non-linear approximators were used to estimate the GRF and T(F), based on two learning strategies using intra-subject and inter-subject data. The RMS error and the correlation coefficient between the approximators and the actual patterns obtained from the force-plate were calculated. Our results showed better performance for non-linear approximation, especially when the STP was considered as an input. The smallest errors were observed for the vertical force (4%) and the anterior-posterior force (7.3%), while the medial-lateral force (11.3%) and the frictional torque (14.7%) had higher errors. The results obtained for the patients showed higher errors; nevertheless, when data from the same patient were used for learning, the results improved and, in general, only slight differences from the healthy subjects were observed. In conclusion, this study showed that an ambulatory pressure insole with data normalization, an optimal choice of inputs and a well-trained nonlinear mapping function can efficiently estimate the three-dimensional ground reaction force and frictional torque over consecutive gait cycles without requiring a force-plate.
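As a crude physical baseline for the vertical component only (not the authors' learned mapping from principal components and STP), the vertical GRF can be approximated by summing regional pressure times regional contact area; the region areas and the pressure snapshot below are invented:

```python
# Vertical GRF baseline: sum of regional plantar pressure x region area.
# 24 equal 6 cm^2 regions and a uniform 50 kPa snapshot are hypothetical.

REGION_AREA_CM2 = [6.0] * 24

def vertical_grf_newtons(pressures_kpa):
    """Sum pressure (kPa) x area (cm^2) over regions -> force in N."""
    # 1 kPa x 1 cm^2 = 0.1 N
    return sum(p * a * 0.1 for p, a in zip(pressures_kpa, REGION_AREA_CM2))

snapshot = [50.0] * 24  # mid-stance, ~50 kPa over all loaded regions
print(round(vertical_grf_newtons(snapshot), 1))  # 720.0 (N)
```

The shear components and the frictional torque have no such direct pressure-to-force relation, which is why the study resorts to a trained nonlinear approximator for them.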
Abstract:
The choice network revenue management (RM) model incorporates customer purchase behavior as customers purchasing products with certain probabilities that are a function of the offered assortment of products, and is the appropriate model for airline and hotel network revenue management, dynamic sales of bundles, and dynamic assortment optimization. The underlying stochastic dynamic program is intractable and even its certainty-equivalence approximation, in the form of a linear program called the Choice Deterministic Linear Program (CDLP), is difficult to solve in most cases. The separation problem for CDLP is NP-complete for MNL with just two segments when their consideration sets overlap; the affine approximation of the dynamic program is NP-complete for even a single-segment MNL. This is in contrast to the independent-class (perfect-segmentation) case, where even the piecewise-linear approximation has been shown to be tractable. In this paper we investigate the piecewise-linear approximation for network RM under a general discrete-choice model of demand. We show that the gap between the CDLP and the piecewise-linear bounds is within a factor of at most 2. We then show that the piecewise-linear approximation is polynomial-time solvable for a fixed consideration set size, bringing it into the realm of tractability for small consideration sets; small consideration sets are a reasonable modeling tradeoff in many practical applications. Our solution relies on showing that for any discrete-choice model the separation problem for the linear program of the piecewise-linear approximation can be solved exactly by a Lagrangian relaxation. We give modeling extensions and show by numerical experiments the improvements from using piecewise-linear approximation functions.
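As a concrete illustration of the discrete-choice demand underlying the model above: under MNL, a customer offered assortment S buys product j in S with probability v_j / (v0 + sum of v_i over S), where v0 is the no-purchase weight. A minimal sketch with invented preference weights:

```python
# MNL purchase probabilities for an offered assortment. The weights v and
# the no-purchase weight v0 are invented for illustration.

def mnl_probs(offered, v, v0=1.0):
    """P(buy j | offer set) = v[j] / (v0 + sum of v[i] for offered i)."""
    denom = v0 + sum(v[j] for j in offered)
    return {j: v[j] / denom for j in offered}

v = {"A": 2.0, "B": 1.0, "C": 0.5}   # hypothetical preference weights
probs = mnl_probs({"A", "B"}, v)
print(sorted(probs.items()))  # [('A', 0.5), ('B', 0.25)]; no-purchase: 0.25
```

In the network RM setting these probabilities feed the CDLP, whose decision variables are the durations for which each assortment is offered.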
Abstract:
Polynomial constraint solving plays a prominent role in several areas of hardware and software analysis and verification, e.g., termination proving, program invariant generation and hybrid system verification, to name a few. In this paper we propose a new method for solving non-linear constraints based on encoding the problem into an SMT problem over linear arithmetic only. Unlike other existing methods, our method focuses on proving satisfiability of the constraints rather than unsatisfiability, which is more relevant in many applications, as we illustrate with several examples. Nevertheless, we also present new techniques, based on the analysis of unsatisfiable cores, that allow one to efficiently prove unsatisfiability as well for a broad class of problems. The power of our approach is demonstrated by extensive experiments comparing our prototype with state-of-the-art tools on benchmarks taken from both academia and industry.
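The core linearization idea, that case-splitting on the value of a bounded variable turns polynomial constraints into linear ones a linear-arithmetic solver can handle, can be sketched in miniature. The bounds and the toy system {x·y = 12, x + y = 7} are invented; the paper's encoding targets a real SMT solver:

```python
# Toy linearization by case split: once x is fixed to a constant, both
# x*y == 12 and x + y == 7 become linear in y and solve immediately.

def solve_bounded(lo, hi):
    """Find integer (x, y) with x*y == 12 and x + y == 7, x in [lo, hi]."""
    for x in range(lo, hi + 1):      # case split on the bounded variable x
        # remaining constraints are linear in y:
        #   x + y == 7  ->  y = 7 - x
        #   x * y == 12 ->  (constant x) * y == 12
        y = 7 - x
        if x * y == 12:
            return (x, y)
    return None                      # unsat within the given bounds

print(solve_bounded(0, 10))  # (3, 4)
```

An SMT encoding would hand all the case splits to the solver at once (one disjunct per value of x) rather than enumerating them in a loop, but the linearization principle is the same.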
Abstract:
Accurate diagnosis of orthopedic device-associated infections can be challenging. Culture of tissue biopsy specimens is often considered the gold standard; however, there is currently no consensus on the ideal incubation time for specimens. The aim of our study was to assess the yield of a 14-day incubation protocol for tissue biopsy specimens from revision surgery (joint replacements and internal fixation devices) in a general orthopedic and trauma surgery setting. Medical records were reviewed retrospectively in order to identify cases of infection according to predefined diagnostic criteria. From August 2009 to March 2012, 499 tissue biopsy specimens were sampled from 117 cases. In 70 cases (59.8%), at least one sample showed microbiological growth. Among them, 58 cases (82.9%) were considered infections and 12 cases (17.1%) were classified as contaminations. The median time to positivity in the cases of infection was 1 day (range, 1 to 10 days), compared to 6 days (range, 1 to 11 days) in the cases of contamination (P < 0.001). Fifty-six (96.6%) of the infection cases were diagnosed within 7 days of incubation. In conclusion, the results of our study show that incubation of tissue biopsy specimens beyond 7 days is not productive in a general orthopedic and trauma surgery setting. Prolonged 14-day incubation might, however, be of interest in particular situations in which the prevalence of slow-growing microorganisms and anaerobes is higher.