79 results for "design machine"
Abstract:
The HbpR protein is the sigma54-dependent transcription activator for 2-hydroxybiphenyl degradation in Pseudomonas azelaica. The ability of HbpR and XylR, which share 35% amino acid sequence identity, to cross-activate the PhbpC and Pu promoters was investigated by determining HbpR- or XylR-mediated luciferase expression and by DNA binding assays. XylR measurably activated the PhbpC promoter in the presence of the effector m-xylene, both in Escherichia coli and Pseudomonas putida. HbpR weakly stimulated the Pu promoter in E. coli but not in P. azelaica. Poor HbpR-dependent activation from Pu was caused by weak binding to the operator region. To create promoters efficiently activated by both regulators, the HbpR binding sites on PhbpC were gradually changed into the XylR binding sites of Pu by site-directed mutagenesis. Inducible luciferase expression from the mutated promoters was tested in E. coli on a two-plasmid system, and from single-copy gene fusions in P. azelaica and P. putida. Some mutants were efficiently activated by both HbpR and XylR, showing that promoters permissive for both regulators can be created. Others drove higher XylR-dependent transcription than Pu itself. Mutants were also obtained that displayed a tenfold lower uninduced expression level by HbpR than the wild-type PhbpC, while keeping the same maximal induction level. On the basis of these results, a dual-responsive bioreporter strain of P. azelaica was created, containing both XylR and HbpR, which activates luciferase expression from the same single promoter independently in response to m-xylene and 2-hydroxybiphenyl.
Abstract:
Purpose: Carbon-13 magnetic resonance spectroscopy (13C-MRS) is challenging because of the inherently low sensitivity of 13C detection and the need for radiofrequency transmission at the 1H frequency while receiving the 13C signal, the latter requiring electrical decoupling of the 13C and 1H radiofrequency channels. In this study, we added traps to the 13C coil to construct a quadrature-13C/quadrature-1H surface coil, with sufficient isolation between channels to allow simultaneous operation at both frequencies without compromising coil performance. Methods: Isolation between channels was evaluated on the bench by measuring all coupling parameters. The quadrature mode of the quadrature-13C coil was assessed using in vitro 23Na gradient echo images. The signal-to-noise ratio (SNR) was measured on the glycogen and glucose resonances by 13C-MRS in vitro, compared with that obtained with a linear-13C/quadrature-1H coil, and validated by 13C-MRS in vivo in the human calf at 7T. Results: Isolation between channels was better than −30 dB. The 23Na gradient echo images indicate a region where the field is strongly circularly polarized. The quadrature coil provided an SNR enhancement over a linear coil of 1.4, both in vitro and in vivo. Conclusion: It is feasible to construct a double-quadrature 13C-1H surface coil for proton-decoupled, sensitivity-enhanced 13C NMR spectroscopy in humans at 7T. (Magn Reson Med, 2014.)
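The measured enhancement of 1.4 is consistent with the textbook gain of an ideal quadrature combination; as a reference, assuming two channels with equal signal amplitude S and uncorrelated noise of standard deviation σ:

```latex
\mathrm{SNR}_{\mathrm{quad}}
  = \frac{S + S}{\sqrt{\sigma^{2} + \sigma^{2}}}
  = \sqrt{2}\,\frac{S}{\sigma}
  = \sqrt{2}\;\mathrm{SNR}_{\mathrm{lin}}
  \approx 1.41\;\mathrm{SNR}_{\mathrm{lin}}.
```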
Abstract:
PURPOSE: Multinuclear magnetic resonance spectroscopy and imaging require a radiofrequency probe capable of transmitting and receiving at the proton and non-proton frequencies. To minimize coupling between probe elements tuned to different frequencies, LC (inductor-capacitor) traps blocking current at the 1H frequency can be inserted in non-proton elements. This work compares LC traps with LCC traps, a modified design incorporating an additional capacitor that enables control of the trap reactance at the low frequency while maintaining 1H blocking. METHODS: Losses introduced by both types of trap were analysed using circuit models. Radiofrequency coils incorporating a series of LC and LCC traps were then built and evaluated at the bench. LCC trap performance was then confirmed using 1H and 13C measurements in a 7T human scanner. RESULTS: LC and LCC traps both effectively block interaction between non-proton and proton coils at the proton frequency. LCC traps were found to introduce a sensitivity reduction of 5 ± 2%, less than half of that caused by LC traps. CONCLUSION: The sensitivity of non-proton coils is critical. The improved trap design, incorporating one extra capacitor, significantly reduces the losses the trap introduces in the non-proton coil. (Magn Reson Med 72:584-590, 2014.)
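To make the trap comparison concrete, here is a minimal circuit sketch, assuming a parallel LC trap resonant at the 1H frequency and an LCC topology in which an extra capacitor in series with the inductor frees the reactance at the 13C frequency. All component values, the loss resistance, and the exact topology are illustrative assumptions, not the authors' design:

```python
import numpy as np

F_1H, F_13C = 298.0e6, 74.9e6    # approximate Larmor frequencies at 7T (Hz)

def z_lc(f, L, C, r=0.2):
    """Parallel LC trap: inductor (with series loss r) in parallel with C."""
    w = 2 * np.pi * f
    zl = r + 1j * w * L
    zc = 1 / (1j * w * C)
    return zl * zc / (zl + zc)

def z_lcc(f, L, C1, C2, r=0.2):
    """Assumed LCC topology: (L + r in series with C2) in parallel with C1."""
    w = 2 * np.pi * f
    z1 = r + 1j * w * L + 1 / (1j * w * C2)   # lossy series branch
    z2 = 1 / (1j * w * C1)                    # added parallel capacitor
    return z1 * z2 / (z1 + z2)

# LC trap: pick C for parallel resonance (current blocking) at the 1H frequency.
L = 50e-9                                      # 50 nH, illustrative value
C = 1 / ((2 * np.pi * F_1H) ** 2 * L)

# LCC trap: resonance at F_1H requires 1/C1 + 1/C2 = 1/C, so fixing C2 still
# leaves the reactance at the 13C frequency adjustable while 1H blocking holds.
C2 = 100e-12
C1 = C * C2 / (C2 - C)

for name, z in (("LC ", lambda f: z_lc(f, L, C)),
                ("LCC", lambda f: z_lcc(f, L, C1, C2))):
    print(f"{name}: |Z| at 1H = {abs(z(F_1H)):9.0f} ohm,"
          f"  X at 13C = {z(F_13C).imag:6.2f} ohm")
```

With these values both traps present tens of kilohms at the 1H frequency, but the LCC trap's reactance at the 13C frequency comes out roughly an order of magnitude smaller than the LC trap's, which illustrates the extra degree of freedom that lets the trap contribute less loss to the non-proton coil.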
Abstract:
BACKGROUND: Electroencephalography (EEG) is widely used to assess neurological prognosis in patients who are comatose after cardiac arrest, but its value is limited by varying definitions of pathological patterns and by inter-rater variability. The American Clinical Neurophysiology Society (ACNS) has recently proposed a standardized EEG terminology for critical care to address these limitations. METHODS/DESIGN: In the TTM-trial, 399 post-cardiac-arrest patients who remained comatose after rewarming underwent a routine EEG. The presence of clinical seizures and the use of sedatives and antiepileptic drugs during the EEG registration were prospectively documented. DISCUSSION: A well-defined terminology for interpreting post-cardiac-arrest EEGs is critical for the use of EEG as a prognostic tool. TRIAL REGISTRATION: The TTM-trial is registered at ClinicalTrials.gov (NCT01020916).
Abstract:
To test whether quantitative traits are under directional or homogenizing selection, it is common practice to compare population differentiation estimates at molecular markers (F(ST)) and quantitative traits (Q(ST)). If the trait is neutral and its genetic determination is additive, then theory predicts that Q(ST) = F(ST), while Q(ST) > F(ST) is predicted under directional selection for different local optima, and Q(ST) < F(ST) under homogenizing selection. However, nonadditive effects can alter these predictions. Here, we investigate the influence of dominance on the relation between Q(ST) and F(ST) for neutral traits. Using analytical results and computer simulations, we show that dominance generally deflates Q(ST) relative to F(ST). Under inbreeding, the effect of dominance vanishes, and we show that for selfing species a better estimate of Q(ST) is obtained from selfed families than from half-sib families. We also compare several sampling designs and find that it is always best to sample many populations (>20) with few families (five) per population rather than few populations with many families. Provided that estimates of Q(ST) are derived from individuals originating from many populations, we conclude that the pattern Q(ST) > F(ST), and hence the inference of directional selection for different local optima, is robust to the effect of nonadditive gene action.
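For reference, the comparison underlying this abstract uses the standard definition of Q(ST) for an outcrossing population, where σ²B and σ²W denote the between- and within-population additive genetic variances:

```latex
Q_{ST} \;=\; \frac{\sigma^{2}_{B}}{\sigma^{2}_{B} + 2\,\sigma^{2}_{W}},
\qquad
\text{neutral additive trait: } Q_{ST} = F_{ST}.
```

Q(ST) > F(ST) then suggests directional selection toward different local optima and Q(ST) < F(ST) homogenizing selection; dominance variance entering these components breaks the neutral equality, which is the deflation the authors quantify.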
Abstract:
Indoleamine 2,3-dioxygenase (IDO) is an important therapeutic target for the treatment of diseases such as cancer that involve pathological immune escape. We used the evolutionary docking algorithm EADock to design new inhibitors of this enzyme. First, we investigated the binding modes of all known IDO inhibitors. On the basis of the observed docked conformations, we developed a pharmacophore model, which was then used to devise new compounds to be tested for IDO inhibition. We also used a fragment-based approach to design and optimize small organic-molecule inhibitors. Both approaches yielded several new low-molecular-weight inhibitor scaffolds, the most active being of nanomolar potency in an enzymatic assay. Cellular assays confirmed the potential biological relevance of four different scaffolds.
Abstract:
The research considers the problem of spatial data classification using machine learning algorithms: probabilistic neural networks (PNN) and support vector machines (SVM). The simple k-nearest neighbor algorithm is used as a benchmark model. PNN is a neural-network reformulation of the well-known nonparametric principles of probability density modeling using kernel density estimation and Bayesian optimal or maximum a posteriori decision rules. PNN is well suited to problems where not only predictions but also quantification of accuracy and integration of prior information are necessary. An important property of PNNs is that they can easily be used in decision support systems dealing with problems of automatic classification. The support vector machine is an implementation of the principles of statistical learning theory for classification tasks. Recently, SVMs have been successfully applied to a variety of environmental problems: classification of soil types and hydro-geological units, optimization of monitoring networks, and susceptibility mapping of natural hazards. In the present paper, both simulated and real-data case studies (low- and high-dimensional) are considered. The main attention is paid to the detection and learning of spatial patterns by the applied algorithms.
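As a concrete illustration of the PNN decision rule summarized above (a Parzen kernel density estimate per class combined with a Bayes-optimal decision), here is a minimal sketch; the Gaussian kernel, the bandwidth sigma, and the toy two-class data are illustrative assumptions:

```python
import numpy as np

def pnn_predict(X_train, y_train, X_test, sigma=0.3, priors=None):
    """Probabilistic neural network: Parzen density per class + Bayes rule.

    Returns predicted labels and class posteriors, so the classification
    comes with the accuracy quantification PNNs are valued for.
    """
    classes = np.unique(y_train)
    if priors is None:
        priors = {c: np.mean(y_train == c) for c in classes}
    scores = np.empty((len(X_test), len(classes)))
    for j, c in enumerate(classes):
        Xc = X_train[y_train == c]
        # squared distances of every test point to every training point of class c
        d2 = ((X_test[:, None, :] - Xc[None, :, :]) ** 2).sum(axis=2)
        scores[:, j] = priors[c] * np.exp(-d2 / (2 * sigma**2)).mean(axis=1)
    posteriors = scores / scores.sum(axis=1, keepdims=True)
    return classes[scores.argmax(axis=1)], posteriors

# Toy spatial example: two classes offset along both coordinates.
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 2)) + np.repeat([[0, 0], [2, 2]], 100, axis=0)
y = np.repeat([0, 1], 100)
labels, post = pnn_predict(X[::2], y[::2], X[1::2])
print("accuracy:", (labels == y[1::2]).mean())
```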
Abstract:
Machine learning has been widely applied to analyze data in various domains, but it is still new to personalized medicine, especially dose individualization. In this paper, we focus on the prediction of drug concentrations using support vector machines (SVM) and on the analysis of the influence of each feature on the prediction results. Our study shows that SVM-based approaches achieve prediction results similar to those of a pharmacokinetic model. The two proposed example-based SVM methods demonstrate that the individual features help to increase the accuracy of drug concentration predictions with a reduced library of training data.
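A minimal sketch of this kind of SVM-based concentration prediction, using scikit-learn's SVR; the feature set (dose, body weight, age, time since dose) and the synthetic data are assumptions standing in for the paper's actual features, which the abstract does not list:

```python
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVR
from sklearn.model_selection import cross_val_score

# Hypothetical patient features; the target is the measured drug concentration.
rng = np.random.default_rng(1)
n = 300
X = np.column_stack([
    rng.choice([100, 200, 400], n),      # dose (mg)
    rng.normal(70, 12, n),               # body weight (kg)
    rng.uniform(20, 80, n),              # age (years)
    rng.uniform(1, 24, n),               # time since last dose (h)
])
# Synthetic one-compartment-like concentration with noise, purely illustrative.
conc = X[:, 0] / X[:, 1] * np.exp(-0.08 * X[:, 3]) + rng.normal(0, 0.1, n)

model = make_pipeline(StandardScaler(), SVR(kernel="rbf", C=10.0, epsilon=0.05))
scores = cross_val_score(model, X, conc, cv=5, scoring="r2")
print(f"cross-validated R^2: {scores.mean():.2f} +/- {scores.std():.2f}")
```

The influence of each feature can then be probed, for instance by dropping one column at a time and re-running the cross-validation, which mirrors the per-feature analysis the abstract describes.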
Abstract:
We developed a method of sample preparation using an epoxy compound, which was validated in two steps. First, we studied the within-sample homogeneity by scanning tubes filled with radioactive epoxy and found it to be better than 2%. Then, we studied the between-sample homogeneity over a 4.5 h dispensing period; it was also found to be better than 2%. This study demonstrates that we have a validated method that assures the traceability of epoxy samples.
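Homogeneity figures of this kind are presumably reported as a relative spread (coefficient of variation) of the measured activity; a minimal sketch of the two checks, with made-up count data:

```python
import numpy as np

def cv_percent(x):
    """Coefficient of variation, in percent."""
    x = np.asarray(x, dtype=float)
    return 100 * x.std(ddof=1) / x.mean()

# Within-sample check: repeated counts along one epoxy-filled tube (made up).
within = [10250, 10310, 10190, 10280, 10220]
# Between-sample check: one count per tube dispensed over ~4.5 h (made up).
between = [10240, 10300, 10180, 10260, 10350, 10210]

print(f"within-sample CV:  {cv_percent(within):.2f}%   (criterion: < 2%)")
print(f"between-sample CV: {cv_percent(between):.2f}%   (criterion: < 2%)")
```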
Abstract:
The current research project is both a process and impact evaluation of community policing in Switzerland's five major urban areas - Basel, Bern, Geneva, Lausanne, and Zurich. Community policing is both a philosophy and an organizational strategy that promotes a renewed partnership between the police and the community to solve problems of crime and disorder. The process evaluation data on police internal reforms were obtained through semi-structured interviews with key administrators from the five police departments as well as from police internal documents and additional public sources. The impact evaluation uses official crime records and census statistics as contextual variables as well as Swiss Crime Survey (SCS) data on fear of crime, perceptions of disorder, and public attitudes towards the police as outcome measures. The SCS is a standing survey instrument that has polled residents of the five urban areas repeatedly since the mid-1980s. The process evaluation produced a "Calendar of Action" to create panel data to measure community policing implementation progress over six evaluative dimensions in intervals of five years between 1990 and 2010. The impact evaluation, carried out ex post facto, uses an observational design that analyzes the impact of the different community policing models between matched comparison areas across the five cities. Using ZIP code districts as proxies for urban neighborhoods, geospatial data mining algorithms serve to develop a neighborhood typology in order to match the comparison areas. 
To this end, both unsupervised and supervised algorithms are used to analyze high-dimensional data on crime, the socio-economic and demographic structure, and the built environment in order to classify urban neighborhoods into clusters of similar type. In a first step, self-organizing maps serve as tools to develop a clustering algorithm that reduces the within-cluster variance in the contextual variables and simultaneously maximizes the between-cluster variance in survey responses. The random forests algorithm then serves to assess the appropriateness of the resulting neighborhood typology and to select the key contextual variables in order to build a parsimonious model that makes a minimum of classification errors. Finally, for the impact analysis, propensity score matching methods are used to match the survey respondents of the pretest and posttest samples on age, gender, and their level of education for each neighborhood type identified within each city, before conducting a statistical test of the observed difference in the outcome measures. Moreover, all significant results were subjected to a sensitivity analysis to assess the robustness of these findings in the face of potential bias due to some unobserved covariates. The study finds that over the last fifteen years, all five police departments have undertaken major reforms of their internal organization and operating strategies and forged strategic partnerships in order to implement community policing. The resulting neighborhood typology reduced the within-cluster variance of the contextual variables and accounted for a significant share of the between-cluster variance in the outcome measures prior to treatment, suggesting that geocomputational methods help to balance the observed covariates and hence to reduce threats to the internal validity of an observational design. Finally, the impact analysis revealed that fear of crime dropped significantly over the 2000-2005 period in the neighborhoods in and around the urban centers of Bern and Zurich. These improvements are fairly robust in the face of bias due to some unobserved covariate and covary temporally and spatially with the implementation of community policing. The alternative hypothesis that the observed reductions in fear of crime were at least in part a result of community policing interventions thus appears at least as plausible as the null hypothesis of absolutely no effect, even if the observational design cannot completely rule out selection and regression to the mean as alternative explanations.
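The three-stage pipeline outlined above (self-organizing maps for the neighborhood typology, random forests for validating it and selecting key variables, propensity score matching for balancing the pretest and posttest samples) can be sketched as follows. The MiniSom library, the grid size, the number of variables, and all synthetic data are illustrative assumptions, not the study's actual setup:

```python
import numpy as np
from minisom import MiniSom                      # assumed SOM implementation
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.neighbors import NearestNeighbors

rng = np.random.default_rng(2)

# Stage 1 -- SOM typology: cluster ZIP-code districts on high-dimensional
# contextual data (crime, socio-demographics, built environment).
context = rng.normal(size=(120, 15))              # 120 districts, 15 variables
som = MiniSom(3, 3, context.shape[1], sigma=1.0, learning_rate=0.5,
              random_seed=2)
som.train_random(context, 5000)
cluster = np.array([som.winner(x)[0] * 3 + som.winner(x)[1] for x in context])

# Stage 2 -- random forest: check how well the contextual variables predict
# the typology and rank them, keeping the top few for a parsimonious model.
rf = RandomForestClassifier(n_estimators=500, oob_score=True, random_state=2)
rf.fit(context, cluster)
top_vars = np.argsort(rf.feature_importances_)[::-1][:5]
print("OOB accuracy:", round(rf.oob_score_, 2), "| key variables:", top_vars)

# Stage 3 -- propensity score matching: within one neighborhood type, match
# posttest (treated=1) to pretest (treated=0) respondents on age, gender,
# and education before testing the outcome difference.
covs = rng.normal(size=(400, 3))                  # age, gender, education
treated = rng.integers(0, 2, size=400)
pscore = LogisticRegression().fit(covs, treated).predict_proba(covs)[:, 1]
nn = NearestNeighbors(n_neighbors=1).fit(pscore[treated == 0, None])
_, idx = nn.kneighbors(pscore[treated == 1, None])
matched_controls = np.flatnonzero(treated == 0)[idx.ravel()]
print("matched pairs:", len(matched_controls))
```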
Abstract:
Modeling concentration-response functions has become extremely popular in ecotoxicology during the last decade. Indeed, modeling makes it possible to determine the full response pattern of a given substance. However, reliable modeling is demanding in terms of data, which conflicts with the current trend in ecotoxicology of reducing, for cost and ethical reasons, the amount of data produced during an experiment. It is therefore crucial to choose experimental designs in a cost-effective manner. In this paper, we propose to use the theory of locally D-optimal designs to determine the set of concentrations to be tested so that the parameters of the concentration-response function can be estimated with high precision. We illustrate this approach by determining the locally D-optimal designs for estimating the toxicity of the herbicide dinoseb to daphnids and algae. The results show that the number of concentrations to be tested is often equal to the number of parameters and related to their meaning, i.e., the optimal concentrations lie close to the corresponding parameter values. Furthermore, the results show that the locally D-optimal design often has the minimal number of support points and is not very sensitive to small changes in the nominal values of the parameters. In order to reduce the experimental cost and the use of test organisms, especially in long-term studies, reliable nominal values may therefore be fixed based on prior knowledge and literature research instead of preliminary experiments.
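A minimal sketch of a locally D-optimal design computation, assuming a two-parameter log-logistic concentration-response model with homoscedastic errors and a saturated two-point design with equal weights; the nominal parameter values are placeholders, not the dinoseb estimates:

```python
import numpy as np
from scipy.optimize import minimize

# Two-parameter log-logistic model: f(c; EC50, b) = 1 / (1 + (c / EC50)**b)
EC50, b = 0.12, 1.8        # assumed nominal values (e.g. from the literature)

def grad_f(c):
    """Gradient of f with respect to (EC50, b) at concentration c."""
    u = (c / EC50) ** b
    df_dec50 = b * u / (EC50 * (1 + u) ** 2)
    df_db = -u * np.log(c / EC50) / (1 + u) ** 2
    return np.array([df_dec50, df_db])

def neg_log_det_fim(log_c):
    """Negative log-determinant of the Fisher information for a
    two-point design with equal weights (D-optimality criterion)."""
    fim = sum(np.outer(g, g) for g in (grad_f(np.exp(lc)) for lc in log_c)) / 2
    sign, logdet = np.linalg.slogdet(fim)
    return np.inf if sign <= 0 else -logdet

res = minimize(neg_log_det_fim, x0=np.log([0.05, 0.5]), method="Nelder-Mead")
print("locally D-optimal concentrations:", np.round(np.exp(res.x), 4))
```

Consistent with the abstract's observation, the optimizer returns as many support points as parameters, located near the nominal parameter values.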
Abstract:
OBJECTIVE: Intervention during the pre-psychotic period of illness holds the potential of delaying or even preventing the onset of a full-threshold disorder, or at least of reducing the impact of such a disorder if it does develop. The first step in realizing this aim was achieved more than 10 years ago with the development and validation of criteria for the identification of young people at ultra-high risk (UHR) of psychosis. Results of three clinical trials have been published that provide mixed support for the effectiveness of psychological and pharmacological interventions in preventing the onset of psychotic disorder. METHOD: The present paper describes a fourth study that has now been undertaken, in which young people who met UHR criteria were randomized to one of three treatment groups: cognitive therapy plus risperidone (CogTher + Risp: n = 43); cognitive therapy plus placebo (CogTher + Placebo: n = 44); and supportive counselling plus placebo (Supp + Placebo: n = 28). A fourth group of young people who did not agree to randomization was also followed up (monitoring: n = 78). Baseline characteristics of the participants are provided. RESULTS AND CONCLUSION: The present study improves on its predecessors in that treatment was provided for 12 months, allowing the independent contributions of psychological and pharmacological treatments to preventing transition to psychosis in the UHR cohort, and their effects on levels of psychopathology and functioning, to be directly compared. Issues associated with recruitment and randomization are discussed.