990 results for Single hard diffraction


Relevance:

20.00%

Publisher:

Abstract:

BACKGROUND: The epithelial sodium channel (ENaC) is composed of three homologous subunits: alpha, beta, and gamma. Mutations in the Scnn1b and Scnn1g genes, which encode the beta and the gamma subunits of ENaC, cause a severe form of hypertension (Liddle syndrome). The contribution of genetic variants within the Scnn1a gene, which codes for the alpha subunit, has not been investigated. METHODS: We screened for mutations in the COOH termini of the alpha and beta subunits of ENaC. Blood from 184 individuals from 31 families participating in a study on the genetics of hypertension was analyzed. Exon 13 of Scnn1a and of Scnn1b, which encodes the second transmembrane segment and the COOH terminus of alpha- and beta-ENaC, respectively, was amplified from pooled DNA samples of the members of each family by PCR. Constant denaturant capillary electrophoresis (CDCE) was used to detect mutations in PCR products of the pooled DNA samples. RESULTS: The detection limit of CDCE for ENaC variants was 1%, indicating that all members of any family, or up to 100 individuals, can be analyzed in one CDCE run. CDCE profiles of the COOH terminus of alpha-ENaC in pooled family members showed that the 31 families belonged to four groups and identified families with genetic variants. Using this approach, we analyzed 31 rather than 184 samples. Individual CDCE analysis of members from families with different pooled CDCE profiles revealed five genotypes containing the 1853G-->T and 1987A-->G polymorphisms. The presence of the mutations was confirmed by DNA sequencing. For the COOH terminus of beta-ENaC, only one family showed a different CDCE profile. Two members of this family (n = 5) were heterozygous at 1781C-->T (T594M). CONCLUSION: CDCE rapidly detects point mutations in these candidate disease genes.
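The arithmetic behind the pooling strategy is simple; the following is an illustrative sketch (the function name is ours, not from the study), assuming diploid individuals so that one heterozygote contributes a single mutant allele to the pool.

```python
def min_allele_fraction(pool_size: int, carriers: int = 1) -> float:
    """Smallest mutant-allele fraction a pool can contain: `carriers`
    heterozygous individuals among `pool_size` pooled diploid individuals
    (two alleles each)."""
    return carriers / (2 * pool_size)

# One heterozygote in a 50-person pool contributes 1/(2*50) = 1% of the
# alleles, exactly the CDCE detection limit quoted in the abstract.
print(min_allele_fraction(50))   # 0.01

# Pooling by family: 31 pooled runs replace 184 individual analyses.
print(184 - 31)                  # 153 runs saved
```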


Influenza vaccines are recommended for administration by the intramuscular route. However, many physicians use the subcutaneous route for patients receiving an oral anticoagulant because this route is thought to induce fewer hemorrhagic side effects. Our aim was to assess the safety of intramuscular administration of influenza vaccine in patients on oral anticoagulation therapy. Methods: Design: randomised, controlled, single-blinded, multi-centre clinical trial. Setting: 4 primary care practices in Barcelona, Spain. Participants: 229 patients on oral anticoagulation therapy eligible for influenza vaccine during the 2003-2004 season. Interventions: intramuscular administration of influenza vaccine in the experimental group (129 patients) compared to subcutaneous administration in the control group (100 patients). Primary outcome: change in the circumference of the arm at the site of injection at 24 hours. Secondary outcomes: appearance of local reactions and pain at 24 hours and at 10 days; change in INR (International Normalized Ratio) at 24 hours and at 10 days. Analysis was by intention to treat, using the 95% confidence intervals of the proportions or mean differences. Results: Baseline variables in the two groups were similar. No major side effects or major haemorrhage were reported during the follow-up period. No significant differences were observed in the primary outcome between the two groups. Local adverse reactions were more frequent in the subcutaneous administration group (37.4% vs. 17.4%; 95% confidence interval of the difference, 8.2% to 31.8%). Conclusion: This study shows that intramuscular administration of influenza vaccine in patients on anticoagulant therapy does not cause more side effects than subcutaneous administration.
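The reported interval for the difference in local-reaction rates can be reproduced approximately with a standard Wald interval for a difference of two independent proportions. This is a sketch under the assumption that the per-outcome group sizes equal the randomised group sizes (100 and 129), which the abstract does not state.

```python
from math import sqrt

def wald_ci_diff(p1: float, n1: int, p2: float, n2: int, z: float = 1.96):
    """95% Wald confidence interval for the difference p1 - p2 of two
    independent proportions with sample sizes n1 and n2."""
    diff = p1 - p2
    se = sqrt(p1 * (1 - p1) / n1 + p2 * (1 - p2) / n2)
    return diff - z * se, diff + z * se

# Local reactions: 37.4% (subcutaneous, n=100) vs. 17.4% (intramuscular, n=129)
lo, hi = wald_ci_diff(0.374, 100, 0.174, 129)
print(f"{lo:.3f} to {hi:.3f}")   # close to the reported 8.2% to 31.8%
```

The interval excludes zero, matching the abstract's conclusion that the difference in local reactions is significant.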


The eccentric contraction mode has been proposed as the primary stimulus for a shift of the optimum angle (the angle at which peak torque occurs). However, the training range of motion (or muscle excursion range) could be an equally important stimulus. The aim of this study was to assess the influence of the training range of motion on the hamstring optimum length. It was hypothesised that performing a single set of concentric contractions beyond optimal length (seated at 80° of hip flexion) would lead to an immediate shift of the optimum angle towards longer muscle length, while performing it below optimal length (supine at 0° of hip flexion) would produce no shift. Eleven male participants were assessed on an isokinetic dynamometer. In both positions, the test consisted of 30 consecutive knee flexions at 4.19 rad · s⁻¹. The optimum angle was significantly shifted by ∼15° towards longer muscle length after the contractions at 80° of hip flexion, while a non-significant shift of 3° was found at 0°. Hamstring fatigability was not influenced by hip position. It was concluded that the training range of motion seems to be a relevant stimulus for shifting the optimum angle towards longer muscle length. Moreover, fatigue appears to be a mechanism partly responsible for the observed shift.


Single-trial encounters with multisensory stimuli affect both memory performance and early-latency brain responses to visual stimuli. Whether and how auditory cortices support memory processes based on single-trial multisensory learning is unknown and may differ qualitatively and quantitatively from comparable processes within visual cortices due to purported differences in memory capacities across the senses. We recorded event-related potentials (ERPs) as healthy adults (n = 18) performed a continuous recognition task in the auditory modality, discriminating initial (new) from repeated (old) sounds of environmental objects. Initial presentations were either unisensory or multisensory; the latter entailed synchronous presentation of a semantically congruent or a meaningless image. Repeated presentations were exclusively auditory, thus differing only according to the context in which the sound was initially encountered. Discrimination abilities (indexed by d') were increased for repeated sounds that were initially encountered with a semantically congruent image versus sounds initially encountered with either a meaningless or no image. Analyses of ERPs within an electrical neuroimaging framework revealed that early stages of auditory processing of repeated sounds were affected by prior single-trial multisensory contexts. These effects followed from significantly reduced activity within a distributed network, including the right superior temporal cortex, suggesting an inverse relationship between brain activity and behavioural outcome on this task. The present findings demonstrate how auditory cortices contribute to long-term effects of multisensory experiences on auditory object discrimination. We propose a new framework for the efficacy of multisensory processes to impact both current multisensory stimulus processing and unisensory discrimination abilities later in time.
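The discrimination index d' mentioned above comes from signal detection theory: the z-transformed hit rate minus the z-transformed false-alarm rate. A minimal sketch with illustrative rates (the study's actual hit and false-alarm rates are not given in the abstract):

```python
from statistics import NormalDist

def d_prime(hit_rate: float, false_alarm_rate: float) -> float:
    """Signal-detection sensitivity: z(hit rate) - z(false-alarm rate)."""
    z = NormalDist().inv_cdf
    return z(hit_rate) - z(false_alarm_rate)

# Calling 80% of repeated sounds "old" while false-alarming on 20% of new
# sounds gives d' of about 1.68; better discrimination yields a larger d'.
print(round(d_prime(0.80, 0.20), 2))
```

The multisensory-context effect reported above corresponds to a larger d' for sounds first encountered with a congruent image than for the other conditions.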


The aim of this thesis was to study the development of the Single Euro Payments Area (SEPA) and to determine its benefits for the payment transactions of the Metso Group. The theoretical part examines what efficient payment transactions consist of and presents the development of SEPA by mirroring it against implications derived from the theory of Berger et al. (1996). The empirical part comprises a descriptive case study, carried out through semi-structured interviews and supplemented with quantitative credit-transfer cost data from the Metso Group. According to the results, SEPA development has both positive and negative sides: 1) If all parties to the payment system feel they benefit from the reforms, social efficiency is achieved, development accelerates and costs fall; this can be seen in the development of the Metso Group's credit-transfer costs. 2) SEPA increases banks' costs. 3) The development of the systems reduces payment-transaction risk. 4) The development of the payment area risks slowing down if the parties are unwilling to pay for the prevention of a risk they perceive as low.


This thesis examines the role of the enterprise portal in an organisation's knowledge management. To address the research problem, a framework is created that links the theories of enterprise portals and knowledge management. In the empirical part, the framework serves as the basis for an enterprise portal built for the case company. The qualitative study comprises a theoretical part and an empirical part based on participatory case research. The body of the work is formed by a dialogue between two opposing schools of knowledge management thought: the perspectives based on information technology and on strategic management. A workable knowledge management model must include both aspects. Every organisation needs functionalities for managing information, and thus managing explicit knowledge with information systems is one of the cornerstones of successful knowledge management. This basic infrastructure can be extended with knowledge management methods based on managing tacit knowledge. The thesis's solution for integrating these two views, the 'hard' emphasis on information technology and the 'soft' human perspective, is the enterprise portal. The enterprise portal framework used in the thesis is built on three main functionalities: content management, collaboration features and business intelligence. The thesis demonstrates the connection between this framework and basic knowledge management models, such as the knowledge management process model and knowledge environments. An enterprise portal can thus serve not only in implementing individual knowledge management tools but also as an aid in creating a knowledge management strategy, offering a platform or 'catalyst' for comprehensive knowledge management.


Background: To provide a cost-effective tool for analysing pharmacogenetic markers in malaria treatment, DNA microarray technology was compared with sequencing of polymerase chain reaction (PCR) fragments for detecting single nucleotide polymorphisms (SNPs) in a larger number of samples. Methods: The microarray was developed to affordably generate SNP data for genes with known polymorphisms that encode the human cytochrome P450 enzyme family (CYP) and N-acetyltransferase-2 (NAT2), both involved in antimalarial drug metabolism: CYP2A6, CYP2B6, CYP2C8, CYP2C9, CYP2C19, CYP2D6, CYP3A4, CYP3A5, and NAT2. Results: For some SNPs, i.e. CYP2A6*2, CYP2B6*5, CYP2C8*3, CYP2C9*3/*5, CYP2C19*3, CYP2D6*4 and NAT2*6/*7/*14, agreement between the two techniques ranged from substantial to almost perfect (kappa index between 0.61 and 1.00), whilst for other SNPs, e.g. CYP2D6*17 (2850C>T), CYP3A4*1B and CYP3A5*3, a large variability from slight to substantial agreement (kappa index between 0.39 and 1.00) was found. Conclusion: The major limitation of the microarray technology for this purpose was its lack of robustness, with a large number of missing data and incorrect specificity.
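The kappa index used above to grade agreement is Cohen's kappa: chance-corrected agreement between two raters, here the microarray and sequencing calls on the same samples. A minimal two-rater implementation with made-up genotype calls (the study's per-sample data are not in the abstract):

```python
def cohens_kappa(a: list, b: list) -> float:
    """Cohen's kappa between two raters' calls on the same samples.
    Labels can be any hashable genotype calls; assumes at least two
    distinct labels occur so chance agreement is below 1."""
    assert len(a) == len(b) and a
    n = len(a)
    p_o = sum(x == y for x, y in zip(a, b)) / n                       # observed agreement
    labels = set(a) | set(b)
    p_e = sum((a.count(l) / n) * (b.count(l) / n) for l in labels)    # chance agreement
    return (p_o - p_e) / (1 - p_e)

# Hypothetical calls on 8 samples: the two methods disagree on one sample.
array_calls = ["wt", "wt", "mut", "mut", "wt", "mut", "wt", "wt"]
seq_calls   = ["wt", "wt", "mut", "wt",  "wt", "mut", "wt", "wt"]
print(round(cohens_kappa(array_calls, seq_calls), 2))   # 0.71: substantial agreement
```

On the conventional scale used in the abstract, 0.61-0.80 is "substantial" and above 0.80 "almost perfect" agreement.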


Study of the prognostic value of sentinel lymph node biopsy in a prospective single-centre study of 327 patients with malignant melanoma. Aim: To confirm the validity of sentinel lymph node biopsy, define its morbidity, investigate the predictive factors for sentinel node status, and determine the prognostic factors for disease-free survival and disease-specific survival. Material and methods: From October 1997 to December 2004, 327 consecutive patients with primary cutaneous melanoma of the limbs, trunk or head, without clinical adenopathy or distant metastasis, were included. Sentinel lymph node biopsy was performed using the triple technique (lymphoscintigraphy, vital blue dye and gamma detection probe). Predictive factors were evaluated by multiple logistic regression, prognostic factors by Cox regression, and survival according to Kaplan-Meier. Results: Twenty-three percent of patients had at least one metastatic sentinel node, which was significantly associated with Breslow thickness (p<0.001). The success rate of sentinel node biopsy was 99.1% and its morbidity 7.6%. With a median follow-up of 33 months, 5-year disease-free survival was 43% for patients with a positive sentinel node and 83.5% for those with a negative sentinel node. Five-year disease-specific survival was 49% for patients with a positive sentinel node and 87.4% for those with a negative one. The false-negative rate of sentinel node biopsy was 8.6%. Multivariate analysis showed that disease-free survival was significantly worsened by Breslow thickness (RR=5.6, p<0.001), a positive sentinel node (RR=5.0, p<0.001) and male sex (RR=2.9, p=0.001).
Disease-specific survival was significantly reduced by a metastatic sentinel node (RR=8.4, p<0.001), male sex (RR=6.1, p<0.001), Breslow thickness (RR=3.2, p=0.013) and the presence of ulceration (RR=2.6, p=0.015). Conclusion: Sentinel lymph node biopsy is a reliable procedure with high sensitivity (91.4%) and low morbidity (7.6%). Breslow thickness was the only significant predictive factor for sentinel node status. Disease-free survival was worsened, in decreasing order, by Breslow thickness, a metastatic sentinel node and male sex. Similarly, disease-specific survival was worsened by a metastatic sentinel node, male sex, Breslow thickness and ulceration. These data reinforce sentinel node status as a powerful means of assessing tumour stage and prognosis.
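The survival figures in this abstract come from Kaplan-Meier estimation, where the survival probability drops at each observed death while censored patients simply leave the risk set. A minimal sketch of the estimator with illustrative data (not the study's), ignoring tie-breaking conventions:

```python
def kaplan_meier(observations):
    """Kaplan-Meier survival estimate: at each death time, multiply the
    running survival probability by (1 - deaths / number still at risk).
    `observations` is a list of (time, died) pairs; returns the list of
    (death time, estimated survival) points of the step curve."""
    s, curve = 1.0, []
    at_risk = len(observations)
    for t, died in sorted(observations):
        if died:
            s *= 1 - 1 / at_risk
            curve.append((t, s))
        at_risk -= 1   # deaths and censored patients both leave the risk set
    return curve

# 5 hypothetical patients: deaths at 6 and 18 months, censored at 10, 24, 30
print(kaplan_meier([(6, True), (10, False), (18, True), (24, False), (30, False)]))
```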


Many biological specimens do not arrange themselves into the ordered assemblies (tubular or flat 2D crystals) required for electron crystallography, nor into the perfectly ordered 3D crystals required for X-ray diffraction; many others are simply too large to be approached by NMR spectroscopy. Single-particle analysis has therefore become a progressively more important technique for the structural determination of large isolated macromolecules by cryo-electron microscopy. Nevertheless, the low signal-to-noise ratio (SNR) and the high electron-beam sensitivity of biological samples remain two of the main resolution-limiting factors when specimens are observed in their native state. Cryo-negative staining is a recently developed technique that allows the study of biological samples with the electron microscope: the samples are observed at low temperature, in the vitrified state, but in the presence of a stain (ammonium molybdate). In the present work, the advantages of this novel technique are investigated. It is shown that cryo-negative staining can generally overcome most of the problems encountered in cryo-electron microscopy of vitrified native suspensions of biological particles. The specimens are faithfully represented, with a 10-times higher SNR than in the case of unstained samples. By comparing multiple-exposure series of stained and unstained samples, beam damage is found to be considerably reduced. The present report also demonstrates that cryo-negative staining is capable of high-resolution analysis of biological macromolecules: the vitrified stain solution surrounding the sample does not forbid access to the internal features (i.e. the secondary structure) of a protein. This finding is of direct interest for structural biologists trying to combine electron microscopy and X-ray data. Finally, several application examples demonstrate the advantages of this newly developed electron microscopy technique.


In this study, equations for calculating the erosion wear caused by ash particles on the convective heat exchanger tubes of steam boilers are presented. A new, three-dimensional test arrangement was used to test the erosion wear of convective heat exchanger tubes. With the sleeve method, three different tube materials and three tube constructions could be tested, and new results were obtained from the analyses. The main mechanisms of the erosion wear phenomenon, and erosion wear as a function of collision conditions and material properties, have been studied; the properties of fossil fuels are also presented. When burning solid fuels such as pulverized coal and peat in steam boilers, most of the ash is entrained by the flue gas in the furnace. In bubbling and circulating fluidized bed boilers, the particle concentration in the flue gas is high because of bed material entrained in the flue gas. Hard particles, such as sharp-edged quartz crystals, cause erosion wear when they collide with the convective heat exchanger tubes and with the rear wall of the steam boiler. The most important ways to reduce erosion wear in steam boilers are to keep the flue gas velocity moderate and to prevent channelling of the ash flow into one part of the cross-section of the flue gas channel, especially near the back wall. This can be done by constructing the boiler with the following components: screen plates to even out the velocity and ash flow distributions across the cross-section of the channel, and shield plates and plate-type constructions in the superheaters. Erosion testing was conducted with three types of tube construction: a single tube row, an in-line tube bank with six tube rows, and a staggered tube bank with six tube rows. Three flow velocities and two particle concentrations were used in the tests, which were carried out at room temperature. Three particle materials were used: quartz, coal ash and peat ash particles.
Mass loss, diameter loss and wall thickness loss measurements of the test sleeves were taken. Erosion wear as a function of flow conditions, tube material and tube construction was analyzed by single-variable linear regression analysis; the erosion wear calculation equations were developed using multi-variable linear regression analysis. In the staggered tube bank, erosion wear reached its maximum in tube row 2, with a local maximum in row 5, while the erosion rate in rows 3, 4 and 6 was low. In the in-line tube bank, by contrast, the minimum erosion rate occurred in tube row 2 and erosion increased in the subsequent rows, so that in a six-row tube bank the maximum occurred in row 6.
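Multi-variable linear regression of the kind used here is typically applied after a log transform, so that a power-law wear model becomes linear in its coefficients. The sketch below uses synthetic data and an assumed model form E = k · vⁿ · cᵐ (the study's actual data and fitted equations are not given in the abstract):

```python
import numpy as np

# Hypothetical measurements: flue gas velocity v (m/s), particle
# concentration c (g/m^3), and erosion mass loss E. The data are
# generated from an assumed power law purely for illustration.
v = np.array([10.0, 15.0, 20.0, 10.0, 15.0, 20.0])
c = np.array([1.0, 1.0, 1.0, 2.0, 2.0, 2.0])
E = 0.002 * v**2.5 * c**1.0

# Log transform: log E = log k + n*log v + m*log c, then ordinary
# least squares on the three unknowns (log k, n, m).
X = np.column_stack([np.ones_like(v), np.log(v), np.log(c)])
coef, *_ = np.linalg.lstsq(X, np.log(E), rcond=None)
log_k, n, m = coef
print(round(float(n), 2), round(float(m), 2))   # recovers n = 2.5, m = 1.0
```

With real measurements the residuals would be non-zero, and the fitted exponents would quantify how strongly wear grows with velocity and concentration.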


Multisensory memory traces established via single-trial exposures can impact subsequent visual object recognition. This impact appears to depend on the meaningfulness of the initial multisensory pairing, implying that multisensory exposures establish distinct object representations that are accessible during later unisensory processing. Multisensory contexts may be particularly effective in influencing auditory discrimination, given the purportedly inferior recognition memory in this sensory modality. Whether this effect generalizes, and whether it is equivalent when memory discrimination is performed in the visual vs. the auditory modality, were the focus of this study. First, we demonstrate that visual object discrimination is affected by the context of prior multisensory encounters, replicating and extending previous findings by controlling for the probability of multisensory contexts during initial as well as repeated object presentations. Second, we provide the first evidence that single-trial multisensory memories impact subsequent auditory object discrimination. Auditory object discrimination was enhanced when initial presentations entailed semantically congruent multisensory pairs and was impaired after semantically incongruent multisensory encounters, compared to sounds that had been encountered only in a unisensory manner. Third, the impact of single-trial multisensory memories upon unisensory object discrimination was greater when the task was performed in the auditory vs. the visual modality. Fourth, there was no evidence of correlation between the effects of past multisensory experiences on visual and auditory processing, suggestive of largely independent object processing mechanisms between modalities. We discuss these findings in terms of the conceptual short-term memory (CSTM) model and predictive coding. Our results suggest differential recruitment and modulation of conceptual memory networks according to the sensory task at hand.


There is widespread agreement from patient and professional organisations alike that the safety of stem cell therapeutics is of paramount importance, particularly for ex vivo autologous gene therapy. Yet current technology makes it difficult to thoroughly evaluate the behaviour of genetically corrected stem cells before they are transplanted. To address this, we have developed a strategy that permits transplantation of a clonal population of genetically corrected autologous stem cells that meet stringent selection criteria and the principle of precaution. As a proof of concept, we have stably transduced epidermal stem cells (holoclones) obtained from a patient suffering from recessive dystrophic epidermolysis bullosa. Holoclones were infected with self-inactivating retroviruses bearing a COL7A1 cDNA and cloned before the progeny of individual stem cells were characterised using a number of criteria. Clonal analysis revealed a great deal of heterogeneity among transduced stem cells in their capacity to produce functional type VII collagen (COLVII). Selected transduced stem cells transplanted onto immunodeficient mice regenerated a non-blistering epidermis for months and produced functional COLVII. Safety was assessed by determining the sites of proviral integration, rearrangements and hit genes, and by whole-genome sequencing. The progeny of the selected stem cells also had a diploid karyotype, were not tumorigenic, and did not disseminate after long-term transplantation onto immunodeficient mice. In conclusion, a clonal strategy is a powerful and efficient means of bypassing the heterogeneity of a transduced stem cell population. It guarantees a safe and homogeneous medicinal product, fulfilling the principle of precaution and the requirements of regulatory affairs. Furthermore, a clonal strategy makes it possible to envision exciting gene-editing technologies like zinc finger nucleases, TALENs and homologous recombination for next-generation gene therapy.


Sudoku problems are among the best-known and most enjoyed pastimes, with a never-diminishing popularity, but over the last few years these problems have gone from an entertainment to an interesting research area, and an interesting one in two respects. On the one hand, Sudoku problems, being a variant of Gerechte Designs and Latin Squares, are actively used for experimental design, as in [8, 44, 39, 9]. On the other hand, Sudoku problems, as simple as they seem, are really hard structured combinatorial search problems, and thanks to their characteristics and behaviour they can be used as benchmark problems for refining and testing solving algorithms and approaches. Moreover, thanks to their high inner structure, their study can contribute more than the study of random problems to our goal of solving real-world problems and applications, and of understanding the problem characteristics that make them hard to solve. In this work we use two techniques for modelling and solving Sudoku problems, namely the Constraint Satisfaction Problem (CSP) and Satisfiability Problem (SAT) approaches. To this end we define the Generalized Sudoku Problem (GSP), where regions can be of rectangular shape, problems can be of any order, and the existence of a solution is not guaranteed. With respect to worst-case complexity, we prove that GSP with block regions of m rows and n columns with m = n is NP-complete. To study the empirical hardness of GSP, we define a series of instance generators that differ in the level of balancing they guarantee between the constraints of the problem, by finely controlling how the holes are distributed among the cells of the GSP. Experimentally, we show that the more balanced the constraints are, the higher the complexity of solving the GSP instances, and that GSP is harder than the Quasigroup Completion Problem (QCP), a problem generalized by GSP.
Finally, we provide a study of the correlation between backbone variables (variables that take the same value in all the solutions of an instance) and the hardness of GSP.
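The CSP view of GSP described above can be sketched directly: each hole is a variable with domain 1..N, constrained to differ from all values in its row, column and m x n block. The following minimal backtracking solver is our illustration, not the work's solver, shown on a 4x4 instance with 2x2 blocks:

```python
def solve_gsp(grid, m, n):
    """Backtracking CSP solver for a Generalized Sudoku with blocks of
    m rows x n columns (board side N = m * n); 0 marks a hole. Fills
    `grid` in place and assumes the given clues are consistent."""
    N = m * n

    def ok(r, c, v):
        if any(grid[r][j] == v for j in range(N)):   # row constraint
            return False
        if any(grid[i][c] == v for i in range(N)):   # column constraint
            return False
        br, bc = r - r % m, c - c % n                # block constraint
        return all(grid[br + i][bc + j] != v for i in range(m) for j in range(n))

    for r in range(N):
        for c in range(N):
            if grid[r][c] == 0:
                for v in range(1, N + 1):
                    if ok(r, c, v):
                        grid[r][c] = v
                        if solve_gsp(grid, m, n):
                            return True
                        grid[r][c] = 0
                return False                          # hole with no legal value: backtrack
    return True                                      # no holes left: solved

puzzle = [[1, 0, 0, 0],
          [0, 0, 3, 0],
          [0, 4, 0, 0],
          [0, 0, 0, 2]]
print(solve_gsp(puzzle, 2, 2))   # True: this 4x4 instance is satisfiable
```

The SAT encoding of the same instance would introduce a Boolean variable per (cell, value) pair with at-least-one/at-most-one clauses per cell plus the row, column and block difference clauses.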


In this paper we provide a new method to generate hard k-SAT instances: we incrementally construct a high-girth bipartite incidence graph of the k-SAT instance. High girth ensures high expansion of the graph, and high expansion implies high resolution width. We have extended this approach to generate hard n-ary CSP instances, and we have also adapted the idea to increase the expansion of the system of linear equations used to generate XORSAT instances, producing harder satisfiable instances than previous generators.
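One necessary condition of the high-girth construction can be checked directly: in the bipartite variable-clause incidence graph, a 4-cycle (the shortest possible cycle in a bipartite graph) exists exactly when two clauses share two or more variables, so girth above four rules that out. A sketch of that check (our code, not the authors' generator), assuming clauses are tuples of signed integer literals:

```python
from itertools import combinations

def has_four_cycle(clauses) -> bool:
    """True if the variable-clause incidence graph of the CNF formula
    contains a 4-cycle, i.e. some pair of clauses shares two variables.
    A high-girth generator must at minimum avoid all such pairs."""
    for c1, c2 in combinations(clauses, 2):
        shared = {abs(lit) for lit in c1} & {abs(lit) for lit in c2}
        if len(shared) >= 2:
            return True
    return False

print(has_four_cycle([(1, 2, 3), (1, 2, 4)]))             # True: share vars 1 and 2
print(has_four_cycle([(1, 2, 3), (1, 4, 5), (2, 4, 6)]))  # False: pairs share at most one
```

Note that the second formula still contains a 6-cycle through variables 1, 2 and 4; enforcing girth beyond six requires checking longer cycles as well, which is what the incremental construction in the paper controls.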