760 results for Measure of Noncompactness


Relevance:

90.00%

Publisher:

Abstract:

In research on organizational trust, trust is usually viewed as an interpersonal phenomenon, such as an employee's trust in co-workers, in the supervisor or in the immediate management. Organizational trust, however, also has a non-personified dimension, so-called institutional trust. To date, only a few researchers have included institutional trust as a component of organizational trust in their studies. The aim of this work is to develop the concept of institutional trust and an instrument for observing it in an organizational environment. The development process consisted of three phases. In the first phase, items for the instrument were developed and their content validity was assessed. The second phase comprised data collection, item reduction and the comparison of alternative models. In the third phase, construct validity and reliability were evaluated. The empirical part of the work was carried out as an internet survey among adult students. Principal component analysis and confirmatory factor analysis were used to analyze the data. Institutional trust consists of two dimensions: capability and fairness. Capability comprises five subcomponents: the organization of operative activities, the permanence of the organization, capability in business and people management, technological reliability, and competitiveness. Fairness, in turn, comprises HRM practices, the spirit of fair play prevailing in the organization, and communication. The final instrument contains 18 items for capability and 13 items for fairness. The instrument developed in this work enables better and more reliable measurement of organizational trust. To the author's knowledge, this is the first comprehensive instrument for measuring institutional trust.

Relevance:

90.00%

Publisher:

Abstract:

The disintegration of recovered paper is the first operation in the preparation of recycled pulp. The defibering process is known to follow first-order kinetics, from which the disintegration kinetic constant (KD) can be obtained in different ways. The disintegration constant can be derived from the Somerville index results (%Sv) and from the dissipated energy per unit volume (Ss). The %Sv is related to the quantity of non-defibered paper, as a measure of the non-disintegrated fiber residual (percentage of flakes), expressed as a function of disintegration time. In this work, the disintegration kinetics of recycled coated paper was evaluated, working at a rotor speed of 20 rev/s and at different fiber consistencies (6, 8, 10, 12 and 14%). The experimental disintegration kinetic constant, KD, was obtained through the analysis of the Somerville index as a function of time. As consistency increased, the disintegration time was drastically reduced. The disintegration kinetic constant calculated from Rayleigh's dissipation function (modelled KD) showed a good correlation with the experimental values obtained from the evolution of the Somerville index or from the dissipated energy.
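As an illustration of the first-order model underlying these measurements, the sketch below fits a disintegration kinetic constant to hypothetical Somerville-index data by linearizing %Sv(t) = %Sv(0)·exp(−KD·t); the numbers are invented, not taken from the paper.

```python
import numpy as np

# Hypothetical Somerville index (% flakes) measured at several disintegration times (s)
t = np.array([0.0, 60.0, 120.0, 180.0, 240.0, 300.0])    # s
sv = np.array([35.0, 18.0, 9.5, 5.1, 2.6, 1.4])          # % non-defibered residual

# First-order kinetics: sv(t) = sv0 * exp(-K_D * t)  ->  ln(sv) = ln(sv0) - K_D * t
slope, intercept = np.polyfit(t, np.log(sv), 1)
K_D = -slope                                              # disintegration kinetic constant (1/s)

print(f"K_D = {K_D:.4f} 1/s, sv0 = {np.exp(intercept):.1f} %")
```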

Relevance:

90.00%

Publisher:

Abstract:

The flexibility of different regions of HIV-1 protease was examined by using a database consisting of 73 X-ray structures that differ in terms of sequence, ligands or both. The root-mean-square differences of the backbone for the set of structures were shown to have the same variation with residue number as those obtained from molecular dynamics simulations, normal mode analyses and X-ray B-factors. This supports the idea that observed structural changes provide a measure of the inherent flexibility of the protein, although specific interactions between the protease and the ligand play a secondary role. The results suggest that the potential energy surface of the HIV-1 protease is characterized by many local minima with small energetic differences, some of which are sampled by the different X-ray structures of the HIV-1 protease complexes. Interdomain correlated motions were calculated from the structural fluctuations and the results were also in agreement with molecular dynamics simulations and normal mode analyses. Implications of the results for the drug-resistance engendered by mutations are discussed briefly.
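The following numpy sketch illustrates the kind of per-residue backbone fluctuation calculation described above, assuming a hypothetical array of already-superposed backbone coordinates; it is not the authors' exact pipeline.

```python
import numpy as np

def per_residue_rmsf(coords):
    """coords: (n_structures, n_residues, 3) backbone coordinates after superposition.
    Returns the root-mean-square fluctuation of each residue about the mean structure."""
    mean_structure = coords.mean(axis=0)                  # (n_residues, 3)
    diff = coords - mean_structure                        # deviation of each structure
    return np.sqrt((diff ** 2).sum(axis=2).mean(axis=0))  # (n_residues,)

# Hypothetical data: 73 superposed structures of a 99-residue protease monomer
rng = np.random.default_rng(0)
coords = rng.normal(size=(73, 99, 3))
print(per_residue_rmsf(coords)[:5])
```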

Relevance:

90.00%

Publisher:

Abstract:

MOTIVATION: Comparative analyses of gene expression data from different species have become an important component of the study of molecular evolution. Thus, methods are needed to estimate evolutionary distances between expression profiles, as well as a neutral reference to estimate selective pressure. Divergence between expression profiles of homologous genes is often calculated with Pearson's or Euclidean distance. Neutral divergence is usually inferred from randomized data. Despite being widely used, neither of these two steps has been well studied. Here, we analyze these methods formally and on real data, highlight their limitations and propose improvements. RESULTS: It has been demonstrated that Pearson's distance, in contrast to Euclidean distance, leads to underestimation of the expression similarity between homologous genes with a conserved uniform pattern of expression. Here, we first extend this study to genes with a conserved but specific pattern of expression. Surprisingly, we find that both Pearson's and Euclidean distances, used as measures of expression similarity between genes, depend on the expression specificity of those genes. We also show that the Euclidean distance depends strongly on data normalization. Next, we show that the randomization procedure that is widely used to estimate the rate of neutral evolution is biased when broadly expressed genes are abundant in the data. To overcome this problem, we propose a novel randomization procedure that is unbiased with respect to the expression profiles present in the datasets. Applying our method to mouse and human gene expression data suggests significant gene expression conservation between these species. CONTACT: marc.robinson-rechavi@unil.ch; sven.bergmann@unil.ch SUPPLEMENTARY INFORMATION: Supplementary data are available at Bioinformatics online.
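As a hedged illustration of the two distance measures and of a profile-preserving neutral reference (random non-homologous gene pairs; shown only as an example, not necessarily the randomization proposed by the authors), consider the following sketch with simulated expression matrices:

```python
import numpy as np

def pearson_distance(x, y):
    """1 - Pearson correlation between two expression profiles."""
    return 1.0 - np.corrcoef(x, y)[0, 1]

def euclidean_distance(x, y):
    return np.linalg.norm(x - y)

def neutral_reference(expr_a, expr_b, dist, n_pairs=10_000, rng=None):
    """Empirical distribution of distances between randomly paired (non-homologous)
    genes from the two species; a sketch of a profile-preserving randomization."""
    if rng is None:
        rng = np.random.default_rng(0)
    i = rng.integers(0, expr_a.shape[0], n_pairs)
    j = rng.integers(0, expr_b.shape[0], n_pairs)
    return np.array([dist(expr_a[a], expr_b[b]) for a, b in zip(i, j)])

# Hypothetical matrices: genes x tissues, with matched tissue columns in both species
rng = np.random.default_rng(1)
mouse = rng.gamma(2.0, 1.0, size=(500, 6))
human = rng.gamma(2.0, 1.0, size=(500, 6))
null = neutral_reference(mouse, human, pearson_distance)
print("median neutral Pearson distance:", np.median(null))
```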

Relevance:

90.00%

Publisher:

Abstract:

Background: At present, it is complicated to use screening trials to determine the optimal age intervals and periodicities of breast cancer early detection. Mathematical models are an alternative that has been widely used. The aim of this study was to estimate the effect of different breast cancer early detection strategies in Catalonia (Spain), in terms of breast cancer mortality reduction (MR) and years of life gained (YLG), using the stochastic models developed by Lee and Zelen (LZ). Methods: We used the LZ model to estimate the cumulative probability of death for a cohort exposed to different screening strategies after T years of follow-up. We also obtained the cumulative probability of death for a cohort with no screening. These probabilities were used to estimate the possible breast cancer MR and YLG by age, period and cohort of birth. The inputs of the model were: incidence of, mortality from and survival after breast cancer, mortality from other causes, distribution of breast cancer stages at diagnosis and sensitivity of mammography. The outputs were relative breast cancer MR and YLG. Results: Relative breast cancer MR varied from 20% for biennial exams in the 50 to 69 age interval to 30% for annual exams in the 40 to 74 age interval. When strategies differ in periodicity but not in the age interval of exams, biennial screening achieved almost 80% of the annual screening MR. In contrast to MR, the effect on YLG of extending screening from 69 to 74 years of age was smaller than the effect of extending the screening from 50 to 45 or 40 years. Conclusion: In this study we have obtained a measure of the effect of breast cancer screening in terms of mortality and years of life gained. The Lee and Zelen mathematical models have been very useful for assessing the impact of different modalities of early detection on MR and YLG in Catalonia (Spain).
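A worked arithmetic sketch of how relative mortality reduction and years of life gained follow from cumulative death probabilities; the numbers are invented for illustration and are not the Lee-Zelen outputs for Catalonia.

```python
# Hypothetical cumulative probabilities of breast cancer death after T years of follow-up
p_death_no_screening = 0.030
p_death_screening = 0.024          # e.g. biennial exams, ages 50-69

relative_MR = 1 - p_death_screening / p_death_no_screening
print(f"relative mortality reduction: {relative_MR:.0%}")      # 20%

# Years of life gained per woman: averted deaths times mean years lost per averted death
mean_years_lost_per_death = 18.0   # hypothetical
ylg_per_woman = (p_death_no_screening - p_death_screening) * mean_years_lost_per_death
print(f"years of life gained per woman screened: {ylg_per_woman:.3f}")
```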

Relevance:

90.00%

Publisher:

Abstract:

OBJECTIVES: The goal of this study was to assess the clinical usefulness of the emotional symptoms (Emo) and externalizing problems (Ext) scales compared with the Total score on the Health of the Nation Outcome Scales for Children and Adolescents (HoNOSCA). METHODS: The HoNOSCA was rated at admission and discharge for 260 adolescent inpatients. The primary outcomes assessed were (a) the sensitivity of the 3 HoNOSCA scores to clinical improvement and (b) the between-diagnoses discriminative value of these scores. RESULTS: Analyses of variance [2 (time: admission vs. discharge) × 5 (diagnostic groups)] revealed a main effect of time for the 3 scores, a main effect of diagnostic group for the Total and Ext scores, and an interaction effect between time and diagnosis for the Emo score. A moderate correlation was observed between the changes in Ext and Emo scores between admission and discharge. DISCUSSION: These 2 new scales of the HoNOSCA demonstrated good clinical utility and the ability to assess different aspects of clinical improvement. A significant discriminative value of both scores was observed. SIGNIFICANT OUTCOMES: The clinical utility of the 2 new scales of the HoNOSCA was established. These 2 new scales provided a sensitive measure of clinical outcome for assessing improvement between admission and discharge on a psychiatric inpatient unit for adolescents, regardless of diagnostic group, and captured additional information about clinical improvements. Adolescents with psychosis and conduct disorders presented with higher externalizing symptoms than those with other disorders, as rated on the HoNOSCA, at admission and discharge. The Emo score differentiated between clinical improvement in patients with psychosis versus eating disorders. LIMITATIONS: The sample in this study represented a homogeneous population of adolescent inpatients, so further research is needed before these findings can be generalized to outpatients. In addition, the small number of patients in some diagnostic groups did not allow for their inclusion in some of the statistical analyses.

Relevance:

90.00%

Publisher:

Abstract:

Emerging adulthood is a period of life transition in which youths are no longer adolescents but have not yet reached full adulthood. Measuring emerging adulthood is crucial because of its association with psychopathology and risky behaviors such as substance use. Unfortunately, the only validated scale for such measurement has a long format (the Inventory of Dimensions of Emerging Adulthood [IDEA]; 31 items). This study aimed to test whether a shorter form yields satisfactory results without substantial loss of information among a sample of young Swiss men. Data from the longitudinal Cohort Study on Substance Use Risk Factors were used (N = 5,049). The IDEA, adulthood markers (e.g., parenthood or financial independence), and risk factors (i.e., substance use and mental health issues) were assessed. The results showed that an 8-item, short-form scale (IDEA-8) with four factors (experimentation, negativity, identity exploration, and feeling in-between) returned satisfactory results, including good psychometric properties, high convergence with the initial scale, and strong empirical validity. This study was a step toward downsizing a measure of emerging adulthood. Indeed, this 8-item short form is a good alternative to the 31-item long form and could be more convenient for surveys with constraints on questionnaire length. Moreover, it should help health care practitioners identify at-risk populations to prevent and treat risky behaviors.
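A hedged sketch, using simulated item responses and hypothetical item indices, of the two checks mentioned above: internal consistency of an 8-item short form and its convergence with the 31-item total.

```python
import numpy as np

def cronbach_alpha(items):
    """items: (n_respondents, n_items) matrix of item scores."""
    k = items.shape[1]
    item_var = items.var(axis=0, ddof=1).sum()
    total_var = items.sum(axis=1).var(ddof=1)
    return k / (k - 1) * (1 - item_var / total_var)

rng = np.random.default_rng(2)
trait = rng.normal(size=5049)                                     # latent "emerging adulthood" score
idea31 = trait[:, None] + rng.normal(size=(5049, 31))             # simulated 31-item responses
short_idx = [0, 3, 7, 11, 15, 19, 23, 27]                         # hypothetical 8 retained items
idea8 = idea31[:, short_idx]

print("alpha (IDEA-8):", round(cronbach_alpha(idea8), 2))
print("convergence r(IDEA-8, IDEA-31):",
      round(np.corrcoef(idea8.sum(axis=1), idea31.sum(axis=1))[0, 1], 2))
```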

Relevance:

90.00%

Publisher:

Abstract:

Background: Studies conducted internationally confirm that child sexual abuse is a much more widespread problem than previously thought, with even the lowest prevalence rates including a large number of victims that need to be taken into account. Objective: To carry out a meta-analysis of the prevalence of child sexual abuse in order to establish an overall international figure. Methods: Studies were retrieved from various electronic databases. The measure of interest was the prevalence of abuse reported in each article, these values being combined via a random effects model. A detailed analysis was conducted of the effects of various moderator variables. Results: Sixty-five articles covering 22 countries were included. The analysis showed that 7.9% of men (7.4% without outliers) and 19.7% of women (19.2% without outliers) had suffered some form of sexual abuse prior to the age of eighteen. Conclusions: The results of the present meta-analysis indicate that child sexual abuse is a serious problem in the countries analysed.
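One common way to pool prevalence estimates under a random-effects model is DerSimonian-Laird weighting on the logit scale; the sketch below uses invented study counts and is not the meta-analysis actually performed (which covered 65 articles and included moderator analyses).

```python
import numpy as np

def pooled_prevalence(events, totals):
    """DerSimonian-Laird random-effects pooling of proportions on the logit scale."""
    p = events / totals
    logit = np.log(p / (1 - p))
    var = 1.0 / events + 1.0 / (totals - events)        # approximate variance of each logit
    w = 1.0 / var
    fixed = np.sum(w * logit) / np.sum(w)
    q = np.sum(w * (logit - fixed) ** 2)                # heterogeneity statistic
    c = np.sum(w) - np.sum(w ** 2) / np.sum(w)
    tau2 = max(0.0, (q - (len(events) - 1)) / c)        # between-study variance
    w_re = 1.0 / (var + tau2)
    pooled_logit = np.sum(w_re * logit) / np.sum(w_re)
    return 1.0 / (1.0 + np.exp(-pooled_logit))

# Hypothetical studies: number reporting abuse / sample size
events = np.array([120, 45, 300, 80, 60])
totals = np.array([1500, 600, 2500, 900, 400])
print(f"pooled prevalence: {pooled_prevalence(events, totals):.1%}")
```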

Relevance:

90.00%

Publisher:

Abstract:

We propose a simple rheological model to describe the thixotropic behavior of paints, since the classical hysteresis area, which is usually used, is not sufficient to evaluate thixotropy. The model is based on the assumption that viscosity is a direct measure of the structural level of the paint. The model relies on two equations: the Cross-Carreau equation to describe the equilibrium viscosity, and a second-order kinetic equation to express the time dependence of viscosity. Two characteristic thixotropic times are distinguished: one for the net structure breakdown, which is defined as a power-law function of shear rate, and another for the net structure buildup, which does not depend on the shear rate. Knowledge of both kinetic processes can be used to improve the quality and applicability of paints. Five representative commercial protective marine paints are tested, based on chlorinated rubber, acrylic, alkyd, vinyl, and epoxy resins. The temperature dependence of the rheological behavior is also studied, with the temperature ranging from 5 °C to 35 °C. It is found that the paints exhibit both shear-thinning and thixotropic behavior. The model fits the thixotropy of the studied paints satisfactorily and is also able to predict the temperature dependence of thixotropy. Both the viscosity and the degree of thixotropy increase as the temperature decreases.
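A numerical sketch of the two-equation structure described above: a Cross-type equilibrium viscosity curve and a second-order kinetic relaxation of viscosity toward it, with a shear-rate-dependent breakdown rate and a constant build-up rate. All parameter values are illustrative, not the fitted values for the tested paints.

```python
import numpy as np

def eta_eq(shear_rate, eta0=50.0, eta_inf=0.5, k=2.0, m=0.8):
    """Cross-type equilibrium viscosity (Pa.s) as a function of shear rate (1/s)."""
    return eta_inf + (eta0 - eta_inf) / (1.0 + (k * shear_rate) ** m)

def thixotropy_step(eta, shear_rate, dt, k_break=1e-5, n=1.2, k_build=1e-3):
    """One explicit Euler step of a second-order kinetic equation for viscosity.
    The breakdown rate is a power-law function of shear rate; the build-up rate is not."""
    eq = eta_eq(shear_rate)
    rate = k_break * shear_rate ** n if eta > eq else k_build
    return eta + dt * rate * (eq - eta) * abs(eq - eta)   # second order in (eq - eta)

# Step change in shear rate: rest -> 100 1/s -> near rest, tracking breakdown and build-up
eta, dt, log = eta_eq(0.0), 0.05, []
for t in np.arange(0.0, 600.0, dt):
    gd = 100.0 if t < 300.0 else 0.1
    eta = thixotropy_step(eta, gd, dt)
    log.append(eta)
print(f"viscosity after breakdown: {log[5999]:.2f} Pa.s, after rebuild: {log[-1]:.2f} Pa.s")
```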

Relevance:

90.00%

Publisher:

Abstract:

Objective: Imipenem is a broad-spectrum antibiotic used to treat severe infections in critically ill patients. Imipenem pharmacokinetics (PK) was evaluated in a cohort of neonates treated in the Neonatal Intensive Care Unit of the Lausanne University Hospital. The objective of our study was to identify key demographic and clinical factors influencing imipenem exposure in this population. Method: PK data from neonates and infants with at least one imipenem concentration measured between 2002 and 2013 were analyzed applying population PK modeling methods. Measurements of plasma concentrations were performed upon the decision of the physician within the frame of a therapeutic drug monitoring (TDM) programme. Effects of demographic factors (sex, body weight, gestational age, postnatal age) and clinical factors (serum creatinine as a measure of kidney function; co-administration of furosemide, spironolactone, hydrochlorothiazide, vancomycin, metronidazole and erythromycin) on imipenem PK were explored. Model-based simulations were performed (with a median creatinine value of 46 μmol/l) to compare various dosing regimens with respect to their ability to maintain drug levels above predefined minimum inhibitory concentrations (MIC) for at least 40% of the dosing interval. Results: A total of 144 plasma samples were collected from 68 neonates and infants, predominantly preterm newborns, with a median gestational age of 27 weeks (24-41 weeks) and a median postnatal age of 21 days (2-153 days). A two-compartment model best characterized imipenem disposition. Actual body weight had the greatest impact on the PK parameters, followed by age (gestational age and postnatal age) and serum creatinine on clearance; these covariates explained 19%, 9%, 14% and 9% of the interindividual variability in clearance, respectively. Model-based simulations suggested that 15 mg/kg every 12 hours maintains drug concentrations above a MIC of 2 mg/l for at least 40% of the dosing interval during the first days of life, whereas neonates older than 14 days of life require a dose of 20 mg/kg every 12 hours. Conclusion: Dosing strategies based on body weight and postnatal age are recommended for imipenem in all critically ill neonates and infants. Most current guidelines seem adequate for newborns, and TDM should be restricted to particular clinical situations.
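A minimal simulation sketch of the kind of dosing comparison described above: a two-compartment bolus model integrated with an explicit Euler scheme and scored by the fraction of the dosing interval spent above a MIC of 2 mg/l. The PK parameters are hypothetical placeholders, not the population estimates from this study.

```python
import numpy as np

def fraction_above_mic(dose_mg_per_kg, cl=0.25, v1=0.3, q=0.15, v2=0.25,
                       tau=12.0, n_doses=6, dt=0.01, mic=2.0):
    """Two-compartment bolus PK on a per-kg basis (hypothetical parameters:
    CL, Q in L/h/kg; V1, V2 in L/kg). Returns the fraction of the last dosing
    interval during which the central concentration exceeds the MIC (mg/l)."""
    a1 = a2 = 0.0                                  # mg/kg in central / peripheral compartments
    steps_per_dose = int(round(tau / dt))
    conc = []
    for i in range(n_doses * steps_per_dose):
        if i % steps_per_dose == 0:                # bolus at the start of every interval
            a1 += dose_mg_per_kg
        c1, c2 = a1 / v1, a2 / v2
        a1 += (-cl * c1 - q * (c1 - c2)) * dt      # elimination + distribution
        a2 += (q * (c1 - c2)) * dt
        conc.append(a1 / v1)
    last_interval = np.array(conc[-steps_per_dose:])
    return np.mean(last_interval > mic)

for dose in (15, 20):
    print(f"{dose} mg/kg q12h -> {fraction_above_mic(dose):.0%} of interval above MIC 2 mg/l")
```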

Relevance:

90.00%

Publisher:

Abstract:

Neuronal networks in vitro are prominent systems for studying the development of connections in living neuronal networks and the interplay between connectivity, activity and function. These cultured networks show rich spontaneous activity that evolves concurrently with the connectivity of the underlying network. In this work we monitor the development of neuronal cultures and record their activity using calcium fluorescence imaging. We use spectral analysis to characterize global dynamical and structural traits of the neuronal cultures. We first observe that the power spectrum can be used as a signature of the state of the network, for instance when inhibition is active or silent, as well as a measure of the network's connectivity strength. Second, the power spectrum identifies prominent developmental changes in the network, such as the GABAA switch. And third, the analysis of the spatial distribution of the spectral density, in experiments with a controlled disintegration of the network through CNQX, an antagonist of AMPA glutamate receptors in excitatory neurons, reveals the existence of communities of strongly connected, highly active neurons that display synchronous oscillations. Our work illustrates the value of spectral analysis for the study of in vitro networks and its potential use as a network-state indicator, for instance to compare healthy and diseased neuronal networks.
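A short sketch of the kind of spectral characterization described above, computing the power spectral density of a simulated calcium fluorescence trace with Welch's method (scipy assumed available); the signal and sampling rate are invented.

```python
import numpy as np
from scipy.signal import welch

# Hypothetical fluorescence trace: 20 min at 25 Hz with a slow network-wide oscillation
fs = 25.0
t = np.arange(0, 20 * 60, 1 / fs)
rng = np.random.default_rng(3)
trace = np.sin(2 * np.pi * 0.2 * t) + 0.5 * rng.normal(size=t.size)

# Average power spectral density over the recording (Welch's method)
f, pxx = welch(trace, fs=fs, nperseg=4096)
band = f > 0.05                                  # ignore the near-DC bins
peak = f[band][np.argmax(pxx[band])]
print(f"dominant network frequency: {peak:.2f} Hz")
```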

Relevance:

90.00%

Publisher:

Abstract:

This paper analyses the effects of manipulating the cognitive complexity of L2 oral tasks on language production. It specifically focuses on self-repairs, which are taken as a measure of accuracy, since they denote both attention to form and an attempt at being accurate. By means of a repeated-measures design, 42 lower-intermediate students were asked to perform three different task types (a narrative task, an instruction-giving task, and a decision-making task), for each of which two degrees of cognitive complexity were established. The narrative task was manipulated along +/− Here-and-Now, the instruction-giving task along +/− elements, and the decision-making task along +/− reasoning demands. Repeated-measures ANOVAs are used to calculate differences between degrees of complexity and among task types. One-way ANOVAs are used to detect potential differences between low-proficiency and high-proficiency participants. Results show an overall effect of task complexity on self-repair behavior across task types, with different behaviors among the three task types. No differences in self-repair behavior are found between the low- and high-proficiency groups. Results are discussed in the light of theories of cognition and L2 performance (Robinson 2001a, 2001b, 2003, 2005, 2007), L1 and L2 language production models (Levelt 1989, 1993; Kormos 2000, 2006), and attention during L2 performance (Skehan 1998; Robinson, 2002).
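A hedged sketch of a repeated-measures analysis of self-repair counts with a 3 (task type) × 2 (complexity) within-subjects design, using simulated data and statsmodels' AnovaRM (pandas and statsmodels assumed available); it mirrors the design described above but is not the study's actual analysis, which also included one-way ANOVAs across proficiency groups.

```python
import numpy as np
import pandas as pd
from statsmodels.stats.anova import AnovaRM

rng = np.random.default_rng(4)
subjects = np.repeat(np.arange(42), 6)
tasks = np.tile(np.repeat(["narrative", "instruction", "decision"], 2), 42)
complexity = np.tile(["simple", "complex"], 42 * 3)

# Simulated self-repair counts: slightly more repairs in the complex task versions
repairs = rng.poisson(lam=np.where(complexity == "complex", 5.0, 4.0))

df = pd.DataFrame({"subject": subjects, "task": tasks,
                   "complexity": complexity, "repairs": repairs})

# 3 (task type) x 2 (cognitive complexity) repeated-measures ANOVA
res = AnovaRM(df, depvar="repairs", subject="subject",
              within=["task", "complexity"]).fit()
print(res)
```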

Relevance:

90.00%

Publisher:

Abstract:

The aim of this research is to examine the psychometric properties of a Spanish version of the Human System Audit transformational leadership short scale (HSA-TFL-ES), based on the concept of transformational leadership developed by Bass in 1985. The HSA-TFL is part of the wider Human System Audit framework. We analyzed the HSA-TFL-ES in five different samples comprising a total of 1,718 workers from five sectors. Exploratory factor analysis corroborated a single factor in all samples, accounting for 66% to 73% of the variance. The internal consistency in all samples was good (α = .92-.95). Evidence was found for the convergent validity of the HSA-TFL-ES with the Multifactor Leadership Questionnaire. These results suggest that the HSA-TFL short scale is a psychometrically sound measure of this construct and can be used as a first overall measurement of transformational leadership.
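As a small illustration of the single-factor structure reported above, the sketch below computes the share of variance captured by the first principal component of a simulated item correlation matrix; the 7-item scale and its loadings are hypothetical, not the HSA-TFL-ES data.

```python
import numpy as np

def first_factor_share(items):
    """Share of total variance captured by the first principal component
    of the item correlation matrix (items: respondents x items)."""
    corr = np.corrcoef(items, rowvar=False)
    eigvals = np.linalg.eigvalsh(corr)[::-1]          # descending eigenvalues
    return eigvals[0] / eigvals.sum()

rng = np.random.default_rng(5)
latent = rng.normal(size=(1718, 1))                   # single transformational-leadership factor
items = 0.8 * latent + 0.55 * rng.normal(size=(1718, 7))   # hypothetical 7-item scale
print(f"variance explained by a single factor: {first_factor_share(items):.0%}")
```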

Relevance:

90.00%

Publisher:

Abstract:

Atherosclerosis is a chronic cardiovascular disease that involves the thickening of the artery walls as well as the formation of plaques (lesions) causing the narrowing of the lumens, in vessels such as the aorta, the coronary and the carotid arteries. Magnetic resonance imaging (MRI) is a promising modality for the assessment of atherosclerosis, as it is a non-invasive and patient-friendly procedure that does not use ionizing radiation. MRI offers high soft tissue contrast without the need for intravenous contrast media, while modification of the MR pulse sequences allows for further adjustment of the contrast for specific diagnostic needs. As such, MRI can create angiographic images of the vessel lumens to assess stenoses at the late stage of the disease, as well as blood flow-suppressed images for the early investigation of the vessel wall and the characterization of atherosclerotic plaques. However, despite the great technical progress that occurred over the past two decades, MRI is intrinsically a low-sensitivity technique and some limitations still exist in terms of accuracy and performance. A major challenge for coronary artery imaging is respiratory motion. State-of-the-art diaphragmatic navigators rely on an indirect measure of motion, perform a 1D correction, and have long and unpredictable scan times. In response, self-navigation (SN) strategies have recently been introduced that offer 100% scan efficiency and increased ease of use. SN detects respiratory motion directly from the image data obtained at the level of the heart, and retrospectively corrects the same data before final image reconstruction. Thus, SN holds potential for multi-dimensional motion compensation. In this regard, this thesis presents novel SN methods that estimate 2D and 3D motion parameters from aliased sub-images that are obtained from the same raw data composing the final image. Combination of all corrected sub-images produces a final image with reduced motion artifacts for the visualization of the coronaries. The first study (section 2.2, 2D Self-Navigation with Compressed Sensing) consists of a method for 2D translational motion compensation. Here, the use of compressed sensing (CS) reconstruction is proposed and investigated to support motion detection by reducing aliasing artifacts. In healthy human subjects, CS demonstrated an improvement in motion detection accuracy in simulations on in vivo data, while improved coronary artery visualization was demonstrated on in vivo free-breathing acquisitions. However, the motion of the heart induced by respiration has been shown to occur in three dimensions and to be more complex than a simple translation. Therefore, the second study (section 2.3, 3D Self-Navigation) consists of a method for 3D affine motion correction rather than 2D only. Here, different techniques were adopted to reduce the background signal contribution in respiratory motion tracking, as this can be adversely affected by the static tissue that surrounds the heart. The proposed method demonstrated improved conspicuity and visualization of the coronary arteries in healthy and cardiovascular disease patient cohorts in comparison to a conventional 1D SN method. In the third study (section 2.4, 3D Self-Navigation with Compressed Sensing), the same tracking methods were used to obtain sub-images sorted according to the respiratory position. Then, instead of motion correction, a compressed sensing reconstruction was performed on all sorted sub-image data.
This process exploits the consistency of the sorted data to reduce aliasing artifacts, such that the sub-image corresponding to the end-expiratory phase can directly be used to visualize the coronaries. In a healthy volunteer cohort, this strategy improved conspicuity and visualization of the coronary arteries when compared to a conventional 1D SN method. For the visualization of the vessel wall and atherosclerotic plaques, the state-of-the-art dual inversion recovery (DIR) technique is able to suppress the signal coming from flowing blood and provide positive wall-lumen contrast. However, optimal contrast may be difficult to obtain and is subject to RR variability. Furthermore, DIR imaging is time-inefficient and multislice acquisitions may lead to prolonged scan times. In response, and as the fourth study of this thesis (chapter 3, Vessel Wall MRI of the Carotid Arteries), a phase-sensitive DIR method has been implemented and tested in the carotid arteries of a healthy volunteer cohort. By exploiting the phase information of images acquired after DIR, the proposed phase-sensitive method enhances wall-lumen contrast while widening the window of opportunity for image acquisition. As a result, a 3-fold increase in volumetric coverage is obtained at no extra cost in scanning time, while image quality is improved. In conclusion, this thesis presented novel methods to address some of the main challenges for MRI of atherosclerosis: the suppression of motion and flow artifacts for improved visualization of vessel lumens, walls and plaques. These methods were shown to significantly improve image quality in healthy human subjects, as well as the scan efficiency and ease of use of MRI. Extensive validation is now warranted in patient populations to ascertain their diagnostic performance. Eventually, these methods may bring the use of atherosclerosis MRI closer to clinical practice.
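As an illustration of the general principle behind self-navigation, a translation detected between sub-images can be undone directly on the raw k-space data via the Fourier shift theorem. The sketch below is a minimal 2D phase-correlation example on synthetic data; it is not the reconstruction pipeline developed in the thesis.

```python
import numpy as np

def estimate_shift(ref, img):
    """Integer 2D translation of img relative to ref, via phase correlation."""
    cross = np.conj(np.fft.fft2(ref)) * np.fft.fft2(img)
    corr = np.fft.ifft2(cross / (np.abs(cross) + 1e-12))
    d = np.array(np.unravel_index(np.argmax(np.abs(corr)), corr.shape))
    shape = np.array(corr.shape)
    d = np.where(d > shape // 2, d - shape, d)          # wrap to signed shifts
    return int(d[0]), int(d[1])

def correct_kspace(kspace, dy, dx):
    """Undo a (dy, dx) pixel translation by applying the corresponding linear
    phase ramp to the raw k-space data (Fourier shift theorem)."""
    ny, nx = kspace.shape
    ky = np.fft.fftfreq(ny)[:, None]
    kx = np.fft.fftfreq(nx)[None, :]
    return kspace * np.exp(2j * np.pi * (ky * dy + kx * dx))

# Synthetic check: a sub-image displaced by "respiration" is corrected in k-space
rng = np.random.default_rng(6)
ref = rng.normal(size=(128, 128))
moved = np.roll(ref, shift=(5, -3), axis=(0, 1))
dy, dx = estimate_shift(ref, moved)
corrected = np.fft.ifft2(correct_kspace(np.fft.fft2(moved), dy, dx)).real
print((dy, dx), np.allclose(corrected, ref))
```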

Relevance:

90.00%

Publisher:

Abstract:

Complete sex chromosome dosage compensation has more often been observed in XY than ZW species. In this study, using a population genetic model and the chicken transcriptome, we assess whether sexual conflict can account for this difference. Sexual conflict over expression is inevitable when mutation effects are correlated across the sexes, as compensatory mutations in the heterogametic sex lead to hyperexpression in the homogametic sex. Coupled with stronger selection and greater reproductive variance in males, this results in slower and less complete evolution of Z compared with X dosage compensation. Using expression variance as a measure of selection strength, we find that, as predicted by the model, dosage compensation in the chicken is most pronounced in genes that are under strong selection biased towards females. Our study explains the pattern of weak dosage compensation in ZW systems, and suggests that sexual selection plays a major role in shaping sex chromosome dosage compensation.
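A hedged data-analysis sketch of the kind of comparison described above: binning Z-linked genes by expression variance (as a proxy for selection strength) and summarizing the male:female expression ratio per bin. The data and the shape of the relationship are simulated for illustration only, not the chicken transcriptome results.

```python
import numpy as np
import pandas as pd

# Hypothetical per-gene data for Z-linked genes: mean expression in ZZ males and
# ZW females, plus expression variance used as a proxy for selection strength
rng = np.random.default_rng(7)
n = 800
variance = rng.gamma(2.0, 1.0, n)
# Simulate stronger compensation (M:F ratio closer to 1) for low-variance genes
ratio = 2.0 - 0.8 * np.exp(-variance)              # purely illustrative relationship
female = rng.gamma(5.0, 20.0, n)
male = female * ratio

z = pd.DataFrame({"male": male, "female": female, "variance": variance})
z["log2_MF"] = np.log2(z.male / z.female)          # 0 = full compensation, 1 = none
z["selection_bin"] = pd.qcut(z.variance, 4,
                             labels=["strong", "moderate", "weak", "weakest"])
print(z.groupby("selection_bin", observed=True)["log2_MF"].median())
```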