123 results for Protein Array Analysis -- methods
Abstract:
Contact aureoles provide an excellent geologic environment to study the mechanisms of metamorphic reactions in a natural system. The Torres del Paine (TP) intrusion is one of the most spectacular natural laboratories because of its excellent outcrop conditions. It formed over a period from 12.59 to 12.43 Ma and consists of three large granitic and four smaller mafic batches. The oldest granite lies on top, the youngest at the bottom of the granitic complex, and the granites overlie the mafic laccolith. The TP intruded at a depth of 2-3 km into regionally metamorphosed, anchizone- to greenschist-facies pelites, sandstones, and conglomerates of the Cerro Toro and Punta Barrosa formations, forming a thin contact aureole 150-400 m wide. This thesis focuses on the reaction kinetics of the mineral cordierite in the contact aureole using quantitative textural analysis methods. The first cordierite formed from chlorite breakdown (zone I, ca. 480 °C, 750 bar). The second cordierite-forming reaction was muscovite breakdown, accompanied by a modal decrease in biotite and the appearance of K-feldspar (zone II, 540-550 °C, 750 bar). Crystal sizes of the roundish, poikiloblastic cordierites were determined from microscope thin-section images by manually marking each crystal. The images were then processed automatically with Matlab. Correcting for the intersection probability of each crystal radius yields the crystal size distribution in the rock. Samples from zone I below the laccolith have the largest crystals (0.09 mm). Cordierites from zone II are smaller, with a maximum crystal radius of 0.057 mm. Rocks from zone II contain a larger number of small cordierite crystals than rocks from zone I. A combination of these quantitative analyses with numerical modeling of nucleation and growth is used to infer the nucleation and growth parameters responsible for the observed mineral textures. For this, the temperature-time paths of the samples need to be known. The thermal history is complex because the main body of the intrusion was formed by several intrusive batches. The emplacement mechanism and duration of each batch can influence the thermal structure in the aureole. A possible subdivision of batches into smaller increments, so-called pulses, will focus heat at the side of the intrusion. Focusing all pulses on one side increases the contact aureole size on that side but decreases it on the other, producing a strongly asymmetric contact aureole. Detailed modeling shows that the relative thicknesses of the TP contact aureole above and below the intrusion (150 and 400 m) are best explained by rapid emplacement of at least the oldest granite batch. Nevertheless, temperatures in all models are significantly too low compared with the observed mineral assemblages in the hornfelses. Hence, another important thermal mechanism must operate in the host rock. Clastic minerals in the immature sediments outside the contact aureole are hydrated by the small amounts of fluid expelled during contact metamorphism. This leads to a temperature increase of up to 50 °C. The origin of the fluids can be traced by stable isotopes. Whole-rock stable isotope data (δD and δ¹⁸O) and chlorine concentrations in biotite document that the TP intrusion induced only very small amounts of fluid flow. Whole-rock oxygen data show δ¹⁸O values between 9.0 and 10.0 ‰ within the first 5 m of the contact. Values increase to 13.0-15.0 ‰ farther from the intrusion. Whole-rock δD values display a more complex zoning.
First, host-rock values (-90 to -70 ‰) smoothly decrease towards the contact by ca. 20 ‰, up to a distance of ca. 150 m. This is followed by an increase of ca. 20 ‰ within the innermost 150 m of the aureole (-97.0 to -78 ‰ at the contact). The initial decrease in δD values is interpreted to reflect Rayleigh fractionation accompanying the dehydration reactions forming cordierite, while the final increase reflects infiltration of water-rich fluids from the intrusion. An overestimate of the fluid quantity and of the corresponding thermal effect yields a temperature increase of less than 30 °C. This suggests that fluid flow contributed only a small amount to the thermal evolution of the system. A combination of the numerical growth model with the thermal model, including the hydration reaction enthalpies but neglecting fluid flow and incremental growth, can be used to numerically reproduce the observed cordierite textures in the contact aureole. This yields kinetic parameters that indicate fast cordierite crystallization before the thermal peak in the inner aureole, and continued reaction after the thermal peak in the outermost aureole. Only small temperature dependencies of the kinetic parameters seem to be needed to explain the obtained crystal size data.
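The textural analysis described above (marking crystal intersections on thin-section images, measuring their sizes, and correcting for intersection probability) can be illustrated with a short sketch. The thesis used Matlab; the Python fragment below is only a minimal stand-in with invented function names, a toy mask, and the simple first-order correction N_V = N_A / (2R), not the thesis's actual code.

```python
# Minimal stand-in for the image-based crystal size analysis (not the
# thesis's Matlab code): label crystal intersections in a binary mask,
# convert their areas to equivalent radii, and apply a simple first-order
# stereological correction, N_V = N_A / (2R).
import numpy as np
from scipy import ndimage

def section_radii(mask, pixel_size_mm):
    """Equivalent circular radii (mm) of the 2-D crystal intersections."""
    labels, n = ndimage.label(mask)
    areas = ndimage.sum(np.ones_like(labels), labels, index=np.arange(1, n + 1))
    return np.sqrt(areas / np.pi) * pixel_size_mm

def crystal_size_distribution(radii_mm, bin_edges_mm, section_area_mm2):
    """Bin 2-D radii and correct for the fact that large crystals are
    intersected by the section plane more often than small ones."""
    counts, edges = np.histogram(radii_mm, bins=bin_edges_mm)
    centers = 0.5 * (edges[:-1] + edges[1:])
    n_area = counts / section_area_mm2        # intersections per mm^2
    n_volume = n_area / (2.0 * centers)       # crystals per mm^3 (approx.)
    return centers, n_volume

# Toy example with two "crystals" in a 0.2 mm x 0.2 mm image.
mask = np.zeros((200, 200), bool)
mask[20:40, 20:40] = True
mask[100:110, 100:110] = True
r = section_radii(mask, pixel_size_mm=0.001)
print(crystal_size_distribution(r, np.linspace(0.001, r.max(), 6), 0.04))
```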
Abstract:
OBJECTIVE: Surface magnetic resonance imaging (MRI) for aortic plaque assessment is limited by the trade-off between penetration depth and signal-to-noise ratio (SNR). For imaging the deep-seated aorta, a combined surface and transesophageal MRI (TEMRI) technique was developed (1) to determine the individual contributions of the TEMRI and surface coils to the combined signal, (2) to measure the signal improvement of combined surface and TEMRI over surface MRI alone, and (3) to assess the reproducibility of plaque dimension analysis. METHODS AND RESULTS: In 24 patients, six black-blood proton-density/T2-weighted fast-spin-echo images were obtained using three surface coils and one TEMRI coil for SNR measurements. Reproducibility of plaque dimensions (combined surface and TEMRI) was measured in 10 patients. TEMRI contributed 68% of the signal in the aortic arch and descending aorta, whereas the overall signal gain using the combined technique was up to 225%. Plaque volume measurements had an intraclass correlation coefficient as high as 0.97. CONCLUSION: Plaque volume measurements for the quantification of aortic plaque size are highly reproducible for combined surface and TEMRI. The TEMRI coil contributes considerably to the aortic MR signal, and the combined surface and TEMRI approach significantly improves the aortic signal compared with surface coils alone. CONDENSED ABSTRACT: Conventional MRI visualization of aortic plaque is limited by the penetration depth of MRI surface coils and may lead to suboptimal image quality with insufficient reproducibility. By combining a transesophageal MRI (TEMRI) coil with surface MRI coils, we enhanced local and overall image SNR for improved image quality and reproducibility.
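The abstract does not state how the surface and TEMRI signals were combined, so the sketch below only illustrates one common approach, a root-sum-of-squares coil combination, together with a per-coil contribution estimate in a region of interest; all names and numbers are illustrative, not the study's reconstruction.

```python
# Illustrative only (not the study's reconstruction): root-sum-of-squares
# combination of per-coil magnitude images and the fractional signal
# contribution of each coil inside a region of interest.
import numpy as np

def rss_combine(coil_images):
    """Root-sum-of-squares combination of per-coil magnitude images."""
    stack = np.stack(coil_images)
    return np.sqrt((stack ** 2).sum(axis=0))

def coil_contributions(coil_images, roi):
    """Fraction of the combined squared signal contributed by each coil
    inside a boolean region of interest (e.g., the aortic wall)."""
    energies = np.array([(img[roi] ** 2).sum() for img in coil_images])
    return energies / energies.sum()

# Toy example: three surface coils plus one esophageal coil.
rng = np.random.default_rng(0)
images = [rng.random((64, 64)) for _ in range(4)]
roi = np.zeros((64, 64), bool)
roi[30:40, 30:40] = True
combined = rss_combine(images)
print(coil_contributions(images, roi))
```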
Abstract:
The use of self-calibrating techniques in parallel magnetic resonance imaging eliminates the need for coil sensitivity calibration scans and avoids potential mismatches between calibration scans and subsequent accelerated acquisitions (e.g., as a result of patient motion). Most examples of self-calibrating Cartesian parallel imaging techniques have required the use of modified k-space trajectories that are densely sampled at the center and more sparsely sampled in the periphery. However, spiral and radial trajectories offer inherent self-calibrating characteristics because of their densely sampled center. At no additional cost in acquisition time and with no modification in scanning protocols, in vivo coil sensitivity maps may be extracted from the densely sampled central region of k-space. This work demonstrates the feasibility of self-calibrated spiral and radial parallel imaging using a previously described iterative non-Cartesian sensitivity encoding algorithm.
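As a rough illustration of the self-calibration idea, the hedged sketch below extracts low-resolution coil images from the densely sampled k-space centre and normalises them by their root-sum-of-squares to obtain relative sensitivity maps. It assumes already-gridded Cartesian data and omits gridding, apodisation and the iterative non-Cartesian SENSE reconstruction itself; all names are illustrative.

```python
# Sketch of the self-calibration step: keep only the densely sampled
# k-space centre, reconstruct low-resolution coil images, and normalise
# by their root-sum-of-squares to obtain relative sensitivity maps.
# Gridding of spiral/radial data and the iterative SENSE solve are omitted.
import numpy as np

def sensitivity_maps_from_center(kspace, calib_radius):
    """kspace: (ncoils, ny, nx) array with the DC sample at the centre
    (assumed already gridded for non-Cartesian trajectories)."""
    ncoils, ny, nx = kspace.shape
    yy, xx = np.meshgrid(np.arange(ny) - ny // 2,
                         np.arange(nx) - nx // 2, indexing="ij")
    window = (yy ** 2 + xx ** 2) <= calib_radius ** 2   # central disc only
    low_res = np.fft.fftshift(
        np.fft.ifft2(np.fft.ifftshift(kspace * window, axes=(-2, -1)),
                     axes=(-2, -1)),
        axes=(-2, -1))
    rss = np.sqrt((np.abs(low_res) ** 2).sum(axis=0)) + 1e-12
    return low_res / rss

# Toy usage with random "k-space" data from 4 coils:
maps = sensitivity_maps_from_center(
    np.random.default_rng(0).normal(size=(4, 64, 64))
    + 1j * np.random.default_rng(1).normal(size=(4, 64, 64)),
    calib_radius=12)
print(maps.shape)
```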
Abstract:
Adiponectin is an adipokine present in the circulation at comparatively high concentrations and in several molecular-weight isoforms. For the first time, the distribution of these isoforms in serum and follicular fluid (FF) and their usefulness as biological markers for infertility investigations were studied. This in vitro study was conducted at a university-based hospital in 54 women undergoing intracytoplasmic sperm injection (ICSI). Oocytes were retrieved, fertilized in vitro using ICSI, and the resulting embryos transferred. Serum was collected immediately prior to oocyte retrieval. Adiponectin isoforms (high-molecular-weight (HMW), medium- and low-molecular-weight) were determined in serum and FF. Total adiponectin and the different isoform levels were compared with leptin and ovarian steroid concentrations. The main outcome measure was the adiponectin isoform profile in serum and FF. Adiponectin isoform distribution differed between serum and FF: the HMW fraction made up half of all adiponectin in serum but only 23.3% in FF. Total and HMW adiponectin in both serum and FF correlated negatively with body mass index and leptin concentration. No correlations were observed for total adiponectin or its isoforms with estradiol, progesterone, anti-Müllerian hormone, inhibin B, or the total follicle-stimulating hormone (FSH) dose administered during the ovarian stimulation phase. This study shows for the first time that adiponectin isoform distribution varies between the serum and FF compartments in gonadotropin-stimulated patients. A trend towards higher HMW adiponectin serum levels in successful ICSI cycles compared with implantation failures was observed; studies with larger patient groups are required to confirm this observation.
Abstract:
Under the influence of intelligence-led policing models, crime analysis methods have undergone important developments in recent years. Applications have been proposed in several fields of forensic science to exploit and manage various types of material evidence in a systematic and more efficient way. However, nothing has been suggested so far in the field of false identity documents. This study seeks to fill this gap by proposing a simple and general method for profiling false identity documents that aims to establish links based on their visual forensic characteristics. A sample of more than 200 false identity documents, including stolen blank French passports, counterfeited Iraqi driving licenses and falsified Bulgarian driving licenses, was gathered from nine Swiss police departments and integrated into an ad hoc database called ProfID. Links detected automatically and systematically through this database were exploited and analyzed to produce strategic and tactical intelligence useful to the fight against identity document fraud. The profiling and intelligence process established for these three types of false identity documents has confirmed its efficiency, with more than 30% of the documents being linked. Identity document fraud appears to be a structured and interregional form of criminality, against which the material and forensic links detected between false identity documents might serve as an investigative tool.
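The abstract does not detail how links are computed in ProfID, so the following sketch only illustrates the general profiling idea: documents whose sets of visual forensic characteristics overlap beyond a threshold are flagged as linked. The characteristics, the threshold and the similarity measure (Jaccard) are assumptions made for illustration.

```python
# Conceptual sketch only: ProfID's actual link criteria are not described
# here, so this just illustrates linking false documents whose visual
# forensic characteristics overlap above a chosen threshold.
from itertools import combinations

def jaccard(a, b):
    """Similarity between two sets of characteristics."""
    return len(a & b) / len(a | b) if (a | b) else 0.0

def detect_links(profiles, threshold=0.6):
    """profiles: dict mapping document id -> set of characteristics.
    Returns pairs of documents considered linked, with their similarity."""
    return [(d1, d2, round(jaccard(profiles[d1], profiles[d2]), 2))
            for d1, d2 in combinations(profiles, 2)
            if jaccard(profiles[d1], profiles[d2]) >= threshold]

profiles = {
    "doc_01": {"offset printing", "misaligned rainbow", "font_A"},
    "doc_02": {"offset printing", "misaligned rainbow", "font_A", "dull UV"},
    "doc_03": {"inkjet printing", "font_B"},
}
print(detect_links(profiles))   # doc_01 and doc_02 come out as linked
```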
Abstract:
This contribution introduces Data Envelopment Analysis (DEA), a performance measurement technique. DEA helps decision makers in the following ways: (1) by calculating an efficiency score, it indicates whether a firm is efficient or has room for improvement; (2) by setting target values for inputs and outputs, it calculates by how much inputs must be decreased or outputs increased for the firm to become efficient; (3) by identifying the nature of returns to scale, it indicates whether a firm has to decrease or increase its scale (or size) in order to minimise average total cost; (4) by identifying a set of benchmarks, it specifies which other firms' processes a firm should analyse in order to improve its own practices. This contribution presents the essentials of DEA, alongside a case study that gives an intuitive understanding of its application. It also introduces Win4DEAP, a software package that conducts efficiency analysis based on the DEA methodology. The methodological background of DEA is presented for more demanding readers. Finally, four advanced topics of DEA are treated: adjustment to the environment, preferences, sensitivity analysis and time series data.
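To make the efficiency-score idea concrete, the sketch below solves a small input-oriented, constant-returns DEA (CCR) envelopment problem as a linear program. It is not Win4DEAP and the data are invented; a score of 1 marks an efficient firm, lower values indicate room for improvement.

```python
# Minimal sketch of an input-oriented, constant-returns DEA (CCR) model
# solved as a linear program; Win4DEAP is not used here and the data are
# made up for illustration.
import numpy as np
from scipy.optimize import linprog

def dea_efficiency(X, Y, o):
    """Efficiency score of unit o. X: (m inputs, n units), Y: (s outputs, n units)."""
    m, n = X.shape
    s = Y.shape[0]
    c = np.r_[1.0, np.zeros(n)]                 # minimise theta
    # inputs:  sum_j lambda_j * x_ij - theta * x_io <= 0
    A_in = np.hstack([-X[:, [o]], X])
    # outputs: -sum_j lambda_j * y_rj <= -y_ro
    A_out = np.hstack([np.zeros((s, 1)), -Y])
    A_ub = np.vstack([A_in, A_out])
    b_ub = np.r_[np.zeros(m), -Y[:, o]]
    res = linprog(c, A_ub=A_ub, b_ub=b_ub,
                  bounds=[(None, None)] + [(0, None)] * n)
    return res.fun                              # theta in (0, 1]

X = np.array([[20., 30., 40., 20.],             # e.g. staff
              [150., 200., 300., 250.]])        # e.g. budget
Y = np.array([[100., 120., 200., 90.]])         # e.g. output produced
print([round(dea_efficiency(X, Y, o), 3) for o in range(X.shape[1])])
```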
Abstract:
Work carried out as part of the "Case Mix" study conducted by the Institut universitaire de médecine sociale et préventive de Lausanne and the Service de la santé publique et de la planification sanitaire of the canton of Vaud, in collaboration with the cantons of Bern, Fribourg, Geneva, Jura, Neuchâtel, Solothurn, Ticino and Valais.
Abstract:
Purpose: To describe a novel in silico method to gather and analyze data from high-throughput heterogeneous experimental procedures, i.e. gene and protein expression arrays. Methods: Each microarray is assigned to a database that handles common data (names, symbols, antibody codes, probe IDs, etc.). Links between items of information are generated automatically from knowledge obtained from freely accessible databases (NCBI, SwissProt, etc.). Queries can be made from any point of entry and the displayed result is fully customizable. Results: The initial database was loaded with two sets of data: a first set originating from an Affymetrix-based retinal profiling performed in an RPE65 knock-out mouse model of Leber's congenital amaurosis, and a second set generated from a Kinexus microarray experiment performed on retinas from the same mouse model. Queries display wild-type versus knock-out expression at several time points for both genes and proteins. Conclusions: This freely accessible database allows easy consultation of the data and facilitates data mining by integrating experimental data and biological pathways.
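A hypothetical sketch of the linking idea follows: gene-level (Affymetrix-style) and protein-level (Kinexus-style) results are joined on a shared gene symbol so that a query from either side returns both. The column names and values are placeholders, not the database's actual schema.

```python
# Placeholder schema, not the database's actual one: gene-level
# (Affymetrix-style) and protein-level (Kinexus-style) results are
# joined on a shared gene symbol so either side can be queried.
import pandas as pd

genes = pd.DataFrame({
    "probe_id": ["probe_001", "probe_002"],
    "symbol":   ["Rpe65", "Rho"],
    "fold_change_ko_vs_wt": [-8.0, -1.5],      # invented values
})
proteins = pd.DataFrame({
    "antibody_code": ["ab_A", "ab_B"],
    "symbol":        ["Rho", "Gnat1"],
    "fold_change_ko_vs_wt": [-2.0, -1.8],      # invented values
})

linked = genes.merge(proteins, on="symbol", how="outer",
                     suffixes=("_gene", "_protein"))
print(linked[["symbol", "fold_change_ko_vs_wt_gene",
              "fold_change_ko_vs_wt_protein"]])
```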
Abstract:
Recent findings suggest an association between exposure to cleaning products and respiratory dysfunctions, including asthma. However, little information is available about the quantitative airborne exposure of professional cleaners to volatile organic compounds deriving from cleaning products. During the first phase of the study, a systematic review of cleaning products was performed, and safety data sheets were reviewed to identify the most frequently added volatile organic compounds. Professional cleaning products turned out to be complex mixtures of different components (3.5 ± 2.8 compounds per product), and more than 130 chemical substances listed in the safety data sheets were identified in 105 products. The main groups of chemicals were fragrances, glycol ethers, surfactants and solvents, and to a lesser extent phosphates, salts, detergents, pH stabilizers, acids and bases. Up to 75% of the products contained substances labeled as irritant (Xi), 64% as harmful (Xn) and 28% as corrosive (C). Hazards for the eyes (59%), the skin (50%) and by ingestion (60%) were the most frequently reported. Monoethanolamine, a strong irritant known to be involved in sensitizing mechanisms and allergic reactions, is frequently added to cleaning products. Determining monoethanolamine in air has traditionally been difficult, and the available air sampling and analysis methods were poorly suited to personal occupational exposure assessments. A convenient method was therefore developed, with air sampling on impregnated glass fiber filters followed by one-step desorption, gas chromatography and nitrogen-phosphorus selective detection. An exposure assessment was then conducted in the cleaning sector to determine airborne concentrations of monoethanolamine, glycol ethers and benzyl alcohol during different cleaning tasks performed by professional cleaning workers in different companies, and to determine background air concentrations of formaldehyde, a known indoor air contaminant. The occupational exposure study was carried out in 12 cleaning companies, and personal air samples were collected for monoethanolamine (n=68), glycol ethers (n=79), benzyl alcohol (n=15) and formaldehyde (n=45). Except for ethylene glycol mono-n-butyl ether, all measured air concentrations were far below (<1/10) the Swiss eight-hour occupational exposure limits; for butoxypropanol and benzyl alcohol no occupational exposure limits were available. Although detected only once, ethylene glycol mono-n-butyl ether air concentrations (n=4) were high (49.5 mg/m3 to 58.7 mg/m3), hovering at the Swiss occupational exposure limit (49 mg/m3). Background air concentrations showed no presence of monoethanolamine, whereas glycol ethers were often present and formaldehyde was universally detected. Exposures were influenced by the amount of monoethanolamine in the cleaning product, by cross ventilation and by spraying. During the last phase of the study, the collected data were used to test an existing exposure modeling tool. The exposure estimate of this so-called Bayesian tool converged towards the measured exposure range as more measured air concentrations were added, a behaviour best described by an inverse second-order equation. The results suggest that the Bayesian tool is not well suited to predicting low exposures and should also be tested with other datasets describing higher exposures.
Low exposures to different chemical sensitizers and irritants should be investigated further to better understand the development of respiratory disorders in cleaning workers. Prevention measures should focus in particular on the incorrect use of cleaning products, to avoid high air concentrations at the exposure limits.
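The abstract states that the tool's convergence was best described by an inverse second-order equation but does not give its form; the sketch below assumes y = a + b/n + c/n², where n is the number of measured concentrations supplied to the tool, and fits it to invented data.

```python
# Sketch only: the exact form of the "inverse 2nd order equation" is not
# given in the abstract, so a plausible y = a + b/n + c/n**2 is assumed,
# where n is the number of measured air concentrations fed to the tool.
# The data points are invented for illustration.
import numpy as np
from scipy.optimize import curve_fit

def inverse_second_order(n, a, b, c):
    return a + b / n + c / n ** 2

n_measurements = np.array([1, 2, 4, 8, 16, 32], dtype=float)
deviation = np.array([4.0, 2.2, 1.1, 0.6, 0.35, 0.2])   # model vs measured

params, _ = curve_fit(inverse_second_order, n_measurements, deviation)
print(dict(zip("abc", params.round(3))))
```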
Abstract:
Measuring school efficiency is a challenging task. First, a performance measurement technique has to be selected. Within Data Envelopment Analysis (DEA), one such technique, alternative models have been developed to deal with environmental variables, and the majority of these models lead to diverging results. Second, the choice of input and output variables to be included in the efficiency analysis is often dictated by data availability, and the choice of variables remains an issue even when data are available. As a result, the choice of technique, model and variables is probably, and ultimately, a political judgement. Multi-criteria decision analysis methods can help decision makers select the most suitable model. The number of selection criteria should remain parsimonious and not be oriented towards the results of the models, in order to avoid opportunistic behaviour. The selection criteria should also be backed by the literature or by an expert group. Once the most suitable model is identified, the principle of permanence of methods should be applied in order to avoid a change of practices over time. Within DEA, the two-stage model developed by Ray (1991) is the most convincing model allowing for an environmental adjustment: an efficiency analysis is conducted with DEA, followed by an econometric analysis to explain the efficiency scores. An environmental variable of particular interest, tested in this thesis, is whether a school operates on multiple sites. Results show that being located on more than one site has a negative influence on efficiency. A likely way to mitigate this negative influence would be to improve the use of ICT in school management and teaching. The planning of new schools should also consider the advantages of a single site, which allows a critical size in terms of pupils and teachers to be reached. The fact that underprivileged pupils perform worse than privileged pupils has been public knowledge since Coleman et al. (1966). As a result, underprivileged pupils have a negative influence on school efficiency; this is confirmed by this thesis for the first time in Switzerland. Several countries have developed priority education policies to compensate for the negative impact of disadvantaged socioeconomic status on school performance. These policies have failed. As a result, other actions need to be taken. To define these actions, one has to identify the social-class differences that explain why disadvantaged children underperform. Childrearing and literacy practices, health characteristics, housing stability and economic security all influence pupil achievement. Rather than allocating more resources to schools, policymakers should therefore focus on related social policies; for instance, they could define pre-school, family, health, housing and benefits policies to improve the conditions of disadvantaged children.
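As a minimal illustration of the second stage in a Ray (1991)-style two-stage analysis, the sketch below regresses DEA efficiency scores on a multi-site dummy; all numbers are invented and a simple OLS fit stands in for the econometric analysis actually used in the thesis.

```python
# Minimal sketch of the second stage in a Ray (1991)-style two-stage
# analysis: DEA efficiency scores regressed on environmental variables,
# here a dummy for operating on multiple sites (all numbers invented).
import numpy as np

scores     = np.array([0.92, 0.81, 1.00, 0.74, 0.88, 0.69])  # DEA stage 1
multi_site = np.array([0,    1,    0,    1,    0,    1   ])  # 1 = several sites
X = np.column_stack([np.ones_like(scores), multi_site])

beta, *_ = np.linalg.lstsq(X, scores, rcond=None)
print(f"intercept = {beta[0]:.3f}, multi-site effect = {beta[1]:.3f}")
```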
Abstract:
Since the early days of functional magnetic resonance imaging (fMRI), retinotopic mapping emerged as a powerful and widely accepted tool, allowing the identification of individual visual cortical fields and furthering the study of visual processing. In contrast, tonotopic mapping in auditory cortex proved more challenging, primarily because of the smaller size of auditory cortical fields. The spatial resolution capabilities of fMRI have since advanced, and recent reports from our labs and several others demonstrate the reliability of tonotopic mapping in human auditory cortex. Here we review the wide range of stimulus procedures and analysis methods that have been used to successfully map tonotopy in human auditory cortex. We point out that recent studies provide a remarkably consistent view of human tonotopic organisation, although the interpretation of the maps continues to vary. In particular, there remains controversy over the exact orientation of the primary gradients with respect to Heschl's gyrus, which leads to different predictions about the location of human A1, R, and surrounding fields. We discuss the development of this debate and argue that the literature is converging towards an interpretation in which the core fields A1 and R fold across the rostral and caudal banks of Heschl's gyrus, with tonotopic gradients laid out in a distinctive V-shaped manner. This suggests an organisation that is largely homologous with that of non-human primates. This article is part of a Special Issue entitled Human Auditory Neuroimaging.
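One widely used analysis step behind such maps is to assign each voxel the stimulus frequency that evokes its largest response (its best frequency). The sketch below illustrates this with random data standing in for per-frequency activation maps; it is not any specific study's pipeline.

```python
# Sketch of one common tonotopic analysis: assign each voxel the stimulus
# frequency producing its largest response ("best frequency"). The response
# array here is random and stands in for per-frequency activation maps.
import numpy as np

freqs_hz = np.array([250, 500, 1000, 2000, 4000, 8000])
rng = np.random.default_rng(1)
responses = rng.random((len(freqs_hz), 30, 30))          # (frequency, x, y) betas

best_freq_map = freqs_hz[np.argmax(responses, axis=0)]   # Hz per voxel
print(best_freq_map.shape, best_freq_map[:2, :2])
```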
Abstract:
Genome-wide association studies have been instrumental in identifying genetic variants associated with complex traits such as human disease or gene expression phenotypes. It has been proposed that extending existing analysis methods to consider interactions between pairs of loci may uncover additional genetic effects. However, the large number of possible two-marker tests presents significant computational and statistical challenges. Although several strategies to detect epistasis effects have been proposed and tested for specific phenotypes, so far there has been no systematic attempt to compare their performance using real data. We made use of thousands of gene expression traits from linkage and eQTL studies to compare the performance of different strategies. We found that using information from marginal associations between markers and phenotypes to detect epistatic effects yielded a lower false discovery rate (FDR) than a strategy relying solely on biological annotation in yeast, whereas results from human data were inconclusive. For future studies aiming to discover epistatic effects, we recommend incorporating information about marginal associations between SNPs and phenotypes rather than relying solely on biological annotation. Improved methods to discover epistatic effects will result in a more complete understanding of complex genetic effects.
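A sketch of the recommended strategy might look as follows: keep only SNPs showing some marginal association with the trait, test pairwise interactions among them with a linear model containing an interaction term, and control the FDR with Benjamini-Hochberg. Genotypes, the trait and all thresholds below are simulated or arbitrary, purely for illustration.

```python
# Simulated illustration of the recommended two-step strategy: marginal
# filtering of SNPs, pairwise interaction tests via a linear model, and
# Benjamini-Hochberg FDR control. Thresholds and data are arbitrary.
import numpy as np
from itertools import combinations
from scipy import stats

rng = np.random.default_rng(0)
n, p = 500, 20
geno = rng.integers(0, 3, size=(n, p)).astype(float)        # 0/1/2 genotypes
trait = 0.5 * geno[:, 0] * geno[:, 1] + rng.normal(size=n)  # epistatic pair 0-1

# Step 1: keep SNPs with some marginal association with the trait.
marginal_p = np.array([stats.pearsonr(geno[:, j], trait)[1] for j in range(p)])
candidates = np.where(marginal_p < 0.1)[0]

# Step 2: test the interaction term for every candidate pair.
pairs, pvals = [], []
for i, j in combinations(candidates, 2):
    X = np.column_stack([np.ones(n), geno[:, i], geno[:, j],
                         geno[:, i] * geno[:, j]])
    beta, *_ = np.linalg.lstsq(X, trait, rcond=None)
    resid = trait - X @ beta
    sigma2 = (resid ** 2).sum() / (n - X.shape[1])
    se = np.sqrt(sigma2 * np.linalg.inv(X.T @ X)[3, 3])
    pvals.append(2 * stats.t.sf(abs(beta[3] / se), df=n - X.shape[1]))
    pairs.append((i, j))

# Step 3: Benjamini-Hochberg step-up procedure at FDR q = 0.05.
pvals = np.array(pvals)
order = np.argsort(pvals)
thresh = 0.05 * np.arange(1, len(pvals) + 1) / len(pvals)
below = np.where(pvals[order] <= thresh)[0]
n_reject = below.max() + 1 if below.size else 0
print([pairs[k] for k in order[:n_reject]])
```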
Abstract:
Amplified fragment length polymorphisms (AFLPs) provide a cheap and efficient protocol for generating large sets of genetic markers. This technique has become increasingly used during the last decade in various fields of biology, including population genomics, phylogeography, and genome mapping. Here, we present RawGeno, an R library dedicated to the automated scoring of AFLPs (i.e., the coding of electropherogram signals into ready-to-use datasets). Our program includes a complete suite of tools for binning, editing, visualizing, and exporting results obtained from AFLP experiments. RawGeno can be used either from the command line, within program analysis routines, or through a user-friendly graphical user interface. We describe the whole RawGeno pipeline along with recommendations for (a) setting up the analysis of electropherograms in combination with PeakScanner, a program freely distributed by Applied Biosystems; (b) performing quality checks; (c) defining bins and proceeding to scoring; (d) filtering nonoptimal bins; and (e) exporting results in different formats.
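RawGeno itself is an R package, so the Python fragment below is only a conceptual stand-in for its binning and scoring steps, not its actual algorithm or API: peaks from several electropherograms are grouped into shared size bins and converted into a presence/absence matrix.

```python
# Conceptual illustration only (RawGeno is an R package with its own
# algorithms): peaks detected in several electropherograms are grouped
# into shared bins by fragment size, then scored as presence/absence.
import numpy as np

def bin_and_score(peaks_per_sample, tolerance_bp=1.0):
    """peaks_per_sample: dict sample -> array of fragment sizes (bp).
    Returns (bin centres, 0/1 matrix with one row per sample)."""
    all_sizes = np.sort(np.concatenate(list(peaks_per_sample.values())))
    bins = []                                   # greedy binning by proximity
    for s in all_sizes:
        if bins and s - bins[-1][-1] <= tolerance_bp:
            bins[-1].append(s)
        else:
            bins.append([s])
    centres = np.array([np.mean(b) for b in bins])
    score = np.zeros((len(peaks_per_sample), len(centres)), dtype=int)
    for i, sizes in enumerate(peaks_per_sample.values()):
        for s in sizes:
            score[i, np.argmin(np.abs(centres - s))] = 1
    return centres, score

peaks = {"sample1": np.array([101.2, 156.8, 240.1]),
         "sample2": np.array([101.5, 240.3]),
         "sample3": np.array([156.6])}
print(bin_and_score(peaks))
```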
Abstract:
INTRODUCTION: Triple-negative breast cancers (TNBCs) are characterised by the lack of expression of hormone receptors and epidermal growth factor receptor 2 (HER-2). As they frequently express epidermal growth factor receptors (EGFRs), anti-EGFR therapies are currently being assessed for this breast cancer subtype as an alternative to treatments that target HER-2 or hormone receptors. Recently, EGFR-activating mutations have been reported in TNBC specimens from an East Asian population. Because variations in the frequency of EGFR-activating mutations between East Asian and other patients with lung cancer have been described, we evaluated the EGFR mutational profile in tumour samples from European patients with TNBC. METHODS: We selected from a DNA tumour bank 229 DNA samples isolated from frozen, histologically proven and macrodissected invasive TNBC specimens from European patients. PCR and high-resolution melting (HRM) analyses were used to detect mutations in exons 19 and 21 of EGFR. The results were then confirmed by bidirectional sequencing of all samples. RESULTS: HRM analysis allowed the detection of three EGFR exon 21 mutations, but no exon 19 mutations. There was 100% concordance between the HRM and sequencing results. The three patients with abnormal EGFR exon 21 HRM profiles harboured the rare R836R SNP, but no EGFR-activating mutation was identified. CONCLUSIONS: This study highlights variations in the prevalence of EGFR mutations in TNBC. These variations have crucial implications for the design of clinical trials involving anti-EGFR treatments in TNBC and for identifying the potential target population.
Abstract:
The Swiss National Science Foundation issued calls for National Centers of Competence in Research (NCCR) for the first time in 1999 and again in 2004. Together, these announcements concerned all disciplines and led to 126 preproposals, put forward by 2,134 male and female researchers. It can be assumed that this operation mobilised Swiss researchers who regarded themselves as particularly well qualified to conduct high-level research in their field. The article uses network analysis and regression analysis methods to examine to what extent women had a lower success rate than men in the two selection rounds because of their sex. On the whole, the findings attest to the gender neutrality of the National Science Foundation's selection procedures. However, they also confirm the well-known fact that women scientists are less represented in the higher echelons of academia and are concentrated in the social sciences and humanities, and they show that this concentration reduces women's chances of success in scientific competition. The article also shows that unequal gender-specific success rates prior to the NCCR funding contest play a fairly significant role.