957 results for Instrument Development
Abstract:
A questionnaire assessing children's satisfaction with their hospital stay was developed and tested with 136 children (aged 6-12 years) at two Swiss hospital sites. Three out of four children were satisfied overall with their hospital stay. Their relationships with the professional medical staff, the explanations they received, the games they played, and the environment all received positive evaluations. The most critical points were pain, fear, and the absence of relatives. Ninety percent of the children appreciated that their opinions were sought. These results reinforce the importance of having questionnaires available so that children's opinions can be taken into account to enhance the quality of care.
Abstract:
An efficient high-resolution three-dimensional (3-D) seismic reflection system for small-scale targets in lacustrine settings was developed. In Lake Geneva, near the city of Lausanne, Switzerland, past high-resolution two-dimensional (2-D) investigations revealed a complex fault zone (the Paudèze thrust zone), which was subsequently chosen for testing our system. Observed structures include a thin (<40 m) layer of subhorizontal Quaternary sediments that unconformably overlie southeast-dipping Tertiary Molasse beds, and the Paudèze thrust zone itself, which separates the Plateau Molasse and Subalpine Molasse units. Two complete 3-D surveys have been conducted over this test site, covering an area of about 1 km².
In 1999, a pilot survey (Survey I), comprising 80 profiles, was carried out in 8 days with a single-streamer configuration. In 2001, a second survey (Survey II) used a newly developed three-streamer system with optimized design parameters, which provided an exceptionally high-quality data set of 180 common midpoint (CMP) lines in 9 days. The main improvements include a navigation and shot-triggering system with in-house navigation software that automatically fires the gun, in combination with real-time control of navigation quality using differential GPS (dGPS) onboard and a reference base near the lake shore. Shots were triggered at 5-m intervals with a maximum non-cumulative error of 25 cm. Whereas the single 48-channel streamer system of Survey I required extrapolation of receiver positions from the boat position, for Survey II they could be accurately calculated (error <20 cm) with the aid of three additional dGPS antennas mounted on rafts attached to the end of each of the 24-channel streamers. Towed at a distance of 75 m behind the vessel, these allow the determination of streamer feathering due to cross-line currents or small course variations. Furthermore, two retractable booms hold the three streamers at a distance of 7.5 m from each other, which is the same distance as the sail line interval of Survey I. With a receiver spacing of 2.5 m, the bin dimensions of the 3-D data of Survey II are 1.25 m in the in-line direction and 3.75 m in the cross-line direction. The greater cross-line versus in-line spacing is justified by the known structural trend of the fault zone, which is perpendicular to the in-line direction. The data from Survey I showed some reflection discontinuity as a result of insufficiently accurate navigation and positioning and the resulting binning errors. Aliasing observed in the 3-D migration was due to insufficient lateral sampling combined with the relatively high-frequency (<2000 Hz) content of the water gun source (operated at 140 bars and 0.3 m depth). These results motivated the use of a double-chamber, bubble-canceling air gun for Survey II. A 15/15 Mini G.I. air gun, operated at 80 bars and 1 m depth, proved to be better adapted to imaging the complexly faulted target area, which has reflectors dipping up to 30°. Although its frequencies do not exceed 650 Hz, this air gun combines penetration of non-aliased signal to depths of 300 m below the water bottom (versus 145 m for the water gun) with a maximum vertical resolution of 1.1 m. While Survey I was shot in patches of alternating directions, the optimized surveying time of the new three-streamer system allowed acquisition in a parallel geometry, which is preferable when using an asymmetric configuration (single source and receiver array); otherwise, the resulting stacks differ for opposite shooting directions. However, the shorter streamer configuration of Survey II reduced the nominal fold from 12 to 6. A conventional 3-D processing flow was adapted to the high sampling rates and complemented by two computer programs that convert the unconventional navigation data to industry-standard formats. Processing included trace editing, geometry assignment, bin harmonization (to compensate for uneven fold due to boat/streamer drift), spherical divergence correction, bandpass filtering, velocity analysis, 3-D DMO correction, stack, and 3-D time migration.
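A minimal sketch of the acquisition-geometry arithmetic behind the figures quoted above (bin sizes from the receiver and streamer spacing, nominal fold from the channel count and shot interval); the fold formula assumes standard end-on, single-source marine CMP geometry and is an illustration rather than a calculation taken from the thesis:

```python
receiver_spacing = 2.5      # m, group interval along each streamer
streamer_separation = 7.5   # m, cross-line distance between the three streamers
shot_interval = 5.0         # m, shots triggered every 5 m

# CMP bins are half the corresponding source-receiver spacings
inline_bin = receiver_spacing / 2         # 1.25 m
crossline_bin = streamer_separation / 2   # 3.75 m

def nominal_fold(n_channels, receiver_spacing, shot_interval):
    """Nominal CMP fold for end-on marine acquisition."""
    return n_channels * receiver_spacing / (2 * shot_interval)

fold_survey_1 = nominal_fold(48, receiver_spacing, shot_interval)  # 12 (single 48-channel streamer)
fold_survey_2 = nominal_fold(24, receiver_spacing, shot_interval)  # 6  (each 24-channel streamer)
```

With these parameters the sketch reproduces the values given in the abstract: 1.25 m by 3.75 m bins and a nominal fold that drops from 12 to 6 with the shorter streamers.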
A detailed semblance velocity analysis was performed on the 12-fold data set for every second in-line and every 50th CMP, i.e. on a total of 600 spectra. According to this velocity analysis, interval velocities range from 1450-1650 m/s for the unconsolidated sediments and from 1650-3000 m/s for the consolidated sediments. Delineation of several horizons and fault surfaces reveals the potential for small-scale geologic and tectonic interpretation in three dimensions. Five major seismic facies and their detailed 3-D geometries can be distinguished in vertical and horizontal sections: lacustrine sediments (Holocene), glaciolacustrine sediments (Pleistocene), Plateau Molasse, Subalpine Molasse within the thrust fault zone, and Subalpine Molasse south of that zone. Dips of beds within the Plateau and Subalpine Molasse are ~8° and ~20°, respectively. Within the fault zone, many highly deformed structures with dips of around 30° are visible. Preliminary tests with 3-D preserved-amplitude prestack depth migration demonstrate that the excellent data quality of Survey II allows the application of such sophisticated techniques even to high-resolution seismic surveys. In general, the adaptation of the 3-D marine seismic reflection method, which to date has been used almost exclusively by the oil exploration industry, to a smaller geographical as well as financial scale has helped pave the way for applying this technique to environmental and engineering purposes.
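As a rough illustration of the semblance velocity analysis mentioned above, here is a Python/NumPy sketch of the standard hyperbolic semblance computation; it makes the usual normal-moveout assumptions and is not the processing software actually used for these surveys (the gather and sampling values below are placeholders):

```python
import numpy as np

def semblance_spectrum(gather, offsets, dt, velocities, window=11):
    """Hyperbolic (NMO) semblance for one CMP gather.

    gather:     (n_samples, n_traces) array of trace amplitudes
    offsets:    (n_traces,) source-receiver offsets in metres
    dt:         sample interval in seconds
    velocities: trial stacking velocities in m/s
    """
    n_samples, n_traces = gather.shape
    t0 = np.arange(n_samples) * dt
    cols = np.arange(n_traces)
    spectrum = np.zeros((n_samples, len(velocities)))
    smoother = np.ones(window) / window
    for iv, v in enumerate(velocities):
        # two-way time along the moveout hyperbola t(x) = sqrt(t0^2 + (x/v)^2)
        t = np.sqrt(t0[:, None] ** 2 + (offsets[None, :] / v) ** 2)
        idx = np.rint(t / dt).astype(int)
        inside = idx < n_samples
        aligned = np.where(inside, gather[np.clip(idx, 0, n_samples - 1), cols], 0.0)
        num = aligned.sum(axis=1) ** 2
        den = n_traces * (aligned ** 2).sum(axis=1) + 1e-12
        # average numerator and denominator over a short time window (boxcar)
        spectrum[:, iv] = np.convolve(num, smoother, "same") / np.convolve(den, smoother, "same")
    return spectrum

# Trial velocities spanning the range reported above (unconsolidated to consolidated)
velocities = np.arange(1400.0, 3101.0, 25.0)

# Example: a synthetic 24-trace gather with 2.5 m receiver spacing (placeholder data)
rng = np.random.default_rng(0)
offsets = 10.0 + 2.5 * np.arange(24)
gather = rng.normal(size=(1000, 24))
spec = semblance_spectrum(gather, offsets, dt=0.001, velocities=velocities)
```

Picking the velocity of maximum semblance at each time gives the stacking-velocity function from which interval velocities such as those quoted above can be derived, for example via the Dix equation.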
Seismic reflection is a subsurface investigation method with very high resolving power. It consists of sending vibrations into the ground and recording the waves that are reflected off geological discontinuities at various depths and then travel back up to the surface, where they are recorded. The signals collected in this way not only provide information on the nature and geometry of the layers present, but also allow a geological interpretation of the subsurface. For sedimentary rocks, for example, seismic reflection profiles make it possible to determine their mode of deposition, any deformation or faulting, and thus their tectonic history. Seismic reflection is the principal method of petroleum exploration. For a long time, seismic reflection data were acquired along individual profiles, which provide a two-dimensional image of the subsurface. The images obtained in this way are only partially accurate, since they do not take into account the three-dimensional nature of geological structures. Over the last few decades, three-dimensional (3-D) seismics has brought new life to the study of the subsurface. While it is now fully mastered for imaging large geological structures both onshore and offshore, its adaptation to the lacustrine or fluvial scale has so far been the subject of only a few studies. This thesis work consisted of developing a seismic acquisition system similar to the one used for offshore petroleum prospecting, but adapted to lakes. It is therefore smaller, lighter to deploy and, above all, delivers final images of much higher resolution. Whereas the petroleum industry is often limited to a resolution on the order of ten metres, the instrument developed in this work can resolve details on the order of one metre. The new system relies on recording seismic reflections simultaneously on three seismic cables (streamers) of 24 channels each. To obtain 3-D data, it is essential to position the instruments on the water (the seismic source and receivers) with great precision. Software was developed specifically to control navigation and trigger the shots of the seismic source, using differential GPS (dGPS) receivers on the boat and at the end of each streamer. This makes it possible to position the instruments with an accuracy on the order of 20 cm. To test our system, we chose an area of Lake Geneva, near the city of Lausanne, crossed by the Paudèze fault, which separates the Plateau Molasse and Subalpine Molasse units. Two 3-D seismic surveys were carried out there over an area of about 1 km². The seismic recordings were then processed to turn them into interpretable images. We applied a 3-D processing sequence specially adapted to our data, in particular with respect to positioning. After processing, the data reveal several main seismic facies corresponding notably to the lacustrine sediments (Holocene), the glaciolacustrine sediments (Pleistocene), the Plateau Molasse, the Subalpine Molasse of the fault zone and the Subalpine Molasse south of that zone. The detailed 3-D geometry of the faults is visible on vertical and horizontal seismic sections. The excellent quality of the data and the interpretation of several horizons and fault surfaces show the potential of this technique for small-scale three-dimensional investigations, opening the way to its application in environmental and civil engineering fields.
Abstract:
Background: Ethical conflicts are arising as a result of the growing complexity of clinical care, coupled with technological advances. Most studies that have developed instruments for measuring ethical conflict base their measures on the variables "frequency" and "degree of conflict". In our view, however, these variables are insufficient for explaining the root of ethical conflicts. Consequently, the present study formulates a conceptual model that also includes the variable "exposure to conflict", as well as considering six "types of ethical conflict". An instrument was then designed to measure the ethical conflicts experienced by nurses who work with critical care patients. The paper describes the development process and validation of this instrument, the Ethical Conflict in Nursing Questionnaire Critical Care Version (ECNQ-CCV). Methods: The sample comprised 205 nursing professionals from the critical care units of two hospitals in Barcelona (Spain). The ECNQ-CCV presents 19 nursing scenarios with the potential to produce ethical conflict in the critical care setting. Exposure to ethical conflict was assessed by means of the Index of Exposure to Ethical Conflict (IEEC), a specific index developed to provide a reference value for each respondent by combining the intensity and frequency of occurrence of each scenario featured in the ECNQ-CCV. Following content validity, construct validity was assessed by means of Exploratory Factor Analysis (EFA), while Cronbach's alpha was used to evaluate the instrument's reliability. All analyses were performed using the statistical software PASW v19. Results: Cronbach's alpha for the ECNQ-CCV as a whole was 0.882, which is higher than the values reported for certain other related instruments. The EFA suggested a unidimensional structure, with one component accounting for 33.41% of the explained variance. Conclusions: The ECNQ-CCV is shown to be a valid and reliable instrument for use in critical care units. Its structure is such that the four variables on which our model of ethical conflict is based may be studied separately or in combination. The critical care nurses in this sample present moderate levels of exposure to ethical conflict. This study represents the first evaluation of the ECNQ-CCV.
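As a rough illustration of the reliability step described in the Methods, the sketch below computes Cronbach's alpha over 19 scenario items in Python/NumPy, together with a hypothetical per-respondent exposure index that simply multiplies intensity by frequency; the exact weighting used in the published IEEC is not stated in the abstract, so that part is an assumption, and the data are simulated:

```python
import numpy as np

def cronbach_alpha(items):
    """Cronbach's alpha for an (n_respondents, n_items) array of item scores."""
    items = np.asarray(items, dtype=float)
    k = items.shape[1]
    item_variances = items.var(axis=0, ddof=1).sum()
    total_variance = items.sum(axis=1).var(ddof=1)
    return k / (k - 1) * (1 - item_variances / total_variance)

def exposure_index(intensity, frequency):
    """Hypothetical exposure index: mean of intensity x frequency per scenario.
    The published IEEC combines the same two ingredients, but its exact
    weighting is not given in the abstract."""
    return (np.asarray(intensity) * np.asarray(frequency)).mean(axis=1)

# Simulated responses from 205 nurses on 19 scenarios (placeholder data)
rng = np.random.default_rng(0)
intensity = rng.integers(0, 5, size=(205, 19))
frequency = rng.integers(0, 5, size=(205, 19))

alpha = cronbach_alpha(intensity * frequency)
ieec_like = exposure_index(intensity, frequency)
```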
Abstract:
BACKGROUND: The assessment of Health Related Quality of Life (HRQL) is important in people with dementia as it could influence their care and support plan. Many studies on dementia do not specifically set out to measure dementia-specific HRQL but do include related items. The aim of this study is to explore the distribution of HRQL by functional and socio-demographic variables in a population-based setting. METHODS: Domains of DEMQOL's conceptual framework were mapped onto the Cambridge City over-75s Cohort (CC75C) Study. HRQL was estimated in 110 participants aged 80+ years with a confirmed diagnosis of dementia of mild/moderate severity. Acceptability (missing values and normality of the total score), internal consistency (Cronbach's alpha), and convergent, discriminant and known-group validity (Spearman correlations, Wilcoxon-Mann-Whitney and Kruskal-Wallis tests) were assessed. The distribution of HRQL by socio-demographic and functional descriptors was explored. RESULTS: The HRQL score ranged from 0 to 16 and showed an internal consistency (Cronbach's alpha) of 0.74. Validity of the instrument was found to be acceptable. Men had higher HRQL than women. Marital status had a greater effect on HRQL for men than it did for women. The HRQL of those with good self-reported health was higher than that of those with fair/poor self-reported health. HRQL was not associated with dementia severity. CONCLUSIONS: To our knowledge this is the first study to examine the distribution of dementia-specific HRQL in a population sample of the very old. We have mapped an existing conceptual framework of dementia-specific HRQL onto an existing study and demonstrated the feasibility of this approach. Findings from this study suggest that, although much emphasis is placed on dementia severity, characteristics such as gender should also be taken into account when assessing HRQL and implementing programmes to improve it.
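A minimal sketch, with placeholder data rather than the CC75C data, of the convergent-validity and known-group checks named above, using SciPy; variable names are illustrative only:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
hrql = rng.integers(0, 17, size=110)               # total HRQL score, range 0-16
self_rated_health = rng.integers(0, 2, size=110)   # 0 = fair/poor, 1 = good
severity = rng.integers(0, 3, size=110)            # placeholder 3-level dementia severity
functional_score = rng.integers(0, 20, size=110)   # a related measure for convergent validity

# Convergent validity: monotonic association with a related measure
rho, p_rho = stats.spearmanr(hrql, functional_score)

# Known-group differences, two groups: Wilcoxon-Mann-Whitney
u, p_u = stats.mannwhitneyu(hrql[self_rated_health == 1], hrql[self_rated_health == 0])

# Known-group differences, three or more groups: Kruskal-Wallis
h, p_h = stats.kruskal(*(hrql[severity == level] for level in np.unique(severity)))
```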
Abstract:
Electric keyboard instruments and computer-aided music-making are generally based on the piano keyboard, which was developed for a tuning system that is no longer in use. Alternative keyboard layouts offer easier playing, faster learning, new ways to play and better ergonomics. This thesis explores the development of keyboard instruments and tunings, as well as different keyboard layouts. The work is preliminary research for an electric keyboard instrument to be implemented later on.
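As generic background on what a tuning system specifies in practice (not material from this thesis), the sketch below computes key frequencies for equal temperaments and for a simple just-intonation major scale:

```python
A4_HZ = 440.0  # reference pitch

def et_freq(steps_from_a4, divisions=12):
    """Frequency of a key lying `steps_from_a4` equal-tempered steps away from A4,
    in an equal temperament with `divisions` steps per octave."""
    return A4_HZ * 2 ** (steps_from_a4 / divisions)

# Middle C in 12-tone equal temperament (9 steps below A4), about 261.63 Hz
c4 = et_freq(-9)

# The C-major scale in a simple just intonation, as frequency ratios over the tonic
just_ratios = [1, 9/8, 5/4, 4/3, 3/2, 5/3, 15/8, 2]
just_scale = [c4 * r for r in just_ratios]
```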
Abstract:
The development, assessment, and implementation of a program evaluation instrument was carried out to evaluate the impact and efficacy of the EMPOWER Program. This intervention was created to educate residents at a shelter for abused women, with prevention as the anticipated outcome. Participants included the staff and residents at two shelters in Southern Ontario. Client pre-, post- and follow-up measures were obtained and analyzed statistically and through keyword content analysis. A single staff measure was obtained and summarized using keyword content analysis. Qualitative results were suggestive of important change in participants. All women in the post and follow-up measures believed their participation in the EMPOWER Program provided them with the knowledge, skills, and confidence to avoid abusive relationships in the future. This transformational impact was repeatedly expressed in both resident and staff feedback. Limitations of this research, as well as suggestions for future study, are discussed.
Abstract:
A simple, low-cost concentric capillary nebulizer (CCN) was developed and evaluated for ICP spectrometry. The CCN could be operated at sample uptake rates of 0.050-1.00 ml min⁻¹ and under oscillating and non-oscillating conditions. Aerosol characteristics of the CCN were studied using a laser Fraunhofer diffraction analyzer. Solvent transport efficiencies and transport rates, detection limits, and short- and long-term stabilities were evaluated for the CCN with a modified cyclonic spray chamber at different sample uptake rates. The Mg II (280.2 nm)/Mg I (285.2 nm) ratio was used for matrix effect studies. Results were compared to those obtained with conventional nebulizers: a cross-flow nebulizer with a Scott-type spray chamber, a GemCone nebulizer with a cyclonic spray chamber, and a Meinhard TR-30-K3 concentric nebulizer with a cyclonic spray chamber. Transport efficiencies of up to 57% were obtained for the CCN. For the elements tested, short- and long-term precisions and detection limits obtained with the CCN at 0.050-0.500 ml min⁻¹ are similar to, or better than, those obtained on the same instrument using the conventional nebulizers (at 1.0 ml min⁻¹). The depressive and enhancement effects of the easily ionizable element Na, sulfuric acid, and dodecylamine surfactant on analyte signals with the CCN are similar to, or less pronounced than, those obtained with the conventional nebulizers. However, capillary clogging was observed when sample solutions with high dissolved solids were nebulized for more than 40 min. The effects of data acquisition and data processing on detection limits were studied using inductively coupled plasma-atomic emission spectrometry. The study examined the effects of different detection limit approaches, data integration modes, regression modes, the standard concentration range and the number of standards, sample uptake rate, and integration time. All the experiments followed the same protocols. Three detection limit approaches were examined: the IUPAC method, the residual standard deviation (RSD) method, and the signal-to-background ratio and relative standard deviation of the background (SBR-RSDB) method. The study demonstrated that the different approaches, the integration modes, the regression methods, and the sample uptake rates can all affect detection limits. It also showed that the different approaches give different detection limits and that some methods (for example, RSD) are susceptible to the quality of the calibration curves. Multicomponent spectral fitting (MSF) gave the best results among the three integration modes examined (peak height, peak area, and MSF). The weighted least-squares method showed the ability to produce better-quality calibration curves. Although an effect of the number of standards on detection limits was not observed, multiple standards are recommended because they provide more reliable calibration curves. An increase in sample uptake rate and integration time could improve detection limits; however, an improvement in detection limits with increased integration time was not observed because the auto integration mode was used.
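A minimal sketch of two of the detection-limit approaches named above (the IUPAC 3-sigma criterion and the SBR-RSDB formulation) together with a weighted least-squares calibration fit; the data are placeholders and the formulas are the textbook versions, assumed here rather than taken from this thesis:

```python
import numpy as np

def iupac_detection_limit(blank_signals, slope, k=3):
    """IUPAC-style detection limit: k times the standard deviation of the blank
    divided by the calibration slope (k = 3 by convention)."""
    return k * np.std(blank_signals, ddof=1) / slope

def sbr_rsdb_detection_limit(concentration, sbr, rsdb_percent):
    """Boumans-style SBR-RSDB detection limit: 3 * 0.01 * RSDB(%) * c / SBR,
    where SBR is the signal-to-background ratio measured at concentration c."""
    return 3 * 0.01 * rsdb_percent * concentration / sbr

# Weighted least-squares calibration (placeholder standards; weights ~ 1/variance)
conc = np.array([0.0, 0.5, 1.0, 2.0, 5.0])          # mg/L
signal = np.array([2.0, 55.0, 110.0, 221.0, 548.0])
variances = np.array([1.0, 1.2, 1.5, 2.0, 4.0])      # assumed signal variances
slope, intercept = np.polyfit(conc, signal, 1, w=1.0 / np.sqrt(variances))

blank = np.array([1.8, 2.1, 2.0, 1.9, 2.2, 2.0, 1.9, 2.1, 2.0, 2.1])
dl_iupac = iupac_detection_limit(blank, slope)
dl_sbr = sbr_rsdb_detection_limit(concentration=1.0, sbr=55.0, rsdb_percent=1.0)
```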
Abstract:
Multicoloured Asian Lady Beetles (MALB) and 7-spot Lady Beetles that infest vineyards can secrete alkyl-methoxypyrazines when they are processed with the grapes, resulting in wines containing a taint. The main methoxypyrazine associated with this taint is 3-isopropyl-2-methoxypyrazine (IPMP). The wines are described as having aromas and flavours of peanut butter, peanut shells, asparagus and earthiness, which collectively have become known as "ladybug taint". To date, no fining agents added commercially to juice or wine are known to be effective in removing this taint. The goal of this project was to use previously identified proteins with an ability to bind methoxypyrazines at low pH, and subsequently to develop a binding assay to test the ability of these proteins to bind and remove methoxypyrazines from grape juice. The piglet odorant binding protein (plOBP) and mouse major urinary protein (mMUP) were identified, cloned and expressed in the Pichia pastoris expression system. Protein expression was induced using methanol and the proteins were subsequently purified from the induction medium using anion exchange chromatography. The purified proteins were freeze-dried and rehydrated prior to use in the methoxypyrazine removal assay. The expression and purification system resulted in yields of approximately 78% of purified plOBP and 62% of purified mMUP from expression to rehydration. Purified protein values were 87 mg of purified plOBP and 19 mg of purified mMUP per litre of induction medium. In order to test the ability of the proteins to bind the MPs, an MP removal assay was developed. In the assay, the purified protein is incubated with either IPMP or 3-isobutyl-2-methoxypyrazine (IBMP) for two hours in either buffer or grape juice. Bentonite is then used to capture the protein-MP complex, and the bentonite-protein-MP complex is removed from solution by filtration. Residual MP in solution following the removal assay is measured and compared to that in the starting solution by gas chromatography-mass spectrometry (GC/MS). GC/MS results indicated that the mMUP was capable of removing IBMP and IPMP from 300 ng/L in buffer at pH 4.0, buffer at pH 3.5 and Riesling juice at pH 3.5 down to the limit of quantification of the instrument, which is 6 ng/L for IBMP and 2 ng/L for IPMP. The results for plOBP showed that although it could remove some IBMP, the removal was only approximately 50-70 ng/L more than with bentonite treatment followed by filtration, leaving approximately 100 ng/L of the MPs in solution. plOBP was not able to remove IPMP in buffer at pH 3.5 using this system beyond what was removed by bentonite alone, nor was it able to remove any additional MPs from Chardonnay juice at pH 3.5 beyond what was already removed by bentonite and filtration alone. The mouse MUP was thus shown to be the better candidate protein for removal of MPs from juice using this system.
Abstract:
[Support Institutions:] Department of Health Administration, University of Montreal, Canada; School of Public Health, Fudan University, Shanghai, China
Abstract:
Completed under joint supervision (cotutelle) with the Université Joseph Fourier, École Doctorale Ingénierie pour la Santé, la Cognition et l'Environnement (France).
Abstract:
The concept of transit-oriented development (TOD) is usually approached as a fixed set of physical and spatial characteristics. This approach is not sufficient to understand how TOD can be relevant for coordinating transportation and land-use planning in a concrete urban context. To fill this gap, we propose to study TOD as an instrument of public action, adopted, produced and used by local actors according to the issues, resources and constraints present on their territory. This exploratory work is based on a case study of the implementation of TOD in Sainte-Thérèse. To understand how the interests of the actors involved came together, and what resulted from it, data were collected from written sources as well as semi-structured interviews with key actors. The results of the study confirm the relevance of the approach adopted for examining TOD. TOD as an instrument of public action can be described as an operator of congruence, because it allows different, and even potentially contradictory, interests at stake on the territory to converge. However, there is no one-size-fits-all formula: the production of TOD is largely incremental, shaped by local conditions. Implementing TOD does not necessarily require inventing a new framework for action and can largely be achieved with existing tools. Nevertheless, in order to make full use of these tools, the leadership of key actors proves crucial.
Abstract:
This thesis brings together the concepts of planning, governance and transit-oriented development (TOD) through a study of the production, public debate and adoption of the metropolitan land-use and development plan (PMAD) of the Communauté métropolitaine de Montréal (CMM). It presents the results of four years of qualitative research on the impacts of the PMAD episode and of the CMM's TOD strategy on planning practices and decision-making processes in Greater Montreal at the metropolitan scale. It reveals that metropolitan planning and the objective of coordinating transportation and land use in general, as well as the PMAD and the TOD concept in particular, serve there as instruments of governance. Chapters 2, 3 and 4 present the research problem, the field of inquiry and the methodological approach of this research. Chapter 5 recounts the PMAD episode by analyzing its content, the procedures through which the CMM produced, debated and adopted it, the reactions of the region's stakeholders to these aspects, and the way in which they intend to monitor its implementation. Chapter 6 illustrates how this episode made the PMAD an instrument of governance for Greater Montreal by examining the role of public participation, the media, regional and local actors, elected officials, the CMM and the region's civil society in this shift of planning and governance onto more strategic and collaborative footings. Chapter 7 shows that this episode also made TOD an instrument of governance for Greater Montreal by detailing the ins and outs of the process of appropriation, bargaining (and merchandising) and instrumentalization of the concept by political and technical elites for the purposes of territorial marketing and the construction of political capital, opening the way to the stabilization of governance in metropolitan planning. The thesis concludes that the profound transformations currently affecting planning and governance exacerbate the symbiotic character of the relationship that unites them.
Abstract:
Hevea latex is a natural biological liquid of very complex composition. Besides rubber hydrocarbons, it contains many proteinous and resinous substances, carbohydrates, inorganic matter, water, and other constituents. The dry rubber content (DRC) of latex varies according to season, tapping system, weather, soil conditions, clone, age of the tree, etc. The true DRC of the latex must be determined to ensure fair prices for the latex during commercial exchange. The DRC of Hevea latex is a very familiar term to everyone in the rubber industry. It has been the basis for incentive payments to tappers who bring in more than the daily agreed poundage of latex, and it is an important parameter for rubber and latex processing industries for automation and various decision-making processes. This thesis embodies my efforts to determine the DRC of rubber latex using different analytical tools such as MIR absorption, thermal analysis, dielectric spectroscopy and NIR reflectance. The rubber industry is still looking for a compact instrument that is accurate, economical, easy to use and environmentally friendly. I hope the results presented in this thesis will help to realise this goal in the near future.
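As a rough illustration (not the procedure reported in the thesis) of how such instrument readings are typically turned into a DRC estimate, the sketch below calibrates placeholder NIR reflectance spectra against laboratory DRC reference values with a partial least squares regression; the wavelength count, data and choice of PLS are assumptions:

```python
import numpy as np
from sklearn.cross_decomposition import PLSRegression
from sklearn.model_selection import train_test_split

# Placeholder data: NIR reflectance at 50 wavelengths for 120 latex samples,
# with gravimetric DRC (%) as the laboratory reference value.
rng = np.random.default_rng(42)
spectra = rng.random((120, 50))
drc_reference = 25 + 15 * spectra[:, :5].mean(axis=1) + rng.normal(0, 0.5, 120)

X_train, X_test, y_train, y_test = train_test_split(spectra, drc_reference, random_state=0)
model = PLSRegression(n_components=5).fit(X_train, y_train)

# Predict DRC for new spectra from the calibrated model
predicted_drc = model.predict(X_test[:3]).ravel()
```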
Abstract:
The aim of this study is to identify the learning styles (LS) used by students taking the exercise physiology course in the Physiotherapy program, with the purpose of subsequently establishing a direct relationship between learning styles and the pedagogic strategies that best support comprehension of exercise physiology. Forty-eight second- and third-year students were surveyed using the standardized CHAEA instrument. The study carried out a descriptive analysis of the data, including standard deviations. Statistically significant differences were found between the active and reflexive learning styles on the one hand and the theoretical and pragmatic styles on the other, which highlights the need to develop pedagogic strategies within the course that are consistent with the students' tendency toward active and reflexive learning.