904 results for Surgical instruments and apparatus
Abstract:
PURPOSE: To determine the prevalence and impact on vision and visual function of ocular comorbidities in a rural Chinese cataract surgical program, and to devise strategies to reduce their associated burden. DESIGN: Cross-sectional cohort study. PARTICIPANTS: Persons undergoing cataract surgery by one of two recently trained local surgeons at a government-run village-level hospital in rural Guangdong between August 8 and December 31, 2005. INTERVENTIONS: Eligible subjects were invited to return for a comprehensive ocular examination and visual function interview 10 to 14 months after surgery. Prevalent ocular comorbid conditions were identified. MAIN OUTCOME MEASURES: Presenting and best-corrected vision, visual function, and treatability of the comorbidity. RESULTS: Of 313 persons operated within the study window, 242 (77%) could be contacted by telephone; study examinations and interviews were performed on 176 (74%). Examined subjects had a mean age of 69.4+/-10.5 years, 116 (66%) were female, and 149 (85%) had been blind (presenting vision < or = 6/60) in the operative eye before surgery. Among unoperated eyes, 89 of 109 (81.7%) had > or =1 ocular comorbidities, whereas for operated eyes the corresponding proportion was 72 of 211 (34.1%). The leading comorbidity among operated eyes was refractive error (43/72 [59.7%]), followed by glaucoma/glaucoma suspect (14/72 [19.4%]), whereas for unoperated eyes, it was cataract (80/92 [87.0%]), followed by refractive error (12/92 [13.0%]). Among operated eyes with comorbidities, 90.3% (65/72) had > or =1 comorbidities that were treatable. In separate models adjusting for age and gender, persons with > or =1 comorbidities in the operated eye had significantly worse presenting vision (P<0.001) than those without such findings, but visual function (P = 0.197) and satisfaction with surgery (P = 0.796) were unassociated with comorbidities. 
CONCLUSIONS: Ocular comorbidities are highly prevalent among persons undergoing cataract surgery in this rural Asian setting, and their presence is significantly associated with poorer visual outcomes. The fact that the great majority of comorbidities encountered in this program are treatable suggests that strategies to reduce their impact can be successful.
Abstract:
AIM: To study patient sources of knowledge about cataract surgical services, and strategies for financing surgery in rural China. DESIGN: Cross-sectional case series. METHODS: Patients undergoing cataract surgery by local surgeons in a government, village-level facility in Sanrao, Guangdong between 8 August and 31 December 2005 were examined and had standardised interviews an average of 12 months after surgery. RESULTS: Of 313 eligible patients, 239 (76%) completed the questionnaire. Subjects had a mean (SD) age of 69.9 (10.2) years, 36.4% (87/239) were male, and 87.0% (208/239) had been blind (presenting visual acuity < or = 6/60) before surgery. Word-of-mouth advertising was particularly important: 198 (85.0%) of the subjects knew a person who had undergone cataract surgery, of whom 191 (96.5%) had had cataract surgery at Sanrao itself. Over 70% of subjects (166/239) watched TV daily, whereas 80.0% (188/239) "never" read the newspaper. Nearly two-thirds of suggestions from participants (n = 211, 59.6%) favoured either TV advertisements or word-of-mouth to publicise the programme. While the son or daughter had paid for surgery in over 70% of cases (164/233), the patient's having paid without help was the sole predictor of undergoing second-eye surgery (OR 2.27 (95% CI 1.01 to 5.0, p = 0.04)). DISCUSSION: Strategies to increase uptake of cataract surgery in rural China may benefit from enhancing word-of-mouth advertising (such as with pseudophakic motivators), using television advertising where affordable, and micro-credit or other programmes to enable patients to pay their own fees, thus increasing uptake of second-eye surgery.
Abstract:
The last three decades have seen social enterprises in the United Kingdom pushed to the forefront of welfare delivery, workfare and area-based regeneration. For critics, this repositions the sector around a neoliberal politics that privileges marketization, state roll-back and disciplining community groups to become more self-reliant. Successive governments have developed bespoke products, fiscal instruments and intermediaries to enable and extend the social finance market. Such assemblages are critical to roll-out tactics, but they are also necessary and useful for more reformist understandings of economic alterity. The issue is not social finance itself but how it is used, which inevitably entangles social enterprises in a legitimation crisis between the need to satisfy financial returns and the need to keep community interests on board. This paper argues that social finance, and the ways it is used, politically domesticated and made to achieve redistributional outcomes, is a necessary component of counter-hegemonic strategies. Such assemblages are as important to radical community development as they are to neoliberalism, and the analysis concludes by highlighting the need to develop a better understanding of finance, the ethics of its use, and the tactical compromises involved in scaling it as an alternative to public and private markets.
Abstract:
Objective The aim of this study was to collate and compare data on the training of Specialty Registrars in Restorative Dentistry (StRs) in the management of head and neck cancer (HANC) patients across different training units within the UK and Ireland. Methods Current trainees were invited to complete an online questionnaire by the Specialty Registrars in Restorative Dentistry Group (SRRDG). Participants were asked to rate their confidence and experience in assessing and planning treatment for HANC patients, attending theatre alone and manufacturing surgical obturators, and providing implants for appropriate cases. Respondents were also asked to appraise clinical and didactic teaching at their unit, and to rate their confidence in passing a future Intercollegiate Specialty Fellowship Examination (ISFE) station assessing knowledge of head and neck cancer. Results Responses were obtained from 21 StRs training within all five countries of the British Isles. Most respondents were based in England (76%), with one StR in each of Scotland, Wales, Northern Ireland and the Republic of Ireland. A third (33%) were in their 5th year of training. Almost half of the StRs indicated that they were confident in independently assessing (48%) new patients with HANC, with fewer reporting confidence in treatment planning (38%). The majority (52%) of respondents indicated that they were not confident in attending theatre alone and manufacturing a surgical obturator. A third (33%) rated their experience of treating HANC patients with implants as ‘poor’ or ‘very poor’, including three StRs in their 5th year of training. Less than one third (<33%) rated didactic teaching in maxillofacial prosthodontics at their unit as ‘good’ or ‘excellent’, and only 7 StRs indicated that they were confident of passing an ISFE station focused on HANC. 
Conclusion Experience and training regarding patients with head and neck cancer is inconsistent for StRs across the UK and Ireland with a number of trainees reporting a lack of clinical exposure.
Abstract:
Objective: Adverse effects (AEs) of antipsychotic medication have important implications for patients and prescribers in terms of wellbeing, treatment adherence and quality of life. This review summarises strategies for collecting and reporting AE data across a representative literature sample to ascertain their rigour and comprehensiveness. Methods: A PsycINFO search, following PRISMA Statement guidelines, was conducted in English-language journals (1980–July 2014) using the following search string: (antipsychotic* OR neuroleptic*) AND (subjective effect OR subjective experience OR subjective response OR subjective mental alterations OR subjective tolerability OR subjective wellbeing OR patient perspective OR self-rated effects OR adverse effects OR side-effects). Of 7,825 articles, 384 were retained that reported quantified results for AEs of typical or atypical antipsychotics amongst transdiagnostic adult, adolescent, and child populations. Information extracted included: types of AEs reported; how AEs were assessed; assessment duration; assessment of the global impact of antipsychotic consumption on wellbeing; and conflict of interest due to industry sponsorship. Results: Neurological, metabolic, and sedation-related cognitive effects were reported most systematically relative to affective, anticholinergic, autonomic, cutaneous, hormonal, miscellaneous, and non-sedative cognitive effects. The impact of AEs on patient wellbeing was poorly assessed. Cross-sectional and prospective research designs yielded more comprehensive data about AE severity and prevalence than clinical or observational retrospective studies. Conclusions: AE detection and classification can be improved through the use of standardised assessment instruments and consideration of subjective patient impact. Observational research can supplement information from clinical trials to improve the ecological validity of AE data.
Abstract:
The post-surgical period is often critical for infection acquisition. The combination of patient injury and environmental exposure through breached skin adds risk to pre-existing conditions such as drug-induced or otherwise depressed immunity. Several factors, such as the length of hospital stay after surgery, underlying disease, age, immune status, hygiene policies, careless prophylactic drug administration and the physical conditions of the healthcare centre, may contribute to the acquisition of a nosocomial infection. A purulent wound can become complicated whenever antimicrobial therapy becomes compromised. In this pilot study, we analysed Enterobacteriaceae strains, the most significant gram-negative rods that may occur in post-surgical skin and soft tissue infections (SSTI), presenting reduced β-lactam susceptibility, including those producing extended-spectrum β-lactamases (ESBL). There is little information in our country regarding the relationship between β-lactam susceptibility, ESBL and the development of resistant strains of microorganisms in SSTI. Our main results indicate that Escherichia coli and Klebsiella spp. are among the most frequent enterobacteria (46% and 30%, respectively), with ESBL production in 72% of Enterobacteriaceae isolates from SSTI. Moreover, coinfection occurred extensively, mainly with Pseudomonas aeruginosa and methicillin-resistant Staphylococcus aureus (18% and 13%, respectively). These results suggest future research to explore whether and how these associations are involved in the development of antibiotic resistance.
Abstract:
It is well recognized that professional musicians are at risk of hearing damage due to the exposure to high sound pressure levels during music playing. However, it is important to recognize that the musicians’ exposure may start early in the course of their training as students in the classroom and at home. Studies regarding sound exposure of music students and their hearing disorders are scarce and do not take into account important influencing variables. Therefore, this study aimed to describe sound level exposures of music students at different music styles, classes, and according to the instrument played. Further, this investigation attempted to analyze the perceptions of students in relation to exposure to loud music and consequent health risks, as well as to characterize preventive behaviors. The results showed that music students are exposed to high sound levels in the course of their academic activity. This exposure is potentiated by practice outside the school and other external activities. Differences were found between music style, instruments, and classes. Tinnitus, hyperacusis, diplacusis, and sound distortion were reported by the students. However, students were not entirely aware of the health risks related to exposure to high sound pressure levels. These findings reflect the importance of starting intervention in relation to noise risk reduction at an early stage, when musicians are commencing their activity as students.
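The sound-exposure assessments described above are conventionally expressed as a daily noise dose relative to a criterion level. As a hedged illustration only (the abstract does not state which criterion the study used), the following sketch applies the common 3 dB exchange rate with an 85 dB(A) / 8 h criterion; the rehearsal durations and levels are hypothetical numbers, not data from the study.

```python
# Sketch: daily noise dose with a 3 dB exchange rate and an
# 85 dB(A) / 8 h criterion (assumed values, for illustration).
def noise_dose(laeq_db, hours, criterion_db=85.0, criterion_hours=8.0,
               exchange_rate_db=3.0):
    """Fraction of the allowed daily dose: 1.0 means 100 %."""
    return (hours / criterion_hours) * 2 ** ((laeq_db - criterion_db) / exchange_rate_db)

# A hypothetical practice day: 2 h of ensemble rehearsal at 91 dB(A)
# plus 3 h of individual practice at 85 dB(A).
dose = noise_dose(91, 2) + noise_dose(85, 3)   # 1.375, i.e. 137.5 % of the daily dose
```

With a 3 dB exchange rate, every 3 dB increase halves the allowed exposure time, which is why 2 h at 91 dB(A) already consumes a full day's dose on its own.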
Abstract:
Linear alkylbenzenes (LABs), formed by the AlCl3- or HF-catalyzed alkylation of benzene, are common raw materials for surfactant manufacture. Normally they are sulphonated using SO3 or oleum to give the corresponding linear alkylbenzene sulphonates in >95% yield. As concern has grown about the environmental impact of surfactants, questions have been raised about the trace levels of unreacted raw materials, linear alkylbenzenes, and the minor impurities present in them. With the advent of modern analytical instruments and techniques, namely GC/MS, the opportunity has arisen to identify the exact nature of these impurities and to determine the actual levels of them present in commercial linear alkylbenzenes. The object of the proposed study was to separate, identify and quantify major and minor components (1-10%) in commercial linear alkylbenzenes. The focus of this study was on the structure elucidation and determination of impurities and on their qualitative determination in all analyzed linear alkylbenzene samples. A gas chromatography/mass spectrometry (GC/MS) study was performed on five samples from the same manufacturer (different production dates), followed by the analyses of ten commercial linear alkylbenzenes from four different suppliers. All the major components, namely the linear alkylbenzene isomers, followed the same elution pattern, with the 2-phenyl isomer eluting last. The individual isomers were identified by interpretation of their electron impact and chemical ionization mass spectra. The percent isomer distribution was found to differ from sample to sample. Average molecular weights were calculated using two methods, GC and GC/MS, and compared with the results reported on the Certificates of Analysis (C.O.A.) provided by the manufacturers of commercial linear alkylbenzenes. The GC results in most cases agreed with the reported values, whereas the GC/MS results were significantly lower, by between 0.41 and 3.29 amu. 
The minor components, impurities such as branched alkylbenzenes and dialkyltetralins, eluted according to their molecular weights. Their fragmentation patterns were studied using the electron impact ionization mode, and their molecular weight ions were confirmed by a 'soft ionization technique', chemical ionization. The level of impurities present in the analyzed commercial linear alkylbenzenes was expressed as a percentage of the total sample weight, as well as in mg/g. The percentage of impurities was observed to vary between 4.5% and 16.8%, with the highest being in sample "I". Quantitation (mg/g) of impurities such as branched alkylbenzenes and dialkyltetralins was done using cis/trans-1,4,6,7-tetramethyltetralin as an internal standard. Samples were analyzed using a GC/MS system operating under full scan and single ion monitoring data acquisition modes. The latter data acquisition mode, which offers higher sensitivity, was used to analyze all samples under investigation for the presence of linear dialkyltetralins. Dialkyltetralins were reported quantitatively, whereas branched alkylbenzenes were reported semi-quantitatively. The GC/MS method developed during the course of this study allowed identification of some other trace impurities present in commercial LABs. Compounds such as non-linear dialkyltetralins, dialkylindanes, diphenylalkanes and alkylnaphthalenes were identified, but their detailed structure elucidation and quantitation were beyond the scope of this study. However, further investigation of these compounds will be the subject of a future study.
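The internal-standard quantitation described above reduces to a peak-area ratio. The following minimal sketch shows the usual calculation; all numbers (peak areas, spiked internal-standard mass, relative response factor, sample mass) are hypothetical placeholders, not values from the study.

```python
# Sketch of internal-standard quantitation as used for the dialkyltetralins.
# RRF = relative response factor of the analyte vs. the internal standard,
# assumed known from calibration (all numbers here are hypothetical).
def impurity_mg_per_g(area_analyte, area_is, mass_is_mg, rrf, sample_mass_g):
    """mg of analyte per g of sample from GC/MS peak-area ratios."""
    amount_mg = (area_analyte / area_is) * mass_is_mg / rrf
    return amount_mg / sample_mass_g

level = impurity_mg_per_g(area_analyte=52000, area_is=48000,
                          mass_is_mg=0.50, rrf=1.0, sample_mass_g=0.25)
```

Because the analyte and the internal standard are measured in the same run, injection-volume and detector-drift effects largely cancel in the area ratio, which is the rationale for the method.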
Abstract:
It is well known that standard asymptotic theory is not valid, or is extremely unreliable, in models with identification problems or weak instruments [Dufour (1997, Econometrica), Staiger and Stock (1997, Econometrica), Wang and Zivot (1998, Econometrica), Stock and Wright (2000, Econometrica), Dufour and Jasiak (2001, International Economic Review)]. One possible way out consists in using a variant of the Anderson-Rubin (1949, Ann. Math. Stat.) procedure. The latter, however, allows one to build exact tests and confidence sets only for the full vector of the coefficients of the endogenous explanatory variables in a structural equation, not for individual coefficients. This problem may in principle be overcome by using projection techniques [Dufour (1997, Econometrica), Dufour and Jasiak (2001, International Economic Review)]. AR-type procedures are emphasized because they are robust to both weak instruments and instrument exclusion. Until now, however, these techniques could be implemented only by using costly numerical methods. In this paper, we provide a complete analytic solution to the problem of building projection-based confidence sets from Anderson-Rubin-type confidence sets. The solution involves the geometric properties of “quadrics” and can be viewed as an extension of the usual confidence intervals and ellipsoids. Only least squares techniques are required for building the confidence intervals. We also study by simulation how “conservative” projection-based confidence sets are. Finally, we illustrate the proposed methods by applying them to three different examples: the relationship between trade and growth in a cross-section of countries, returns to education, and a study of production functions in the U.S. economy.
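The paper's contribution is an analytic (quadric-based) solution, but the underlying objects can be illustrated numerically: invert the Anderson-Rubin test over a grid of candidate coefficient vectors, then project the joint acceptance region onto one coordinate. The sketch below does exactly that on small deterministic toy data (the instruments, coefficients, and 5% critical value ≈ F(3,5) are all assumptions for illustration, not from the paper).

```python
import numpy as np

def ar_stat(beta0, y, Y, Z):
    """Anderson-Rubin statistic for H0: beta = beta0 in y = Y @ beta + u,
    with instruments Z (n x k).  F-form:
    ((n-k)/k) * (u0' P_Z u0) / (u0' M_Z u0), where u0 = y - Y @ beta0."""
    n, k = Z.shape
    u0 = y - Y @ beta0
    fit = Z @ np.linalg.lstsq(Z, u0, rcond=None)[0]   # P_Z u0 via least squares
    num = fit @ fit                                   # u0' P_Z u0
    den = u0 @ u0 - num                               # u0' M_Z u0
    return (n - k) / k * num / den

# --- toy deterministic data (hypothetical, illustration only) ---
n = 8
t = np.arange(n, dtype=float)
Z = np.column_stack([np.ones(n), t, t ** 2])          # k = 3 instruments
Pi = np.array([[1.0, 0.0], [0.5, 1.0], [0.2, -0.3]])
Y = Z @ Pi                                            # 2 endogenous regressors
beta_true = np.array([1.0, -0.5])
v = np.array([1., -1., 2., -2., 3., -3., 4., -4.])
u = v - Z @ np.linalg.lstsq(Z, v, rcond=None)[0]      # error orthogonal to Z
y = Y @ beta_true + u

# Invert the AR test on a grid, then project the joint set onto beta_1.
crit = 5.41                                           # ~5% point of F(3, 5) (assumed)
grid = np.linspace(-2, 2, 81)
accepted_b1 = [b1 for b1 in grid for b2 in grid
               if ar_stat(np.array([b1, b2]), y, Y, Z) <= crit]
proj_lo, proj_hi = min(accepted_b1), max(accepted_b1)
```

The grid search stands in for what the paper derives in closed form: the joint AR set is a quadric, and its projection onto a single coefficient is obtained analytically with least squares alone, rather than by scanning.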
Abstract:
The article sets out the concept of a State-to-State human transfer agreement of which extradition and deportation are specialised forms. Asylum sharing agreements are other variations which the article explores in more detail. Human transfer agreements always affect at least the right to liberty and the freedom of movement, but other rights will also be at issue to some extent. The article shows how human rights obligations limit State discretion in asylum sharing agreements and considers how past and present asylum sharing arrangements in Europe and North America deal with these limits, if at all. The article suggests changes in the way asylum sharing agreements are drafted: for example, providing for a treaty committee would allow existing agreements to better conform to international human rights instruments and would facilitate State compliance to their human rights obligations.
Abstract:
The last decade has seen growing interest in the problems posed by weak instrumental variables in the econometrics literature, that is, situations where the instruments are only weakly correlated with the variable to be instrumented. Indeed, it is well known that when instruments are weak, the distributions of the Student, Wald, likelihood-ratio and Lagrange-multiplier statistics are no longer standard and often depend on nuisance parameters. Several empirical studies, notably on models of returns to education [Angrist and Krueger (1991, 1995), Angrist et al. (1999), Bound et al. (1995), Dufour and Taamouti (2007)] and on asset pricing (C-CAPM) [Hansen and Singleton (1982, 1983), Stock and Wright (2000)], where the instruments are weakly correlated with the variable to be instrumented, have shown that using these statistics often leads to unreliable results. One remedy for this problem is the use of identification-robust tests [Anderson and Rubin (1949), Moreira (2002), Kleibergen (2003), Dufour and Taamouti (2007)]. However, there is no econometric literature on the quality of identification-robust procedures when the available instruments are endogenous, or both endogenous and weak. This raises the question of what happens to identification-robust inference procedures when some instrumental variables assumed to be exogenous are in fact not. More precisely, what happens if an invalid instrumental variable is added to a set of valid instruments? Do these procedures behave differently? And if instrument endogeneity poses major difficulties for statistical inference, can test procedures be proposed that select instruments that are both strong and valid? 
Is it possible to propose instrument-selection procedures that remain valid even under weak identification? This thesis focuses on structural models (simultaneous-equations models) and answers these questions through four essays. The first essay is published in the Journal of Statistical Planning and Inference 138 (2008) 2649–2661. In this essay, we analyse the effects of instrument endogeneity on two identification-robust test statistics, the Anderson and Rubin statistic (AR, 1949) and the Kleibergen statistic (K, 2003), with or without weak instruments. First, when the parameter governing instrument endogeneity is fixed (does not depend on the sample size), we show that all these procedures are in general consistent against the presence of invalid instruments (that is, they detect invalid instruments) regardless of instrument quality (strong or weak). We also describe cases where this consistency may fail, but where the asymptotic distribution is modified in a way that could lead to size distortions even in large samples. This includes, in particular, cases where the two-stage least squares estimator remains consistent but the tests are asymptotically invalid. Next, when the instruments are locally exogenous (that is, the endogeneity parameter converges to zero as the sample size grows), we show that these tests converge to noncentral chi-square distributions, whether the instruments are strong or weak. We also characterise the situations where the noncentrality parameter is zero and the asymptotic distribution of the statistics remains the same as in the case of valid instruments (despite the presence of invalid instruments). 
The second essay studies the impact of weak instruments on Durbin-Wu-Hausman (DWH) specification tests and on the Revankar and Hartley (1973) test. We provide a finite-sample and large-sample analysis of the distribution of these tests under the null hypothesis (size) and the alternative (power), including cases where identification is deficient or weak (weak instruments). Our finite-sample analysis provides several insights, as well as extensions of earlier procedures. Indeed, characterising the finite-sample distribution of these statistics allows the construction of exact Monte Carlo exogeneity tests even with non-Gaussian errors. We show that these tests are typically robust to weak instruments (size is controlled). Moreover, we provide a characterisation of the power of the tests that clearly exhibits the factors determining power. We show that the tests have no power when all instruments are weak [similar to Guggenberger (2008)]. However, power exists as long as at least one instrument is strong. The conclusion of Guggenberger (2008) concerns the case where all instruments are weak (a case of minor practical interest). Our asymptotic theory under weakened assumptions confirms the finite-sample theory. In addition, we present a Monte Carlo analysis indicating that: (1) the ordinary least squares estimator is more efficient than two-stage least squares when instruments are weak and endogeneity moderate [a conclusion similar to that of Kiviet and Niemczyk (2007)]; (2) pre-test estimators based on exogeneity tests perform very well relative to two-stage least squares. This suggests that the instrumental-variables method should be applied only when one is confident of having strong instruments. 
The conclusions of Guggenberger (2008) are therefore mixed and could be misleading. We illustrate our theoretical results through simulation experiments and two empirical applications: the relationship between trade openness and economic growth, and the well-known problem of returns to education. The third essay extends the Wald-type exogeneity test proposed by Dufour (1987) to cases where the regression errors have a non-normal distribution. We propose a new version of the earlier test that is valid even in the presence of non-Gaussian errors. Unlike the usual exogeneity test procedures (the Durbin-Wu-Hausman and Revankar-Hartley tests), the Wald test addresses a common problem in empirical work, namely testing the partial exogeneity of a subset of variables. We propose two new pre-test estimators based on the Wald test that perform better (in terms of mean squared error) than the usual IV estimator when the instrumental variables are weak and endogeneity is moderate. We also show that this test can serve as an instrument-selection procedure. We illustrate the theoretical results with two empirical applications: the well-known wage-equation model [Angrist and Krueger (1991, 1999)] and returns to scale [Nerlove (1963)]. Our results suggest that the mother's education explains her son's dropping out of school, that output is an endogenous variable in the estimation of the firm's cost, and that the price of fuel is a valid instrument for output. The fourth essay solves two very important problems in the econometrics literature. First, although the original or extended Wald test makes it possible to build confidence regions and test linear restrictions on covariances, it assumes that the model parameters are identified. 
When identification is weak (instruments weakly correlated with the variable to be instrumented), this test is in general no longer valid. This essay develops an identification-robust (weak-instrument-robust) inference procedure for building confidence regions for the covariance matrix between the regression errors and the (possibly endogenous) explanatory variables. We provide analytic expressions for the confidence regions and characterise the necessary and sufficient conditions under which they are bounded. The proposed procedure remains valid even in small samples and is also asymptotically robust to heteroskedasticity and autocorrelation in the errors. The results are then used to develop identification-robust partial exogeneity tests. Monte Carlo simulations indicate that these tests control size and have power even when the instruments are weak. This allows us to propose a valid instrument-selection procedure even in the presence of an identification problem. The instrument-selection procedure is based on two new pre-test estimators that combine the usual IV estimator and partial IV estimators. Our simulations show that: (1) like the ordinary least squares estimator, the partial IV estimators are more efficient than the usual IV estimator when instruments are weak and endogeneity moderate; (2) the pre-test estimators overall perform very well compared with the usual IV estimator. We illustrate our theoretical results with two empirical applications: the relationship between trade openness and economic growth, and the returns-to-education model. 
In the first application, earlier studies concluded that the instruments were not too weak [Dufour and Taamouti (2007)], whereas they are strongly so in the second [Bound (1995), Doko and Dufour (2009)]. Consistent with our theoretical results, we find unbounded confidence regions for the covariance in the case where the instruments are quite weak.
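The DWH-type exogeneity tests studied in the second essay are commonly implemented through a control-function (augmented) regression: regress the suspect regressor on the instruments, then add the first-stage residual to the structural equation and examine its coefficient. This is a minimal sketch of that standard mechanics on toy deterministic data (no standard errors or critical values, and not the thesis's own procedures); the data are constructed so that the regressor is exogenous, making the expected residual coefficient zero.

```python
import numpy as np

def dwh_control_function(y, x, Z):
    """Control-function form of the Durbin-Wu-Hausman idea:
    regress x on instruments Z, add the first-stage residual v to the
    structural OLS regression; a coefficient on v near zero is
    consistent with exogeneity of x.  Returns [intercept, slope, delta]."""
    v = x - Z @ np.linalg.lstsq(Z, x, rcond=None)[0]  # first-stage residual
    X = np.column_stack([np.ones(len(y)), x, v])      # augmented regressors
    return np.linalg.lstsq(X, y, rcond=None)[0]

# Toy deterministic data in which x is exogenous by construction.
n = 12
t = np.arange(n, dtype=float)
Z = np.column_stack([np.ones(n), t, t ** 2])          # instruments incl. constant
w = np.array([1., -2., 3., -1., 2., -3., 1., -2., 3., -1., 2., -3.])
v = w - Z @ np.linalg.lstsq(Z, w, rcond=None)[0]      # part of x outside span(Z)
x = Z @ np.array([0.5, 1.0, 0.1]) + v
M = np.column_stack([np.ones(n), x, v])
e0 = np.sin(t)
e = e0 - M @ np.linalg.lstsq(M, e0, rcond=None)[0]    # error orthogonal to regressors
y = 2.0 + 3.0 * x + e
coef = dwh_control_function(y, x, Z)                  # delta = coef[2] should be ~0
```

In practice one would test the significance of `coef[2]` with a t- or Wald statistic; the weak-instrument results summarised above concern exactly when such tests keep their size and power.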
Abstract:
Adolescent idiopathic scoliosis (AIS) is a three-dimensional deformity of the spine. Its treatment includes observation, bracing to limit progression, or surgery to correct the skeletal deformity and halt its progression. Surgical treatment remains controversial, both as to its indications and as to the operation to perform. Despite the existence of classifications to guide the treatment of AIS, intra- and inter-observer variability in operative strategy has been described in the literature. This variability is accentuated further by the evolution of surgical techniques and of the available instrumentation. Advances in technology and its integration into the medical field have led to the use of computerised artificial-intelligence algorithms to assist the classification and three-dimensional assessment of scoliosis. Some algorithms have been shown to be effective in reducing variability in scoliosis classification and in guiding treatment. The overall objective of this thesis is to develop an application that uses artificial-intelligence tools to integrate a new patient's data with the evidence available in the literature in order to guide the surgical treatment of AIS. To this end, a literature review of existing applications in the assessment of AIS was undertaken to assemble the elements that would allow an effective application, accepted in the clinical setting, to be put in place. This review made us realise that the presence of "black boxes" in the applications developed is a limitation to clinical integration, where evidence-based justification is essential. 
In a first study, we developed a decision tree for classifying idiopathic scoliosis based on the Lenke classification, which is the most commonly used today but has been criticised for its complexity and for its inter- and intra-observer variability. This decision tree was shown to increase classification accuracy in proportion to the time spent classifying, regardless of the level of knowledge about AIS. In a second study, a surgical-strategy algorithm based on rules extracted from the literature was developed to guide surgeons in selecting the approach and the fusion levels for AIS. When applied to a large database of 1556 AIS cases, this algorithm was able to propose an operative strategy similar to that of an expert surgeon in nearly 70% of cases. This study confirmed the possibility of extracting valid operative strategies with a decision tree using rules drawn from the literature. In a third study, the classification of 1776 AIS patients using a Kohonen map, a type of neural network, showed that there are typical scolioses (single-curve or double thoracic scolioses) for which variability in surgical treatment departs little from the recommendations of the Lenke classification, whereas scolioses with multiple curves, or bordering on two typical curve groups, showed the most variation in operative strategy. Finally, a software platform was developed integrating each of the above studies. This software interface allows radiological data to be entered for a scoliotic patient, classifies the AIS using the classification decision tree, and suggests a surgical approach based on the operative-strategy decision tree. 
An analysis of the postoperative correction obtained shows a trend, although not statistically significant, toward better balance in patients operated on according to the strategy recommended by the software platform than in those receiving a different treatment. The studies presented in this thesis highlight that artificial intelligence algorithms for classifying AIS and for elaborating operative strategies can be integrated into a software platform and could assist surgeons in their preoperative planning.
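The Kohonen map used to group the 1776 patients is a self-organizing map: a grid of nodes whose weight vectors are pulled toward the training samples, so that similar curve patterns map to nearby nodes. A minimal sketch, assuming each patient is reduced to a small feature vector of curve descriptors (the grid size, learning schedule and features below are illustrative, not those of the thesis):

```python
import numpy as np

def train_som(data, grid=(4, 4), epochs=200, lr0=0.5, sigma0=2.0, seed=0):
    """Train a minimal Kohonen self-organizing map on row vectors in `data`."""
    rng = np.random.default_rng(seed)
    rows, cols = grid
    # Node weights start as small random vectors in feature space.
    weights = rng.normal(size=(rows * cols, data.shape[1]))
    # Grid coordinates of each node, used by the neighborhood function.
    coords = np.array([(r, c) for r in range(rows) for c in range(cols)], float)
    for epoch in range(epochs):
        # Learning rate and neighborhood radius decay over training.
        lr = lr0 * np.exp(-epoch / epochs)
        sigma = sigma0 * np.exp(-epoch / epochs)
        for x in data[rng.permutation(len(data))]:
            # Best matching unit: node whose weights are closest to x.
            bmu = np.argmin(np.linalg.norm(weights - x, axis=1))
            # Gaussian neighborhood around the BMU on the grid.
            d2 = np.sum((coords - coords[bmu]) ** 2, axis=1)
            h = np.exp(-d2 / (2 * sigma ** 2))
            weights += lr * h[:, None] * (x - weights)
    return weights

def assign_cluster(weights, x):
    """Map a new sample to its best matching unit (cluster index)."""
    return int(np.argmin(np.linalg.norm(weights - x, axis=1)))
```

After training, patients assigned to the same node (or neighboring nodes) form the "typical" groups; curve patterns falling between two dense groups are the tangential cases the abstract describes as showing the most strategy variation.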
Abstract:
In this thesis, certain important aspects of heavy metal toxicity have been worked out. Recent studies have clearly shown that when experimental media contain more than one heavy metal, the metals can conspicuously influence the toxic reaction of the animals, both in quantity and in nature. Most of the available results on individual metal toxicity involve unrealistically high concentrations of dissolved metals. A remarkable number of factors have been shown to influence metal toxicity, including environmental factors (particularly temperature and salinity), the condition of the organism, and the ability of some marine organisms to adapt to metallic contamination. Further, some of the more sensitive functions, such as embryonic and larval development, growth and fecundity, oxygen utilization, and the activity of various enzymes, are demonstrably affected by the presence of heavy metals, although some of these functions can be compensated for by adaptive processes. If a single metal at high concentration can affect the life functions of marine animals, more than one metal in the experimental media should manifest such effects on a greater scale. The majority of heavy metal combinations bring about a synergistic, that is, more-than-additive, reaction. The work presented in this thesis comprises the lethal and sublethal toxicities of different salt forms of copper and silver to the brown mussel Perna indica. During the present investigation, sublethal concentrations of copper and silver were examined for their independent effects on survival, oxygen consumption, filtration, accumulation and depuration in Perna indica. The results are presented under different sections to make the presentation meaningful.
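Synergism versus simple additivity in a binary mixture is commonly quantified with the toxic-unit approach: each metal's concentration in the mixture is divided by its individual LC50, and the sum compared with 1. This method and the numbers below are illustrative, not taken from the thesis:

```python
def toxic_units(conc_lc50_pairs):
    """Sum of toxic units: each mixture concentration divided by that
    metal's individual LC50 (same units for each pair)."""
    return sum(conc / lc50 for conc, lc50 in conc_lc50_pairs)

def classify_interaction(tu_at_mixture_lc50, tol=0.2):
    """Classify a mixture at its observed median-lethal level.

    TU near 1 -> additive; TU < 1 -> synergistic (less of each metal is
    needed than additivity predicts); TU > 1 -> antagonistic. The
    tolerance band is an arbitrary illustrative choice."""
    if tu_at_mixture_lc50 < 1 - tol:
        return "synergistic"
    if tu_at_mixture_lc50 > 1 + tol:
        return "antagonistic"
    return "additive"

# Hypothetical example: a Cu + Ag mixture is median-lethal at
# 0.05 mg/L Cu (individual LC50 0.2) and 0.01 mg/L Ag (LC50 0.05).
tu = toxic_units([(0.05, 0.2), (0.01, 0.05)])   # 0.25 + 0.20 = 0.45
```

A toxic-unit sum of 0.45 at the mixture's lethal level means far less of each metal is needed than additivity would predict, i.e. a synergistic interaction of the kind the abstract describes.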
Abstract:
Swift heavy ion induced changes in the microstructure and surface morphology of vapor-deposited Fe–Ni based metallic glass thin films have been investigated by atomic force microscopy, X-ray diffraction and transmission electron microscopy. Ion beam irradiation was carried out at room temperature with a 103 MeV Au⁹⁺ beam at fluences ranging from 3 × 10¹¹ to 3 × 10¹³ ions/cm². The atomic force microscopy images were subjected to power spectral density and roughness analysis using image analysis software. Clusters were found in the images of the as-deposited samples, indicating that film growth is dominated by the island growth mode. The as-deposited films were amorphous as evidenced by X-ray diffraction; however, high resolution transmission electron microscopy revealed short-range atomic order in the samples, with crystallites around 3 nm in size embedded in an amorphous matrix. The X-ray diffraction pattern of the as-deposited films after irradiation shows no appreciable changes, indicating that the passage of swift heavy ions stabilizes the short-range atomic ordering, or even causes further amorphization. The crystallinity of the as-deposited Fe–Ni based films was improved by thermal annealing, and diffraction results indicated that ion beam irradiation of the annealed samples results in grain fragmentation. On bombarding the annealed films, the surface roughness decreased initially and then increased at higher fluences. The observed change in surface morphology of the irradiated films is attributed to the interplay between ion-induced sputtering, volume diffusion and surface diffusion.
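The roughness and power spectral density analysis applied to the AFM images above amounts to computing the RMS deviation of the height map and the radially averaged power of its 2-D Fourier transform. A minimal numpy sketch, assuming a square height map and an arbitrary PSD normalization (the abstract does not specify the software or conventions used):

```python
import numpy as np

def rms_roughness(height):
    """RMS roughness: standard deviation of the height map about its mean."""
    return float(np.sqrt(np.mean((height - height.mean()) ** 2)))

def radial_psd(height, pixel_size=1.0):
    """Radially averaged 2-D power spectral density of a square height map."""
    n = height.shape[0]
    # 2-D FFT of the mean-subtracted surface, power per frequency bin.
    f = np.fft.fftshift(np.fft.fft2(height - height.mean()))
    power = np.abs(f) ** 2 / n ** 2
    # Radial distance (in bins) of each frequency from the zero-frequency center.
    ky, kx = np.indices(power.shape)
    r = np.hypot(kx - n // 2, ky - n // 2).astype(int)
    # Average the power over annuli of equal radius.
    psd = np.bincount(r.ravel(), weights=power.ravel()) / np.bincount(r.ravel())
    freqs = np.arange(len(psd)) / (n * pixel_size)
    return freqs, psd
```

Comparing `radial_psd` curves of the as-deposited, annealed and irradiated films at successive fluences is what reveals whether roughening occurs preferentially at short or long lateral length scales, complementing the single-number RMS roughness.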