66 results for "whether court has power to extend time"


Relevance:

100.00%

Publisher:

Abstract:

L'utilisation efficace des systèmes géothermaux, la séquestration du CO2 pour limiter le changement climatique et la prévention de l'intrusion d'eau salée dans les aquifères côtiers ne sont que quelques exemples qui démontrent notre besoin en technologies nouvelles pour suivre l'évolution des processus souterrains à partir de la surface. Un défi majeur est d'assurer la caractérisation et l'optimisation des performances de ces technologies à différentes échelles spatiales et temporelles. Les méthodes électromagnétiques (EM) d'ondes planes sont sensibles à la conductivité électrique du sous-sol et, par conséquent, à la conductivité électrique des fluides saturant la roche, à la présence de fractures connectées, à la température et aux matériaux géologiques. Ces méthodes sont régies par des équations valides sur de larges gammes de fréquences, permettant d'étudier de manière analogue des processus allant de quelques mètres sous la surface jusqu'à plusieurs kilomètres de profondeur. Néanmoins, ces méthodes sont soumises à une perte de résolution avec la profondeur à cause des propriétés diffusives du champ électromagnétique. Pour cette raison, l'estimation des modèles du sous-sol par ces méthodes doit prendre en compte des informations a priori afin de contraindre les modèles autant que possible et de permettre la quantification des incertitudes de ces modèles de façon appropriée. Dans la présente thèse, je développe des approches permettant la caractérisation statique et dynamique du sous-sol à l'aide d'ondes EM planes. Dans une première partie, je présente une approche déterministe permettant de réaliser des inversions répétées dans le temps (time-lapse) de données d'ondes EM planes en deux dimensions. Cette stratégie est basée sur l'incorporation dans l'algorithme d'informations a priori en fonction des changements du modèle de conductivité électrique attendus. Ceci est réalisé en intégrant une régularisation stochastique et des contraintes flexibles par rapport à la gamme des changements attendus en utilisant les multiplicateurs de Lagrange. J'utilise des normes différentes de la norme l2 pour contraindre la structure du modèle et obtenir des transitions abruptes entre les régions du modèle qui subissent des changements dans le temps et celles qui n'en subissent pas. J'incorpore aussi une stratégie afin d'éliminer les erreurs systématiques des données time-lapse. Ce travail a mis en évidence l'amélioration de la caractérisation des changements temporels par rapport aux approches classiques qui réalisent des inversions indépendantes à chaque pas de temps et comparent les modèles. Dans la seconde partie de cette thèse, j'adopte un formalisme bayésien et je teste la possibilité de quantifier les incertitudes sur les paramètres du modèle dans l'inversion d'ondes EM planes. Pour ce faire, je présente une stratégie d'inversion probabiliste basée sur des pixels à deux dimensions pour des inversions de données d'ondes EM planes et de tomographies de résistivité électrique (ERT) séparées et jointes. Je compare les incertitudes des paramètres du modèle en considérant différents types d'information a priori sur la structure du modèle et différentes fonctions de vraisemblance pour décrire les erreurs sur les données. Les résultats indiquent que la régularisation du modèle est nécessaire lorsqu'on a affaire à un grand nombre de paramètres car cela permet d'accélérer la convergence des chaînes et d'obtenir des modèles plus réalistes.
Cependant, ces contraintes mènent à des incertitudes d'estimation plus faibles, ce qui implique des distributions a posteriori qui ne contiennent pas le vrai modèle dans les régions où la méthode présente une sensibilité limitée. Cette situation peut être améliorée en combinant des méthodes d'ondes EM planes avec d'autres méthodes complémentaires telles que l'ERT. De plus, je montre que le poids de régularisation des paramètres et l'écart-type des erreurs sur les données peuvent être retrouvés par une inversion probabiliste. Finalement, j'évalue la possibilité de caractériser une distribution tridimensionnelle d'un panache de traceur salin injecté dans le sous-sol en réalisant une inversion probabiliste time-lapse tridimensionnelle d'ondes EM planes. Étant donné que les inversions probabilistes sont très coûteuses en temps de calcul lorsque l'espace des paramètres présente une grande dimension, je propose une stratégie de réduction du modèle où les coefficients de décomposition des moments de Legendre du panache de traceur injecté ainsi que sa position sont estimés. Pour ce faire, un modèle de résistivité de base est nécessaire. Il peut être obtenu avant l'expérience time-lapse. Un test synthétique montre que la méthodologie marche bien quand le modèle de résistivité de base est caractérisé correctement. Cette méthodologie est aussi appliquée à un test de traçage par injection d'une solution saline et d'acides réalisé dans un système géothermal en Australie, puis comparée à une inversion time-lapse tridimensionnelle réalisée selon une approche déterministe. L'inversion probabiliste permet de mieux contraindre le panache du traceur salin grâce à la grande quantité d'informations a priori incluse dans l'algorithme. Néanmoins, les changements de conductivité nécessaires pour expliquer les changements observés dans les données sont plus grands que ce que peut expliquer notre connaissance actuelle des phénomènes physiques. Ce problème peut être lié à la qualité limitée du modèle de résistivité de base utilisé, indiquant ainsi que des efforts plus grands devront être fournis dans le futur pour obtenir des modèles de base de bonne qualité avant de réaliser des expériences dynamiques. Les études décrites dans cette thèse montrent que les méthodes d'ondes EM planes sont très utiles pour caractériser et suivre les variations temporelles du sous-sol sur de larges échelles. Les présentes approches améliorent l'évaluation des modèles obtenus, autant en termes d'incorporation d'informations a priori qu'en termes de quantification d'incertitudes a posteriori. De plus, les stratégies développées peuvent être appliquées à d'autres méthodes géophysiques, et offrent une grande flexibilité pour l'incorporation d'informations additionnelles lorsqu'elles sont disponibles. -- The efficient use of geothermal systems, the sequestration of CO2 to mitigate climate change, and the prevention of seawater intrusion in coastal aquifers are only some examples that demonstrate the need for novel technologies to monitor subsurface processes from the surface. A main challenge is to assure optimal performance of such technologies at different temporal and spatial scales. Plane-wave electromagnetic (EM) methods are sensitive to subsurface electrical conductivity and consequently to fluid conductivity, fracture connectivity, temperature, and rock mineralogy.
These methods have governing equations that are the same over a large range of frequencies, thus allowing processes to be studied in an analogous manner on scales ranging from a few meters close to the surface down to several hundred kilometers in depth. Unfortunately, they suffer from a significant resolution loss with depth due to the diffusive nature of the electromagnetic fields. Therefore, estimations of subsurface models that use these methods should incorporate a priori information to better constrain the models, and provide appropriate measures of model uncertainty. During my thesis, I have developed approaches to improve the static and dynamic characterization of the subsurface with plane-wave EM methods. In the first part of this thesis, I present a two-dimensional deterministic approach to perform time-lapse inversion of plane-wave EM data. The strategy is based on the incorporation of prior information into the inversion algorithm regarding the expected temporal changes in electrical conductivity. This is done by incorporating a flexible stochastic regularization and constraints regarding the expected ranges of the changes by using Lagrange multipliers. I use non-l2 norms to penalize the model update in order to obtain sharp transitions between regions that experience temporal changes and regions that do not. I also incorporate a time-lapse differencing strategy to remove systematic errors in the time-lapse inversion. This work presents improvements in the characterization of temporal changes with respect to the classical approach of performing separate inversions and computing differences between the models. In the second part of this thesis, I adopt a Bayesian framework and use Markov chain Monte Carlo (MCMC) simulations to quantify model parameter uncertainty in plane-wave EM inversion. For this purpose, I present a two-dimensional pixel-based probabilistic inversion strategy for separate and joint inversions of plane-wave EM and electrical resistivity tomography (ERT) data. I compare the uncertainties of the model parameters when considering different types of prior information on the model structure and different likelihood functions to describe the data errors. The results indicate that model regularization is necessary when dealing with a large number of model parameters because it helps to accelerate the convergence of the chains and leads to more realistic models. These constraints also lead to smaller uncertainty estimates, which imply posterior distributions that do not include the true underlying model in regions where the method has limited sensitivity. This situation can be improved by combining plane-wave EM methods with complementary geophysical methods such as ERT. In addition, I show that an appropriate regularization weight and the standard deviation of the data errors can be retrieved by the MCMC inversion. Finally, I evaluate the possibility of characterizing the three-dimensional distribution of an injected water plume by performing three-dimensional time-lapse MCMC inversion of plane-wave EM data. Since MCMC inversion involves a significant computational burden in high parameter dimensions, I propose a model reduction strategy where the coefficients of a Legendre moment decomposition of the injected water plume and its location are estimated. For this purpose, a base resistivity model is needed, which is obtained prior to the time-lapse experiment. A synthetic test shows that the methodology works well when the base resistivity model is correctly characterized.
The methodology is also applied to an injection experiment performed in a geothermal system in Australia, and compared to a three-dimensional time-lapse inversion performed within a deterministic framework. The MCMC inversion better constrains the water plume due to the larger amount of prior information that is included in the algorithm. However, the conductivity changes needed to explain the time-lapse data are much larger than what is physically possible based on present-day understanding. This issue may be related to the base resistivity model used, therefore indicating that more effort should be devoted to obtaining high-quality base models prior to dynamic experiments. The studies described herein give clear evidence that plane-wave EM methods are useful to characterize and monitor the subsurface at a wide range of scales. The presented approaches contribute to an improved appraisal of the obtained models, both in terms of the incorporation of prior information in the algorithms and the posterior uncertainty quantification. In addition, the developed strategies can be applied to other geophysical methods, and offer great flexibility to incorporate additional information when available.
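To make the probabilistic part of this abstract concrete, the following illustrative Python sketch (not the thesis code) shows how a Gaussian data likelihood and a smoothness prior, i.e. the regularization discussed above, combine in a simple Metropolis sampler for a pixel-based inversion. The linear operator G is only a stand-in for the real, nonlinear plane-wave EM forward solver, and all values are synthetic.

```python
# Toy pixel-based probabilistic inversion with a smoothness prior (illustrative only).
import numpy as np

rng = np.random.default_rng(0)

n_cells, n_data = 30, 20
G = rng.normal(size=(n_data, n_cells)) / n_cells      # placeholder forward operator
m_true = np.sin(np.linspace(0, np.pi, n_cells))        # "true" model (e.g. log-conductivity)
sigma_d = 0.05                                         # data error standard deviation
d_obs = G @ m_true + rng.normal(0, sigma_d, n_data)    # synthetic observed data

D = np.diff(np.eye(n_cells), axis=0)                   # first-difference (roughness) operator
lam = 10.0                                             # regularization weight

def log_posterior(m):
    misfit = d_obs - G @ m
    log_like = -0.5 * np.sum(misfit**2) / sigma_d**2   # Gaussian data likelihood
    log_prior = -0.5 * lam * np.sum((D @ m)**2)        # smoothness ("regularization") prior
    return log_like + log_prior

# Metropolis sampling: perturb one random model cell at a time.
m = np.zeros(n_cells)
logp = log_posterior(m)
samples = []
for it in range(20000):
    prop = m.copy()
    prop[rng.integers(n_cells)] += rng.normal(0, 0.1)
    logp_prop = log_posterior(prop)
    if np.log(rng.random()) < logp_prop - logp:        # accept/reject step
        m, logp = prop, logp_prop
    if it > 5000 and it % 10 == 0:                     # keep post burn-in samples
        samples.append(m.copy())

post = np.array(samples)
print("posterior mean of first cells:", post.mean(axis=0)[:5].round(2))
print("posterior std  of first cells:", post.std(axis=0)[:5].round(2))
```

With a tighter prior (larger lam) the posterior standard deviations shrink, which mirrors the trade-off described above: regularization speeds convergence but can exclude the true model where the data carry little information.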

Relevance:

100.00%

Publisher:

Abstract:

Background. In children, the waist-to-height ratio (WHtR) has been proposed to identify subjects at higher risk of cardiovascular diseases. The utility of WHtR to identify children with elevated blood pressure (BP) is unclear. Design. Cross-sectional population-based study of schoolchildren. Methods. Weight, height, waist circumference and BP were measured in all sixth-grade schoolchildren of the canton de Vaud (Switzerland) in 2005/06. WHtR was computed as waist [cm]/height [cm]. Elevated BP was defined according to sex-, age- and height-specific US reference data. The area under the receiver operating characteristic curve (AUC) statistic was computed to compare the ability of body mass index (BMI) z-score and WHtR, alone or in combination, to identify children with elevated BP. Results. 5207 children participated (76% response) [2621 boys, 2586 girls; mean (± SD) age, 12.3 ± 0.5 years; range: 10.1-14.9]. The prevalence of elevated BP was 11%. Mean WHtR was 0.44 ± 0.05 (range: 0.29-0.77) and 11% had a high WHtR (> 0.5). BMI z-score and WHtR were strongly correlated (Spearman correlation coefficient r = 0.76). Both indices were positively associated with elevated BP. The AUC for elevated BP was relatively low for BMI z-score (0.62) and for WHtR (0.62), and was not substantially improved when both indices were considered together (0.63). Conclusions. The ability of BMI z-score or WHtR to identify children aged 10-14 with elevated BP was weak. Adding WHtR did not confer additional discriminative power to BMI alone. These findings do not support the measurement of WHtR in addition to BMI to identify children with elevated BP.
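As a small illustration of the quantities compared in this abstract, the hedged Python sketch below computes WHtR as waist/height and estimates the AUC with the rank-based (Mann-Whitney) formulation; the data are simulated and only loosely mimic the reported distributions, so the printed values are not the study's results.

```python
# Illustrative WHtR and AUC computation on simulated schoolchildren data.
import numpy as np

rng = np.random.default_rng(1)

def auc(score, label):
    """AUC = probability that a random positive outranks a random negative (ties = 0.5)."""
    pos, neg = score[label == 1], score[label == 0]
    wins = (pos[:, None] > neg[None, :]).sum() + 0.5 * (pos[:, None] == neg[None, :]).sum()
    return wins / (len(pos) * len(neg))

n = 500
height = rng.normal(152, 8, n)                 # cm, hypothetical sixth-graders
waist = rng.normal(0.44, 0.05, n) * height     # chosen so WHtR ~ 0.44 +/- 0.05
bmi_z = rng.normal(0, 1, n)
whtr = waist / height                          # WHtR = waist [cm] / height [cm]

# Hypothetical elevated-BP labels, weakly related to both adiposity indices.
risk = 0.5 * bmi_z + 0.5 * (whtr - whtr.mean()) / whtr.std() + rng.normal(0, 2, n)
elevated_bp = (risk > np.quantile(risk, 0.89)).astype(int)   # ~11% prevalence

print("fraction with high WHtR (>0.5):", (whtr > 0.5).mean().round(2))
print("AUC, BMI z-score:", auc(bmi_z, elevated_bp).round(2))
print("AUC, WHtR       :", auc(whtr, elevated_bp).round(2))
```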

Relevance:

100.00%

Publisher:

Abstract:

Goals: Adjuvant chemotherapy decisions in breast cancer are increasingly based on the pathologist's assessment of the proliferation fraction in the tumor. Yet, how good and how reproducible are we pathologists at providing reliable Ki-67 readings on breast carcinomas? Exactly how to count and in which areas to count within a tumor remains inadequately standardized. The Swiss Working Group of Gyneco- and Breast Pathologists has tried to address this dilemma and to propose ways to obtain more reproducible results. Methods: In a first phase, 5 pathologists evaluated Ki-67 counts in 10 breast cancers by exact counting (500 cells) and by eyeballing. Pathologists were free to select the region in which Ki-67 was evaluated. In a second phase, 16 pathologists evaluated Ki-67 counts in 3 breast cancers, also by exact counting and eyeballing, but in predefined fields of interest. In both phases, Ki-67 was assessed on centrally immunostained slides (ZH) and on slides immunostained in the 11 participating laboratories. In a third phase, these same 16 pathologists were once again asked to read the 3 cases from phase 2, plus three new cases, and this time exact guidelines were provided as to what exactly is considered a Ki-67-positive nucleus. Results: Discordance of Ki-67 assessment was due to each of the following 4 factors: (i) pathologists' divergent definitions of what counts as a positive nucleus, (ii) the mode of assessment (counting vs. eyeballing), (iii) immunostaining technique/protocol/antibody, and (iv) the selection of the area in which to count. Conclusion: Providing guidelines as to where to count (representative field in the tumor periphery and omitting hot spots) and what nuclei to count (even faintly immunostained nuclei count as positive) reduces the discordance rates of Ki-67 readings between pathologists. Laboratory technique is only of minor importance (even over a large antibody dilution range), and counting nuclei does not improve accuracy, but rather aggravates deviations from the group mean values. Disclosure of Interest: None declared.
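For readers unfamiliar with the metric itself, the short sketch below (entirely hypothetical counts, not the study's data) shows how a Ki-67 labelling index is derived from a 500-nucleus count and how between-reader spread can be summarized, which is the discordance the working group set out to reduce.

```python
# Hypothetical Ki-67 labelling indices from five readers counting the same field.
import statistics

counts = {"A": 92, "B": 110, "C": 85, "D": 140, "E": 100}   # positive nuclei (hypothetical)
n_counted = 500                                              # nuclei counted per reader

indices = {p: 100 * c / n_counted for p, c in counts.items()}        # Ki-67 index in %
mean_idx = statistics.mean(indices.values())
cv = 100 * statistics.stdev(indices.values()) / mean_idx             # between-reader CV

for p, idx in indices.items():
    print(f"pathologist {p}: Ki-67 = {idx:.1f}%")
print(f"mean = {mean_idx:.1f}%, between-reader coefficient of variation = {cv:.0f}%")
```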

Relevance:

100.00%

Publisher:

Abstract:

Background It has been hypothesized that children and adolescents might be more vulnerable to possible health effects from mobile phone exposure than adults. We investigated whether mobile phone use is associated with brain tumor risk among children and adolescents. Methods CEFALO is a multicenter case-control study conducted in Denmark, Sweden, Norway, and Switzerland that includes all children and adolescents aged 7-19 years who were diagnosed with a brain tumor between 2004 and 2008. We conducted interviews, in person, with 352 case patients (participation rate: 83%) and 646 control subjects (participation rate: 71%) and their parents. Control subjects were randomly selected from population registries and matched by age, sex, and geographical region. We asked about mobile phone use and included mobile phone operator records when available. Odds ratios (ORs) for brain tumor risk and 95% confidence intervals (CIs) were calculated using conditional logistic regression models. Results Regular users of mobile phones were not statistically significantly more likely to have been diagnosed with brain tumors compared with nonusers (OR = 1.36; 95% CI = 0.92 to 2.02). Children who started to use mobile phones at least 5 years ago were not at increased risk compared with those who had never regularly used mobile phones (OR = 1.26, 95% CI = 0.70 to 2.28). In a subset of study participants for whom operator-recorded data were available, brain tumor risk was related to the time elapsed since the mobile phone subscription was started but not to the amount of use. No increased risk of brain tumors was observed for brain areas receiving the highest amount of exposure. Conclusion The absence of an exposure-response relationship either in terms of the amount of mobile phone use or by localization of the brain tumor argues against a causal association.
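The following minimal sketch illustrates the basic quantity reported above, an odds ratio with a 95% Wald confidence interval, computed from a hypothetical unmatched 2x2 exposure table. CEFALO itself used conditional logistic regression on matched case-control sets, so this is only a simplified illustration with made-up counts.

```python
# Unadjusted odds ratio and 95% CI from a hypothetical 2x2 table.
import math

# Rows: cases / controls; columns: regular mobile phone users / non-users (hypothetical counts
# that only respect the reported totals of 352 cases and 646 controls).
a, b = 170, 182   # cases: exposed, unexposed
c, d = 280, 366   # controls: exposed, unexposed

odds_ratio = (a * d) / (b * c)
se_log_or = math.sqrt(1/a + 1/b + 1/c + 1/d)          # standard error of log(OR)
lo = math.exp(math.log(odds_ratio) - 1.96 * se_log_or)
hi = math.exp(math.log(odds_ratio) + 1.96 * se_log_or)

print(f"OR = {odds_ratio:.2f}, 95% CI = {lo:.2f} to {hi:.2f}")
# Note: the matched, conditional analysis used in the study generally gives a
# different (typically wider) interval than this crude calculation.
```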

Relevance:

100.00%

Publisher:

Abstract:

BACKGROUND: Clinical practice does not always reflect best practice and evidence, partly because of unconscious acts of omission, information overload, or inaccessible information. Reminders may help clinicians overcome these problems by prompting the doctor to recall information that they already know or would be expected to know and by providing information or guidance in a more accessible and relevant format, at a particularly appropriate time. OBJECTIVES: To evaluate the effects of reminders automatically generated through a computerized system and delivered on paper to healthcare professionals on processes of care (related to healthcare professionals' practice) and outcomes of care (related to patients' health condition). SEARCH METHODS: For this update the EPOC Trials Search Co-ordinator searched the following databases between June 11-19, 2012: The Cochrane Central Register of Controlled Trials (CENTRAL) and Cochrane Library (Economics, Methods, and Health Technology Assessment sections), Issue 6, 2012; MEDLINE, Ovid (1946- ), Daily Update, and In-process; EMBASE, Ovid (1947- ); CINAHL, EbscoHost (1980- ); EPOC Specialised Register, Reference Manager, and INSPEC, Engineering Village. The authors reviewed reference lists of related reviews and studies. SELECTION CRITERIA: We included individual or cluster-randomized controlled trials (RCTs) and non-randomized controlled trials (NRCTs) that evaluated the impact of computer-generated reminders delivered on paper to healthcare professionals on processes and/or outcomes of care. DATA COLLECTION AND ANALYSIS: Review authors working in pairs independently screened studies for eligibility and abstracted data. We contacted authors to obtain important missing information for studies that were published within the last 10 years. For each study, we extracted the primary outcome when it was defined or calculated the median effect size across all reported outcomes. We then calculated the median absolute improvement and interquartile range (IQR) in process adherence across included studies using the primary outcome or median outcome as representative outcome. MAIN RESULTS: In the 32 included studies, computer-generated reminders delivered on paper to healthcare professionals achieved moderate improvement in professional practices, with a median improvement of processes of care of 7.0% (IQR: 3.9% to 16.4%). Implementing reminders alone improved care by 11.2% (IQR 6.5% to 19.6%) compared with usual care, while implementing reminders in addition to another intervention improved care by only 4.0% (IQR 3.0% to 6.0%) compared with the other intervention. The quality of evidence for these comparisons was rated as moderate according to the GRADE approach. Two reminder features were associated with larger effect sizes: providing space on the reminder for the provider to enter a response (median 13.7% versus 4.3% for no response, P value = 0.01) and providing an explanation of the content or advice on the reminder (median 12.0% versus 4.2% for no explanation, P value = 0.02). Median improvement in processes of care also differed according to the behaviour the reminder targeted: for instance, reminders to vaccinate improved processes of care by 13.1% (IQR 12.2% to 20.7%) compared with other targeted behaviours. In the only study that had sufficient power to detect a clinically significant effect on outcomes of care, reminders were not associated with significant improvements.
AUTHORS' CONCLUSIONS: There is moderate-quality evidence that computer-generated reminders delivered on paper to healthcare professionals achieve moderate improvements in processes of care. Two characteristics emerged as significant predictors of improvement: providing space on the reminder for a response from the clinician and providing an explanation of the reminder's content or advice. The heterogeneity of the reminder interventions included in this review also suggests that reminders can improve care in various settings under various conditions.
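The summary statistic used throughout this review, the median absolute improvement with its interquartile range across studies, can be reproduced in a few lines; the per-study effect sizes below are hypothetical and serve only to show the calculation.

```python
# Median absolute improvement and IQR across studies (hypothetical effect sizes).
import numpy as np

# Hypothetical per-study absolute improvements in process adherence (%), one per study.
improvements = np.array([2.1, 3.9, 5.0, 6.5, 7.0, 8.3, 13.7, 16.4, 19.6])

median = np.median(improvements)
q1, q3 = np.percentile(improvements, [25, 75])
print(f"median improvement = {median:.1f}% (IQR {q1:.1f}% to {q3:.1f}%)")
```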

Relevance:

100.00%

Publisher:

Abstract:

Ces dernières années, de nombreuses recherches ont mis en évidence les effets toxiques des micropolluants organiques pour les espèces de nos lacs et rivières. Cependant, la plupart de ces études se sont focalisées sur la toxicité des substances individuelles, alors que les organismes sont exposés tous les jours à des milliers de substances en mélange. Or les effets de ces cocktails ne sont pas négligeables. Cette thèse de doctorat s'est ainsi intéressée aux modèles permettant de prédire le risque environnemental de ces cocktails pour le milieu aquatique. Le principal objectif a été d'évaluer le risque écologique des mélanges de substances chimiques mesurées dans le Léman, mais aussi d'apporter un regard critique sur les méthodologies utilisées afin de proposer certaines adaptations pour une meilleure estimation du risque. Dans la première partie de ce travail, le risque des mélanges de pesticides et médicaments pour le Rhône et pour le Léman a été établi en utilisant des approches envisagées notamment dans la législation européenne. Il s'agit d'approches de « screening », c'est-à-dire permettant une évaluation générale du risque des mélanges. Une telle approche permet de mettre en évidence les substances les plus problématiques, c'est-à-dire contribuant le plus à la toxicité du mélange. Dans notre cas, il s'agit essentiellement de 4 pesticides. L'étude met également en évidence que toutes les substances, même en traces infimes, contribuent à l'effet du mélange. Cette constatation a des implications en termes de gestion de l'environnement. En effet, ceci implique qu'il faut réduire toutes les sources de polluants, et pas seulement les plus problématiques. Mais l'approche proposée présente également un biais important au niveau conceptuel, ce qui rend son utilisation discutable, en dehors d'un screening, et nécessiterait une adaptation au niveau des facteurs de sécurité employés. Dans une deuxième partie, l'étude s'est portée sur l'utilisation des modèles de mélanges dans le calcul de risque environnemental. En effet, les modèles de mélanges ont été développés et validés espèce par espèce, et non pour une évaluation sur l'écosystème en entier. Leur utilisation devrait donc passer par un calcul par espèce, ce qui est rarement fait en raison du manque de données écotoxicologiques à disposition. Le but était donc de comparer, avec des valeurs générées aléatoirement, le calcul de risque effectué selon une méthode rigoureuse, espèce par espèce, avec celui effectué classiquement où les modèles sont appliqués sur l'ensemble de la communauté sans tenir compte des variations inter-espèces. Les résultats sont dans la majorité des cas similaires, ce qui valide l'approche utilisée traditionnellement. En revanche, ce travail a permis de déterminer certains cas où l'application classique peut conduire à une sous- ou sur-estimation du risque. Enfin, une dernière partie de cette thèse s'est intéressée à l'influence que les cocktails de micropolluants ont pu avoir sur les communautés in situ. Pour ce faire, une approche en deux temps a été adoptée. Tout d'abord la toxicité de quatorze herbicides détectés dans le Léman a été déterminée. Sur la période étudiée, de 2004 à 2009, cette toxicité due aux herbicides a diminué, passant de 4% d'espèces affectées à moins de 1%. Ensuite, la question était de savoir si cette diminution de toxicité avait un impact sur le développement de certaines espèces au sein de la communauté des algues.
Pour ce faire, une analyse statistique a permis d'isoler d'autres facteurs pouvant avoir une influence sur la flore, comme la température de l'eau ou la présence de phosphates, et ainsi de constater quelles espèces se sont révélées avoir été influencées, positivement ou négativement, par la diminution de la toxicité dans le lac au fil du temps. Fait intéressant, une partie d'entre elles avait déjà montré des comportements similaires dans des études en mésocosmes. En conclusion, ce travail montre qu'il existe des modèles robustes pour prédire le risque des mélanges de micropolluants sur les espèces aquatiques, et qu'ils peuvent être utilisés pour expliquer le rôle des substances dans le fonctionnement des écosystèmes. Toutefois, ces modèles ont bien sûr des limites et des hypothèses sous-jacentes qu'il est important de considérer lors de leur application. - Depuis plusieurs années, les risques que posent les micropolluants organiques pour le milieu aquatique préoccupent grandement les scientifiques ainsi que notre société. En effet, de nombreuses recherches ont mis en évidence les effets toxiques que peuvent avoir ces substances chimiques sur les espèces de nos lacs et rivières, quand elles se retrouvent exposées à des concentrations aiguës ou chroniques. Cependant, la plupart de ces études se sont focalisées sur la toxicité des substances individuelles, c'est-à-dire considérées séparément. Actuellement, il en est de même dans les procédures de régulation européennes, concernant la partie évaluation du risque pour l'environnement d'une substance. Or, les organismes sont exposés tous les jours à des milliers de substances en mélange, et les effets de ces "cocktails" ne sont pas négligeables. L'évaluation du risque écologique que posent ces mélanges de substances doit donc être abordée de la manière la plus appropriée et la plus fiable possible. Dans la première partie de cette thèse, nous nous sommes intéressés aux méthodes dont l'intégration dans les législations européennes est actuellement envisagée pour l'évaluation du risque des mélanges pour le milieu aquatique. Ces méthodes sont basées sur le modèle d'addition des concentrations, avec l'utilisation des valeurs de concentrations des substances estimées sans effet dans le milieu (PNEC), ou à partir des valeurs des concentrations d'effet (CE50) sur certaines espèces d'un niveau trophique avec la prise en compte de facteurs de sécurité. Nous avons appliqué ces méthodes à deux cas spécifiques, le lac Léman et le Rhône situés en Suisse, et discuté les résultats de ces applications. Ces premières étapes d'évaluation ont montré que le risque des mélanges pour ces cas d'étude atteint rapidement une valeur au-dessus d'un seuil critique. Cette valeur atteinte est généralement due à deux ou trois substances principales. Les procédures proposées permettent donc d'identifier les substances les plus problématiques pour lesquelles des mesures de gestion, telles que la réduction de leur entrée dans le milieu aquatique, devraient être envisagées. Cependant, nous avons également constaté que le niveau de risque associé à ces mélanges de substances n'est pas négligeable, même sans tenir compte de ces substances principales. En effet, l'accumulation des substances, même en traces infimes, atteint un seuil critique, ce qui devient plus difficile en termes de gestion du risque. En outre, nous avons souligné un manque de fiabilité dans ces procédures, qui peuvent conduire à des résultats contradictoires en termes de risque.
Ceci est lié à l'incompatibilité des facteurs de sécurité utilisés dans les différentes méthodes. Dans la deuxième partie de la thèse, nous avons étudié la fiabilité de méthodes plus avancées dans la prédiction de l'effet des mélanges pour les communautés évoluant dans le système aquatique. Ces méthodes reposent sur le modèle d'addition des concentrations (CA) ou d'addition des réponses (RA) appliqués sur les courbes de distribution de la sensibilité des espèces (SSD) aux substances. En effet, les modèles de mélanges ont été développés et validés pour être appliqués espèce par espèce, et non pas sur plusieurs espèces agrégées simultanément dans les courbes SSD. Nous avons ainsi proposé une procédure plus rigoureuse, pour l'évaluation du risque d'un mélange, qui serait d'appliquer d'abord les modèles CA ou RA à chaque espèce séparément, et, dans une deuxième étape, combiner les résultats afin d'établir une courbe SSD du mélange. Malheureusement, cette méthode n'est pas applicable dans la plupart des cas, car elle nécessite trop de données généralement indisponibles. Par conséquent, nous avons comparé, avec des valeurs générées aléatoirement, le calcul de risque effectué selon cette méthode plus rigoureuse, avec celle effectuée traditionnellement, afin de caractériser la robustesse de cette approche qui consiste à appliquer les modèles de mélange sur les courbes SSD. Nos résultats ont montré que l'utilisation de CA directement sur les SSDs peut conduire à une sous-estimation de la concentration du mélange affectant 5% ou 50% des espèces, en particulier lorsque les substances présentent un grand écart-type dans leur distribution de la sensibilité des espèces. L'application du modèle RA peut quant à elle conduire à des sur- ou sous-estimations, principalement en fonction de la pente des courbes dose-réponse de chaque espèce composant les SSDs. La sous-estimation avec RA devient potentiellement importante lorsque le rapport entre l'EC50 et l'EC10 de la courbe dose-réponse des espèces est plus petit que 100. Toutefois, la plupart des substances, selon des cas réels, présentent des données d'écotoxicité qui font que le risque du mélange calculé par la méthode des modèles appliqués directement sur les SSDs reste cohérent et surestimerait plutôt légèrement le risque. Ces résultats valident ainsi l'approche utilisée traditionnellement. Néanmoins, il faut garder à l'esprit cette source d'erreur lorsqu'on procède à une évaluation du risque d'un mélange avec cette méthode traditionnelle, en particulier quand les SSD présentent une distribution des données en dehors des limites déterminées dans cette étude. Enfin, dans la dernière partie de cette thèse, nous avons confronté des prédictions de l'effet de mélange avec des changements biologiques observés dans l'environnement. Dans cette étude, nous avons utilisé des données venant d'un suivi à long terme d'un grand lac européen, le lac Léman, ce qui offrait la possibilité d'évaluer dans quelle mesure la prédiction de la toxicité des mélanges d'herbicides expliquait les changements dans la composition de la communauté phytoplanctonique, à côté d'autres paramètres classiques de limnologie tels que les nutriments. Pour atteindre cet objectif, nous avons déterminé la toxicité des mélanges, sur plusieurs années, de 14 herbicides régulièrement détectés dans le lac, en utilisant les modèles CA et RA avec les courbes de distribution de la sensibilité des espèces. Un gradient temporel de toxicité décroissant a pu être constaté de 2004 à 2009.
Une analyse de redondance et de redondance partielle a montré que ce gradient explique une partie significative de la variation de la composition de la communauté phytoplanctonique, même après avoir enlevé l'effet de toutes les autres co-variables. De plus, certaines espèces qui se sont révélées avoir été influencées, positivement ou négativement, par la diminution de la toxicité dans le lac au fil du temps ont montré des comportements similaires dans des études en mésocosmes. On peut en conclure que la toxicité du mélange herbicide est l'un des paramètres clés pour expliquer les changements de phytoplancton dans le lac Léman. En conclusion, il existe diverses méthodes pour prédire le risque des mélanges de micropolluants pour les espèces aquatiques, et ces mélanges peuvent jouer un rôle dans le fonctionnement des écosystèmes. Toutefois, ces modèles ont bien sûr des limites et des hypothèses sous-jacentes qu'il est important de considérer lors de leur application, avant d'utiliser leurs résultats pour la gestion des risques environnementaux. - For several years now, scientists as well as society at large have been concerned by the risk that organic micropollutants may pose to the aquatic environment. Indeed, several studies have shown the toxic effects these substances may induce on organisms living in our lakes or rivers, especially when they are exposed to acute or chronic concentrations. However, most of the studies focused on the toxicity of single compounds, i.e. considered individually. The same also applies to the current European regulations concerning the environmental risk assessment procedures for these substances. But aquatic organisms are typically exposed every day simultaneously to thousands of organic compounds. The toxic effects resulting from these "cocktails" cannot be neglected. The ecological risk assessment of mixtures of such compounds has therefore to be addressed by scientists in the most reliable and appropriate way. In the first part of this thesis, the procedures currently envisioned for aquatic mixture risk assessment in European legislation are described. These methodologies are based on the mixture model of concentration addition and the use of the predicted no effect concentrations (PNEC) or effect concentrations (EC50) with assessment factors. These principal approaches were applied to two specific case studies, Lake Geneva and the River Rhône in Switzerland, including a discussion of the outcomes of such applications. These first-level assessments showed that the mixture risks for these studied cases rapidly exceeded the critical value. This exceedance is generally due to two or three main substances. The proposed procedures therefore allow the identification of the most problematic substances, for which management measures, such as a reduction of their entry into the aquatic environment, should be envisioned. However, it was also shown that the risk levels associated with mixtures of compounds are not negligible, even without considering these main substances. Indeed, it is the sum of the substances that is problematic, which is more challenging in terms of risk management. Moreover, a lack of reliability in the procedures was highlighted, which can lead to contradictory results in terms of risk. This result is linked to the inconsistency in the assessment factors applied in the different methods. In the second part of the thesis, the reliability of more advanced procedures to predict mixture effects on communities in the aquatic system was investigated.
These established methodologies combine the model of concentration addition (CA) or response addition (RA) with species sensitivity distribution curves (SSD). Indeed, the mixture effect predictions were shown to be consistent only when the mixture models are applied to a single species, and not to several species simultaneously aggregated into SSDs. Hence, a more stringent procedure for mixture risk assessment is proposed, which would be to first apply the CA or RA models to each species separately and, in a second step, to combine the results to build an SSD for the mixture. Unfortunately, this methodology is not applicable in most cases, because it requires large data sets usually not available. Therefore, the differences between the two methodologies were studied with datasets created artificially to characterize the robustness of the traditional approach of applying the models to species sensitivity distributions. The results showed that the use of CA directly on SSDs might lead to underestimations of the mixture concentration affecting 5% or 50% of species, especially when substances present a large standard deviation in their species sensitivity distribution. The application of RA can lead to over- or underestimates, depending mainly on the slope of the dose-response curves of the individual species. The potential underestimation with RA becomes important when the ratio between the EC50 and the EC10 for the dose-response curves of the species composing the SSD is smaller than 100. However, considering common real cases of ecotoxicity data for substances, the mixture risk calculated by the methodology applying mixture models directly on SSDs remains consistent and would rather slightly overestimate the risk. These results can be used as a theoretical validation of the currently applied methodology. Nevertheless, when assessing the risk of mixtures, one has to keep in mind this source of error with this classical methodology, especially when SSDs present a distribution of the data outside the range determined in this study. Finally, in the last part of this thesis, we confronted the mixture effect predictions with biological changes observed in the environment. In this study, long-term monitoring of a European great lake, Lake Geneva, provided the opportunity to assess to what extent the predicted toxicity of herbicide mixtures explains the changes in the composition of the phytoplankton community, alongside other classical limnological parameters such as nutrients. To reach this goal, the gradient of the mixture toxicity of 14 herbicides regularly detected in the lake was calculated, using concentration addition and response addition models. A decreasing temporal gradient of toxicity was observed from 2004 to 2009. Redundancy analysis and partial redundancy analysis showed that this gradient explains a significant portion of the variation in phytoplankton community composition, even after removing the effect of all other co-variables. Moreover, some species that appeared to be influenced, positively or negatively, by the decrease of toxicity in the lake over time showed similar behaviors in mesocosm studies. It could be concluded that the herbicide mixture toxicity is one of the key parameters to explain phytoplankton changes in Lake Geneva. To conclude, different methods exist to predict the risk of mixtures in ecosystems. However, their reliability varies depending on the underlying hypotheses.
One should therefore carefully consider these hypotheses, as well as the limits of the approaches, before using the results for environmental risk management.
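The two reference mixture models named in this abstract, concentration addition (CA) and response addition (RA, independent action), can be illustrated for a single species with assumed log-logistic dose-response curves. The sketch below uses hypothetical substances and is not the thesis code; applying it species by species and then aggregating the results corresponds to the stricter procedure discussed above.

```python
# CA vs. RA mixture-effect prediction for one species (hypothetical substances).
from math import prod

# (EC50, slope) of assumed log-logistic curves, and exposure concentration of each
# substance in the mixture (same units as the EC50s).
substances = [
    {"ec50": 10.0, "slope": 2.0, "conc": 2.0},
    {"ec50": 50.0, "slope": 1.5, "conc": 5.0},
]

def effect(conc, ec50, slope):
    """Effect fraction (0..1) of a single substance at a given concentration."""
    return 1.0 / (1.0 + (ec50 / conc) ** slope)

def ecx(x, ec50, slope):
    """Concentration producing effect fraction x (inverse of the log-logistic curve)."""
    return ec50 * (x / (1.0 - x)) ** (1.0 / slope)

def mixture_effect_ca(subs, tol=1e-9):
    """CA: find the effect level x at which the toxic units sum to one (bisection)."""
    lo, hi = 1e-9, 1.0 - 1e-9
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        toxic_units = sum(s["conc"] / ecx(mid, s["ec50"], s["slope"]) for s in subs)
        if toxic_units > 1.0:   # candidate effect level too low -> move up
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

def mixture_effect_ra(subs):
    """RA (independent action): combine single-substance effects as independent probabilities."""
    return 1.0 - prod(1.0 - effect(s["conc"], s["ec50"], s["slope"]) for s in subs)

print(f"CA mixture effect: {mixture_effect_ca(substances):.3f}")
print(f"RA mixture effect: {mixture_effect_ra(substances):.3f}")
```

For these hypothetical inputs CA predicts a larger combined effect than RA, which is the typical ordering when individual exposures sit below their EC50s; the thesis examines how this comparison behaves once the models are applied to whole SSDs instead of single species.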

Relevance:

100.00%

Publisher:

Abstract:

Δ9-Tetrahydrocannabinol (THC) is frequently found in the blood of drivers suspected of driving under the influence of cannabis or involved in traffic crashes. The present study used a double-blind crossover design to compare the effects of medium (16.5 mg THC) and high doses (45.7 mg THC) of hemp milk decoctions or of a medium dose of dronabinol (20 mg synthetic THC, Marinol) on several skills required for safe driving. Forensic interpretation of cannabinoid blood concentrations was attempted using the models proposed by Daldrup (cannabis influencing factor, or CIF) and by Huestis and coworkers. First, the time-concentration profiles of THC, 11-hydroxy-Δ9-tetrahydrocannabinol (11-OH-THC, the active metabolite of THC), and 11-nor-9-carboxy-Δ9-tetrahydrocannabinol (THCCOOH) in whole blood were determined by gas chromatography-mass spectrometry with negative ion chemical ionization. Compared to smoking studies, relatively low concentrations were measured in blood. The highest mean THC concentration (8.4 ng/mL) was achieved 1 h after ingestion of the strongest decoction. The mean maximum 11-OH-THC level (12.3 ng/mL) slightly exceeded that of THC. THCCOOH reached its highest mean concentration (66.2 ng/mL) 2.5-5.5 h after intake. Individual blood levels showed considerable intersubject variability. The willingness to drive was influenced by the importance of the requested task. Under significant cannabinoid influence, the participants refused to drive when asked whether they would agree to accomplish several unimportant tasks (e.g., driving a friend to a party). Most of the participants reported a significant feeling of intoxication and did not appreciate the effects, notably those felt after drinking the strongest decoction. Road sign and tracking testing revealed obvious and statistically significant differences between placebo and treatments. A marked impairment was detected after ingestion of the strongest decoction. A CIF value greater than 10, the CIF relying on the molar ratio of the main active to inactive cannabinoids, was found to correlate with a strong feeling of intoxication. It also matched a significant decrease in the willingness to drive and a significant impairment in tracking performance. The mathematical model II proposed by Huestis et al. (1992) provided at best a rough estimate of the time of oral administration, with 27% of actual values being out of range of the 95% confidence interval. The sum of the THC and 11-OH-THC blood concentrations provided a better estimate of impairment than THC alone. This controlled clinical study points out the negative influence on fitness to drive of medium or high oral doses of THC or dronabinol.
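As a small numerical aside, the sketch below converts the mean peak whole-blood concentrations reported above from mass units (ng/mL) to molar units and sums the two active compounds (THC + 11-OH-THC), the quantity the study found to track impairment better than THC alone. It deliberately does not reproduce Daldrup's exact CIF formula, whose full definition (including the treatment of free versus conjugated THCCOOH) is not given in the abstract; the molar masses are the usual values for the free compounds.

```python
# Mass-to-molar conversion of the reported mean peak cannabinoid concentrations.
MOLAR_MASS = {"THC": 314.5, "11-OH-THC": 330.5, "THCCOOH": 344.5}   # g/mol, approximate

peak_ng_per_ml = {"THC": 8.4, "11-OH-THC": 12.3, "THCCOOH": 66.2}   # from the abstract

def to_nmol_per_l(ng_per_ml, molar_mass):
    # 1 ng/mL = 1 ug/L; dividing by g/mol gives umol/L, and x1000 gives nmol/L.
    return ng_per_ml / molar_mass * 1000

nmol = {k: to_nmol_per_l(v, MOLAR_MASS[k]) for k, v in peak_ng_per_ml.items()}
active_sum = nmol["THC"] + nmol["11-OH-THC"]

for name, value in nmol.items():
    print(f"{name}: {value:.1f} nmol/L")
print(f"THC + 11-OH-THC (active sum): {active_sum:.1f} nmol/L")
```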

Relevance:

100.00%

Publisher:

Abstract:

For the last 2 decades, supertree reconstruction has been an active field of research and has seen the development of a large number of major algorithms. Because of the growing popularity of the supertree methods, it has become necessary to evaluate the performance of these algorithms to determine which are the best options (especially with regard to the supermatrix approach that is widely used). In this study, seven of the most commonly used supertree methods are investigated by using a large empirical data set (in terms of number of taxa and molecular markers) from the worldwide flowering plant family Sapindaceae. Supertree methods were evaluated using several criteria: similarity of the supertrees with the input trees, similarity between the supertrees and the total evidence tree, level of resolution of the supertree and computational time required by the algorithm. Additional analyses were also conducted on a reduced data set to test if the performance levels were affected by the heuristic searches rather than the algorithms themselves. Based on our results, two main groups of supertree methods were identified: on one hand, the matrix representation with parsimony (MRP), MinFlip, and MinCut methods performed well according to our criteria, whereas the average consensus, split fit, and most similar supertree methods showed a poorer performance or at least did not behave the same way as the total evidence tree. Results for the super distance matrix, that is, the most recent approach tested here, were promising with at least one derived method performing as well as MRP, MinFlip, and MinCut. The output of each method was only slightly improved when applied to the reduced data set, suggesting a correct behavior of the heuristic searches and a relatively low sensitivity of the algorithms to data set sizes and missing data. Results also showed that the MRP analyses could reach a high level of quality even when using a simple heuristic search strategy, with the exception of MRP with Purvis coding scheme and reversible parsimony. The future of supertrees lies in the implementation of a standardized heuristic search for all methods and the increase in computing power to handle large data sets. The latter would prove to be particularly useful for promising approaches such as the maximum quartet fit method that yet requires substantial computing power.

Relevance:

100.00%

Publisher:

Abstract:

BACKGROUND: Little information is available on resistance to anti-malarial drugs in the Solomon Islands (SI). The analysis of single nucleotide polymorphisms (SNPs) in drug resistance associated parasite genes is a potential alternative to classical time- and resource-consuming in vivo studies to monitor drug resistance. Mutations in pfmdr1 and pfcrt were shown to indicate chloroquine (CQ) resistance, mutations in pfdhfr and pfdhps indicate sulphadoxine-pyrimethamine (SP) resistance, and mutations in pfATPase6 indicate resistance to artemisinin derivatives. METHODS: The relationship between the rate of treatment failure among 25 symptomatic Plasmodium falciparum-infected patients presenting at the clinic and the pattern of resistance-associated SNPs in P. falciparum infecting 76 asymptomatic individuals from the surrounding population was investigated. The study was conducted in the SI in 2004. Patients presenting at a local clinic with microscopically confirmed P. falciparum malaria were recruited and treated with CQ+SP. Rates of treatment failure were estimated during a 28-day follow-up period. In parallel, a DNA microarray technology was used to analyse mutations associated with CQ, SP, and artemisinin derivative resistance among samples from the asymptomatic community. Mutation and haplotype frequencies were determined, as well as the multiplicity of infection. RESULTS: The in vivo study showed an efficacy of 88% for CQ+SP to treat P. falciparum infections. DNA microarray analyses indicated a low diversity in the parasite population with one major haplotype present in 98.7% of the cases. It was composed of fixed mutations at position 86 in pfmdr1, positions 72, 75, 76, 220, 326 and 356 in pfcrt, and positions 59 and 108 in pfdhfr. No mutation was observed in pfdhps or in pfATPase6. The mean multiplicity of infection was 1.39. CONCLUSION: This work provides the first insight into drug resistance markers of P. falciparum in the SI. The obtained results indicated the presence of a very homogenous P. falciparum population circulating in the community. Although CQ+SP could still clear most infections, seven fixed mutations associated with CQ resistance and two fixed mutations related to SP resistance were observed. Whether the absence of mutations in pfATPase6 indicates the efficacy of artemisinin derivatives remains to be proven.
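The two community-level summary quantities reported here, haplotype frequencies and the mean multiplicity of infection (MOI), are simple tallies over per-sample genotyping calls. The sketch below uses schematic, made-up calls purely to show the bookkeeping; the haplotype labels only loosely echo the loci named in the abstract.

```python
# Tallying haplotype frequencies and mean multiplicity of infection (illustrative data).
from collections import Counter

# Each sample lists the haplotypes detected; more than one entry means a polyclonal infection.
samples = {
    "s01": ["pfcrt/pfmdr1-86Y/pfdhfr-59R+108N"],
    "s02": ["pfcrt/pfmdr1-86Y/pfdhfr-59R+108N"],
    "s03": ["pfcrt/pfmdr1-86Y/pfdhfr-59R+108N", "wild-type"],
    "s04": ["pfcrt/pfmdr1-86Y/pfdhfr-59R+108N"],
}

haplotype_counts = Counter(h for haps in samples.values() for h in haps)
total_detected = sum(haplotype_counts.values())
moi = sum(len(haps) for haps in samples.values()) / len(samples)

for hap, n in haplotype_counts.items():
    print(f"{hap}: {100 * n / total_detected:.1f}% of detected haplotypes")
print(f"mean multiplicity of infection = {moi:.2f}")
```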

Relevance:

100.00%

Publisher:

Abstract:

QUESTIONS UNDER STUDY: The diagnostic significance of clinical symptoms/signs of influenza has mainly been assessed in the context of controlled studies with stringent inclusion criteria. There was a need to extend the evaluation of these predictors not only in the context of general practice but also according to the duration of symptoms and to the dynamics of the epidemic. PRINCIPLES: A prospective study conducted in the Medical Outpatient Clinic in the winter season 1999-2000. Patients with influenza-like syndrome were included, as long as the primary care physician envisaged the diagnosis of influenza. The physician administered a questionnaire, and a throat swab was performed and a culture obtained to document the diagnosis of influenza. RESULTS: 201 patients were included in the study. 52% were culture-positive for influenza. By univariate analysis, temperature >37.8°C (OR 4.2; 95% CI 2.3-7.7), duration of symptoms <48 hours (OR 3.2; 1.8-5.7), cough (OR 3.2; 1-10.4) and myalgia (OR 2.8; 1.0-7.5) were associated with a diagnosis of influenza. In a multivariable logistic analysis, the best model predicting influenza was the association of a duration of symptoms <48 hours, medical attendance at the beginning of the epidemic (weeks 49-50), fever >37.8°C and cough, with a sensitivity of 79%, specificity of 69%, positive predictive value of 67%, negative predictive value of 73% and an area under the ROC curve of 0.74. CONCLUSIONS: Besides relevant symptoms and signs, the physician should also consider the duration of symptoms and the epidemiological context (start, peak or end of the epidemic) in his appraisal, since both parameters considerably modify the value of the clinical predictors when assessing the probability of a patient having influenza.
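The performance figures quoted for the best model follow from a standard 2x2 table of rule result versus culture result. The sketch below uses hypothetical counts chosen to be roughly consistent with the reported number of patients, prevalence, sensitivity and specificity, so the derived predictive values only approximate those stated in the abstract.

```python
# Diagnostic-test metrics from a 2x2 table (hypothetical counts, ~201 patients, 52% positive).
tp, fn = 83, 22   # culture-positive patients: prediction rule positive / negative
fp, tn = 30, 66   # culture-negative patients: prediction rule positive / negative

sensitivity = tp / (tp + fn)
specificity = tn / (tn + fp)
ppv = tp / (tp + fp)   # positive predictive value
npv = tn / (tn + fn)   # negative predictive value

print(f"sensitivity = {sensitivity:.0%}, specificity = {specificity:.0%}")
print(f"PPV = {ppv:.0%}, NPV = {npv:.0%}")
```

Because predictive values depend on prevalence, the same sensitivity and specificity would give different PPV/NPV at the start, peak or end of an epidemic, which is the practical point made in the conclusions.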

Relevance:

100.00%

Publisher:

Abstract:

Chapter 2 Bankruptcy Initiation in the New Era of Chapter 11. 2.1 Abstract: The bankruptcy act of 1978 placed corporate managers (as debtor in possession) in control of the bankruptcy process. Between 2000 and 2001 managers apparently lost this control to secured creditors. This study examines financial ratios of firms filing for bankruptcy between 1993 and 2004 and tests the hypothesis that the change from manager to creditor control created or exacerbated the managerial (and dominant creditor) incentive to delay bankruptcy filing. We find a clear deterioration in the financial conditions of firms filing after 2001. This is consistent with managers (or creditors who control them) delaying filing for bankruptcy. We also observe patterns of operating losses and liquidations that suggest adverse economic consequences from such delay.
Chapter 3 Bankruptcy Resolution: Priority of Claims with the Secured Creditor in Control. 3.1 Abstract: We present new evidence on the violation of priority of claims in bankruptcy using a sample of 222 firms that filed for Chapter 11 bankruptcy over the 1993-2004 period. Our study reveals a dramatic reduction in the violations of priority of claims compared to research on prior periods. These results are consistent with changes in both court practices and laws transferring power to the secured creditors over our sample period. We also find an increase in the time from the date of a bankruptcy filing to reaching plan confirmation where priority is not violated.
Chapter 4 Bankruptcy Resolution: Speed, APR Violations and Delaware. 4.1 Abstract: We analyze the speed of bankruptcy resolution on a sample of 294 US firms filing for bankruptcy in the 1993-2004 period. We find a strong association between the type of Chapter 11 filing and the speed of bankruptcy resolution. We also find that violations of the absolute priority rule reduce the time from bankruptcy filing to plan confirmation. This is consistent with the hypothesis that creditors are willing to grant concessions in exchange for faster bankruptcy resolution. Furthermore, after controlling for the type of filing and violations of the absolute priority rule, we do not find any difference in the duration of the bankruptcy process for firms filing in Delaware, New York, or other bankruptcy districts.
Chapter 5 Financial Distress and Corporate Control. 5.1 Abstract: We examine the replacement rates of directors and executives in 63 firms filing for bankruptcy during the 1995-2002 period. We find that over 76% of directors and executives are replaced in the four-year period from the year prior to the bankruptcy filing through three years after. These rates are higher than those found in prior research and are consistent with changes in bankruptcy procedures and practice (i.e. the increased secured creditors' control over the process due to both DIP financing and changes in the Uniform Commercial Code) having a significant impact on the corporate governance of firms in financial distress.
Chapter 6 Financial Statement Restatements: Decision to File for Bankruptcy. 6.1 Abstract: On a sample of 201 firms that restated their financial statements we analyze the process of regaining investor trust in a two-year period after the restatement. We find that 20% of firms that restate their financial statements file for bankruptcy or restructure out of court. Our results also indicate that the decisions to change auditor or management are correlated with a higher probability of failure. Increased media attention appears to partly explain the decision of firms to restructure their debt or file for bankruptcy.

Relevance:

100.00%

Publisher:

Abstract:

Valganciclovir and ganciclovir are widely used for the prevention of cytomegalovirus (CMV) infection in solid organ transplant recipients, with a major impact on patients' morbidity and mortality. Oral valganciclovir, the ester prodrug of ganciclovir, has been developed to enhance the oral bioavailability of ganciclovir. It crosses the gastrointestinal barrier through peptide transporters and is then hydrolysed into ganciclovir. This review aims to describe the current knowledge of the pharmacokinetic and pharmacodynamic characteristics of this agent, and to address the issue of therapeutic drug monitoring. Based on currently available literature, ganciclovir pharmacokinetics in adult solid organ transplant recipients receiving oral valganciclovir are characterized by bioavailability of 66 +/- 10% (mean +/- SD), a maximum plasma concentration of 3.1 +/- 0.8 mg/L after a dose of 450 mg and of 6.6 +/- 1.9 mg/L after a dose of 900 mg, a time to reach the maximum plasma concentration of 3.0 +/- 1.0 hours, area under the plasma concentration-time curve values of 29.1 +/- 5.3 mg.h/L and 51.9 +/- 18.3 mg.h/L (after 450 mg and 900 mg, respectively), apparent clearance of 12.4 +/- 3.8 L/h, an elimination half-life of 5.3 +/- 1.5 hours and an apparent terminal volume of distribution of 101 +/- 36 L. The apparent clearance is highly correlated with renal function, hence the dosage needs to be adjusted in proportion to the glomerular filtration rate. Unexplained interpatient variability is limited (18% in apparent clearance and 28% in the apparent central volume of distribution). There is no indication of erratic or limited absorption in given subgroups of patients; however, this may be of concern in patients with severe malabsorption. The in vitro pharmacodynamics of ganciclovir reveal a mean concentration producing 50% inhibition (IC(50)) among CMV clinical strains of 0.7 mg/L (range 0.2-1.9 mg/L). Systemic exposure of ganciclovir appears to be moderately correlated with clinical antiviral activity and haematotoxicity during CMV prophylaxis in high-risk transplant recipients. Low ganciclovir plasma concentrations have been associated with treatment failure and high concentrations with haematotoxicity and neurotoxicity, but no formal therapeutic or toxic ranges have been validated. The pharmacokinetic parameters of ganciclovir after valganciclovir administration (bioavailability, apparent clearance and volume of distribution) are fairly predictable in adult transplant patients, with little interpatient variability beyond the effect of renal function and bodyweight. Thus ganciclovir exposure can probably be controlled with sufficient accuracy by thorough valganciclovir dosage adjustment according to patient characteristics. In addition, the therapeutic margin of ganciclovir is loosely defined. The usefulness of systematic therapeutic drug monitoring in adult transplant patients therefore appears questionable; however, studies are still needed to extend knowledge to particular subgroups of patients or dosage regimens.
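Two of the population means reported above can be sanity-checked with generic one-compartment relations, as sketched below. These are textbook formulas, not the models used in the cited studies; the half-life relation is insensitive to whether clearance and volume are expressed as apparent (bioavailability-normalized) quantities, because that factor cancels, and the GFR ratio at the end is purely illustrative.

```python
# Back-of-the-envelope checks on the reported ganciclovir population means.
import math

cl_l_per_h = 12.4   # reported apparent clearance (L/h)
v_l = 101.0         # reported apparent terminal volume of distribution (L)

# Elimination half-life from CL and V: t1/2 = ln(2) * V / CL.
half_life_h = math.log(2) * v_l / cl_l_per_h
print(f"t1/2 = ln(2) * V / CL ~ {half_life_h:.1f} h (reported mean 5.3 h)")

# Because clearance tracks renal function, steady-state exposure at a fixed dose
# scales roughly with 1/GFR, which is why dosing is adjusted in proportion to GFR.
gfr_reference, gfr_patient = 100.0, 50.0   # mL/min, hypothetical values
print(f"exposure multiple at half the reference GFR ~ {gfr_reference / gfr_patient:.1f}x")
```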

Relevance:

100.00%

Publisher:

Abstract:

Background: Cocoa is rich in flavonoids, has anti-oxidative properties and increases the bioavailability of nitric oxide (NO). Adequate renal tissue oxygenation is crucial for the maintenance of renal function. The goal of this study was to investigate the effect of cocoa-rich dark chocolate (DC) on renal tissue oxygenation in humans, as compared to flavonoid-poor white chocolate (WC). Methods: Ten healthy volunteers with preserved kidney function (mean age ± SD 35 ± 12 years, 70% women, BMI 21 ± 3 kg/m²) underwent blood oxygenation level-dependent magnetic resonance imaging (BOLD-MRI) before and 2 hours after the ingestion of 1 g/kg of DC (70% cocoa). Renal tissue oxygenation was determined by the measurement of R2* maps on 4 coronal slices covering both kidneys. The mean R2* (= 1/T2*) values in the medulla and cortex were calculated, a low R2* indicating high tissue oxygenation. Eight participants also underwent BOLD-MRI at least 1 week later, before and 2 hours after the intake of 1 g/kg WC. Results: The mean medullary R2* was lower after DC intake compared to baseline (28.2 ± 1.3 s⁻¹ vs. 29.6 ± 1.3 s⁻¹, p = 0.04), whereas cortical and medullary R2* values did not change after WC intake. The change in medullary R2* correlated with the level of circulating (epi)catechins, metabolites of flavonoids (r = 0.74, p = 0.037), and was independent of plasma renin activity. Conclusion: This study suggests for the first time an increase of renal medullary oxygenation after the intake of dark chocolate. Whether this is linked to flavonoid-induced changes in renal perfusion or oxygen consumption, and whether cocoa has potentially renoprotective properties, merits further study.
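Since the abstract defines R2* = 1/T2*, the corresponding T2* values and the relative change in medullary R2* follow directly, as in the short sketch below (the two mean R2* values are taken from the abstract; everything else is arithmetic).

```python
# R2* = 1/T2*: derived T2* values and relative change in medullary R2*.
r2star_baseline = 29.6   # s^-1, mean medullary R2* before dark chocolate
r2star_after = 28.2      # s^-1, mean medullary R2* 2 h after dark chocolate

t2star_baseline_ms = 1000 / r2star_baseline   # T2* in milliseconds
t2star_after_ms = 1000 / r2star_after

change_pct = 100 * (r2star_after - r2star_baseline) / r2star_baseline
print(f"T2*: {t2star_baseline_ms:.1f} ms -> {t2star_after_ms:.1f} ms")
print(f"medullary R2* change: {change_pct:.1f}% (a decrease is read as higher oxygenation)")
```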

Relevância:

100.00% 100.00%

Publicador:

Resumo:

The motivation for this research arose from the abrupt rise and fall of minicomputers, which were initially used for both industrial automation and business applications because they cost significantly less than their predecessors, the mainframes. Later, industrial automation developed its own vertically integrated hardware and software to address the application needs of uninterrupted operation, real-time control and resilience to harsh environmental conditions. This led to the creation of an independent industry, namely industrial automation, used in PLC, DCS, SCADA and robot control systems. This industry today employs over 200,000 people in a profitable, slow-clockspeed context, in contrast to the two mainstream computing industries: information technology (IT), focused on business applications, and telecommunications, focused on communications networks and hand-held devices. Already in the 1990s it was foreseen that IT and telecommunications would merge into a single information and communication technology (ICT) industry. The fundamental question of the thesis is: could industrial automation leverage a common technology platform with the newly formed ICT industry? Computer systems dominated by complex instruction set computers (CISC) were challenged during the 1990s by higher-performance reduced instruction set computers (RISC). RISC evolved in parallel with the constant advancement of Moore's law. These developments created the high-performance, low-energy-consumption system-on-chip (SoC) architecture. Unlike with CISC processors, the RISC processor architecture business is an industry separate from RISC chip manufacturing. It also has several hardware-independent software platforms, each consisting of an integrated operating system, development environment, user interface and application market, which gives customers more choice through hardware-independent, real-time-capable software applications. An architecture disruption emerged, and the smartphone and tablet markets were formed with new rules and new key players in the ICT industry. Today there are more RISC computer systems running Linux (or other Unix variants) than any other computer system. The astonishing rise of SoC-based technologies and related software platforms in smartphones created, in unit terms, the largest installed base ever seen in the history of computers, and it is now being further extended by tablets. An additional underlying element of this transition is the increasing role of open source technologies in both software and hardware. This has driven the microprocessor-based personal computer industry, with its few dominant closed operating system platforms, into a steep decline. A significant factor in this process has been the separation of processor architecture from processor chip production, and the merger of operating systems and application development platforms into integrated software platforms with proprietary application markets. Furthermore, pay-by-click marketing has changed the way application development is compensated: freeware, ad-based or licensed, all at a lower price and used by a wider customer base than ever before. Moreover, the concept of a software maintenance contract is very remote in the app world. However, as a slow-clockspeed industry, industrial automation has remained intact during the disruptions based on SoC and related software platforms in the ICT industries.
Industrial automation incumbents continue to supply vertically integrated systems consisting of proprietary software and proprietary, mainly microprocessor-based, hardware. They enjoy admirable profitability on a very narrow customer base thanks to strong technology-enabled customer lock-in and the high risk customers face, as their production depends on fault-free operation of the industrial automation systems. When will this balance of power be disrupted? The thesis suggests how industrial automation could join the mainstream ICT industry and create an information, communication and automation (ICAT) industry. Lately, the Internet of Things (IoT) and weightless networks, a new standard leveraging frequency channels earlier occupied by TV broadcasting, have gradually started to change the rigid world of machine-to-machine (M2M) interaction. It is foreseeable that enough momentum will be created for the industrial automation market to face, in due course, an architecture disruption empowered by these new trends. This thesis examines the current state of industrial automation and the competition between the incumbents, first through research on cost-competitiveness efforts in captive outsourcing of engineering, research and development, and second through research on process re-engineering in the case of global software support for complex systems. Third, we investigate the views of industry actors, namely customers, incumbents and newcomers, on the future direction of industrial automation, and we conclude with our assessment of the possible routes industrial automation could take, considering the looming rise of the Internet of Things (IoT) and weightless networks. Industrial automation is an industry dominated by a handful of global players, each of them focused on maintaining its own proprietary solutions. The rise of de facto standards such as the IBM PC, Unix, Linux and SoC, leveraged by IBM, Compaq, Dell, HP, ARM, Apple, Google, Samsung and others, has created new markets for personal computers, smartphones and tablets and will eventually also affect industrial automation through game-changing commoditization and the related changes in control points and business models. This trend will inevitably continue, but the transition to commoditized industrial automation will not happen in the near future.

Relevância:

100.00% 100.00%

Publicador:

Resumo:

Diurnal release of the orexin neuropeptides orexin-A (Ox-A, hypocretin-1) and orexin-B (Ox-B, hypocretin-2) stabilises arousal, regulates energy homeostasis and contributes to cognition and learning. However, whether orexins regulate cellular correlates of brain plasticity, and whether they do so in a time-of-day-dependent manner, has never been assessed. Immunohistochemically, we found sparse but widespread innervation of hippocampal subfields by Ox-A- and Ox-B-containing fibres in young adult rats. The actions of Ox-A were studied on NMDA receptor (NMDAR)-mediated excitatory synaptic transmission in acute hippocampal slices prepared around the trough (Zeitgeber time (ZT) 4-8, corresponding to 4-8 h into the resting phase) and the peak (ZT 23) of intracerebroventricular orexin levels. At ZT 4-8, exogenous Ox-A (100 nM in the bath) inhibited NMDA receptor-mediated excitatory postsynaptic currents (NMDA-EPSCs) at mossy fibre (MF)-CA3 synapses (to 55.6 ± 6.8% of control, P = 0.0003) and at Schaffer collateral-CA1 synapses (70.8 ± 6.3%, P = 0.013), whereas it remained ineffective at non-MF excitatory synapses in CA3. Ox-A actions were mediated postsynaptically and were blocked by the orexin-2 receptor (OX2R) antagonist JNJ10397049 (1 μM), but not by orexin-1 receptor inhibition (SB334867, 1 μM) or by adrenergic and cholinergic antagonists. At ZT 23, the inhibitory effects of exogenous Ox-A were absent (97.6 ± 2.9%, P = 0.42), but were reinstated (87.2 ± 3.3%, P = 0.002) when endogenous orexin signalling was attenuated for 5 h through i.p. injections of almorexant (100 mg/kg), a dual orexin receptor antagonist. In conclusion, endogenous orexins modulate hippocampal NMDAR function in a time-of-day-dependent manner, suggesting that they may influence cellular plasticity and consequent variations in memory performance across the sleep-wake cycle.