995 results for attribute-level performances
Abstract:
BACKGROUND: The goal of this paper is to investigate the respective influence of work characteristics, the effort-reward ratio, and overcommitment on the poor mental health of out-of-hospital care providers. METHODS: 333 out-of-hospital care providers answered a questionnaire that included queries on mental health (GHQ-12), demographics, health-related information and work characteristics, questions from the Effort-Reward Imbalance Questionnaire, and items about overcommitment. A two-level multiple regression was performed between mental health (the dependent variable) and the effort-reward ratio, the overcommitment score, weekly number of interventions, percentage of non-prehospital transport of patients out of total missions, gender, and age. Participants were first-level units, and ambulance services were second-level units. We also shadowed ambulance personnel for a total of 416 hr. RESULTS: With cutoff points of 2/3 and 3/4 positive answers on the GHQ-12, the percentages of potential cases with poor mental health were 20% and 15%, respectively. The effort-reward ratio was associated with poor mental health (P < 0.001), irrespective of age or gender. Overcommitment was associated with poor mental health; this association was stronger in women (β = 0.054) than in men (β = 0.020). The percentage of prehospital missions out of total missions was only associated with poor mental health at the individual level. CONCLUSIONS: Emergency medical services should pay attention to the way employees perceive their efforts and the rewarding aspects of their work: an imbalance of those aspects is associated with poor mental health. Low perceived esteem appeared particularly associated with poor mental health. This suggests that supervisors of emergency medical services should enhance the value of their employees' work. Employees with overcommitment should also receive appropriate consideration. Preventive measures should target individual perceptions of effort and reward in order to improve mental health in prehospital care providers.
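As a rough illustration of the analysis described (not the authors' code), a two-level regression of this kind can be sketched with statsmodels' MixedLM, treating ambulance services as the grouping level; the file and column names are hypothetical.

# Illustrative sketch of a two-level regression: GHQ-12 score regressed on
# the effort-reward ratio, overcommitment and covariates, with a random
# intercept per ambulance service (the second-level unit).
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("providers.csv")  # hypothetical data file

model = smf.mixedlm(
    "ghq12 ~ effort_reward_ratio + overcommitment"
    " + weekly_interventions + pct_non_prehospital + gender + age",
    data=df,
    groups=df["ambulance_service"],  # second-level units
)
print(model.fit().summary())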
Abstract:
We study the spectrum and magnetic properties of double quantum dots in the lowest Landau level, for different values of the hopping and Zeeman parameters, by means of exact diagonalization techniques in systems of N = 6 and 7 electrons at a filling factor close to 2. We compare our results with those obtained for double quantum layers and single quantum dots. The Kohn theorem is also discussed.
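A minimal sketch of the exact-diagonalization workflow mentioned above, on a toy spinless-fermion Hamiltonian rather than the lowest-Landau-level double-dot model; the hopping t and the "Zeeman-like" field ez are stand-ins for the paper's parameters.

# Toy exact diagonalization: enumerate the N-particle basis as bit masks,
# build a sparse Hamiltonian, and extract the low-lying spectrum.
from itertools import combinations
import numpy as np
from scipy.sparse import lil_matrix
from scipy.sparse.linalg import eigsh

M, N = 12, 6        # orbitals, electrons (toy values)
t, ez = 1.0, 0.1    # hopping and field strengths (assumptions)

basis = [sum(1 << i for i in occ) for occ in combinations(range(M), N)]
index = {state: k for k, state in enumerate(basis)}

H = lil_matrix((len(basis), len(basis)))
for k, s in enumerate(basis):
    # diagonal "Zeeman-like" term: field times occupation of odd orbitals
    H[k, k] = ez * sum((s >> i) & 1 for i in range(1, M, 2))
    # nearest-neighbour hopping (no intervening sites, so no fermionic sign)
    for i in range(M - 1):
        if (s >> i) & 1 and not (s >> (i + 1)) & 1:
            j = index[s ^ (1 << i) ^ (1 << (i + 1))]
            H[j, k] += -t
            H[k, j] += -t

evals = eigsh(H.tocsr(), k=5, which="SA", return_eigenvectors=False)
print(np.sort(evals))  # low-lying spectrum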
Abstract:
Soils of the coastal plains of Rio Grande do Sul, Brazil, are affected by salinization, which can hamper the establishment and development of crops in general, including rice. The application of high doses of KCl may aggravate crop damage, due to the high saline content of this fertilizer. This study aimed to evaluate the effect of K fertilizer management on properties of the rice plant grown in soils with different sodicity levels, and to determine which attribute is best related to yield. The field study was conducted in four Albaqualfs with exchangeable Na percentages of 5.6, 9.0, 21 and 32 %. The KCl fertilizer management consisted of the application of 90 kg ha-1 K2O broadcast, 90 kg ha-1 K2O in the row, or 45 kg ha-1 K2O in the row + 45 kg ha-1 K2O at panicle initiation (PI). Plant density, dry matter evolution, height, SPAD index (Soil Plant Analysis Development value, indicating relative chlorophyll content), tiller mass, 1,000-grain weight, panicle length and grain yield were evaluated. Plant density was reduced by the application of K fertilizer in the row, especially at the full dose (90 kg ha-1), at three of the sodicity levels, resulting in a loss of biomass accumulation in later stages and affecting crop yield, even at the lowest level of soil sodicity (5.6 %). All properties were correlated with yield; the highest positive correlations were found for plant density and shoot dry matter at full flowering, and a negative correlation for panicle length.
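A minimal sketch of the attribute-versus-yield correlation analysis described above; the data file and column names are hypothetical.

# Rank plant attributes by their Pearson correlation with grain yield.
import pandas as pd

df = pd.read_csv("rice_trial.csv")  # hypothetical field-trial data
attributes = ["plant_density", "shoot_dry_matter", "height", "spad_index",
              "tiller_mass", "grain_weight_1000", "panicle_length"]
print(df[attributes].corrwith(df["grain_yield"]).sort_values(ascending=False))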
Abstract:
Recently a new Bell inequality was introduced by Collins et al. [Phys. Rev. Lett. 88, 040404 (2002)] that is strongly resistant to noise for maximally entangled states of two d-dimensional quantum systems. We prove that a larger violation, or equivalently a stronger resistance to noise, is found for a nonmaximally entangled state. It is shown that resistance to noise is not a good measure of nonlocality, and we introduce some other possible measures. The nonmaximally entangled state turns out to be more robust for these alternative measures as well. From these results it follows that two von Neumann measurements per party may not be optimal for detecting nonlocality. For d = 3 and 4, we point out some connections between this inequality and distillability. Indeed, we demonstrate that any state violating it, with the optimal von Neumann settings, is distillable.
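For reference, the d = 3 member of the inequality family introduced in the cited Letter can be written in joint-probability form (all outcome equalities taken modulo 3), with local bound 2:

\[
I_3 = \big[P(A_1 = B_1) + P(B_1 = A_2 + 1) + P(A_2 = B_2) + P(B_2 = A_1)\big] - \big[P(A_1 = B_1 - 1) + P(B_1 = A_2) + P(A_2 = B_2 - 1) + P(B_2 = A_1 - 1)\big] \le 2,
\]

where, e.g., \(P(A_a = B_b + k)\) is the probability that the two parties' outcomes for settings a and b differ by k mod 3. With optimal von Neumann settings, the maximally entangled two-qutrit state reaches about 2.87, while the nonmaximally entangled state discussed above reaches roughly 2.91.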
Abstract:
Classical cryptography is based on mathematical functions; the robustness of a cryptosystem essentially depends on the difficulty of computing the inverse of its one-way function. There is no mathematical proof establishing that the inverse of a given one-way function cannot be found, so this type of encryption remains at the mercy of growing computing power and of the discovery of algorithms that invert certain functions in reasonable time. For critical exchanges (banking systems, governments, etc.) it is therefore essential to use a cryptosystem whose security is scientifically proven. Quantum cryptography answers this need: its security rests on the laws of quantum physics, which assure unconditionally secure operation. However, applying and integrating quantum cryptography remains a concern for developers of this type of solution. This thesis justifies the need for quantum cryptography and shows that the cost incurred by deploying it is justified. It proposes a simple, practicable mechanism for integrating quantum cryptography into widely used communication protocols such as PPP, IPSec and 802.11i, with application scenarios illustrating the feasibility and estimating the cost of such solutions. It also proposes a methodology, based on the Common Criteria, for evaluating solutions built on quantum cryptography, with directives and checkpoints to help in their certification.
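As an illustration of the integration pattern proposed (a sketch, not the thesis' mechanism), QKD-generated secret bits can be funnelled into a classical protocol as a pre-shared key via a standard key-derivation step; get_qkd_key_material() is a hypothetical stand-in for the quantum-layer interface.

# Derive a classical-protocol pre-shared key from QKD secret bits.
import hmac, hashlib

def hkdf_sha256(ikm: bytes, info: bytes, length: int = 32) -> bytes:
    """Minimal HKDF (RFC 5869) with an empty (all-zero) salt."""
    prk = hmac.new(b"\x00" * 32, ikm, hashlib.sha256).digest()
    okm, block, counter = b"", b"", 1
    while len(okm) < length:
        block = hmac.new(prk, block + info + bytes([counter]),
                         hashlib.sha256).digest()
        okm += block
        counter += 1
    return okm[:length]

def get_qkd_key_material() -> bytes:
    # Hypothetical: read fresh secret bits from the QKD device interface.
    raise NotImplementedError

qkd_bits = get_qkd_key_material()
ipsec_psk = hkdf_sha256(qkd_bits, info=b"ipsec-psk")
# ipsec_psk would then be installed as the protocol's pre-shared key and
# refreshed periodically with new QKD material.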
Abstract:
We propose a method to obtain a single centered correlation using a joint transform correlator. We analyze the setup required to carry out the whole process optically, and we also present experimental results.
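For context, a digital analogue of the basic joint transform correlator pipeline is sketched below; the paper's actual contribution, recentring a single correlation term, is not reproduced here.

# Basic JTC: reference and target side by side, joint power spectrum,
# then a second Fourier transform yields the correlation plane.
import numpy as np

def jtc_correlation(reference: np.ndarray, target: np.ndarray) -> np.ndarray:
    h, w = reference.shape
    scene = np.zeros((h, 3 * w))            # separate the two inputs laterally
    scene[:, :w] = reference
    scene[:, 2 * w:] = target
    jps = np.abs(np.fft.fft2(scene)) ** 2   # joint power spectrum
    corr = np.abs(np.fft.fft2(jps))         # correlation plane
    return np.fft.fftshift(corr)            # cross-correlation peaks sit off-centre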
Abstract:
It is possible to improve the fringe binarization method of joint transform correlation by choosing a suitable threshold level.
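Continuing the JTC sketch above, the binarization step amounts to thresholding the joint power spectrum before the second transform; the median threshold below is only an assumption standing in for the "suitable level" the abstract refers to.

# Binarized JTC: threshold the joint power spectrum, then transform again.
jps_binary = (jps > np.median(jps)).astype(float)   # threshold choice matters
corr_binary = np.abs(np.fft.fft2(jps_binary))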
Abstract:
Background. Toll-like receptors (TLRs) recognize a variety of ligands, including pathogen-associated molecular patterns, and link innate and adaptive immunity. Individual receptors can be up-regulated during infection and inflammation. We examined the expression of selected TLRs at the protein level in various types of renal disease. Methods. Frozen sections of renal biopsies were stained with monoclonal antibodies to TLR-2, -4 and -9. Results. Up-regulation of the three TLRs studied was seen, although the extent was modest. TLR-2- and -4-positive cells belonged to the population of infiltrating inflammatory cells; only in the case of TLR-9 were intrinsic glomerular cells positive, in polyoma virus infection and haemolytic uraemic syndrome (HUS). Conclusions. Evidence for the involvement of the three TLRs tested in a variety of human renal diseases was found. These findings add to our understanding of the role of the innate immune system in kidney disease.
Abstract:
Using a test-retest procedure, this study explored the long-term stability of the French WISC-IV index scores. The mean test-retest interval was 2.33 years. The sample consisted of 96 non-clinical children aged 8 to 12 years. Mean differences between the two testings were not statistically significant for the Verbal Comprehension Index (VCI), Perceptual Reasoning Index (PRI), Working Memory Index (WMI), General Ability Index (GAI) and Full Scale IQ (FSIQ). At the interindividual level, test-retest correlations indicate good long-term stability for VCI, GAI and FSIQ (ranging from .81 to .82). An analysis of performance differences between the two assessments indicates satisfactory intra-individual stability for WMI and GAI. In sum, only GAI demonstrates satisfactory long-term stability at both the inter- and intra-individual level.
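The two stability analyses reported (mean-difference tests and test-retest correlations) can be sketched as follows; the data file and column layout are hypothetical.

# Paired t-test on mean index scores and Pearson test-retest correlation.
import pandas as pd
from scipy import stats

df = pd.read_csv("wisc_retest.csv")  # hypothetical: one row per child
for score in ["VCI", "PRI", "WMI", "GAI", "FSIQ"]:
    t1, t2 = df[f"{score}_t1"], df[f"{score}_t2"]
    _, p_mean = stats.ttest_rel(t1, t2)   # mean-difference test
    r, _ = stats.pearsonr(t1, t2)         # test-retest correlation
    print(f"{score}: mean-diff p = {p_mean:.3f}, r = {r:.2f}")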
Abstract:
Advanced kernel methods for remote sensing image classification (Devis Tuia, Institut de Géomatique et d'Analyse du Risque, September 2009). Technical developments in recent years have brought digital image archives to an unprecedented quantitative and qualitative level: enormous archives of satellite images are now available to users. Yet even as these advances open ever more possibilities for the use of digital imagery, they also raise problems of storage and processing. The latter is the subject of this thesis: the processing of images of very high spatial and/or spectral resolution is addressed with statistical learning approaches based on kernel methods. In particular, the thesis studies image classification, i.e. the categorization of an image's pixels into a reduced number of classes reflecting the spectral and contextual properties of the objects they represent. The accent is put on algorithmic efficiency and on the simplicity of the proposed approaches, so as to increase their potential for adoption by users. The main challenge of the thesis is to remain close to the concrete problems of satellite image users without losing the methodological interest of the proposed methods for the machine learning community from which they stem: in this sense, the work aims at building a bridge between the machine learning and remote sensing communities, and all the models proposed were developed with this synergy in mind. Four models are proposed. The first addresses the high dimensionality and redundancy of the data with an adaptive model that learns the relevant image features: a ranking of the variables (the spectral bands) is optimized jointly with the base classifier, so that only the features important for the problem are used, yielding both an accurate classifier and a ranking of feature relevance. The scarcity of labeled information, and uncertainty about its reliability, are the common root of the second and third models, based respectively on active learning and on semi-supervised methods: the former improves the quality of the training set through direct interaction between the user and the machine, while the latter uses unlabeled pixels to improve the description of the available data and the robustness of the model. Finally, the last model considers the more theoretical question of structure among the outputs: integrating this source of information, not previously considered in remote sensing, opens new research challenges and opportunities for remote sensing image processing.
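A minimal sketch of the active-learning idea behind the second model, using uncertainty sampling with a scikit-learn SVC as a stand-in for the kernel classifier; the oracle function, representing the user who labels queried pixels, is hypothetical.

# Uncertainty-sampling active learning: query the pixels the current
# classifier is least certain about, add the user's labels, retrain.
import numpy as np
from sklearn.svm import SVC

def active_learning_loop(X_labeled, y_labeled, X_pool, oracle,
                         rounds=10, batch=5):
    for _ in range(rounds):
        clf = SVC(kernel="rbf", probability=True).fit(X_labeled, y_labeled)
        proba = clf.predict_proba(X_pool)
        uncertainty = 1.0 - proba.max(axis=1)     # low max-probability = uncertain
        query = np.argsort(uncertainty)[-batch:]  # most uncertain pixels
        X_labeled = np.vstack([X_labeled, X_pool[query]])
        y_labeled = np.concatenate([y_labeled, oracle(X_pool[query])])
        X_pool = np.delete(X_pool, query, axis=0)
    return SVC(kernel="rbf", probability=True).fit(X_labeled, y_labeled)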
Abstract:
Δ9-Tetrahydrocannabinol (THC) is frequently found in the blood of drivers suspected of driving under the influence of cannabis or involved in traffic crashes. The present study used a double-blind crossover design to compare the effects of medium (16.5 mg THC) and high doses (45.7 mg THC) of hemp milk decoctions, or of a medium dose of dronabinol (20 mg synthetic THC, Marinol), on several skills required for safe driving. Forensic interpretation of cannabinoid blood concentrations was attempted using the models proposed by Daldrup (cannabis influencing factor, or CIF) and by Huestis and coworkers. First, the time-concentration profiles of THC, 11-hydroxy-Δ9-tetrahydrocannabinol (11-OH-THC, the active metabolite of THC) and 11-nor-9-carboxy-Δ9-tetrahydrocannabinol (THCCOOH) in whole blood were determined by gas chromatography-mass spectrometry with negative-ion chemical ionization. Compared to smoking studies, relatively low concentrations were measured in blood. The highest mean THC concentration (8.4 ng/mL) was reached 1 h after ingestion of the strongest decoction. The mean maximum 11-OH-THC level (12.3 ng/mL) slightly exceeded that of THC. THCCOOH reached its highest mean concentration (66.2 ng/mL) 2.5-5.5 h after intake. Individual blood levels showed considerable intersubject variability. The willingness to drive was influenced by the importance of the requested task: under significant cannabinoid influence, the participants refused to drive when asked whether they would agree to carry out unimportant tasks (e.g., driving a friend to a party). Most participants reported a significant feeling of intoxication and did not appreciate the effects, notably those felt after drinking the strongest decoction. Road-sign and tracking tests revealed obvious and statistically significant differences between placebo and treatments, with marked impairment detected after ingestion of the strongest decoction. A CIF value, which relies on the molar ratio of the main active to inactive cannabinoids, greater than 10 was found to correlate with a strong feeling of intoxication; it also coincided with a significant decrease in the willingness to drive and with significant impairment of tracking performance. Mathematical model II proposed by Huestis et al. (1992) provided at best a rough estimate of the time of oral administration, with 27% of actual values falling outside the 95% confidence interval. The sum of the THC and 11-OH-THC blood concentrations provided a better estimate of impairment than THC alone. This controlled clinical study points out the negative influence on fitness to drive of medium and high oral doses of THC or dronabinol.
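A sketch of the CIF computation referenced above, taken as the molar ratio of active cannabinoids (THC + 11-OH-THC) to the inactive metabolite THCCOOH; this simplification ignores the glucuronidated THCCOOH fraction that the full definition may include.

# CIF as a molar active/inactive ratio; concentrations in ng/mL,
# molar masses in g/mol (THC 314.5, 11-OH-THC 330.5, THCCOOH 344.5).
M_THC, M_OH_THC, M_THCCOOH = 314.5, 330.5, 344.5

def cif(thc_ng_ml: float, oh_thc_ng_ml: float, thccooh_ng_ml: float) -> float:
    active = thc_ng_ml / M_THC + oh_thc_ng_ml / M_OH_THC
    inactive = thccooh_ng_ml / M_THCCOOH
    return active / inactive

# E.g., with the mean peak concentrations reported above (which occur at
# different times, so this is only illustrative):
print(cif(8.4, 12.3, 66.2))  # well below the CIF = 10 intoxication marker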