Abstract:
Systemic Lupus Erythematosus (SLE) is a rare, chronic, multisystem autoimmune disease with a highly heterogeneous pattern of clinical and serological manifestations. In addition to impairments of physical and physiological functioning, patients may also face a number of psychosocial problems. Research indicates that SLE can cause significant damage in the psychological realm, especially in the form of anxiety and depression. In 1999, the American College of Rheumatology (ACR) proposed a set of 19 neuropsychiatric clinical syndromes attributed to SLE. Depression, one of the mood disorders, is among the most common psychiatric manifestations in this group, being found more frequently in these patients than in the general population. Studies also suggest that social support plays an important role in the development of coping strategies and in the management of SLE and depression. The main objective of this study was to verify the association between depressive symptoms and perceived social support in patients with SLE. The specific objectives were to investigate the prevalence of depressive symptoms, to investigate perceived social support, and to verify whether there is an association between depression, social support, and sociodemographic variables. We used a sociodemographic questionnaire, the Beck Depression Scale, and the Perceived Social Support Scale. The analysis was performed through descriptive and inferential statistics. The final sample comprised 79 women with SLE, with an average age of 35.7 years. Forty-four participants (55.7%) were married. Only 6 (7.59%) had completed higher education, and 32 (40.51%) had not finished high school. Seventy-one (89.87%) had an income below three minimum salaries, and 71 (89.87%) practiced a religion, with Catholicism (67.71%) the most frequently mentioned. Of the total sample, 37 (46.74%) had been diagnosed with SLE more than 7 years earlier, and 25 (31.65%) had had the disease for more than 10 years.
Only 19 (24.05%) had some work activity. Forty-two (53.17%) had depressive symptom levels ranging from mild to severe, and 51 (64.46%) reported pain levels of 5 or above. The study found a significant association between depressive symptoms and pain (p = 0.013) and between depressive symptoms and work activity (p = 0.02). When we examined the perception of social support, the results showed high levels among participants. Using the Spearman correlation test, we found a strong inverse correlation between depressive symptoms and social support (p = 0.000037): the higher the frequency of support, the lower the depression score. These findings are relevant because depressive symptoms in patients with SLE have a multicausal, multifactorial character and may go unnoticed, since many of them are confounded with manifestations of the disease itself. This calls for careful assessment by professionals, not only in the clinical setting but also with regard to other psychosocial factors that may be influencing the emergence or worsening of symptoms. These results also corroborate other studies, which confirm the predictive role of social support not only for physical but also for psychological well-being.
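The reported Spearman result can be illustrated with a small self-contained sketch. The scores below are hypothetical, not the study's data; Spearman's rho is simply the Pearson correlation of the ranks:

```python
def ranks(xs):
    # Average ranks (1-based), handling ties
    order = sorted(range(len(xs)), key=lambda i: xs[i])
    r = [0.0] * len(xs)
    i = 0
    while i < len(xs):
        j = i
        while j + 1 < len(xs) and xs[order[j + 1]] == xs[order[i]]:
            j += 1
        avg = (i + j + 2) / 2.0  # average of 1-based positions i+1..j+1
        for k in range(i, j + 1):
            r[order[k]] = avg
        i = j + 1
    return r

def spearman_rho(x, y):
    # Spearman's rho = Pearson correlation of the ranks
    rx, ry = ranks(x), ranks(y)
    n = len(x)
    mx, my = sum(rx) / n, sum(ry) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(rx, ry))
    vx = sum((a - mx) ** 2 for a in rx)
    vy = sum((b - my) ** 2 for b in ry)
    return cov / (vx * vy) ** 0.5

# Hypothetical scores: higher social-support frequency paired with lower BDI scores
support = [12, 25, 31, 40, 44, 52, 60]
bdi     = [30, 28, 22, 19, 15, 10,  4]
print(spearman_rho(support, bdi))  # → -1.0
```

A rho near -1 corresponds to the inverse relationship described above: the more frequent the support, the lower the depression score.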
Abstract:
The current degradation and alteration of the planet's natural assets, and the considerable loss of the recovery capacity inherent to ecosystems, are striking. At the same time, all communities and species are suffering the consequences of these unplanned changes. The creation of conservation units (UCs) through the National System of Conservation Units (SNUC) was a concrete action intended to halt these processes, but it also generated socio-environmental, geo-economic, and cultural-political conflicts of interest among traditional communities living near these units, institutions, governmental entities, and society in general. The country's National Program of Environmental Education (ProNEA) provides for the integration of communities and UC managers in a co-participative administration to resolve these conflicts. The principles of Environmental Education (EA) guide the methodology adopted to change the socio-educational paradigms of traditional teaching, which still persist in our society and are intrinsically related to environmental problems; these paradigms run contrary to Paulo Freire's dialogic pedagogy, which values popular knowledge and proactive citizenship, and to Ecopedagogy, which reintegrates the human being into its natural environment, the Earth. One of the tools for initiating environmental sensitization is the diagnosis of individuals' environmental perception. In this context, the objective of our work was to identify the environmental perception of the Tenda do Moreno community, located near Pau Furado State Park (PEPF) in Uberlândia – MG.
To reach this objective, the research first evaluated the environmental perception of residents of this community through semi-structured interviews conducted in their homes; in a second stage, we evaluated the environmental perception of students at the community's school and carried out Environmental Education intervention activities intended to make the children aware of the importance, conservation, and function of the PEPF. Using content analysis, we found a systemic perception of nature in nearly 60% of the 118 residents, while approximately 32% expressed an anthropocentric perception and mixed perceptions were found in 21%. A considerable portion of the residents (47 individuals) reported not knowing the park, although many of them recognized its importance. Among the 46 interviewed students, half expressed an anthropocentric perception of nature, while almost 36% had a systemic view. Seventeen children said they did not know the park, and almost half of the students recognized some aspect of the importance of its existence. During the intervention activities, student participation and dedication were substantial, along with abundant expression of their personal views and daily experiences. Of the ten students who underwent a second evaluation of their environmental perception after the intervention, 80% showed a systemic perception and emphasized the importance of conservation and of the park. We believe that continuing the intervention activities could generate positive prospects for effective socio-environmental change in everyday school life. Activities led by Ecopedagogy that encourage citizen leadership in young students are fundamental, while in the community, closer ties and dialogue with UC managers would be important elements for generating effective change.
Abstract:
Many modern applications fall into the category of "large-scale" statistical problems, in which both the number of observations n and the number of features or parameters p may be large. Many existing methods focus on point estimation, despite the continued relevance of uncertainty quantification in the sciences, where the number of parameters to estimate often exceeds the sample size even after the huge increases in n seen in many fields. Thus, the tendency in some areas of industry to dispense with traditional statistical analysis on the basis that "n = all" is of little relevance outside certain narrow applications. The main result of the Big Data revolution in most fields has instead been to make computation much harder without reducing the importance of uncertainty quantification. Bayesian methods excel at uncertainty quantification but often scale poorly relative to alternatives. This conflict between the statistical advantages of Bayesian procedures and their substantial computational disadvantages is perhaps the greatest challenge facing modern Bayesian statistics, and it is the primary motivation for the work presented here.
Two general strategies for scaling Bayesian inference are considered. The first is the development of methods that lend themselves to faster computation, and the second is design and characterization of computational algorithms that scale better in n or p. In the first instance, the focus is on joint inference outside of the standard problem of multivariate continuous data that has been a major focus of previous theoretical work in this area. In the second area, we pursue strategies for improving the speed of Markov chain Monte Carlo algorithms, and characterizing their performance in large-scale settings. Throughout, the focus is on rigorous theoretical evaluation combined with empirical demonstrations of performance and concordance with the theory.
One topic we consider is modeling the joint distribution of multivariate categorical data, often summarized in a contingency table. Contingency table analysis routinely relies on log-linear models, with latent structure analysis providing a common alternative. Latent structure models lead to a reduced rank tensor factorization of the probability mass function for multivariate categorical data, while log-linear models achieve dimensionality reduction through sparsity. Little is known about the relationship between these notions of dimensionality reduction in the two paradigms. In Chapter 2, we derive several results relating the support of a log-linear model to nonnegative ranks of the associated probability tensor. Motivated by these findings, we propose a new collapsed Tucker class of tensor decompositions, which bridge existing PARAFAC and Tucker decompositions, providing a more flexible framework for parsimoniously characterizing multivariate categorical data. Taking a Bayesian approach to inference, we illustrate empirical advantages of the new decompositions.
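As a toy illustration of the reduced-rank tensor factorization mentioned above (the numbers are invented, not taken from the thesis), a rank-2 PARAFAC model writes the probability mass function of a 3×2 contingency table as a mixture over latent classes:

```python
from itertools import product

# PARAFAC mixture: nu[h] are class weights; psi[j][h][c] = P(y_j = c | class h)
nu = [0.6, 0.4]
psi = [
    [[0.7, 0.2, 0.1], [0.1, 0.3, 0.6]],   # variable 1, 3 categories
    [[0.5, 0.5], [0.9, 0.1]],             # variable 2, 2 categories
]

def joint(cell):
    # P(y1 = c1, y2 = c2) under the rank-2 PARAFAC factorization
    c1, c2 = cell
    return sum(nu[h] * psi[0][h][c1] * psi[1][h][c2] for h in range(len(nu)))

table = {c: joint(c) for c in product(range(3), range(2))}
print(round(sum(table.values()), 10))  # the 3x2 pmf sums to 1
```

The factorization stores 2 weights plus 2·(3+2) conditional probabilities instead of the full 3×2 table of joint probabilities; for tables with many variables this gap is what makes the representation parsimonious.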
Latent class models for the joint distribution of multivariate categorical data, such as the PARAFAC decomposition, play an important role in the analysis of population structure. In this context, the number of latent classes is interpreted as the number of genetically distinct subpopulations of an organism, an important factor in the analysis of evolutionary processes and conservation status. Existing methods focus on point estimates of the number of subpopulations and lack robust uncertainty quantification. Moreover, whether the number of latent classes in these models is even an identified parameter is an open question. In Chapter 3, we show that when the model is properly specified, the correct number of subpopulations can be recovered almost surely. We then propose an alternative method for estimating the number of latent subpopulations that provides good quantification of uncertainty, and provide a simple procedure for verifying that the proposed method is consistent for the number of subpopulations. The performance of the model in estimating the number of subpopulations and other common population structure inference problems is assessed in simulations and a real data application.
In contingency table analysis, sparse data is frequently encountered for even modest numbers of variables, resulting in non-existence of maximum likelihood estimates. A common solution is to obtain regularized estimates of the parameters of a log-linear model. Bayesian methods provide a coherent approach to regularization, but are often computationally intensive. Conjugate priors ease computational demands, but the conjugate Diaconis--Ylvisaker priors for the parameters of log-linear models do not give rise to closed form credible regions, complicating posterior inference. In Chapter 4 we derive the optimal Gaussian approximation to the posterior for log-linear models with Diaconis--Ylvisaker priors, and provide convergence rate and finite-sample bounds for the Kullback-Leibler divergence between the exact posterior and the optimal Gaussian approximation. We demonstrate empirically in simulations and a real data application that the approximation is highly accurate, even in relatively small samples. The proposed approximation provides a computationally scalable and principled approach to regularized estimation and approximate Bayesian inference for log-linear models.
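A minimal sketch of the general idea of approximating a posterior by a Gaussian. This uses the simpler Laplace (mode-and-curvature) approximation on a Beta posterior, not the optimal Gaussian approximation or the Diaconis--Ylvisaker log-linear setting derived in Chapter 4:

```python
import math

# Exact posterior: Beta(a, b); Gaussian approximation built at the mode
a, b = 7.0, 5.0
mode = (a - 1) / (a + b - 2)                         # 0.6
curv = (a - 1) / mode**2 + (b - 1) / (1 - mode)**2   # -(log p)'' at the mode
var = 1.0 / curv

log_beta_norm = math.lgamma(a) + math.lgamma(b) - math.lgamma(a + b)

def p_exact(t):
    return math.exp((a - 1) * math.log(t) + (b - 1) * math.log(1 - t) - log_beta_norm)

def q_gauss(t):
    return math.exp(-(t - mode) ** 2 / (2 * var)) / math.sqrt(2 * math.pi * var)

# KL(p || q) by a Riemann sum over the interior of (0, 1)
n = 100000
dt = 1.0 / n
kl = 0.0
for i in range(1, n):
    t = i * dt
    pt = p_exact(t)
    if pt > 0:
        kl += pt * math.log(pt / q_gauss(t)) * dt
print(round(kl, 4))
```

The small KL divergence printed here is the analogue, in this toy setting, of the accuracy bounds the chapter establishes for the log-linear case.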
Another challenging and somewhat non-standard joint modeling problem is inference on tail dependence in stochastic processes. In applications where extreme dependence is of interest, data are almost always time-indexed. Existing methods for inference and modeling in this setting often cluster extreme events or choose window sizes with the goal of preserving temporal information. In Chapter 5, we propose an alternative paradigm for inference on tail dependence in stochastic processes with arbitrary temporal dependence structure in the extremes, based on the idea that the information on strength of tail dependence and the temporal structure in this dependence are both encoded in waiting times between exceedances of high thresholds. We construct a class of time-indexed stochastic processes with tail dependence obtained by endowing the support points in de Haan's spectral representation of max-stable processes with velocities and lifetimes. We extend Smith's model to these max-stable velocity processes and obtain the distribution of waiting times between extreme events at multiple locations. Motivated by this result, a new definition of tail dependence is proposed that is a function of the distribution of waiting times between threshold exceedances, and an inferential framework is constructed for estimating the strength of extremal dependence and quantifying uncertainty in this paradigm. The method is applied to climatological, financial, and electrophysiology data.
The remainder of this thesis focuses on posterior computation by Markov chain Monte Carlo (MCMC), the dominant paradigm for posterior computation in Bayesian analysis. It has long been common to control computation time by making approximations to the Markov transition kernel, but comparatively little attention has been paid to convergence and estimation error in the resulting approximating Markov chains. In Chapter 6, we propose a framework for assessing when to use approximations in MCMC algorithms, and how much error in the transition kernel should be tolerated to obtain optimal estimation performance with respect to a specified loss function and computational budget. The results require only ergodicity of the exact kernel and control of the kernel approximation accuracy. The theoretical framework is applied to approximations based on random subsets of data, low-rank approximations of Gaussian processes, and a novel approximating Markov chain for discrete mixture models.
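A toy version of one such kernel approximation (an illustration only, not the framework or error bounds of Chapter 6): a random-walk Metropolis chain whose transition kernel replaces the full-data log-likelihood of a Normal location model with a scaled log-likelihood over a fixed random subset of the data:

```python
import random, math

random.seed(1)
data = [random.gauss(2.0, 1.0) for _ in range(5000)]
n = len(data)

# Approximate kernel: replace the full-data log-likelihood with a scaled
# log-likelihood over a fixed random subset (cheaper per MCMC step).
m = 500
subset = random.sample(data, m)

def approx_loglik(mu):
    return (n / m) * sum(-0.5 * (x - mu) ** 2 for x in subset)

mu = 0.0
cur = approx_loglik(mu)
chain = []
for _ in range(4000):
    prop = mu + random.gauss(0, 0.05)
    new = approx_loglik(prop)
    if math.log(random.random()) < new - cur:  # flat prior on mu
        mu, cur = prop, new
    chain.append(mu)

est = sum(chain[2000:]) / 2000.0
print(round(est, 2))
```

Each step now costs O(m) rather than O(n), at the price of targeting a perturbed posterior; quantifying when that trade is worthwhile is exactly the question the chapter formalizes.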
Data augmentation Gibbs samplers are arguably the most popular class of algorithm for approximately sampling from the posterior distribution for the parameters of generalized linear models. The truncated Normal and Polya-Gamma data augmentation samplers are standard examples for probit and logit links, respectively. Motivated by an important problem in quantitative advertising, in Chapter 7 we consider the application of these algorithms to modeling rare events. We show that when the sample size is large but the observed number of successes is small, these data augmentation samplers mix very slowly, with a spectral gap that converges to zero at a rate at least proportional to the reciprocal of the square root of the sample size up to a log factor. In simulation studies, moderate sample sizes result in high autocorrelations and small effective sample sizes. Similar empirical results are observed for related data augmentation samplers for multinomial logit and probit models. When applied to a real quantitative advertising dataset, the data augmentation samplers mix very poorly. Conversely, Hamiltonian Monte Carlo and a type of independence chain Metropolis algorithm show good mixing on the same dataset.
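The mechanics of the truncated-Normal (Albert–Chib-style) data augmentation sampler can be sketched for an intercept-only probit model with rare events; the data here are synthetic, not the advertising dataset:

```python
import random, math
from statistics import NormalDist

random.seed(7)
nd = NormalDist()

def trunc_normal(mu, lower=None, upper=None):
    # Inverse-CDF sampling from N(mu, 1) truncated to (lower, upper)
    lo = 0.0 if lower is None else nd.cdf(lower - mu)
    hi = 1.0 if upper is None else nd.cdf(upper - mu)
    u = lo + random.random() * (hi - lo)
    u = min(max(u, 1e-12), 1.0 - 1e-12)
    return mu + nd.inv_cdf(u)

# Rare-event data: large n, few successes
n, successes = 1000, 5
y = [1] * successes + [0] * (n - successes)

beta, chain = 0.0, []
for _ in range(300):
    # z_i | beta: N(beta, 1) truncated at 0 according to y_i
    z = [trunc_normal(beta, lower=0.0) if yi == 1 else trunc_normal(beta, upper=0.0)
         for yi in y]
    # beta | z with a flat prior: N(mean(z), 1/n)
    beta = random.gauss(sum(z) / n, 1.0 / math.sqrt(n))
    chain.append(beta)

# Lag-1 autocorrelation of the beta chain (values near 1 indicate slow mixing)
xs = chain[100:]
mx = sum(xs) / len(xs)
ac = sum((a - mx) * (b - mx) for a, b in zip(xs, xs[1:])) / sum((a - mx) ** 2 for a in xs)
print(round(mx, 2), round(ac, 2))
```

With only a handful of successes among many trials, the lag-1 autocorrelation of this chain is high, consistent with the slow mixing described above.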
Abstract:
Many problems in transportation, telecommunications, and logistics can be modeled as network design problems. The classical problem consists of routing a flow (data, people, goods, etc.) over a network subject to a number of constraints, with the goal of satisfying demand while minimizing costs. In this thesis, we study the single-commodity fixed-charge capacitated network design problem, which we transform into an equivalent multicommodity problem so as to improve the lower bound obtained from the continuous relaxation of the model. The method we present for solving this problem is an exact branch-and-price-and-cut method with a stopping condition, in which we exploit column generation, cut generation, and the branch-and-bound algorithm. These methods are among the most widely used techniques in integer linear programming. We test our method on two groups of instances of different sizes (large and very large) and compare it with the results given by CPLEX, one of the best solvers for mathematical optimization problems, as well as with a branch-and-cut method. Our method proved promising and can give good results, particularly for very large instances.
Abstract:
Inverse simulations of musculoskeletal models compute internal forces, such as muscle and joint reaction forces, which are hard to measure, using the more easily measured motion and external forces as input data. Because of the difficulty of measuring muscle forces and joint reactions, such simulations are hard to validate. One way of reducing simulation errors is to ensure that the mathematical problem is well-posed. This paper presents a study of regularity aspects of an inverse simulation method, often called forward dynamics or dynamical optimization, that takes into account both measurement errors and muscle dynamics. The simulation method is explained in detail. Regularity is examined for a test problem around the optimum using the approximated quadratic problem. The results show improved rank when a regularization term that handles the mechanical over-determinacy is included in the objective. Using the 3-element Hill muscle model, the chosen regularization term is the norm of the activation. To make the problem full-rank, only the excitation bounds should be included in the constraints. However, this results in small negative values of the activation, which indicates that muscles are pushing rather than pulling. Despite this unrealistic behavior, the error may be small enough to be accepted for specific applications. These results are a starting point for achieving better inverse musculoskeletal simulations from a numerical point of view.
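A one-joint toy version (with hypothetical numbers, far simpler than a Hill-model simulation) of how adding the norm of the activations to the objective makes the redundant muscle-sharing problem full-rank and its solution unique:

```python
# Muscle redundancy toy: one joint torque tau shared by two muscles with
# moment arms r1, r2; activations a1, a2 are unknown. Without regularization
# the normal equations are rank-deficient (infinitely many (a1, a2) pairs
# reproduce tau); the lmbda * ||a||^2 term makes the problem full-rank.
tau, r1, r2 = 10.0, 2.0, 2.0

def solve(lmbda):
    # Closed form of min (tau - r1*a1 - r2*a2)^2 + lmbda * (a1^2 + a2^2):
    # a_i = r_i * tau / (r1^2 + r2^2 + lmbda)
    denom = r1 * r1 + r2 * r2 + lmbda
    return (r1 * tau / denom, r2 * tau / denom)

a1, a2 = solve(0.1)
print(round(a1, 3), round(a2, 3))   # unique, symmetric solution
```

For small lmbda the torque is reproduced almost exactly while the previously undetermined split between the two muscles becomes unique.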
Reception of the history of the Greek colonies in the colonial literature of the 17th and 18th centuries
Abstract:
The objective of this work was to reflect on the influences of European colonialism in the 17th and 18th centuries. Having already written my master's thesis on archaic Corinth and its colonies, I wished to pursue the question further by situating the problem in a broader historiographical context, in time as well as in space. Several authors have studied receptions of Antiquity in specific periods (Grell on Alexander the Great in France, Richard on the ancient influences on the American Revolution, …). However, no long-term analysis had yet been provided, nor any in-depth reflection on the place of Antiquity in the way colonies were conceived in modern Europe. This state of affairs, together with the relative scarcity of modern sources dealing with the Greek colonies, obliged me to broaden the field of research as much as possible, including authors who, although not concerned with colonization, nevertheless drew on the Greek precedent to illustrate issues of their own time. Even so, one can observe how much the historiographical repertoires concerning ancient Greece and its colonizations developed over the course of these two centuries, which saw the apogee and fall of the first European colonial empires in North America. Although the comparison with Greek history often amounted to mere topos and propaganda (as in the comparison of the Grand Condé or Louis XIV to Alexander the Great), its use in larger-scale controversies also went beyond the commonplace to form part of a more elaborate rhetorical discourse. The choice of Greek colonization as a model of comparison was all the more logical in that the various authors, from the first colonists to the American Founding Fathers, insisted on the economic merits of the European colonies.
Other regimes, such as the Spanish empire in the 16th century or the British empire in the 19th, resorted more to a terminology of Roman inspiration. Indeed, their policy rested more on the idea of an imperialist extension of the state than on a commercial vision of colonialism. Krishan Kumar's article remains one of the most important on the question. The reception of the history of the Greek colonies in the modern era was above all the fruit of an attempt to define colonialism as a global phenomenon, and of a desire to situate the European nations within a context going back to the origins of the West. At a time when Europe was beginning its domination over the entire planet and the race for colonization was accelerating, most authors sheltered behind the image of ancient thalassocracies which, although they did not denote centralized political power, nonetheless helped impose the founding culture of Western thought on the whole Mediterranean basin. As for the wars that pitted the ancient powers against one another, they merely foreshadowed the large-scale conflicts of the Franco-British wars of the 18th century.
Abstract:
Thesis (Ph.D.)--University of Washington, 2016-08
Abstract:
Thesis (Ph.D.)--University of Washington, 2016-06
Abstract:
Inverse heat conduction problems (IHCPs) appear in many important scientific and technological fields, so the analysis, design, implementation, and testing of inverse algorithms are also of great scientific and technological interest. The numerical simulation of 2-D and 3-D inverse (or even direct) problems involves a considerable amount of computation. The investigation and exploitation of the parallel properties of such algorithms are therefore becoming equally important. Domain decomposition (DD) methods are widely used to solve large-scale engineering problems and to exploit the inherent parallelism in the solution of such problems.
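A minimal sketch of the domain decomposition idea on the simplest possible problem: an alternating Schwarz iteration for the 1-D Laplace equation with two overlapping subdomains (an illustration only, far simpler than an IHCP):

```python
# Alternating Schwarz on u''(x) = 0, u(0)=0, u(1)=1 (exact solution u(x)=x),
# with two overlapping subdomains; each local solve (u''=0 with Dirichlet
# data) is just linear interpolation between the subdomain's boundary values.
N = 20                       # grid points 0..N, h = 1/N
u = [0.0] * (N + 1)
u[N] = 1.0
left = (0, 12)               # indices of subdomain 1
right = (8, N)               # indices of subdomain 2 (overlap on 8..12)

def local_solve(u, lo, hi):
    for i in range(lo + 1, hi):
        t = (i - lo) / (hi - lo)
        u[i] = (1 - t) * u[lo] + t * u[hi]

for sweep in range(50):
    local_solve(u, *left)    # uses the current value at u[12] as boundary data
    local_solve(u, *right)   # uses the updated value at u[8]

err = max(abs(u[i] - i / N) for i in range(N + 1))
print(round(err, 6))  # → 0.0
```

The subdomain solves exchange information only through the overlap region, which is what makes the method attractive for parallel execution: in higher dimensions each local solve can run on its own processor.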
Abstract:
Scoliosis is the most common deforming pathology of the spine in adolescence. In 80% of cases it is idiopathic, meaning that no cause has been identified. Idiopathic scoliosis fits a multifactorial model involving genetic, environmental, neurological, hormonal, biomechanical, and skeletal-growth factors. According to one neurological hypothesis, a vestibular anomaly would produce asymmetric activation of the vestibulospinal pathways and of the paravertebral muscles they command, generating the scoliotic deformation. Some animal models reproduce this mechanism. Moreover, anomalies linked to the vestibular system, such as balance disorders, are observed in patients with scoliosis. Galvanic vestibular stimulation makes it possible to probe the sensorimotor control of balance, since it alters vestibular afferents. The objective of this thesis is to explore sensorimotor control by assessing the postural reaction evoked by this stimulation in patients and control participants. In the first study, patients were more destabilized than controls, and there was no link between the magnitude of the instability and the severity of the scoliosis. In the second study, using a neuromechanical model, a greater weight on vestibular signals was attributed to patients. In the third study, a sensorimotor problem was also observed in young adults with scoliosis, ruling out the possibility that the problem is due to maturation of the nervous system. In a subsequent study, patients who had undergone surgery to reduce their spinal deformation also showed a larger postural reaction to the stimulation than control participants. These results suggest that the sensorimotor anomaly is not secondary to the deformation.
Finally, an algorithm was developed to identify patients with a sensorimotor problem. Patients showing abnormal sensorimotor control also had a larger vestibulomotor response and assigned more weight to vestibular information. Overall, the results of this thesis show that a sensorimotor deficit could explain the onset of scoliosis but not its progression. The sensorimotor dysfunction is not present in all patients. The algorithm for classifying sensorimotor performance could be useful for future clinical studies.
Abstract:
Oral health professionals, in particular dentists, have been facing various risks arising from their professional activity. Although there is a constant effort to improve dental equipment and materials through technological advances, these have not yet significantly remedied dentists' musculoskeletal problems. This type of problem emerges while they are still students, during clinical practice, often due to working conditions and inherent inexperience, but mainly due to the incorrect postures and working habits they acquire and that, consequently, persist throughout their professional lives. This dissertation, drawing on an extensive related literature, aims to alert oral health professionals, with a focus on dentists, to the pathologies arising from incorrect postures in the practice of dentistry, known as work-related musculoskeletal disorders, identifying these disorders as well as the risk factors that influence their onset. The dissertation also proposes workplace exercise ("Ginástica Laboral") routines to be performed between appointments, as a strategy for preventing the emergence of work-related musculoskeletal injuries in dentists, and highlights the importance of ergonomics in the design of a dental office. For this literature review, a bibliographic search was carried out using books and journal articles consulted in the libraries of the Faculdade de Medicina Dentária da Universidade do Porto and the Universidade Fernando Pessoa.
The search was conducted using online databases such as PubMed, SciELO, B-On, and Medline, with the following keywords, combined or individually: "Lesões por esforços repetitivos"; "Lesões Músculo-Esqueléticas"; "Ergonomia"; "Prevenção"; "Fatores de Risco"; "Ginástica Laboral"; "Work related musculoskeletal disorders"; "Dentistry"; "Ergonomics"; "Pain"; "Prevention"; "Risk factors". Articles published between 1987 and 2015, written in Portuguese and English and relevant to this master's dissertation, were selected. The literature review thus made it possible to verify the prevalence of work-related musculoskeletal injuries in dentistry, as well as the risk factors associated with their onset. Recommendations were also made to contribute to the well-being of dentists, emphasizing the need to adopt correct postures and to use ergonomics as the basis for the organization and design of a dental office. A workplace exercise program was also illustrated, through two explanatory posters, with the aim of preventing, correcting, and compensating for this type of pathology in dentists from the start of their clinical practice. Following this dissertation, the application of ergonomic guidelines is believed to be extremely important in the design of a dental office, in the organization of tasks, in clinical procedure, in the adoption of postures, at the workstation, in the location of equipment, and in the choice of instruments. Besides minimizing the risk of occupational disease, this approach also simplifies tasks, fosters proper communication between the dentist and the assistant, and improves quality and productivity at work by reducing physical and mental fatigue and increasing dentists' confidence and well-being.
Resumo:
Recently, there has been considerable interest in solving viscoelastic problems in 3D, particularly with the improvement in modern computing power. In many applications the emphasis has been on economical algorithms that can cope with the extra complexity the third dimension brings: storage and computer time are at a premium. The advantage of the finite volume formulation is that it does not require a large amount of memory, and iterative rather than direct methods can be used to solve the resulting linear systems efficiently.
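The memory argument above can be made concrete: an iterative Krylov method such as conjugate gradient needs only matrix-vector products, so the discretized operator never has to be assembled or stored. The sketch below is illustrative only (a 1D Laplacian stand-in, not the viscoelastic operator from this work); the function names are my own.

```python
import numpy as np

def cg_matfree(apply_A, b, tol=1e-10, max_iter=1000):
    """Matrix-free conjugate gradient for a symmetric positive-definite
    system A x = b. Only the action v -> A v is needed, so the matrix
    itself is never stored -- the property that makes iterative methods
    attractive for large 3D discretizations."""
    x = np.zeros_like(b)
    r = b - apply_A(x)          # initial residual
    p = r.copy()                # initial search direction
    rs = r @ r
    for _ in range(max_iter):
        Ap = apply_A(p)
        alpha = rs / (p @ Ap)   # optimal step along p
        x += alpha * p
        r -= alpha * Ap
        rs_new = r @ r
        if np.sqrt(rs_new) < tol:
            break
        p = r + (rs_new / rs) * p  # conjugate direction update
        rs = rs_new
    return x

# Example operator: 1D Laplacian applied stencil-wise, no matrix stored.
n = 50
def apply_A(v):
    out = 2.0 * v
    out[1:] -= v[:-1]
    out[:-1] -= v[1:]
    return out

b = np.ones(n)
x = cg_matfree(apply_A, b)  # residual norm of A x - b is ~0
```

A direct solver would need the factorized matrix in memory; here storage is a handful of vectors of length n, regardless of the operator's bandwidth.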
Resumo:
Constraint programming is a powerful technique for solving, among other things, large-scale scheduling problems. Scheduling consists of allocating tasks to resources over time; while it executes, a task consumes a resource at a constant rate. One generally seeks to optimize an objective function such as the total duration of a schedule. Solving a scheduling problem means deciding when each task should start and which resource should execute it. Most scheduling problems are NP-hard; consequently, no known algorithm can solve them in polynomial time. However, there exist specializations of scheduling problems that are not NP-hard; these can be solved in polynomial time using dedicated algorithms. Our objective is to explore these scheduling algorithms in several varied contexts. Filtering techniques in constraint-based scheduling have evolved considerably in recent years. The prominence of filtering algorithms rests on their ability to shrink the search tree by excluding domain values that cannot participate in any solution to the problem. We propose improvements and present more efficient filtering algorithms for solving classic scheduling problems. In addition, we present adaptations of filtering techniques to the case where tasks can be delayed. We also consider various properties of industrial problems and solve more efficiently problems where the optimization criterion is not necessarily the time at which the last task finishes. For example, we present polynomial-time algorithms for the case where the amount of available resource fluctuates over time, or where the cost of executing a task at time t depends on t.
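The filtering idea described above can be illustrated with one of the simplest propagators, time-table filtering on a unary (capacity-1) resource: a task whose latest start precedes its earliest completion has a compulsory part, and any task that cannot finish before that part must be pushed past it. This is a minimal textbook sketch, not one of the improved algorithms from this thesis; the task representation is my own.

```python
def timetable_filter(tasks):
    """One pass of time-table filtering on a unary resource, to a fixed
    point. Each task is a dict with earliest start 'est', latest
    completion 'lct', and duration 'dur'. If lst = lct - dur is before
    ect = est + dur, the task necessarily runs on [lst, ect) (its
    compulsory part); a task that cannot complete by lst must then
    start at ect or later, so its 'est' can be tightened."""
    changed = True
    while changed:
        changed = False
        for t in tasks:
            lst, ect = t['lct'] - t['dur'], t['est'] + t['dur']
            if lst < ect:  # t has a compulsory part [lst, ect)
                for u in tasks:
                    if u is t:
                        continue
                    # u cannot end by lst and would overlap if it
                    # started before ect: push it after t.
                    if u['est'] < ect and u['est'] + u['dur'] > lst:
                        u['est'] = ect
                        changed = True
    return tasks

tasks = [
    {'est': 0, 'lct': 5, 'dur': 4},   # compulsory part on [1, 4)
    {'est': 0, 'lct': 10, 'dur': 3},  # cannot finish by time 1
]
timetable_filter(tasks)
# tasks[1]['est'] is now 4: the second task is pushed past [1, 4)
```

Pruning earliest-start times like this shrinks the domains before any branching takes place, which is exactly how filtering reduces the search tree.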
Resumo:
The challenge of detecting a change in the distribution of data is a sequential decision problem that is relevant to many engineering solutions, including quality control and machine and process monitoring. This dissertation develops techniques for exact solution of change-detection problems with discrete time and discrete observations. Change-detection problems are classified as Bayes or minimax based on the availability of information on the change-time distribution. A Bayes optimal solution uses prior information about the distribution of the change time to minimize the expected cost, whereas a minimax optimal solution minimizes the cost under the worst-case change-time distribution. Both types of problems are addressed. The most important result of the dissertation is the development of a polynomial-time algorithm for the solution of important classes of Markov Bayes change-detection problems. Existing techniques for epsilon-exact solution of partially observable Markov decision processes have complexity exponential in the number of observation symbols. A new algorithm, called constellation induction, exploits the concavity and Lipschitz continuity of the value function, and has complexity polynomial in the number of observation symbols. It is shown that change-detection problems with a geometric change-time distribution and identically- and independently-distributed observations before and after the change are solvable in polynomial time. Also, change-detection problems on hidden Markov models with a fixed number of recurrent states are solvable in polynomial time. A detailed implementation and analysis of the constellation-induction algorithm are provided. Exact solution methods are also established for several types of minimax change-detection problems. Finite-horizon problems with arbitrary observation distributions are modeled as extensive-form games and solved using linear programs. 
Infinite-horizon problems with linear penalty for detection delay and identically- and independently-distributed observations can be solved in polynomial time via epsilon-optimal parameterization of a cumulative-sum procedure. Finally, the properties of policies for change-detection problems are described and analyzed. Simple classes of formal languages are shown to be sufficient for epsilon-exact solution of change-detection problems, and methods for finding minimally sized policy representations are described.
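The cumulative-sum (CUSUM) procedure mentioned above has a simple standard form for a shift in the mean of Gaussian observations: accumulate the log-likelihood ratio, clip at zero, and raise an alarm when a threshold is crossed. This is the classical textbook recursion, not the dissertation's epsilon-optimal parameterization; the threshold and parameter names here are illustrative.

```python
def cusum(xs, mu0, mu1, sigma, h):
    """One-sided CUSUM for a mean shift mu0 -> mu1 in Gaussian data
    with known standard deviation sigma. S_n = max(0, S_{n-1} + LLR(x_n));
    declare a change the first time S_n exceeds the threshold h, which
    trades off detection delay against false-alarm rate."""
    s = 0.0
    for n, x in enumerate(xs):
        # Log-likelihood ratio of N(mu1, sigma^2) vs N(mu0, sigma^2).
        llr = ((mu1 - mu0) / sigma**2) * (x - (mu0 + mu1) / 2.0)
        s = max(0.0, s + llr)
        if s > h:
            return n  # index at which the change is declared
    return None       # no change detected on this horizon

# Noise-free illustration: the mean jumps from 0 to 2 at index 100.
data = [0.0] * 100 + [2.0] * 100
alarm = cusum(data, mu0=0.0, mu1=2.0, sigma=1.0, h=5.0)
# alarm == 102: detected 3 samples after the change at index 100
```

On this clean example each post-change sample adds 2 to the statistic, so the threshold h = 5 is crossed on the third post-change observation; with noisy data the same recursion applies, with the alarm index varying around the change point.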