986 results for 1ST-PRINCIPLE APPROACH


Relevance: 30.00%

Abstract:

This thesis contributes to a general theory of project design. Situated within a demand shaped by the challenges of sustainable development, the main objective of this research is to contribute a theoretical model of design that makes it possible to better situate the use of tools and standards for assessing the sustainability of a project. The fundamental principles of these normative instruments are analyzed along four dimensions: ontological, methodological, epistemological and teleological. Indicators of certain counter-productive effects related, in particular, to the application of these standards confirm the need for a theory of qualitative judgment. Our main hypothesis builds on the conceptual framework offered by the notion of the "precautionary principle", whose earliest formulations date back to the early 1970s and were aimed precisely at remedying the shortcomings of traditional scientific assessment tools and methods. The thesis is divided into five parts. Beginning with a historical review of the classical models of design theory (design thinking), it focuses on the evolution of the ways sustainability has been taken into account. From this perspective, we observe that theories of "green design" dating from the early 1960s, as well as theories of "ecological design" from the 1970s and 1980s, eventually converged with the recent theories of "sustainable design" that emerged in the early 1990s. The different approaches to the "precautionary principle" are then examined through the lens of project sustainability. Standard risk assessment methods are compared with approaches based on the precautionary principle, revealing certain limits in the design of a project. A first theoretical model of design integrating the main dimensions of the precautionary principle is thus sketched out. This model offers a global vision for judging a project that integrates principles of sustainable development, and presents itself as an alternative to traditional risk assessment approaches, which are both deterministic and instrumental. The precautionary-principle hypothesis is then proposed and examined in the specific context of the architectural project. This exploration begins with a presentation of the classical notion of "prudence" as it was historically used to guide architectural judgment. What, then, of the challenges posed by the judgment of architectural projects amid the rise of standardized assessment methods (e.g. Leadership in Energy and Environmental Design, LEED)? The thesis proposes a reinterpretation of design theory as formulated by Donald A. Schön as a way of taking assessment tools such as LEED into account. This exercise, however, reveals an epistemological obstacle that must be addressed in a reformulation of the model. In line with constructivist epistemology, a new theoretical model is then confronted with the study and illustration of three contemporary Canadian architecture competitions that adopted the LEED standardized sustainability assessment method. A preliminary series of "tensions" is identified in the process of designing and judging the projects.
These tensions are then categorized into their conceptual counterparts, constructed at the intersection of the precautionary principle and design theories. They fall into four categories: (1) conceptualization (analogical/logical); (2) uncertainty (epistemological/methodological); (3) comparability (interpretive/analytical); and (4) proposition (universality/contextual relevance). These conceptual tensions are treated as vectors that correlate with, and enrich, the theoretical model without constituting validations in the positivist sense of the term. These confrontations with reality make it possible to better define the epistemological obstacle identified earlier. The thesis thus highlights the generally underestimated impacts of environmental standardization on the process of designing and judging projects, taking as a non-restrictive example the examination of Canadian architecture competitions for public buildings. The conclusion underscores the need for a new form of "reflexive prudence" and a more critical use of current sustainability assessment tools. It calls for an instrumentation founded on global integration rather than on the opposition of environmental approaches.

Relevance: 30.00%

Abstract:

Kin selection theorists argue that evolution in social contexts will lead organisms to behave as if maximizing their inclusive, as opposed to personal, fitness. The inclusive fitness concept allows biologists to treat organisms as akin to rational agents seeking to maximize a utility function. Here we develop this idea and place it on a firm footing by employing a standard decision-theoretic methodology. We show how the principle of inclusive fitness maximization and a related principle of quasi-inclusive fitness maximization can be derived from axioms on an individual’s ‘as if preferences’ (binary choices). Our results help integrate evolutionary theory and rational choice theory, help draw out the behavioural implications of inclusive fitness maximization, and point to a possible way in which evolution could lead organisms to implement it.
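For orientation, the quantity such an "as if" maximizer trades off can be written in the familiar Hamilton form (a textbook sketch, not the paper's axiomatic derivation from binary choices):

```latex
% A social action changes the actor's own fitness by -c and the fitness of a
% social partner by +b; with genetic relatedness r, the inclusive-fitness
% effect of performing the action is
\[
  \Delta w_{\mathrm{incl}} = -c + r\,b ,
\]
% so an organism behaving as if maximizing inclusive fitness performs the
% action exactly when Hamilton's rule holds:
\[
  r\,b > c .
\]
```

The paper's contribution is to recover this maximizing behaviour from axioms on "as if preferences" rather than to assume it from the outset.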

Relevance: 30.00%

Abstract:

If usability principles guide the design of interactive solutions to ensure that they are "usable", what principles guide the design of interactive objects to ensure that the user's subjective experience (UX) is adequate and memorable? What is missing from the UX framework for a designer to explain, understand and anticipate a memorable experience ('an experience'; Dewey, 1934)? The central question stems from a twofold problem: (1) the theoretical framework of UX is incomplete, and (2) designers' processes and abilities are not considered or used to their full capacity in UX design. To answer this question, we propose to complete existing UX models with the notion of autotelic experience, which belongs primarily to two theoretical frameworks that have captured subjective experience well: Csikszentmihalyi's (1988) optimal experience (or Flow) and aesthetic experience as described by Schaeffer (2001). Autotelism is an internal dimension of Flow, whereas it covers the whole of aesthetic experience. The autotelic experience is an experience of awakening at the very moment of interaction, a moment of awareness accompanied by an imperceptible tension: the desire to make the moment last so as to prolong the pleasure it generates. Three exploratory studies were conducted, based on an analysis built on a three-part theoretical framework: Flow, signs of non-verbal activity (physical gestures) and signs of verbal activity (discourse) were assessed to see how they combine. Our results tend to show that spatial processes play a leading role in the autotelic experience and, consequently, in an optimal UX. Moreover, they suggest that pragmatic and autotelic experiences are anchored in one and the same content, and that their difference lies in the type of attention the participant brings to the interaction: ordinary attention or autotelic attention. These results led us to propose a model for UX design. The new element, which had previously gone unnoticed, consists in ensuring that the interface (in the broad sense) invites a receptive attitude toward the unexpected, so that a piece of information can trigger the spatial processes, offering an opportunity to shift from ordinary to autotelic attention. The new model opens the door to a better valuation of the designer's skills and processes within the multidisciplinary UX design team.

Relevance: 30.00%

Abstract:

A number of problems related to the use of prohibited doping substances and methods in sport pose major challenges to anti-doping governance. To fight doping, some countries have implemented legal frameworks based exclusively on criminal law, while others have relied on specialized mechanisms and bodies grounded in private law, or on hybrid regimes of public and private law. These divergent regulatory approaches make it very difficult to fight doping in sport effectively, notably because their enforcement requires a degree of international collaboration, and a concerted participation of public authorities, that is difficult to put in place. At present, for example, States are unable to effectively counter the involvement of transnational syndicates and organized-crime networks in the doping market, or to eliminate the substances and doping methods prohibited by regulation. Moreover, anti-doping governance based on the rules prescribed by the World Anti-Doping Agency establishes distinct rules and standards that distinguish between two categories of persons, athletes and everyone else, placing the former at a disadvantage. For example, the standard of strict liability without fault or negligence imposed on athletes demands less than proof beyond a reasonable doubt and allows circumstantial evidence to be used to establish a violation of anti-doping rules. Applied to prove doping, this standard undermines the presumption of innocence and the principle that no penalty may be imposed without law. Furthermore, the Agency's new 2015 Code will grant national anti-doping organizations (NADOs) investigative and intelligence-gathering powers and will add new categories of non-analytical doping violations, further reducing athletes' rights. In this thesis, we focus on the Agency's regulatory regime, grounded in private law, because it fails to meet the current needs of global anti-doping governance. We therefore advocate a new approach to anti-doping governance in which the public and globally criminal nature of doping is clearly recognized. This recognition, combined with an adapted governance model based on a pluralist approach to global administrative law, will produce anti-doping regulation and administration that is better accepted by athletes and more effective in its results. The new governance model we propose will, however, require all state and non-state actors to adjust their governance frameworks in light of this new approach, in order to confront current challenges and to resolve more satisfactorily the problems associated with the global governance of doping in sport.

Relevance: 30.00%

Abstract:

This paper outlines a methodology for translating text from English into the Dravidian language Malayalam using statistical models. By using a monolingual Malayalam corpus and a bilingual English/Malayalam corpus in the training phase, the machine automatically generates Malayalam translations of English sentences. This paper also discusses a technique to improve the alignment model by incorporating part-of-speech information into the bilingual corpus. Removing insignificant alignments from the sentence pairs by this approach has ensured better training results. Pre-processing techniques like suffix separation from the Malayalam corpus and stop-word elimination from the bilingual corpus also proved effective in training. Various handcrafted rules designed for the suffix separation process, which can serve as a guideline for implementing suffix separation in Malayalam, are also presented in this paper. The structural difference between the English-Malayalam pair is resolved in the decoder by applying order conversion rules. Experiments conducted on a sample corpus have generated reasonably good Malayalam translations, and the results are verified with the F-measure, BLEU and WER evaluation metrics.
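The statistical model underlying such a system is the standard noisy-channel formulation (background material, not specific to this paper):

```latex
% Given an English sentence e, the decoder searches for the Malayalam
% sentence m maximizing the posterior probability:
\[
  \hat{m} \;=\; \operatorname*{arg\,max}_{m} P(m \mid e)
          \;=\; \operatorname*{arg\,max}_{m} P(e \mid m)\, P(m),
\]
% where P(m) is the language model estimated from the monolingual Malayalam
% corpus and P(e | m) is the translation model estimated, via word
% alignments, from the bilingual corpus.
```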

Relevance: 30.00%

Abstract:

During the past few years, there has been much discussion of a shift from rule-based systems to principle-based systems for natural language processing. This paper outlines the major computational advantages of principle-based parsing, its differences from the usual rule-based approach, and surveys several existing principle-based parsing systems used for handling languages as diverse as Warlpiri, English, and Spanish, as well as language translation.
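The generate-and-filter character of principle-based parsing can be caricatured in a few lines (a toy sketch with invented mini-principles; none of the surveyed systems works on anything this simple):

```python
# Toy contrast with rule-based parsing: instead of many construction-specific
# rules, a principle-based parser overgenerates candidate analyses and keeps
# only those that satisfy a small set of interacting, general principles.

CANDIDATES = [
    {"head": "saw", "subject": "she", "object": "her", "case_ok": True,  "theta_ok": True},
    {"head": "saw", "subject": None,  "object": "her", "case_ok": True,  "theta_ok": False},
    {"head": "saw", "subject": "she", "object": None,  "case_ok": False, "theta_ok": True},
]

def case_filter(analysis):      # every overt NP must receive Case
    return analysis["case_ok"]

def theta_criterion(analysis):  # every argument slot of the head is filled
    return analysis["theta_ok"]

PRINCIPLES = [case_filter, theta_criterion]

def parse(candidates):
    """Keep only the analyses that satisfy all principles."""
    return [a for a in candidates if all(p(a) for p in PRINCIPLES)]

print(parse(CANDIDATES))  # only the first analysis survives
```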

Relevance: 30.00%

Abstract:

The human visual ability to perceive depth looks like a puzzle. We perceive three-dimensional spatial information quickly and efficiently by using the binocular stereopsis of our eyes and, what is more important, the knowledge of the most common objects that we acquire through living. Nowadays, modelling the behaviour of our brain is a fiction; that is why the huge problem of 3D perception and, further, interpretation is split into a sequence of easier problems. A lot of research in robot vision is devoted to obtaining 3D information about the surrounding scene. Most of this research is based on modelling human stereopsis by using two cameras as if they were two eyes. This method is known as stereo vision; it has been widely studied in the past, is being studied at present, and a lot of work will surely be done on it in the future. This fact allows us to affirm that it is one of the most interesting topics in computer vision. The stereo vision principle is based on obtaining the three-dimensional position of an object point from the positions of its projections in both camera image planes. However, before 3D information can be inferred, the mathematical models of both cameras have to be known. This step is known as camera calibration and is described at length in the thesis. Perhaps the most important problem in stereo vision is the determination of pairs of homologous points in the two images, known as the correspondence problem; it is also one of the most difficult problems to solve and is currently investigated by many researchers. Epipolar geometry allows us to reduce the correspondence problem, and an approach based on it is described in the thesis. Nevertheless, it does not solve the problem entirely, as many complications have to be taken into account; for example, points may have no correspondence because of a surface occlusion, or simply because they project outside the scope of one camera. The interest of the thesis is focused on structured light, which has been considered one of the most frequently used techniques for reducing the problems related to stereo vision. Structured light is based on the relationship between a projected light pattern and an image sensor: the deformations between the pattern projected onto the scene and the one captured by the camera make it possible to obtain three-dimensional information about the illuminated scene. This technique has been widely used in applications such as 3D object reconstruction, robot navigation, quality control, and so on. Although the projection of regular patterns solves the problem of points without a match, it does not solve the problem of multiple matching, which forces the use of computationally hard algorithms to search for the correct matches. In recent years, another structured light technique has grown in importance. It is based on codifying the light projected onto the scene so that the code can be used to obtain a unique match: each token of light imaged by the camera carries a label, and we have to read that label (decode the pattern) in order to solve the correspondence problem. The advantages and disadvantages of stereo vision versus structured light, together with a survey of coded structured light, are presented and discussed. The work carried out in the frame of this thesis has made it possible to present a new coded structured light pattern which solves the correspondence problem uniquely and robustly.
Uniquely, because each token of light is coded by a different word, which removes the problem of multiple matching. Robustly, since the pattern is coded using the position of each token of light with respect to both coordinate axes. Algorithms and experimental results are included in the thesis. The reader can see examples of 3D measurement of static objects and the more complicated measurement of moving objects; the technique can be used in both cases, as the pattern is coded in a single projection shot, so it can be applied in several areas of robot vision. Our interest is focused on the mathematical study of the camera and pattern projector models. We are also interested in how these models can be obtained by calibration, and how they can be used to obtain three-dimensional information from two corresponding points. Furthermore, we have studied structured light and coded structured light, and we have presented a new coded structured light pattern. However, in this thesis we started from the assumption that the corresponding points could be well segmented from the captured image. Computer vision constitutes a huge problem, and a lot of work is being done at all levels of human vision modelling, starting from (a) image acquisition; (b) image enhancement, filtering and processing; and (c) image segmentation, which involves thresholding, thinning, contour detection, texture and colour analysis, and so on. The interest of this thesis starts at the next step, usually known as depth perception or 3D measurement.
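The triangulation at the core of both stereo vision and structured light can be sketched in a few lines (a minimal linear DLT triangulation, assuming calibrated 3x4 projection matrices and an already-solved correspondence):

```python
import numpy as np

def triangulate(P1, P2, x1, x2):
    """Linear (DLT) triangulation of one 3D point.

    P1, P2 : 3x4 projection matrices obtained from calibration.
    x1, x2 : (u, v) pixel coordinates of the same scene point in each view.
    Returns the 3D point in inhomogeneous coordinates.
    """
    # Each view gives two linear constraints on the homogeneous point X:
    # u * (row 3 of P) - (row 1 of P), and v * (row 3 of P) - (row 2 of P).
    A = np.vstack([
        x1[0] * P1[2] - P1[0],
        x1[1] * P1[2] - P1[1],
        x2[0] * P2[2] - P2[0],
        x2[1] * P2[2] - P2[1],
    ])
    # Solve A X = 0 in the least-squares sense via SVD.
    _, _, Vt = np.linalg.svd(A)
    X = Vt[-1]
    return X[:3] / X[3]
```

In a coded structured light system, one of the two devices is the pattern projector rather than a second camera, and the decoded label of each light token supplies the correspondence (x1, x2) directly.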

Relevance: 30.00%

Abstract:

The climate belongs to the class of non-equilibrium forced and dissipative systems, for which most results of quasi-equilibrium statistical mechanics, including the fluctuation-dissipation theorem, do not apply. In this paper we show for the first time how the Ruelle linear response theory, developed for studying rigorously the impact of perturbations on general observables of non-equilibrium statistical mechanical systems, can be applied with great success to analyze the climatic response to general forcings. The crucial value of the Ruelle theory lies in the fact that it allows one to compute the response of the system in terms of expectation values of explicit and computable functions of the phase space, averaged over the invariant measure of the unperturbed state. We choose as a test bed a classical version of the Lorenz 96 model which, in spite of its simplicity, has a well-recognized prototypical value, as it is a spatially extended one-dimensional model and presents the basic ingredients, such as dissipation, advection and the presence of an external forcing, of the actual atmosphere. We recapitulate the main aspects of the general response theory and propose some new general results. We then analyze the frequency dependence of the response of both local and global observables to perturbations having localized as well as global spatial patterns. We derive analytically several properties of the corresponding susceptibilities, such as asymptotic behavior, validity of Kramers-Kronig relations, and sum rules, whose main ingredient is the causality principle. We show that all the coefficients of the leading asymptotic expansions, as well as the integral constraints, can be written as linear functions of parameters that describe the unperturbed properties of the system, such as its average energy. Some newly obtained empirical closure equations for such parameters allow one to express these properties as explicit functions of the unperturbed forcing parameter alone for a general class of chaotic Lorenz 96 models. We then verify the theoretical predictions from the outputs of the simulations up to a high degree of precision. The theory is used to explain differences in the response of local and global observables, to define the intensive properties of the system, which do not depend on the spatial resolution of the Lorenz 96 model, and to generalize the concept of climate sensitivity to all time scales. We also show how to reconstruct the linear Green function, which maps perturbations of general time patterns into changes in the expectation value of the considered observable for finite as well as infinite time. Finally, we propose a simple yet general methodology for studying general climate change problems on virtually any time scale, by resorting only to well-selected simulations and by taking full advantage of ensemble methods. The specific case of the response of the globally averaged surface temperature to a general pattern of change of the CO2 concentration is discussed. We believe that the proposed approach may constitute a mathematically rigorous and practically very effective way to approach the problem of climate sensitivity, climate prediction, and climate change from a radically new perspective.
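The test bed is compact enough to state in full (a minimal sketch of the Lorenz 96 model; the forcing F is the parameter the response experiments perturb):

```python
import numpy as np

def lorenz96_tendency(x, F):
    """Tendency of the Lorenz 96 model on a ring of N sites:
    dx_i/dt = (x_{i+1} - x_{i-2}) * x_{i-1} - x_i + F,
    with cyclic boundary conditions. The quadratic term models advection,
    the linear term dissipation, and F the external forcing.
    """
    return (np.roll(x, -1) - np.roll(x, 2)) * np.roll(x, 1) - x + F

def integrate(x0, F, dt=0.01, steps=10000):
    """Fourth-order Runge-Kutta integration."""
    x = x0.copy()
    for _ in range(steps):
        k1 = lorenz96_tendency(x, F)
        k2 = lorenz96_tendency(x + 0.5 * dt * k1, F)
        k3 = lorenz96_tendency(x + 0.5 * dt * k2, F)
        k4 = lorenz96_tendency(x + dt * k3, F)
        x += dt / 6.0 * (k1 + 2 * k2 + 2 * k3 + k4)
    return x

# A linear-response experiment perturbs F and compares ensemble averages of
# an observable between the perturbed and unperturbed runs.
x_final = integrate(8.0 + 0.01 * np.random.randn(40), F=8.0)
```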

Relevance: 30.00%

Abstract:

Parkinson’s disease is a clinical syndrome manifesting with slowness and instability. As it is a progressive disease with varying symptoms, repeated assessments are necessary to determine the outcome of treatment changes in the patient. In the recent past, a computer-based method was developed to rate impairment in spiral drawings. The downside of this method is that it cannot separate bradykinetic from dyskinetic spiral drawings. This work aims to construct a computer method that can overcome this weakness by using the Hilbert-Huang Transform (HHT) of the tangential velocity. The work is done under supervised learning, so a target class is used, acquired from a neurologist through a web interface. After reducing the dimension of the HHT features by using PCA, classification is performed with the C4.5 classifier. The classification results are close to random guessing, which shows that the computer method is unsuccessful in assessing the cause of drawing impairment in spirals when evaluated against human ratings. One possible reason is that there is no difference between the two classes of spiral drawings. Another is that the patients’ self-ratings were displayed along with the spirals in the web application, so the neurologist may have relied too heavily on them in his own ratings.
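A rough sketch of such a pipeline (assuming pen coordinates sampled at a fixed rate; scipy's analytic signal stands in for the full Hilbert-Huang Transform, whose empirical mode decomposition step is omitted, and scikit-learn's CART tree stands in for C4.5):

```python
import numpy as np
from scipy.signal import hilbert
from sklearn.decomposition import PCA
from sklearn.pipeline import make_pipeline
from sklearn.tree import DecisionTreeClassifier

def tangential_velocity(x, y, fs):
    """Speed along the pen trajectory from (x, y) samples at rate fs."""
    vx, vy = np.gradient(x) * fs, np.gradient(y) * fs
    return np.hypot(vx, vy)

def velocity_features(v, n_bins=16):
    """Envelope and instantaneous-frequency histograms of the analytic
    signal of the tangential velocity (a crude stand-in for HHT features,
    which would first decompose v into intrinsic mode functions)."""
    analytic = hilbert(v - v.mean())
    envelope = np.abs(analytic)
    inst_freq = np.diff(np.unwrap(np.angle(analytic)))
    return np.concatenate([
        np.histogram(envelope, bins=n_bins, density=True)[0],
        np.histogram(inst_freq, bins=n_bins, density=True)[0],
    ])

# X: one feature row per spiral; y: the neurologist's target class
# (bradykinetic vs. dyskinetic) collected through the web interface.
model = make_pipeline(PCA(n_components=8), DecisionTreeClassifier())
# model.fit(X, y) and model.predict(...) as usual in scikit-learn.
```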

Relevance: 30.00%

Abstract:

Parkinson’s disease (PD) is an increasingly common neurological disorder in an aging society. The motor and non-motor symptoms of PD advance with disease progression and occur with varying frequency and duration. In order to ascertain the full extent of a patient’s condition, repeated assessments are necessary to adjust medical prescription. In clinical studies, symptoms are assessed using the Unified Parkinson’s Disease Rating Scale (UPDRS). On the one hand, subjective rating using the UPDRS relies on clinical expertise. On the other hand, it requires the physical presence of patients in clinics, which implies high logistical costs. Another limitation of clinical assessment is that observation in hospital may not accurately represent a patient’s situation at home. For such reasons, the practical frequency of tracking PD symptoms may under-represent the true time scale of PD fluctuations and may result in an overall inaccurate assessment. Current technologies for at-home PD treatment are based on data-driven approaches for which the interpretation and reproduction of results are problematic. The overall objective of this thesis is to develop and evaluate unobtrusive computer methods for enabling remote monitoring of patients with PD. It investigates novel signal and image processing techniques, based on first-principles and data-driven models, for the extraction of clinically useful information from audio recordings of speech (texts read aloud) and video recordings of gait and finger-tapping motor examinations. The aim is to map between PD symptom severities estimated using the novel computer methods and clinical ratings based on UPDRS part III (motor examination). A web-based test battery system consisting of self-assessment of symptoms and motor function tests was previously constructed for a touch-screen mobile device. A comprehensive speech framework has been developed for this device to analyze text-dependent running speech by: (1) extracting novel signal features that are able to represent PD deficits in each individual component of the speech system; (2) mapping between clinical ratings and feature estimates of speech symptom severity; and (3) classifying between UPDRS part-III severity levels using speech features and statistical machine learning tools. A novel speech processing method called the cepstral separation difference showed a stronger ability to classify between speech symptom severities than existing features of PD speech. In the case of finger tapping, the recorded videos of the rapid finger-tapping examination were processed using a novel computer vision (CV) algorithm that extracts symptom information from video-based tapping signals using motion analysis of the index finger, incorporating a face detection module for signal calibration. This algorithm was able to discriminate between UPDRS part-III severity levels of finger tapping with high classification rates. Further analysis was performed on novel CV-based gait features, constructed using a standard human model, to discriminate between a healthy gait and a Parkinsonian gait. The findings of this study suggest that symptom severity levels in PD can be discriminated with high accuracy by combining first-principles (features) and data-driven (classification) approaches. The processing of audio and video recordings allows, on the one hand, remote monitoring of speech, gait and finger-tapping examinations by clinical staff.
On the other hand, the first-principles approach eases clinicians’ understanding of the symptom estimates. We have demonstrated that the selected features of speech, gait and finger tapping were able to discriminate between symptom severity levels, as well as between healthy controls and PD patients, with high classification rates. The findings support the suitability of these methods as decision support tools in the context of PD assessment.
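The cepstral part of the speech framework rests on a standard quantity that can be stated compactly (a generic real-cepstrum computation; the thesis’s cepstral separation difference feature is built on top of such quantities, and its exact construction is not reproduced here):

```python
import numpy as np

def real_cepstrum(frame, n_fft=512):
    """Real cepstrum of one windowed speech frame: the inverse FFT of the
    log magnitude spectrum. Low quefrencies capture the vocal-tract
    envelope and higher quefrencies the voicing excitation, which is why
    cepstral features can separate source and filter deficits in speech.
    """
    spectrum = np.fft.rfft(frame * np.hanning(len(frame)), n=n_fft)
    log_mag = np.log(np.abs(spectrum) + 1e-12)  # avoid log(0)
    return np.fft.irfft(log_mag)
```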

Relevance: 30.00%

Abstract:

In the general relativistic description of gravitation, geometry replaces the concept of force. This is possible because of the universal character of free fall, and would break down in its absence. On the other hand, the teleparallel version of general relativity is a gauge theory for the translation group and, as such, describes the gravitational interaction by a force similar to the Lorentz force of electromagnetism, a non-universal interaction. Relying on this analogy, it is shown that, although the geometric description of general relativity necessarily requires the existence of the equivalence principle, the teleparallel gauge approach remains a consistent theory for gravitation in its absence.
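The analogy can be made concrete by juxtaposing the two standard equations of motion (background formulas, not expressions taken from the paper):

```latex
% Geodesic motion in general relativity: pure geometry, no force term.
\[
  \frac{d^2 x^{\mu}}{ds^2}
  + \Gamma^{\mu}{}_{\nu\rho}\,\frac{dx^{\nu}}{ds}\frac{dx^{\rho}}{ds} = 0
\]
% Lorentz force law of electromagnetism: a non-universal force, whose
% strength depends on the particle's charge-to-mass ratio q/m.
\[
  \frac{du^{\mu}}{ds} = \frac{q}{m}\, F^{\mu}{}_{\nu}\, u^{\nu}
\]
```

In the teleparallel formulation, torsion plays a role analogous to that of $F^{\mu}{}_{\nu}$, with the gravitational analogue of $q/m$ being the ratio of gravitational to inertial mass; the paper's point is that the force equation remains consistent even when this ratio is not universal.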

Relevance: 30.00%

Abstract:

Fundação de Amparo à Pesquisa do Estado de São Paulo (FAPESP)

Relevance: 30.00%

Abstract:

The complex Kohn variational principle is applied to the numerical solution of the fully off-shell Lippmann-Schwinger equation for nucleon-nucleon scattering for various partial waves, including the coupled ³S₁-³D₁ channel. Analytic expressions are obtained for all the integrals in the method for a suitable choice of expansion functions. Calculations with the partial waves ¹S₀, ¹P₁, ¹D₂, and ³S₁-³D₁ of the Reid soft-core potential show that the method converges faster than other solution schemes, not only for the phase shift but also for the off-shell t-matrix elements. We also show that it is trivial to modify this variational principle in order to make it suitable for bound-state calculations. The bound-state approach is illustrated for the ³S₁-³D₁ channel of the Reid soft-core potential by calculating the deuteron binding energy, wave function, and D-state asymptotic parameters. (c) 1995 Academic Press, Inc.
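For reference, the partial-wave Lippmann-Schwinger equation being solved has the standard form (written in one common convention; normalizations vary across the literature):

```latex
\[
  t(p, p'; E) \;=\; V(p, p')
  \;+\; \int_0^{\infty} dq\, q^2\,
    \frac{V(p, q)\, t(q, p'; E)}{E - q^2/m + i\epsilon},
\]
```

where $V$ is the nucleon-nucleon potential (here the Reid soft core) and the fully off-shell $t$ matrix is obtained with $p$ and $p'$ independent of the on-shell momentum.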

Relevance: 30.00%

Abstract:

In this work we show how to define the action of a scalar field such that the Robin boundary condition is implemented dynamically, i.e. as a consequence of the stationary action principle. We discuss the quantization of that system via functional integration. Using this formalism, we derive an expression for the Casimir energy of a massless scalar field under Robin boundary conditions on a pair of parallel plates, characterized by constants c₁ and c₂. Some special cases are discussed; in particular, we show that for some values of c₁ and c₂ the Casimir energy, as a function of the distance between the plates, presents a minimum. We also discuss the renormalization at one-loop order of the two-point Green function in the λφ⁴ theory subject to the Robin boundary condition on a plate.
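The Robin condition interpolates between the familiar Dirichlet and Neumann cases (a standard statement; sign and placement conventions for the constants vary):

```latex
\[
  \left. \phi + c_i\, \partial_n \phi \,\right|_{\text{plate } i} = 0,
  \qquad i = 1, 2,
\]
% c_i = 0 recovers the Dirichlet condition (phi = 0 on the plate), while
% c_i -> infinity recovers the Neumann condition (vanishing normal derivative).
```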

Relevance: 30.00%

Abstract:

The δ-expansion is a nonperturbative approach for field theoretic models which combines the techniques of perturbation theory and the variational principle. Different ways of implementing the principle of minimal sensitivity in the δ-expansion produce, in general, different results for observables. For illustration we use the Nambu-Jona-Lasinio model for chiral symmetry restoration at finite density and compare the results with those obtained with the Hartree-Fock approximation.
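The construction can be summarized in two standard formulas (a generic statement of the linear δ-expansion; the interpolating Lagrangian for the Nambu-Jona-Lasinio model carries model-specific terms not shown here):

```latex
% Interpolate between a solvable trial theory L_0(mu), carrying a variational
% mass parameter mu, and the full theory L; expand in delta and set delta = 1:
\[
  \mathcal{L}_{\delta} = (1 - \delta)\, \mathcal{L}_0(\mu) + \delta\, \mathcal{L}
\]
% Fix mu at each order k by the principle of minimal sensitivity (PMS),
% requiring the truncated observable to be stationary in the artificial
% parameter it would not depend on in an exact calculation:
\[
  \left. \frac{\partial O^{(k)}(\mu)}{\partial \mu} \right|_{\mu = \bar{\mu}} = 0
\]
```

Different stationarity prescriptions (e.g. which observable is optimized, or at which order) select different $\bar{\mu}$, which is the source of the ambiguity the paper illustrates.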