971 results for non-ideal problems
Abstract:
The paper presents contemporary approaches to spatial environmental data analysis. The main topics concern decision-oriented problems of environmental spatial data mining and modelling: valorization and representativity of data with the help of exploratory data analysis, spatial predictions, probabilistic and risk mapping, and the development and application of conditional stochastic simulation models. The innovative part of the paper presents an integrated/hybrid model: machine learning (ML) residuals sequential simulations (MLRSS). The models are based on multilayer perceptron and support vector regression ML algorithms, used for modelling long-range spatial trends, combined with sequential simulation of the residuals. ML algorithms deliver non-linear solutions to spatially non-stationary problems, which are difficult for the geostatistical approach. Geostatistical tools (variography) are used to characterize the performance of the ML algorithms by analysing the quality and quantity of the spatially structured information extracted from the data. Sequential simulations provide an efficient assessment of uncertainty and spatial variability. A case study on the Chernobyl fallout illustrates the performance of the proposed model. It is shown that probability mapping, provided by the combination of ML data-driven and geostatistical model-based approaches, can be used efficiently in the decision-making process. (C) 2003 Elsevier Ltd. All rights reserved.
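The trend-plus-residual idea behind the hybrid scheme can be caricatured in a few lines of Python. This is a toy sketch only: the paper's models use MLP/SVR trends and sequential geostatistical simulation of the residuals, whereas below a plain least-squares line stands in for the trend and residuals are simply resampled; every name is hypothetical.

```python
import random

def fit_linear_trend(xs, ys):
    """Least-squares line y = a + b*x, a stand-in for the MLP/SVR trend model."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    b = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) \
        / sum((x - mx) ** 2 for x in xs)
    return my - b * mx, b

def exceedance_probability(xs, ys, x0, threshold, n_sims=1000, rng=None):
    """Trend plus resampled-residual realizations -> P(value at x0 > threshold)."""
    rng = rng or random.Random(0)
    a, b = fit_linear_trend(xs, ys)
    residuals = [y - (a + b * x) for x, y in zip(xs, ys)]
    trend0 = a + b * x0
    hits = sum(trend0 + rng.choice(residuals) > threshold for _ in range(n_sims))
    return hits / n_sims
```

A site's exceedance probability is then simply the fraction of stochastic realizations above the threshold, which is the essence of the probabilistic mapping described in the abstract.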
Abstract:
Although neuroimaging research has evidenced specific responses to visual food stimuli based on their nutritional quality (e.g., energy density, fat content), the brain processes underlying portion size selection remain largely unexplored. We identified spatio-temporal brain dynamics in response to meal images varying in portion size during a task of ideal portion selection for prospective lunch intake and expected satiety. Brain responses to meal portions judged by the participants as 'too small', 'ideal' and 'too big' were measured by means of electroencephalographic (EEG) recordings in 21 normal-weight women. During an early stage of meal viewing (105-145 ms), the data showed an incremental increase in the head-surface global electric field strength (quantified via global field power; GFP) as portion judgments ranged from 'too small' to 'too big'. Estimations of neural source activity revealed that the brain regions underlying this effect were located in the insula, middle frontal gyrus and middle temporal gyrus, and are similar to those reported in previous studies investigating responses to changes in food nutritional content. In contrast, during a later stage (230-270 ms), GFP was maximal for the 'ideal' relative to the 'non-ideal' portion sizes. Greater neural source activity for 'ideal' vs. 'non-ideal' portion sizes was observed in the inferior parietal lobule, superior temporal gyrus and mid-posterior cingulate gyrus. Collectively, our results provide evidence that several brain regions involved in attention and adaptive behavior track 'ideal' meal portion sizes as early as 230 ms after visual encounter. That is, responses do not increase in parallel with the amount of food viewed (and, by extension, the amount of reward), but are shaped by regulatory mechanisms.
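For reference, the global field power used here is commonly defined (after Lehmann and Skrandies) as the spatial standard deviation of the potential across all electrodes at a given time point. A minimal sketch, with hypothetical electrode values:

```python
import math

def global_field_power(potentials):
    """GFP at one time point: the spatial standard deviation of the
    electrode potentials (Lehmann & Skrandies definition)."""
    n = len(potentials)
    mean = sum(potentials) / n
    return math.sqrt(sum((v - mean) ** 2 for v in potentials) / n)
```

A spatially flat map yields a GFP of zero, while stronger, more differentiated scalp topographies yield higher values, which is why GFP serves as a single time course of overall field strength.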
Abstract:
This work presents new, efficient Markov chain Monte Carlo (MCMC) simulation methods for statistical analysis in various modelling applications. When MCMC methods are used, the model is simulated repeatedly to explore the probability distribution describing the uncertainties in model parameters and predictions. In adaptive MCMC methods based on the Metropolis-Hastings algorithm, the proposal distribution needed by the algorithm learns from the target distribution as the simulation proceeds. Adaptive MCMC methods have been the subject of intensive research lately, as they open a way to essentially easier use of the methodology; the lack of user-friendly computer programs has been a main obstacle to wider adoption of the methods. This work provides two new adaptive MCMC methods: DRAM and AARJ. The DRAM method has been built especially to work in high-dimensional and non-linear problems. The AARJ method is an extension of DRAM for model selection problems, where the mathematical formulation of the model is uncertain and we want to fit several different models to the same observations simultaneously. The methods were developed with the needs of modelling applications typical of the environmental sciences in mind, and the development work was pursued while working on several application projects. The applications presented in this work are: a wintertime oxygen concentration model for Lake Tuusulanjärvi and adaptive control of the aerator; a nutrition model for Lake Pyhäjärvi and lake management planning; and validation of the algorithms of the GOMOS ozone remote sensing instrument on board the Envisat satellite of the European Space Agency, together with a study of the effects of aerosol model selection on the GOMOS algorithm.
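The core adaptive idea, a proposal that learns from the chain as it runs, can be illustrated with a toy one-dimensional random-walk Metropolis whose step size adapts toward a target acceptance rate. This is only a sketch of the adaptation principle, not DRAM itself, which additionally uses delayed rejection and a full proposal-covariance update.

```python
import math
import random

def adaptive_metropolis(log_target, x0, n_iter=20000, target_rate=0.44, rng=None):
    """Random-walk Metropolis whose proposal scale adapts on the fly.

    Toy illustration of the adaptation idea behind samplers such as DRAM;
    delayed rejection and covariance adaptation are not reproduced here.
    """
    rng = rng or random.Random(1)
    x, log_p = x0, log_target(x0)
    log_scale = 0.0  # log of the proposal standard deviation
    samples = []
    for i in range(1, n_iter + 1):
        prop = x + math.exp(log_scale) * rng.gauss(0.0, 1.0)
        log_p_prop = log_target(prop)
        accepted = rng.random() < math.exp(min(0.0, log_p_prop - log_p))
        if accepted:
            x, log_p = prop, log_p_prop
        # Robbins-Monro step: nudge the scale so the empirical acceptance
        # rate drifts toward target_rate (~0.44 is the 1D optimum).
        log_scale += ((1.0 if accepted else 0.0) - target_rate) / i ** 0.6
        samples.append(x)
    return samples
```

For a standard normal target, `log_target` is `x -> -x*x/2` and the chain's long-run mean and variance should approach 0 and 1.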
Abstract:
We propose a new family of risk measures, called GlueVaR, within the class of distortion risk measures. Analytical closed-form expressions are given for the most frequently used distribution functions in financial and insurance applications. The relationship between GlueVaR, Value-at-Risk (VaR) and Tail Value-at-Risk (TVaR) is explained. Tail-subadditivity is investigated, and it is shown that some GlueVaR risk measures satisfy this property. An interpretation in terms of risk attitudes is provided, and the applicability to non-financial problems such as health, safety, environmental or catastrophic risk management is discussed.
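Empirical versions of the building blocks are easy to sketch. The paper shows that a GlueVaR measure can be written as a linear combination of TVaR at two confidence levels and VaR; the code below uses that representation with an upper-order-statistic quantile convention (VaR conventions differ across texts), and the weights shown are purely illustrative.

```python
import math

def var(losses, alpha):
    """Empirical Value-at-Risk: the alpha-quantile of the loss sample
    (upper-order-statistic convention; other conventions exist)."""
    s = sorted(losses)
    return s[max(math.ceil(alpha * len(s)) - 1, 0)]

def tvar(losses, alpha):
    """Empirical Tail Value-at-Risk: mean of losses at or above VaR_alpha."""
    v = var(losses, alpha)
    tail = [x for x in losses if x >= v]
    return sum(tail) / len(tail)

def glue_var(losses, alpha, beta, w1, w2):
    """GlueVaR via its linear-combination representation:
    w1*TVaR_beta + w2*TVaR_alpha + (1 - w1 - w2)*VaR_alpha."""
    return (w1 * tvar(losses, beta) + w2 * tvar(losses, alpha)
            + (1.0 - w1 - w2) * var(losses, alpha))
```

With `w1 = 1, w2 = 0` this collapses to TVaR at level beta, and with `w1 = w2 = 0` to VaR at level alpha, matching the family's role as a bridge between the two measures.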
Abstract:
Print quality and the printability of paper are very important attributes for modern printing applications. In prints containing images, high print quality is a basic requirement. Tone unevenness and non-uniform glossiness of printed products are the most disturbing factors influencing overall print quality. These defects are caused by non-ideal interactions of paper, ink and printing devices in high-speed printing processes. Since print quality is a perceptual characteristic, measuring unevenness in accordance with human vision is a significant problem. This thesis studies the mottling phenomenon. Mottling is a printing defect characterized by a spotty, non-uniform appearance in solid printed areas. Print mottle is usually the result of uneven ink laydown or non-uniform ink absorption across the paper surface, and is especially visible in mid-tone imagery or areas of uniform color, such as solids and continuous-tone screen builds. Using existing knowledge of visual perception and known methods for quantifying print tone variation, a new method for evaluating print unevenness is introduced. The method is compared with previous results in the field and is supported by psychometric experiments. Pilot studies estimate the effect of the optical characteristics of the paper, measured prior to printing, on the unevenness of the printed area after printing. Instrumental methods for print unevenness evaluation have been compared, and the results indicate that the proposed method corresponds better with visual evaluation. The method has been successfully implemented as an industrial application and has proved to be a reliable substitute for visual expertise.
Abstract:
Concern about the state of the environment and the rising price of fossil fuels have accelerated research into new energy sources. Fuel cells are among the most promising technologies, especially for distributed power generation, backup power plants and transportation. As a power source, however, a fuel cell is highly non-ideal and places numerous special demands on the power electronics. A fuel cell is usually connected to the grid through a galvanically isolating DC/DC converter and an inverter in series. To prevent fuel cell degradation, the power electronics must control the fuel cell output current precisely. Traditionally, current control has been implemented by regulating the converter input current with a PI (proportional-integral) or PID (proportional-integral-derivative) controller. Owing to the non-linearity of the converter, such a solution may not work far from the linearization point, and conventional controllers are also sensitive to modelling errors. This master's thesis presents a model of a fuel-cell boost converter based on the state-space averaging method, together with a model-based discrete-time integral sliding-mode control. The proposed control is non-linear in nature and is suitable for controlling non-linear and poorly known systems.
Abstract:
The purpose of this thesis is to examine how broadly a trademark holder can prohibit another trader from using the same or a similar mark when it is used for the same, similar, or even completely different goods or services. The scope of the holder's right of prohibition is examined first by studying the EU Trade Mark Directive, the Finnish Trademarks Act and the relevant case law, and then by examining how broad the holder's right of prohibition is in China. The main research method is legal-dogmatic, but a legal comparison between Finland and China is also carried out. In trademark protection in Finland, the acceptance of the Chicago school of thought is clearly discernible; applied to trademark law, it emphasizes broad protection for the mark holder. According to this theory, the investments involved in building a trademark into a brand must be protected and their unearned exploitation prevented. The opposite view is represented by the Harvard school, features of whose theory can be observed in Chinese trademark law; it emphasizes narrow trademark protection. Allowing competitors to imitate a well-known trademark improves the competitor's position in the market. On the other hand, foreign mark holders in China also face many problems outside trademark law that make protecting a trademark in the Chinese market more difficult.
Abstract:
In order to assess the degree to which the provision of economic incentives can result in justified inequalities, we need to distinguish between compensatory and non-compensatory incentive payments. From a liberal egalitarian perspective, economic inequalities traceable to the provision of compensatory incentive payments are generally justifiable. However, economic inequalities created by the provision of non-compensatory incentive payments are more problematic. I argue that in non-ideal circumstances justice may permit, and even require, the provision of non-compensatory incentives, despite the fact that those who receive non-compensatory payments are not entitled to them. In some circumstances, justice may require us to accede to unreasonable demands for incentive payments by hard bargainers. This leads to a kind of paradox: from a systemic point of view, non-compensatory incentive payments can be justified even though those who receive them have no just claim to them.
Abstract:
The aim of this thesis is to refine and better understand the use of the spectroscopic method, which compares visible spectra of white dwarfs with hydrogen-rich atmospheres (DA) to synthetic spectra in order to determine their atmospheric parameters (effective temperature and surface gravity). Our approach rests mainly on the development of improved model spectra, themselves derived from DA white dwarf atmosphere models. We present a new grid of DA synthetic spectra with the first consistent implementation of the non-ideal gas theory of Hummer & Mihalas and the unified Stark broadening theory of Vidal, Cooper & Smith. This allows an adequate treatment of the overlapping lines of the Balmer series, without the need for a free parameter. We show that these improved spectra predict surface gravities that are more stable as a function of effective temperature. We then study the long-standing problem of the high surface gravities found for cool DAs. The hypothesis of Bergeron et al., according to which the atmospheres are contaminated by helium, is confronted with observations. Using high-resolution spectra obtained at the Keck telescope in Hawaii, we find upper limits on the helium abundance in the atmospheres nearly 10 times lower than those required by the Bergeron et al. scenario. The grid of spectra developed in this work is then applied to a new spectroscopic analysis of the SDSS DA sample. Our careful approach allows us to define a cleaner sample and to identify a significant number of binary white dwarfs. We determine that a cut at a signal-to-noise ratio of S/N > 15 optimizes the size and quality of the sample for computing the mean mass, for which we find a value of 0.613 solar masses.
Finally, eight new 3D white dwarf models using a radiation-hydrodynamics treatment of convection are presented. We have also computed models with the same physics but with a standard 1D treatment of convection based on the mixing-length theory. A differential analysis between these two series of models shows that the 3D models predict considerably lower gravities. We conclude that the high-gravity problem in cool DA white dwarfs is most probably caused by a weakness in the mixing-length theory.
Abstract:
Developmental coordination disorder is recognized by motor difficulties that affect performance in daily and school activities; early diagnosis is therefore necessary so that timely intervention can begin. One diagnostic questionnaire is the Developmental Coordination Disorder Questionnaire'07 (DCDQ'07). Objective: to translate and cross-culturally adapt the DCDQ'07 into Spanish. Materials and methods: three independent translators translated the questionnaire, classifying its items as equivalent, having problems with some words, or without equivalence, and according to their experiential, semantic, conceptual and idiomatic equivalence. Results: this article presents the preliminary results of the research, which has completed its first phase, the translation of the fifteen items of the questionnaire. Eight items were classified as equivalent, six as having problems with some words, and one as without equivalence. Ten items corresponded to translation by experiential equivalence, four were classified as semantic equivalents, and one was considered to have double equivalence. The author of the original questionnaire gave a positive assessment of the Spanish version, and parents' perception of the questionnaire was positive. Conclusions: most of the questionnaire items presented no difficulty in translation, facilitating the cross-cultural adaptation into Spanish and the continuation of the validation and reliability process.
Abstract:
In a recent paper [P. Glaister, Conservative upwind difference schemes for compressible flows in a duct, Comput. Math. Appl. 56 (2008) 1787–1796], numerical schemes based on a conservative linearisation are presented for the Euler equations governing compressible flows of an ideal gas in a duct of variable cross-section, and in [P. Glaister, Conservative upwind difference schemes for compressible flows of a real gas, Comput. Math. Appl. 48 (2004) 469–480] schemes based on this philosophy are presented for real gas flows with slab symmetry. In this paper we extend these ideas to encompass compressible flows of real gases in a duct. This incorporates the handling of the additional terms arising from the variable geometry and the non-ideal nature of the gas.
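For context, the governing equations referred to are the quasi-one-dimensional Euler equations for a duct of cross-sectional area A(x); in standard textbook form (not necessarily the paper's exact notation) they read:

```latex
\frac{\partial}{\partial t}(\rho A) + \frac{\partial}{\partial x}(\rho u A) = 0,
\qquad
\frac{\partial}{\partial t}(\rho u A) + \frac{\partial}{\partial x}\bigl((\rho u^{2} + p)A\bigr) = p\,\frac{\mathrm{d}A}{\mathrm{d}x},
\qquad
\frac{\partial}{\partial t}(e A) + \frac{\partial}{\partial x}\bigl(u(e + p)A\bigr) = 0,
```

where \(\rho\), \(u\), \(p\) and \(e\) are the density, velocity, pressure and total energy per unit volume. The geometric source term \(p\,\mathrm{d}A/\mathrm{d}x\) is the extra contribution from the variable duct, and for a real gas the system is closed by a general equation of state \(p = p(\rho, i)\) (with \(i\) the specific internal energy) rather than the ideal-gas law.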
Abstract:
This article contrasts the senses in which those whom Bernard Williams called 'political realists' and John Rawls are committed to the idea that political philosophy has to be distinctively political. Distinguishing the realist critique of political moralism from debates over ideal and non-ideal theory, it is argued that Rawls is more realist than many realists realise, and that realists can learn much from his theorising about how to construct a distinctively political vision of how our life together should be organised. The article also points to a worrying tendency among Rawlsians to reach for inappropriately moralised arguments. G. A. Cohen's advocacy of socialism and the second season of HBO's The Wire are used as examples to illustrate these points.
Abstract:
One of the key issues in e-learning environments is the possibility of creating and evaluating exercises. However, the lack of tools supporting the authoring and automatic checking of exercises for specific topics (e.g., geometry) drastically reduces the advantages of using e-learning environments on a larger scale, as is common in Brazil. This paper describes an algorithm, and a tool based on it, designed for the authoring and automatic checking of geometry exercises. The algorithm dynamically compares the distances between the geometric objects of the student's solution and the template solution provided by the author of the exercise. Each solution is a geometric construction, which is considered a function receiving geometric objects (input) and returning other geometric objects (output). Thus, for a given problem, if we know one function (construction) that solves the problem, we can compare it to any other function to check whether they are equivalent. Two functions are equivalent if, and only if, they have the same output when the same input is applied. If the student's solution is equivalent to the template solution, then we consider the student's solution correct. Our software utility provides both authoring and checking tools that work directly on the Internet, together with learning management systems. These tools are implemented using the dynamic geometry software iGeom, which has been used in a geometry course since 2004 and has a successful track record in the classroom. Empowered with these new features, iGeom simplifies teachers' tasks, checks non-trivial student solutions and helps to increase student motivation by providing feedback in real time. (c) 2008 Elsevier Ltd. All rights reserved.
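The equivalence criterion stated above can only be tested on finitely many inputs in practice. A hypothetical sketch (not iGeom's actual algorithm, which compares distances between geometric objects) is to evaluate both constructions on a sample of randomly generated inputs:

```python
import random

def probably_equivalent(f, g, sample_inputs, tol=1e-9):
    """Treat student construction f and template g as equivalent when their
    outputs agree (within tol) on every sampled input."""
    for q in sample_inputs:
        if any(abs(a - b) > tol for a, b in zip(f(q), g(q))):
            return False
    return True

# Demo: two different constructions of the midpoint of segment (x1,y1)-(x2,y2).
mid_template = lambda q: ((q[0] + q[2]) / 2.0, (q[1] + q[3]) / 2.0)
mid_student = lambda q: (q[0] + (q[2] - q[0]) / 2.0, q[1] + (q[3] - q[1]) / 2.0)
rng = random.Random(0)
inputs = [tuple(rng.uniform(-10.0, 10.0) for _ in range(4)) for _ in range(100)]
```

Agreement on all samples gives only probabilistic evidence of equivalence, which is why a tolerance and a sufficiently varied input set matter; the names above are illustrative.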
Abstract:
The research takes as its starting point the formative dimension of the formation memoir, considered as constitutive of self-writing, and seeks to problematize this dimension around the following question: how does the formation memoir become an instrument of research-action-formation? The analyses rest on the theoretical principles of the anthropoformative paradigm proposed by Pineau (2005), the studies carried out by Passeggi (2006a, 2006b, 2007, 2008a, 2008b) on memoirs, and the research of Nóvoa (1988, 1995), Josso (2004), Souza (2006) and Fontana (2000), who conceive of formation from the learner's point of view. The universe of the research was circumscribed to the training situation of rural-area educators, students of Pedagogy in PROFORMAÇÃO (CAMEAM) at the State University of Rio Grande do Norte (UERN), during the second semester of 2005. The research combined different kinds of procedures to collect empirical data: observation of the memoir-writing process; a questionnaire; informal interviews with the teachers in training and with the trainers; and nine memoirs written by the research participants. The analyses of the empirical data show that memoir writing, as a research-action-formation procedure, reveals that the formative dimension unfolds into further dimensions: ethnosociological, heuristic, hermeneutic, social and affective, autopoietic and political. In the quest for the self (research), carried out in and through writing (action), each narrator constructs new meanings for his or her life and re-signifies representations of the self (formation). The results confirm the richness and potential of the memoir, even under non-ideal conditions, which supports its value as an important academic exercise in teacher education.
Abstract:
Internet applications such as media streaming, collaborative computing and massively multiplayer games are on the rise. This leads to a need for multicast communication, but group communication support based on IP multicast has not been widely adopted, owing to a combination of technical and non-technical problems. A number of different application-layer multicast schemes have therefore been proposed in the recent literature to overcome these drawbacks. In addition, such applications often behave as both providers and clients of services (peer-to-peer applications), and their participants come and go very dynamically. Server-centric architectures for membership management thus have well-known problems related to scalability and fault tolerance, and even traditional peer-to-peer solutions need some mechanism that takes members' volatility into account. Location awareness distributes the participants in the overlay network according to their proximity in the underlying network, allowing better performance. In this context, this thesis proposes an application-layer multicast protocol, called LAALM, which takes the actual network topology into account when assembling the overlay network. The membership algorithm uses a new metric, IPXY, to provide location awareness through the processing of local information, and it was implemented using a distributed, shared, bi-directional tree. The algorithm also includes a sub-optimal heuristic to minimize the cost of the membership process. The protocol was evaluated in two ways: first, through a simulator developed in this work, where the quality of the distribution tree was assessed using metrics such as out-degree and path length; and second, in real-life scenarios built in the ns-3 network simulator, where the protocol's performance was assessed using metrics such as stress, stretch, time to first packet and group reconfiguration time.