986 results for Bounded relative error
Abstract:
The aim of this study was to investigate the effects of knowledge of results (KR) frequency and task complexity on motor skill acquisition. The task consisted of throwing a bocha ball to place it as close as possible to the target ball. A total of 120 students aged 11 to 73 years were assigned to one of eight experimental groups according to KR frequency (25, 50, 75, and 100%) and task complexity (simple and complex). Subjects performed 90 trials in the acquisition phase and 10 trials in the transfer test. The results showed that KR given at a frequency of 25% produced a lower absolute error than 50% and a lower variable error than the 50, 75, and 100% frequencies; no effect of task complexity was found.
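For reference, the two error measures named here have standard definitions in the motor-learning literature (the abstract itself gives no formulas, so this is the conventional textbook form): for n trial outcomes x_i and target T,

\[
\mathrm{AE} = \frac{1}{n}\sum_{i=1}^{n}\lvert x_i - T\rvert,
\qquad
\mathrm{VE} = \sqrt{\frac{1}{n}\sum_{i=1}^{n}\left(x_i - \bar{x}\right)^2},
\quad \bar{x} = \frac{1}{n}\sum_{i=1}^{n} x_i ,
\]

so AE measures overall accuracy with respect to the target and VE measures the consistency of the throws around the subject's own mean.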
Abstract:
When visual information is confined to one object plane, the emmetropization end-point is adjusted in accord with the corresponding incident optical vergence at the eye [Proceedings of the 7th International Conference on Myopia (2000) 113]. We now report the effect of adding extra visual information beyond the target plane. Visual conditions were controlled using a cone-lens system: black Maltese cross targets on white opaque backgrounds (OMX) were attached to the open faces of 2.5 cm translucent cones fitted with either 0, +25 or +40 D imaging lenses. An alternative target (TMX) was made by replacing the opaque target background with a transparent one, which allowed access to visual information beyond the target plane. The imaging devices were applied to 7-day-old chicks and worn for 4 days. Prior to this treatment, on day 2, some chicks underwent ciliary nerve section (CNS) to preclude accommodation. All treatments were monocular. Refractive errors and axial ocular dimensions were measured using retinoscopy and A-scan ultrasonography under halothane anesthesia. Treatment effects were specified as mean (±S.D.) interocular differences. Eyes with the OMX/+40 D lens combination remained emmetropic (+0.73 ± 3.57 D), consistent with the target plane being approximately conjugate with the retina. Switching to the TMX caused a hyperopic shift in refractive error (+3.78 ± 3.41 D). This relative shift towards hyperopia in switching from the OMX to the TMX target also occurred for the other two lens powers. Thus, the OMX/+25 D lens induced myopia (-7.00 ± 5.88 D), corresponding to the imposed hyperopic defocus (target plane now imaged behind the retina), and switching to the TMX resulted in a reduction in myopia (-1.73 ± 5.36 D). The OMX/0 D lens combination produced the largest myopic shift, and here, switching to the TMX condition almost eliminated the myopic response (-15.50 ± 6.62 D cf. -0.56 ± 1.24 D). This relative hyperopic shift associated with switching from the OMX to the TMX target was eliminated by CNS surgery. Thus, the two CNS/TMX groups were both more myopic than the equivalent no-CNS/TMX groups (+40 D lens: -2.66 ± 2.34 D; +25 D lens: -7.97 ± 6.87 D). When visual information is restricted to one plane, incident optical vergence appears to direct emmetropization. Adding visual information at other distances produces a shift in the end-point of emmetropization in the direction of the added information. That these effects are dependent on the integrity of the accommodation system implies that accommodation plays a role in emmetropization and represents the first reported evidence of this kind.
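As a worked check of the conjugacy claim (assuming a thin imaging lens with the target 2.5 cm in front of it, per the stated cone length), the vergence leaving the lens is the incident vergence plus the lens power:

\[
V_{\text{out}} = V_{\text{in}} + P = \frac{1}{-0.025\ \mathrm{m}} + 40\ \mathrm{D} = -40\ \mathrm{D} + 40\ \mathrm{D} = 0\ \mathrm{D},
\]

so the +40 D lens collimates light from the target plane, making it optically distant and approximately conjugate with an emmetropic retina. By the same arithmetic, the +25 D and 0 D lenses leave exiting vergences of -15 D and -40 D respectively, i.e., progressively stronger hyperopic defocus, matching the graded myopic responses reported.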
Abstract:
The choice of genotyping families vs unrelated individuals is a critical factor in any large-scale linkage disequilibrium (LD) study. The use of unrelated individuals for such studies is promising, but in contrast to family designs, unrelated samples do not facilitate detection of genotyping errors, which have been shown to be of great importance for LD and linkage studies and may be even more important in genotyping collaborations across laboratories. Here we employ some of the most commonly used analysis methods to examine the relative accuracy of haplotype estimation using families vs unrelated individuals in the presence of genotyping error. The results suggest that even slight amounts of genotyping error can significantly decrease haplotype frequency and reconstruction accuracy, and that the ability to detect such errors in large families is essential when the number/complexity of haplotypes is high (low LD/common alleles). In contrast, in situations of low haplotype complexity (high LD and/or many rare alleles) unrelated individuals offer such a high degree of accuracy that there is little reason for less efficient family designs. Moreover, parent-child trios, which comprise the most popular family design and the most efficient in terms of the number of founder chromosomes per genotype but which contain little information for error detection, offer little or no gain over unrelated samples in nearly all cases, and thus do not seem a useful sampling compromise between unrelated individuals and large families. The implications of these results are discussed in the context of large-scale LD mapping projects such as the proposed genome-wide haplotype map.
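A minimal sketch of the kind of per-marker error check that family designs enable and unrelated samples do not (the coding and function name are illustrative, not from the paper): for a biallelic SNP with genotypes coded as minor-allele counts, a trio is consistent only if the child's genotype can be formed from one transmitted allele per parent.

from itertools import product

def mendel_consistent(father: int, mother: int, child: int) -> bool:
    """Check whether a trio's genotypes at a biallelic SNP are
    Mendelian-consistent. Genotypes are minor-allele counts (0, 1, 2).
    A parent with genotype g can transmit the alleles in transmits[g]."""
    transmits = {0: {0}, 1: {0, 1}, 2: {1}}
    # The child's genotype must be formable as one allele from each parent.
    return any(a + b == child
               for a, b in product(transmits[father], transmits[mother]))

# Example: two homozygous-reference parents cannot have a heterozygous
# child, so an observed child genotype of 1 flags a likely error.
assert mendel_consistent(0, 0, 0)
assert not mendel_consistent(0, 0, 1)

Note that many error configurations remain Mendelian-consistent (e.g., two heterozygous parents are consistent with any child genotype), which is one reason trios detect few errors compared with larger families, as the abstract argues.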
Abstract:
Final Master's project submitted to obtain the degree of Master in Electronics and Telecommunications Engineering
Abstract:
Economics is a social science which, therefore, focuses on people and on the decisions they make, be it in an individual context or in group situations. It studies human choices in the face of needs to be fulfilled and a limited amount of resources with which to fulfill them. For a long time, there was a convergence between the normative and positive views of human behavior, in that the ideal and predicted decisions of agents in economic models were entangled in one single concept. That is, it was assumed that the best that could be done in each situation was exactly the choice that would prevail. Or, at least, that the facts that economics needed to explain could be understood in the light of models in which individual agents act as if they were able to make ideal decisions. However, in the last decades, the complexity of the environment in which economic decisions are made and the limits on the ability of agents to deal with it have been recognized and incorporated into models of decision making, in what came to be known as the bounded rationality paradigm. This was triggered by the incapacity of the unbounded rationality paradigm to explain observed phenomena and behavior. This thesis contributes to the literature in three different ways. Chapter 1 is a survey on bounded rationality, which gathers and organizes the contributions to the field since Simon (1955) first recognized the necessity to account for the limits on human rationality. The focus of the survey is on theoretical work rather than on the experimental literature, which presents evidence of actual behavior that differs from what classic rationality predicts. The general framework is as follows. Given a set of exogenous variables, the economic agent needs to choose an element from the choice set that is available to him, in order to optimize the expected value of an objective function (assuming his preferences are representable by such a function). If this problem is too complex for the agent to deal with, one or more of its elements is simplified. Each bounded rationality theory is categorized according to the most relevant element it simplifies. Chapter 2 proposes a novel theory of bounded rationality. Much in the same fashion as Conlisk (1980) and Gabaix (2014), we assume that thinking is costly in the sense that agents have to pay a cost for performing mental operations. In our model, if they choose not to think, such cost is avoided, but they are left with a single alternative, labeled the default choice. We exemplify the idea with a very simple model of consumer choice and identify the concept of isofin curves, i.e., sets of default choices which generate the same utility net of thinking cost. Then, we apply the idea to a linear symmetric Cournot duopoly, in which the default choice can be interpreted as the most natural quantity to be produced in the market. We find that, as the thinking cost increases, the number of firms thinking in equilibrium decreases. More interestingly, for intermediate levels of thinking cost, there exists an equilibrium in which one of the firms chooses the default quantity and the other best responds to it, generating asymmetric choices in a symmetric model. Our model is able to explain well-known regularities identified in the Cournot experimental literature, such as the adoption of different strategies by players (Huck et al., 1999), the intertemporal rigidity of choices (Bosch-Domènech & Vriend, 2003) and the dispersion of quantities in the context of difficult decision making (Bosch-Domènech & Vriend, 2003). Chapter 3 applies a model of bounded rationality in a game-theoretic setting to the well-known turnout paradox: in large elections, pivotal probabilities vanish very quickly and no one should vote, in sharp contrast with the observed high levels of turnout. Inspired by the concept of rhizomatic thinking, introduced by Bravo-Furtado & Côrte-Real (2009a), we assume that each person is self-delusional in the sense that, when making a decision, she believes that a fraction of the people who support the same party decides alike, even if no communication is established between them. This kind of belief simplifies the decision of the agent, as it reduces the number of players she believes to be playing against; it is thus a bounded rationality approach. Studying a two-party first-past-the-post election with a continuum of self-delusional agents, we show that the turnout rate is positive in all the possible equilibria, and that it can be as high as 100%. The game displays multiple equilibria, at least one of which entails a victory of the bigger party. The smaller one may also win, provided its relative size is not too small; having more self-delusional voters in the minority party decreases this threshold size. Our model is able to explain some empirical facts, such as the possibility that a close election leads to low turnout (Geys, 2006), a lower margin of victory when turnout is higher (Geys, 2006) and high turnout rates favoring the minority (Bernhagen & Marsh, 1997).
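The Chapter 2 mechanism can be illustrated with a toy version of the duopoly. The parameterization below is mine, not the thesis's: inverse demand P = a - q1 - q2, zero marginal cost, thinking cost c, and an exogenous default quantity q0. A firm that pays c best-responds to its rival; one that does not think produces q0 for free.

def profit(qi, qj, a=10.0):
    """Profit in a linear Cournot duopoly with inverse demand
    P = a - q1 - q2 and zero marginal cost (toy assumptions)."""
    return qi * max(a - qi - qj, 0.0)

def best_response(qj, a=10.0):
    return max((a - qj) / 2.0, 0.0)

def equilibria(c, q0, a=10.0):
    """Enumerate the three candidate pure-strategy regimes and keep
    those in which no firm gains by switching think/not-think."""
    found = []
    # Both think: the standard Cournot quantity a/3 each.
    q = a / 3.0
    if profit(q, q, a) - c >= profit(q0, q, a):
        found.append(("both think", q, q))
    # One thinks and best-responds, the other plays the default q0.
    qb = best_response(q0, a)
    if (profit(qb, q0, a) - c >= profit(q0, q0, a)
            and profit(q0, qb, a) >= profit(best_response(qb, a), qb, a) - c):
        found.append(("one thinks", qb, q0))
    # Neither thinks.
    if profit(q0, q0, a) >= profit(best_response(q0, a), q0, a) - c:
        found.append(("none thinks", q0, q0))
    return found

# As c rises, the equilibrium set shifts from both firms thinking,
# through the asymmetric regime, to no firm thinking.
for c in (0.1, 1.0, 4.0):
    print(c, equilibria(c, q0=4.0))

With these numbers the asymmetric "one thinks" regime appears for intermediate costs (roughly c between 0.25 and 1), reproducing in miniature the asymmetric-choices-in-a-symmetric-model result the abstract describes.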
Abstract:
For the last two decades, the primary instruments for UK regional policy have been discretionary subsidies. Such aid is targeted at “additional” projects (projects that would not have been implemented without the subsidy), and the subsidy should be the minimum necessary for the project to proceed. Discretionary subsidies are thought to be more efficient than automatic subsidies, where many of the aided projects are non-additional and all projects receive the same subsidy rate. The present paper builds on Swales (1995) and Wren (2007a) to compare three subsidy schemes: an automatic scheme and two types of discretionary scheme, one with accurate appraisal and the other with appraisal error. These schemes are assessed on their expected welfare impacts. The particular focus is the reduction in welfare gain imposed by the interaction of appraisal error and the requirements for accountability. This reduction is substantial and difficult to detect with conventional evaluation techniques.
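The basic trade-off can be sketched with a toy Monte Carlo. This is my own illustration, not the Swales (1995)/Wren (2007a) model: each project is truly additional with some probability, a discretionary appraiser misclassifies with probability `error`, and welfare counts the benefit of funded additional projects net of subsidy paid on every funded project.

import random

def expected_welfare(n=100_000, p_additional=0.3, error=0.2,
                     benefit=1.0, subsidy=0.4, seed=1):
    """Toy comparison of automatic vs discretionary subsidy schemes
    (invented parameterization). Returns per-project expected welfare."""
    rng = random.Random(seed)
    automatic = discretionary = 0.0
    for _ in range(n):
        additional = rng.random() < p_additional
        # Automatic scheme: every project is funded at the same rate.
        automatic += (benefit if additional else 0.0) - subsidy
        # Discretionary scheme: fund only projects appraised as additional.
        appraised_additional = additional ^ (rng.random() < error)
        if appraised_additional:
            discretionary += (benefit if additional else 0.0) - subsidy
    return automatic / n, discretionary / n

print(expected_welfare(error=0.0))   # accurate appraisal
print(expected_welfare(error=0.2))   # appraisal error erodes the gain

With accurate appraisal the discretionary scheme dominates; introducing appraisal error both funds non-additional projects and rejects additional ones, eroding the expected welfare gain in the way the paper analyzes.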
Abstract:
I develop a model of endogenous bounded rationality due to search costs, arising implicitly from the problem's complexity. The decision maker is not required to know the entire structure of the problem when making choices but can think ahead, through costly search, to reveal more of it. However, the costs of search are not assumed exogenously; they are inferred from the preferences revealed through her choices. Thus, bounded rationality and its extent emerge endogenously: as problems become simpler or as the benefits of deeper search become larger relative to its costs, the choices more closely resemble those of a rational agent. For a fixed decision problem, the costs of search will vary across agents. For a given decision maker, they will vary across problems. The model explains, therefore, why the disparity between observed choices and those prescribed under rationality varies across agents and problems. It also suggests, under reasonable assumptions, an identifying prediction: a relation between the benefits of deeper search and the depth of the search. As long as calibration of the search costs is possible, this can be tested on any agent-problem pair. My approach provides a common framework for depicting the underlying limitations that force departures from rationality in different and unrelated decision-making situations. Specifically, I show that it is consistent with violations of timing independence in temporal framing problems, dynamic inconsistency and diversification bias in sequential versus simultaneous choice problems, and with plausible but contrasting risk attitudes across small- and large-stakes gambles.
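A toy rendering of the identifying prediction (entirely my construction, not the paper's formalism): with a concave benefit of deeper search and a linear cost, the chosen depth rises with the benefit scale.

import math

def optimal_depth(benefit, cost, max_depth=50):
    """Choose the search depth that maximizes net value, i.e. the
    benefit of thinking k steps ahead minus its cost (toy model)."""
    values = [benefit(k) - cost(k) for k in range(max_depth + 1)]
    return max(range(max_depth + 1), key=values.__getitem__)

# Concave benefit, linear cost: as the stakes (scale) grow, the chosen
# depth increases, mirroring the predicted relation between the
# benefits of deeper search and the depth of the search.
for scale in (1.0, 5.0, 25.0):
    k = optimal_depth(lambda k, s=scale: s * (1 - math.exp(-0.3 * k)),
                      lambda k: 0.5 * k)
    print(scale, k)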
Abstract:
Summary: Forensic science, both as a source of and as a remedy for error potentially leading to judicial error, has been studied empirically in this research. The tools used were a comprehensive literature review, experimental tests on the influence of observational biases in fingermark comparison, and semi-structured interviews with heads of forensic science laboratories/units in Switzerland and abroad. For the literature review, the areas studied include: the quality of forensic science work in general, the complex interaction between science and law, and specific propositions as to error sources not directly related to the interaction between law and science. A list of potential error sources all the way from the crime scene to the writing of the report was also established. For the empirical tests, the ACE-V (Analysis, Comparison, Evaluation, and Verification) process of fingermark comparison was selected as an area of special interest for the study of observational biases, owing to its heavy reliance on visual observation and to recent cases of misidentification. Results of the tests performed with forensic science students suggest that the decision-making stages are the most vulnerable to stimuli inducing observational biases. For the semi-structured interviews, eleven senior forensic scientists answered questions on several subjects, for example on potential and existing error sources in their work, on the limitations of what can be done with forensic science, and on the possibilities and tools available to minimise errors. Training and education to improve the quality of forensic science were discussed, together with possible solutions to minimise the risk of errors in forensic science. In addition, the length of time that samples of physical evidence are kept was also determined. The results show considerable agreement on most subjects among the international participants. Their opinions on possible explanations for the occurrence of such problems, and on the relative weight of such errors in the three stages of crime scene, laboratory, and report writing, disagree, however, with opinions widely represented in the existing literature. Through the present research it was therefore possible to obtain a better view of the interaction between forensic science and judicial error, and to propose practical recommendations to minimise their occurrence.
Abstract:
Exposure to solar ultraviolet (UV) radiation is the main causative factor for skin cancer. UV exposure depends on environmental and individual factors, but individual exposure data remain scarce. While ground UV irradiance is monitored via different techniques, it is difficult to translate such observations into human UV exposure or dose because of confounding factors. A multi-disciplinary collaboration developed a model predicting the dose and distribution of UV exposure on the basis of ground irradiation and morphological data. Standard 3D computer graphics techniques were adapted to develop a simulation tool that estimates the solar exposure of a virtual manikin depicted as a triangle mesh surface. The amount of solar energy received by various body locations is computed for direct, diffuse and reflected radiation separately. Dosimetric measurements obtained in field conditions were used to assess the model performance. The model predicted exposure to solar UV adequately, with a symmetric mean absolute percentage error of 13% and half of the predictions within 17% of the measurements. Using this tool, solar UV exposure patterns were investigated with respect to the relative contributions of the direct, diffuse and reflected radiation. Exposure doses for various body parts and exposure scenarios of a standing individual were assessed using erythemally-weighted UV ground irradiance data measured in 2009 at Payerne, Switzerland as input. For most anatomical sites, mean daily doses were high (typically 6.2-14.6 Standard Erythemal Doses, SED) and exceeded recommended exposure values. Direct exposure was important during specific periods (e.g., midday during summer), but contributed moderately to the annual dose, ranging from 15 to 24% for vertical and horizontal body parts, respectively. Diffuse irradiation explained about 80% of the cumulative annual exposure dose.
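For reference, one common definition of the symmetric mean absolute percentage error behind the 13% figure (the abstract does not state which sMAPE variant was used, so this is the usual textbook form):

def smape(predicted, measured):
    """Symmetric mean absolute percentage error, in percent
    (one common variant; other normalizations exist)."""
    pairs = list(zip(predicted, measured))
    return 100.0 / len(pairs) * sum(
        2.0 * abs(p - m) / (abs(p) + abs(m)) for p, m in pairs)

# Example with hypothetical daily doses in SED:
print(smape([6.5, 10.2, 13.0], [6.2, 11.0, 14.6]))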
Abstract:
PURPOSE: Afferent asymmetry of visual function is detectable in both normal and pathologic conditions. With a computerized test, we assessed the variability in measuring afferent asymmetry of the pupillary light reflex, that is, the relative afferent pupillary defect. METHODS: In ten normal subjects, pupillary responses to an alternating light stimulus were recorded with computerized infrared pupillography. The relative afferent pupillary defect for each test was determined by using a new computer analysis. The 95% confidence interval of each determination of the relative afferent pupillary defect was used to represent the short-term fluctuation in its measurement. To optimize the test for clinical use, we studied the influence of stimulus intensity, duration, and number on the variability of the relative afferent pupillary defect. RESULTS: When the relative afferent pupillary defect was based on only a few light alternations (stimulus pairs), there was excessive variability in its measurement (95% confidence interval > 0.5 log units). With approximately 200 stimulus pairs, the 95% confidence interval was reduced to less than 0.1 log unit (relative afferent pupillary defect ± 0.05 log unit). Also, there was less variability when the dark interval between alternating light stimuli was less than one second. CONCLUSIONS: Computerized infrared pupillography can standardize the alternating light test and minimize the error in quantifying a relative afferent pupillary defect. A reproducible relative afferent pupillary defect measurement is desirable for defining afferent injury and following the course of disease.
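The narrowing of the confidence interval with the number of stimulus pairs follows standard averaging statistics. Assuming roughly independent pairs with per-pair standard deviation sigma (my simplification), the 95% CI half-width is about 1.96·sigma/√n:

import math

def pairs_needed(sigma, half_width, z=1.96):
    """Number of independent stimulus pairs needed so that the 95% CI
    of the mean RAPD estimate has the requested half-width (log units).
    Assumes independent, identically distributed pairs."""
    return math.ceil((z * sigma / half_width) ** 2)

# A per-pair SD of ~0.36 log units is consistent with the abstract's
# report that ~200 pairs give a CI of about +/- 0.05 log units:
print(pairs_needed(sigma=0.36, half_width=0.05))   # ~200 pairs
print(pairs_needed(sigma=0.36, half_width=0.25))   # a few pairs -> CI > 0.5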
Abstract:
Møller-Plesset (MP2) and Becke-3-Lee-Yang-Parr (B3LYP) calculations have been used to compare the geometrical parameters, hydrogen-bonding properties, vibrational frequencies and relative energies for several X- and X+ hydrogen peroxide complexes. The geometries and interaction energies were corrected for the basis set superposition error (BSSE) in all the complexes (1-5), using the full counterpoise method, yielding small BSSE values for the 6-311+G(3df,2p) basis set used. The interaction energies calculated ranged from medium to strong hydrogen-bonding systems (1-3) and strong electrostatic interactions (4 and 5). The molecular interactions have been characterized using the atoms in molecules theory (AIM), and by the analysis of the vibrational frequencies. The minima on the BSSE-counterpoise corrected potential-energy surface (PES) have been determined as described by S. Simón, M. Duran, and J. J. Dannenberg, and the results were compared with the uncorrected PES.
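The full counterpoise method referenced here is the standard Boys-Bernardi scheme: each monomer energy is recomputed in the complete dimer basis (ghost functions on the partner's sites), so the corrected interaction energy is

\[
\Delta E_{\mathrm{int}}^{\mathrm{CP}}
  = E_{AB}^{AB}(AB) - E_{A}^{AB}(A) - E_{B}^{AB}(B),
\]

where subscripts denote the geometry, superscripts the basis set, and the argument the chemical system; the BSSE is the difference between this quantity and the interaction energy obtained with each monomer in its own basis.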
Abstract:
We describe a simple method to automate the geometric optimization of molecular orbital calculations of supermolecules on potential surfaces that are corrected for basis set superposition error using the counterpoise (CP) method. This method is applied to the H-bonding complexes HF/HCN, HF/H2O, and HCCH/H2O using the 6-31G(d,p) and D95++(d,p) basis sets at both the Hartree-Fock and second-order Møller-Plesset levels. We report the interaction energies, geometries, and vibrational frequencies of these complexes on the CP-optimized surfaces, and compare them with similar values calculated using traditional methods, including the (more traditional) single-point CP correction. Upon optimization on the CP-corrected surface, the interaction energies become more negative (before vibrational corrections) and the H-bonding stretching vibrations decrease in all cases. The extent of the effects varies from extremely small to quite large depending on the complex and the calculational method. The relative magnitudes of the vibrational corrections cannot be predicted from the H-bond stretching frequencies alone.
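A schematic of what optimization on a CP-corrected surface involves, using a toy one-dimensional model in place of real electronic-structure calls (the Morse and BSSE terms below are invented placeholders, not chemistry):

import math
from scipy.optimize import minimize_scalar

def e_dimer(r):
    """Toy dimer energy: a Morse-like well around r = 2.0."""
    return (1 - math.exp(-1.5 * (r - 2.0))) ** 2 - 1.0

def bsse(r):
    """Toy BSSE: spurious stabilization that decays with separation."""
    return -0.15 * math.exp(-0.8 * r)

def e_cp(r):
    """CP-corrected surface: remove the spurious stabilization at
    every geometry, then optimize on the corrected surface itself."""
    return e_dimer(r) - bsse(r)

r_raw = minimize_scalar(e_dimer, bounds=(1.0, 5.0), method="bounded").x
r_cp = minimize_scalar(e_cp, bounds=(1.0, 5.0), method="bounded").x
# The CP-corrected minimum sits at a longer separation, and its energy
# lies below the single-point CP value taken at the uncorrected
# geometry: optimizing on the corrected surface can only lower it,
# which is why the interaction energies become more negative.
print(f"uncorrected minimum: {r_raw:.3f}, CP-corrected: {r_cp:.3f}")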
Abstract:
The effectiveness of decision rules depends on characteristics of both rules and environments. A theoretical analysis of environments specifies the relative predictive accuracies of the lexicographic rule 'take-the-best' (TTB) and other simple strategies for binary choice. We identify three factors: how the environment weights variables; characteristics of choice sets; and error. For cases involving from three to five binary cues, TTB is effective across many environments. However, hybrids of equal weights (EW) and TTB models are more effective as environments become more compensatory. In the presence of error, TTB and similar models do not predict much better than a naïve model that exploits dominance. We emphasize psychological implications and the need for more complete theories of the environment that include the role of error.
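A minimal sketch of the two heuristics being compared, for options described by binary cues ordered from most to least valid (coding conventions are mine):

def take_the_best(cues_a, cues_b):
    """TTB: scan cues in validity order; the first cue that
    discriminates decides. Returns 'A', 'B', or 'guess'."""
    for ca, cb in zip(cues_a, cues_b):
        if ca != cb:
            return "A" if ca > cb else "B"
    return "guess"

def equal_weights(cues_a, cues_b):
    """EW: sum the unit-weighted cues and pick the larger total."""
    sa, sb = sum(cues_a), sum(cues_b)
    if sa == sb:
        return "guess"
    return "A" if sa > sb else "B"

# Cues listed from most to least valid. TTB stops at the first cue,
# while EW lets the three later cues outvote it -- the kind of
# disagreement that grows as environments become more compensatory.
a, b = (1, 0, 0, 0), (0, 1, 1, 1)
print(take_the_best(a, b))   # A
print(equal_weights(a, b))   # B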
Abstract:
Exposure to solar ultraviolet (UV) radiation is the main causative factor for skin cancer. UV exposure depends on environmental and individual factors, but individual exposure data remain scarce. UV irradiance is monitored via different techniques, including ground measurements and satellite observations. However, it is difficult to translate such observations into human UV exposure or dose because of confounding factors (shape of the exposed surface, shading, behavior, etc.). A collaboration between public health institutions, a meteorological office and an institute specialized in computing techniques developed a model predicting the dose and distribution of UV exposure on the basis of ground irradiation and morphological data. Standard 3D computer graphics techniques were adapted to develop this tool, which estimates the solar exposure of a virtual manikin depicted as a triangle mesh surface. The amount of solar energy received by various body locations is computed for direct, diffuse and reflected radiation separately. The radiation components are deduced from corresponding measurements of UV irradiance, and the related UV dose received by each triangle of the virtual manikin is computed accounting for shading by other body parts and any protection measures. The model was verified with dosimetric measurements (n=54) in field conditions using a foam manikin as a surrogate for an exposed individual. Dosimetric results were compared to the model predictions. The model predicted exposure to solar UV adequately. The symmetric mean absolute percentage error was 13%. Half of the predictions were within 17% of the measurements. This model allows assessing outdoor occupational and recreational UV exposures, without necessitating time-consuming individual dosimetry, with numerous potential uses in skin cancer prevention and research. Using this tool, we investigated solar UV exposure patterns with respect to the relative contributions of the direct, diffuse and reflected radiation. We assessed exposure doses for various body parts and exposure scenarios of a standing individual (static and dynamic postures). As input, the model used erythemally-weighted ground irradiance data measured in 2009 at Payerne, Switzerland. A year-round daily exposure (8 am to 5 pm) without protection was assumed. For most anatomical sites, mean daily doses were high (typically 6.2-14.6 SED) and exceeded recommended exposure values. Direct exposure was important during specific periods (e.g., midday during summer), but contributed moderately to the annual dose, ranging from 15 to 24% for vertical and horizontal body parts, respectively. Diffuse irradiation explained about 80% of the cumulative annual exposure dose. Acute diffuse exposures were also obtained for cloudy summer days. The importance of diffuse UV radiation should not be underestimated when advocating preventive measures. Messages focused on avoiding acute direct exposures may be of limited efficiency in preventing skin cancers associated with chronic exposure (e.g., squamous cell carcinomas).
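The core geometric step such a model performs can be sketched for a single mesh triangle under the stated direct/diffuse split. The sky-view and shading treatment below are simplified placeholders of my own, not the published implementation, which also handles reflected radiation and inter-body shading:

import math

def triangle_dose(vertices, sun_dir, direct_wm2, diffuse_wm2,
                  shaded=False, seconds=1.0):
    """Erythemal energy (J) received by one mesh triangle: a direct-beam
    term scaled by the incidence angle plus an isotropic-sky diffuse
    term. Simplified sketch of the per-triangle computation."""
    (ax, ay, az), (bx, by, bz), (cx, cy, cz) = vertices
    u = (bx - ax, by - ay, bz - az)
    v = (cx - ax, cy - ay, cz - az)
    # Cross product gives the triangle normal; its length gives the area.
    n = (u[1]*v[2] - u[2]*v[1], u[2]*v[0] - u[0]*v[2], u[0]*v[1] - u[1]*v[0])
    norm = math.sqrt(sum(k * k for k in n))
    area = 0.5 * norm
    n = tuple(k / norm for k in n)
    # Direct component: zero if the triangle faces away or is shaded.
    cos_i = sum(ni * si for ni, si in zip(n, sun_dir))
    direct = 0.0 if shaded else direct_wm2 * max(cos_i, 0.0)
    # Diffuse component: isotropic sky through the tilt-dependent
    # view factor (1 + cos(tilt)) / 2, with tilt from the vertical.
    sky_view = (1.0 + n[2]) / 2.0
    diffuse = diffuse_wm2 * sky_view
    return (direct + diffuse) * area * seconds

# Horizontal unit triangle, sun 45 degrees high (hypothetical numbers):
tri = ((0, 0, 1.5), (1, 0, 1.5), (0, 1, 1.5))
s = math.sqrt(0.5)
print(triangle_dose(tri, (s, 0.0, s), direct_wm2=0.15, diffuse_wm2=0.10))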
Abstract:
This paper estimates linear and nonlinear error-correction models for the spot prices of four types of coffee. Consistent with economic laws, evidence is found that when prices are above their equilibrium level they return to it more slowly than when they are below it. This may reflect the fact that, in the short run, it is easier for coffee-producing countries to restrict supply in order to raise prices than to increase supply in order to reduce them. In addition, evidence is found that the adjustment is faster when deviations from equilibrium are larger. The forecasts obtained from the nonlinear and asymmetric error-correction models considered in this paper offer a slight improvement over forecasts from a random-walk model.
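A generic asymmetric error-correction specification of the type described (illustrative notation, not necessarily the paper's exact model), in which the speed of adjustment depends on the sign of the equilibrium deviation:

\[
\Delta p_t = \mu + \alpha^{+}\,\mathrm{ECT}^{+}_{t-1} + \alpha^{-}\,\mathrm{ECT}^{-}_{t-1}
  + \sum_{i=1}^{k} \gamma_i\,\Delta p_{t-i} + \varepsilon_t ,
\qquad
\mathrm{ECT}^{+}_{t-1} = \max(\mathrm{ECT}_{t-1},0),\;
\mathrm{ECT}^{-}_{t-1} = \min(\mathrm{ECT}_{t-1},0),
\]

where ECT is the lagged deviation of the price from its equilibrium level. The finding that prices return more slowly from above equilibrium corresponds to the coefficient on the positive deviation being smaller in magnitude than the coefficient on the negative one.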