825 results for Kangassalo, Raija: Mastering the question
Resumo:
The problem of using information available from one variable X to make inference about another variable Y is classical in many physical and social sciences. In statistics this is often done via regression analysis, where the mean response is used to model the data. One stipulates the model Y = µ(X) + ɛ, where µ(x) is the mean response at the predictor variable value X = x, and ɛ = Y − µ(X) is the error. In classical regression analysis both (X, Y) are observable, and one then proceeds to make inference about the mean response function µ(X). In practice there are numerous examples where X is not available, but a variable Z is observed which provides an estimate of X. As an example, consider the herbicide study of Rudemo et al. [3], in which a nominal measured amount Z of herbicide was applied to a plant but the actual amount X absorbed by the plant is unobservable. As another example, from Wang [5], an epidemiologist studies the severity of a lung disease, Y, among the residents of a city in relation to the amount of certain air pollutants. The amount of the air pollutants Z can be measured at certain observation stations in the city, but the actual exposure of the residents to the pollutants, X, is unobservable and may vary randomly from the Z-values. In both cases X = Z + error. This is the so-called Berkson measurement error model. In the more classical measurement error model one observes an unbiased estimator W of X and stipulates the relation W = X + error. An example of this model occurs when assessing the effect of nutrition X on a disease: measuring nutritional intake precisely within 24 hours is almost impossible. There are many similar examples in agricultural and medical studies; see, e.g., Carroll, Ruppert and Stefanski [1] and Fuller [2], among others. In this talk we shall address the question of fitting a parametric model to the regression function µ(X) in the Berkson measurement error model: Y = µ(X) + ɛ, X = Z + η, where η and ɛ are random errors with E(ɛ) = 0, X and η are d-dimensional, and Z is the observable d-dimensional random vector.
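As a hedged illustration of the setting above (not the estimation method presented in the talk), the following minimal sketch simulates a one-dimensional Berkson model with a linear mean function and fits it by naive least squares of Y on the observable Z. The parameter values and error distributions are assumptions chosen only for the simulation.

```python
# Illustrative sketch of the Berkson model Y = mu(X) + eps, X = Z + eta,
# with a linear mu fitted by naive least squares of Y on the observable Z.
import numpy as np

rng = np.random.default_rng(0)
n = 10_000
a, b = 1.0, 2.0                      # assumed true parameters of mu(x) = a + b*x

Z = rng.uniform(0.0, 5.0, size=n)    # observable predictor (e.g. nominal dose)
eta = rng.normal(0.0, 0.5, size=n)   # Berkson error, mean zero
X = Z + eta                          # true but unobservable predictor
eps = rng.normal(0.0, 1.0, size=n)   # regression error, E(eps) = 0
Y = a + b * X + eps                  # response

# Naive least-squares fit of Y on Z.
design = np.column_stack([np.ones(n), Z])
coef, *_ = np.linalg.lstsq(design, Y, rcond=None)
print(coef)                          # approximately [1.0, 2.0]
```

With a linear µ and mean-zero η independent of Z, E(Y | Z) = µ(Z), which is why the naive fit recovers the true coefficients in this sketch; for a nonlinear parametric µ this no longer holds in general, which is what motivates dedicated fitting procedures for the Berkson model.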
Resumo:
The Honda workers’ strike in 2010 attracted worldwide attention. It was one of thousands of labor disputes that happen every year in China, but it was the first major one to call for the right of workers to represent themselves in collective bargaining. The question of representation is therefore the main topic of the book. The various contributors to this volume share the view that the Chinese party-state takes the protest against social inequality seriously. It has enacted many laws aimed at channeling dissatisfaction into safe outlets. The implementation of these laws, however, lags behind, and they do not include the right of freedom of association. Without this right, super-exploitation will persist and the system of labor relations will remain prone to eruptive forms of protest. The first part of the book provides an overview of the economic context of Chinese labor relations, the transformation of class relations, the evolution of labor law, and government policies intended to set a wage floor. Based on extensive field research, the second part looks at the evolution of labor relations at the industry level. In the third part, the focus shifts to the Corporate Social Responsibility agenda in China. The final part looks at the connection between land reform and social inequality.
Resumo:
To various degrees, insects in nature adapt to and live with two fundamental environmental rhythms: (1) the daily rhythm of light and dark, and (2) the yearly seasonal rhythm of the changing photoperiod (the length of light per day). It is hypothesized that two biological clocks evolved in organisms on earth to allow them to harmonize with these two environmental rhythms: (1) the circadian clock, which orchestrates circadian rhythms in physiology and behavior, and (2) the photoperiodic clock, which allows for physiological adaptations to changes in photoperiod during the course of the year (insect photoperiodism). The circadian rhythm is endogenous and continues under constant conditions, while photoperiodism requires specific light inputs of a minimal duration. Output pathways from both clocks control neurosecretory cells which regulate growth and reproduction. This dissertation focuses on the question of whether different photoperiods change the network and physiology of the circadian clock of an originally equatorial cockroach species. It is assumed that photoperiod-dependent plasticity of the cockroach circadian clock allows for adaptations in physiology and behavior without the need for a separate photoperiodic clock circuit. The Madeira cockroach Rhyparobia maderae is a well-established circadian clock model system. Lesion and transplantation studies identified the accessory medulla (aMe), a small neuropil with about 250 neurons, as the cockroach circadian pacemaker. Among them, the pigment-dispersing factor immunoreactive (PDF-ir) neurons anterior to the aMe (aPDFMes) play a key role as inputs to and outputs of the circadian clock system. The aim of my doctoral thesis was to examine whether and how different photoperiods modify the circadian clock system. Using immunocytochemical studies, three-dimensional (3D) reconstruction, standardization and Ca2+ imaging techniques, my studies revealed that raising cockroaches in different photoperiods changed the neuronal network of the circadian clock (Wei and Stengl, 2011). In addition, different photoperiods affected the physiology of single, isolated circadian pacemaker neurons. This thesis provides new evidence for the involvement of the circadian clock in insect photoperiodism. The data suggest that the circadian pacemaker system of the Madeira cockroach has the plasticity and potential to allow for physiological adaptations to different photoperiods. Therefore, it may also express properties of a photoperiodic clock.
Resumo:
There has been increasing debate on the prospects of biofuel becoming the best alternative for solving the problems of CO2 emissions and escalating fuel prices, but the question is whether this assertion is true and whether it comes without any cost. This paper seeks to find out whether this much-praised alternative is a better option or simply another way for developed countries to find more areas where they can obtain cheap land, labour and raw materials for the production of biofuel. It focuses mainly on the effects the growing biofuel production has on food security, people's livelihoods and the environment, and on the land conflicts developing as a result of land grabbing for biofuel production in developing countries.
Resumo:
Contemporary food production, given the degree of technology applied in it and the present state of scientific knowledge, should be able to feed the world, and statistics show that the volumes of modern food production do confirm this. Yet the present nutritional situation across the globe leaves much to be desired: on the one hand, the number of undernourished and malnourished people is still high and even growing in some regions, and on the other hand there is an increasing number of overweight and obese people who are experiencing, or are at risk of, adverse health impacts as a consequence. The question arises how this situation is possible given the present state of food production and of knowledge, including basic knowledge about nutrition. When arguing about the main causes of the present global nutritional situation, it is the modern food system with its distortions that is often criticised, with emphasis placed on inappropriate food distribution as one of the key problems. However, it is not only food distribution that shapes inequalities in food availability and accessibility: there are a number of other factors contributing to this situation, including political influences. Each of the drivers of the present situation may affect more than one part of the system and have outcomes in different dimensions. It therefore makes sense to apply a holistic approach when viewing the modern food system, embracing all the elements and the existing relationships between them, for this will facilitate taking appropriate actions to target the desired outcome in the best possible way. Applying a systematic approach and linking the various elements with the corresponding interactions among them allows all the possible outcomes to be pictured and hence a better solution to be found at the global level: a solution to the present problem of nutritional imbalance across the globe.
Resumo:
In psycholinguistic research, the assumption is widespread that the evaluation of information with respect to its truth or plausibility (epistemic validation; Richter, Schroeder & Wöhrmann, 2009) is a strategic, optional process that follows comprehension (e.g., Gilbert, 1991; Gilbert, Krull & Malone, 1990; Gilbert, Tafarodi & Malone, 1993; Herbert & Kübler, 2011). A growing number of studies, however, calls this two-step model of comprehension and validation into question, directly or indirectly. In particular, findings on Stroop-like stimulus-response compatibility effects, which arise when positive and negative responses must be given orthogonally to the task-irrelevant truth value of sentences (e.g., a positive response after reading a false sentence, or a negative response after reading a true sentence; the epistemic Stroop effect, Richter et al., 2009), suggest that readers already perform a non-strategic check of the validity of information during comprehension. Building on these findings, the aim of this dissertation was a further examination of the assumption that comprehension involves a non-strategic, routinized, knowledge-based validation process (epistemic monitoring; Richter et al., 2009). To this end, three empirical studies with different emphases were conducted. Study 1 investigated whether evidence of epistemic monitoring can also be found for information that is not clearly true or false but merely more or less plausible. Using the epistemic Stroop paradigm of Richter et al. (2009), a compatibility effect of task-irrelevant plausibility on the latencies of positive and negative responses was demonstrated in two different experimental tasks, indicating that epistemic monitoring is also sensitive to gradual differences in the fit between information and world knowledge. Moreover, the results show that the epistemic Stroop effect is indeed driven by plausibility and not by differences in the predictability of plausible and implausible information. The aim of Study 2 was to test the hypothesis that epistemic monitoring does not require an evaluative mindset. In contrast to findings by other authors (Wiswede, Koranyi, Müller, Langner, & Rothermund, 2013), this study showed a compatibility effect of the task-irrelevant truth value on response latencies in a completely non-evaluative task. The results suggest that epistemic monitoring does not depend on an evaluative mindset, but possibly on the depth of processing. Study 3 examined the relationship between comprehension and validation by investigating the online effects of plausibility and predictability on eye movements while reading short texts. In addition, the potential modulation of these effects by epistemic markers that indicate the certainty of information (e.g., certainly or perhaps) was examined. Consistent with the assumption of a fast and non-strategic epistemic monitoring process, interactive effects of plausibility and the presence of epistemic markers emerged on indicators of early comprehension processes.
This indicates that the communicated certainty of information is taken into account by the monitoring process. Overall, the findings argue against a conceptualization of comprehension and validation as non-overlapping stages of information processing. Rather, an evaluation of truth or plausibility based on world knowledge appears to be, at least to some extent, an obligatory and non-strategic component of language comprehension. The implications of these findings for current models of language comprehension and recommendations for further research on the relationship between comprehension and validation are discussed.
Resumo:
The conceptual component of this work concerns "reference surfaces", which are the dual of the reference frames often used for shape representation purposes. The theoretical component involves the question of whether one can find a unique (and simple) mapping that aligns two arbitrary perspective views of an opaque textured quadric surface in 3D, given (i) a few corresponding points in the two views, or (ii) the outline conic of the surface in one view (only) and a few corresponding points in the two views. The practical component is concerned with applying the theoretical results as tools for the task of achieving full correspondence between views of arbitrary objects.
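As a hedged, simplified illustration of recovering an aligning map from a few corresponding points, the sketch below treats only the degenerate planar case (a plane rather than a general quadric) and estimates a homography with the standard DLT algorithm; the point coordinates are hypothetical, and this is not the mapping derived in this work.

```python
# Planar special case: estimate the homography aligning two perspective views
# from a few corresponding points, using the direct linear transform (DLT).
import numpy as np

def homography_dlt(src, dst):
    """Estimate H (3x3) with dst ~ H @ src from >= 4 point correspondences."""
    rows = []
    for (x, y), (u, v) in zip(src, dst):
        rows.append([-x, -y, -1, 0, 0, 0, u * x, u * y, u])
        rows.append([0, 0, 0, -x, -y, -1, v * x, v * y, v])
    A = np.asarray(rows, dtype=float)
    _, _, vt = np.linalg.svd(A)          # null-space vector of A gives H
    H = vt[-1].reshape(3, 3)
    return H / H[2, 2]

# Four corresponding points in two views (hypothetical coordinates).
src = [(0, 0), (1, 0), (1, 1), (0, 1)]
dst = [(0.1, 0.2), (1.2, 0.1), (1.3, 1.4), (0.0, 1.1)]
print(np.round(homography_dlt(src, dst), 3))
```

The planar case is included only to make the notion of aligning two perspective views from point correspondences concrete; the quadric setting treated in this work is strictly more general.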
Resumo:
Traditionally, we've focused on the question of how to make a system easy to code the first time, or perhaps on how to ease the system's continued evolution. But if we look at life-cycle costs, then we must conclude that the important question is how to make a system easy to operate. To do this we need to make it easy for the operators to see what's going on and then to manipulate the system so that it does what it is supposed to. This is a radically different criterion for success. What makes a computer system visible and controllable? This is a difficult question, but it's clear that today's modern operating systems, with nearly 50 million source lines of code, are neither. Strikingly, the MIT Lisp Machine and its commercial successors provided almost the same functionality as today's mainstream systems, but with only 1 million lines of code. This paper is a retrospective examination of the features of the Lisp Machine hardware and software system. Our key claim is that by building the Object Abstraction into the lowest tiers of the system, great synergy and clarity were obtained. It is our hope that this is a lesson that can impact tomorrow's designs. We also speculate on how the spirit of the Lisp Machine could be extended to include a comprehensive access control model and how new layers of abstraction could further enrich this model.
Resumo:
The Introductory Lecture is a discussion of "What is the Web". It involves a lot of calling out TLAs and writing them on the blackboard, dividing things into servers, clients, protocols and formats; the punchline is that the one unique and novel thing about the Web is the hypertext link. This leads naturally into the Web architecture, the answer to the question "What is the Web".
Resumo:
In 'Privacy and Politics', Kieron O'Hara discusses the relation of the political philosophy of privacy to technical aspects in Web development. Despite a vigorous debate, the concept remains ambiguous, and a series of types of privacy is defined: epistemological, spatial, ideological, decisional and economic. Each of these has a different meaning in the online environment, and will be defended by different measures. The question of whether privacy is a right is raised, and generational differences in attitude discussed, alongside the issue of whether privacy should be protected in advance, via a consent model, or retrospectively via increased transparency and accountability. Finally, reasons both theoretical and practical for ranking privacy below other values (such as security, efficiency or benefits for the wider community) are discussed.
Resumo:
In this paper we seriously entertain the question, “Is maternal deprivation the root of all evil?” Our consideration of this question is broken down into three parts. In the first part, we discuss the nature of evil, focusing in particular on the legal concept of depravity. In the second part, we discuss the nurture of evil, focusing in particular on the common developmental trajectory seen in those who are depraved. In the third part, we discuss the roots of evil, focusing in particular on the animal and human research regarding maternal deprivation. Our conclusion is that maternal deprivation may actually be the root of all evil, but only because depraved individuals have been deprived of normative maternal care, which is the cradle of our humanity.
Resumo:
The goal of this research was to identify predictive psychosocial factors of subjective quality of life in a group of 60 people of both sexes, aged between 19 and 57, included in the demobilization and social inclusion program of the Programa de la Alta Consejería para la Reintegración Social y Económica de Personas y Grupos Alzados en Armas en Colombia. The research was a descriptive, correlational, predictive study. The Questionnaire of Optimism/Pessimism was used to assess the optimistic or pessimistic tendency, and, to assess quality of life, the following strategies were combined: a home visit to evaluate objective quality of life, the Analogous Scale of Subjective Quality of Life to evaluate satisfaction and well-being, and a general form to collect socio-demographic and juridical information. Results show that variables such as perceived health, optimism, educational level, religious beliefs, objective quality of life, type of demobilization and years spent in the armed group operating outside the law are associated with better levels of perceived quality of life. The findings and limitations of the study are discussed.
Resumo:
In the field of education, the question of holistic formation is ongoing and controversial. Moreover, given the evident changes in how knowledge is produced, apprehended and transmitted globally, it is crucial to ask about the role of education in moving the individual toward autonomy and decision-making in relation to the educational process, and about the person's responsibility in sharing knowledge as a matter of social development. In this context, this paper presents the results of an investigation carried out in 1995 on the value of recognition as a methodological proposal for quality of learning, considering whether its propositions remain valid within an educational structure.
Resumo:
Abstract taken from the publication. With financial support from the MIDE department of UNED.
Resumo:
“I’m all lost in the supermarket. I can no longer shop happily. I came in here for the special offer. A guaranteed personality”. The Clash's 1979 song “Lost in the Supermarket” describes the protagonist's struggle to deal with an increasingly commercialized society and the depersonalization of the world around him. The song speaks of alienation and the feelings of disillusionment and lack of identity that come with modern society. There are different ways in which one can lessen those feelings and promote knowledge, self-awareness and understanding. The museum, when used to its full potential, is one of them. But how to do that? That is the question museum professionals ask themselves. This paper analyses how the traditional museum can use the concepts of new museology, and the challenges of this approach, to become a vehicle for community development and empowerment, diminishing the feelings sung by The Clash.