953 results for Perturb and Observe
Abstract:
In many real-world contexts individuals find themselves in situations where they have to decide between behaviours that serve a collective purpose and behaviours that satisfy their private interests, ignoring the collective. In some cases the underlying social dilemma (Dawes, 1980) is solved and we observe collective action (Olson, 1965). In others, social mobilisation is unsuccessful. The central topic of social dilemma research is the identification and understanding of mechanisms that produce the observed cooperation and therefore resolve the social dilemma. It is the purpose of this thesis to contribute to this research field for the case of public good dilemmas. To do so, existing work relevant to this problem domain is reviewed and a set of mandatory requirements is derived which guides the theory and method development of the thesis. In particular, the thesis focuses on dynamic processes of social mobilisation which can foster or inhibit collective action. The basic understanding is that the success or failure of the required process of social mobilisation is determined by the heterogeneous individual preferences of the members of a providing group, the social structure in which the acting individuals are contained, and the embedding of the individuals in economic, political, biophysical, or other external contexts. To account for these aspects and for the dynamics involved, the methodological approach of the thesis is computer simulation, in particular agent-based modelling and simulation of social systems. Particularly conducive are agent models which ground the simulation of human behaviour in suitable psychological theories of action. The thesis develops the action theory HAPPenInGS (Heterogeneous Agents Providing Public Goods) and demonstrates its embedding into different agent-based simulations. The thesis substantiates the particular added value of this methodological approach: starting out from a theory of individual behaviour, the emergence of collective patterns of behaviour becomes observable in simulations. In addition, the underlying collective dynamics may be scrutinised and assessed by scenario analysis. The results of such experiments reveal insights into processes of social mobilisation that go beyond classical empirical approaches and, in particular, yield policy recommendations on promising intervention measures.
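As a rough illustration of the kind of agent-based public-goods simulation this abstract refers to (a generic threshold model, not the HAPPenInGS action theory itself; all names and parameters below are invented), the sketch lets heterogeneous agents decide whether to contribute based on an individual cooperation threshold and the share of contributors they observe:

import random

# Minimal public-goods simulation with heterogeneous agents (illustrative only).
class Agent:
    def __init__(self, threshold):
        self.threshold = threshold      # individual willingness to cooperate
        self.cooperates = random.random() < 0.5

    def update(self, observed_share):
        # Cooperate if the observed share of cooperators exceeds the agent's threshold.
        self.cooperates = observed_share >= self.threshold

def simulate(n_agents=100, rounds=50, seed=1):
    random.seed(seed)
    agents = [Agent(random.random()) for _ in range(n_agents)]
    history = []
    for _ in range(rounds):
        share = sum(a.cooperates for a in agents) / n_agents
        for a in agents:
            a.update(share)
        history.append(share)
    return history

if __name__ == "__main__":
    trajectory = simulate()
    print(f"final cooperation rate: {trajectory[-1]:.2f}")

Running the simulation for different threshold distributions shows how collective patterns (full mobilisation, partial cooperation, or collapse) emerge from individual preferences, which is the kind of scenario analysis the thesis describes.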
Abstract:
In the decades since Schumpeter's influential writings, economists have pursued research examining the role of innovation in particular industries at the firm as well as the industry level. Researchers describe innovation as the main trigger of industry dynamics, while policy makers argue that research and education are directly linked to economic growth and welfare. Thus, research and education are an important objective of public policy. Firms and public research are regarded as the main actors relevant for the creation of new knowledge. This knowledge is finally brought to the market through innovations. What is more, policy makers support innovations. Both groups of actors, i.e. policy makers and researchers, agree that innovation plays a central role, yet researchers still neglect the role that public policy plays in the field of industrial dynamics. Therefore, the main objective of this work is to learn more about the interdependencies of innovation, policy and public research in industrial dynamics. The overarching research question of this dissertation asks whether it is possible to analyze patterns of industry evolution – from evolution to co-evolution – based on empirical studies of the role of innovation, policy and public research in industrial dynamics. This work starts with a hypothesis-based investigation of traditional approaches to industrial dynamics, namely a test of a basic assumption of the core models of industrial dynamics and an analysis of the evolutionary patterns, using an industry driven by public policy as the example. Subsequently it moves to a more explorative approach, investigating co-evolutionary processes. The underlying research questions include the following: Do large firms have a size advantage attributable to cost spreading? Do firms that plan to grow have more innovations? What role does public policy play for the evolutionary patterns of an industry? Are the same evolutionary patterns observable as those described in the industry life cycle (ILC) theories? And is it possible to observe regional co-evolutionary processes of science, innovation and industry evolution? Based on two different empirical contexts – the laser and the photovoltaic industry – this dissertation tries to answer these questions and combines an evolutionary approach with a co-evolutionary approach. The first chapter introduces the topic and the fields this dissertation is based on. The second chapter provides a new test of the Cohen and Klepper (1996) model of cost spreading, which explains the relationship between innovation, firm size and R&D, using the example of the photovoltaic industry in Germany. First, it is analyzed whether the cost-spreading mechanism serves as an explanation for size advantages in this industry. This is related to the assumption that the incentives to invest in R&D increase with ex-ante output. Furthermore, it is investigated whether firms that plan to grow show more innovative activity. The results indicate that cost spreading serves as an explanation for size advantages in this industry and, furthermore, that growth plans lead to a higher level of innovative activity. What is more, the role public policy plays for industry evolution has not been conclusively analyzed in the field of industrial dynamics. In the case of Germany, the introduction of demand-inducing policy instruments stimulated market and industry growth.
While this policy immediately increased market volume, its effect on industry evolution is more ambiguous. Thus, chapter three analyzes this relationship using a model of industry evolution in which demand-inducing policies are discussed as a possible trigger of development. The findings suggest that these instruments can have the same effect as a technical advance in fostering the growth of an industry and its shakeout. The fourth chapter explores the regional co-evolution of firm population size, private-sector patenting and public research in the empirical context of German laser research and manufacturing over more than 40 years, from the emergence of the industry to the mid-2000s. The qualitative as well as quantitative evidence is suggestive of a co-evolutionary process of mutual interdependence rather than a unidirectional effect of public research on private-sector activities. Chapter five concludes with a summary, the contribution of this work, its implications, and an outlook on possible further research.
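The cost-spreading logic tested in chapter two can be made concrete with a one-line calculation (the numbers below are invented for illustration, not data from the study): a process innovation that lowers unit cost is worth more to a firm with larger ex-ante output, because the saving applies to every unit produced.

# Illustrative cost-spreading arithmetic (hypothetical numbers):
# an R&D project costs 1.0 and lowers unit cost by 0.01 per unit of output.
rd_cost = 1.0
unit_cost_saving = 0.01

for output in (50, 200, 1000):          # ex-ante output of three hypothetical firms
    net_return = unit_cost_saving * output - rd_cost
    print(f"output={output:5d}  net return to R&D={net_return:+.2f}")
# Only the larger firms recover the fixed R&D cost, so the incentive to
# invest in R&D increases with ex-ante output.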
Abstract:
In this article we compare regression models obtained to predict PhD students' academic performance at the universities of Girona (Spain) and Slovenia. The explanatory variables are characteristics of the PhD student's research group, understood as an ego-centered social network, background and attitudinal characteristics of the PhD students, and some characteristics of the supervisors. Academic performance was measured by the weighted number of publications. Two web questionnaires were designed, one for PhD students and one for their supervisors and other research group members. Most of the variables were easily comparable across universities thanks to a careful translation procedure and pre-tests. When direct comparison was not possible we created comparable indicators. We used a regression model in which country was introduced as a dummy-coded variable including all possible interaction effects. The optimal transformations of the main and interaction variables are discussed. Some differences between the Slovenian and Girona universities emerge. Some variables, such as the supervisor's performance and the motivation for autonomy prior to starting the PhD, have the same positive effect on the PhD student's performance in both countries. On the other hand, variables such as overly close supervision by the supervisor and having children have a negative influence in both countries. However, we find differences between the countries for motivation for research prior to starting the PhD, which increases performance in Slovenia but not in Girona. As regards network variables, the frequency of supervisor advice increases performance in Slovenia and decreases it in Girona. The negative effect in Girona could be explained by the fact that additional contacts of the PhD student with his or her supervisor might indicate a higher workload in addition to, or instead of, better advice about the dissertation. The number of the student's external advice relationships and the mean contact intensity of social support are not significant in Girona, but have a negative effect in Slovenia. We might explain the negative effect of external advice relationships in Slovenia by noting that a lot of external advice may actually result from a lack of the more relevant internal advice.
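A minimal sketch of the kind of specification described here, assuming a plain OLS with a dummy-coded country indicator interacted with the other explanatory variables (the column names are placeholders, not the study's actual variables):

import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical data; column names are placeholders, not the study's variables.
df = pd.DataFrame({
    "publications":    [2.0, 1.5, 3.2, 0.8, 2.7, 1.1, 4.0, 2.2],
    "country":         ["Girona", "Girona", "Girona", "Girona",
                        "Slovenia", "Slovenia", "Slovenia", "Slovenia"],
    "advice_freq":     [3, 5, 2, 4, 5, 1, 4, 2],
    "supervisor_perf": [1.2, 0.8, 2.0, 0.5, 1.5, 0.9, 2.2, 1.0],
})

# Country enters as a dummy-coded factor; interaction terms allow the effects
# of the other explanatory variables to differ between the two countries.
model = smf.ols(
    "publications ~ C(country) * (advice_freq + supervisor_perf)", data=df
).fit()
print(model.summary())

The interaction coefficients are what carry the cross-country differences the abstract reports, e.g. a supervisor-advice effect with opposite signs in Girona and Slovenia.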
Abstract:
Protein tagging with ubiquitin, known as ubiquitination, serves different functions including the regulation of various cellular processes, such as protein degradation by the proteasome, DNA repair, membrane-receptor-mediated signalling, and endocytosis, among others (1). Ubiquitin molecules can be removed from their substrates by the action of a large group of proteases called deubiquitinating enzymes (DUBs) (2). DUBs are essential for maintaining ubiquitin homeostasis and for regulating the ubiquitination state of different substrates. The large number and diversity of the DUBs described reflect both their specificity and their use in regulating a broad spectrum of substrates and cellular pathways. Although many DUBs have been studied in depth, the substrates and biological functions of most of them remain unknown. In this work the functions of the DUBs USP19, USP4 and UCH-L1 were investigated. Using various molecular and cell biology techniques, it was found that: i) USP19 is regulated by the ubiquitin ligases SIAH1 and SIAH2; ii) USP19 is important for regulating HIF-1α, a key transcription factor in the cellular response to hypoxia; iii) USP4 interacts with the proteasome; iv) the mCherry-UCH-L1 chimera partially reproduces the phenotypes that our group has previously described using other constructs of the same enzyme; and v) UCH-L1 promotes the internalization of the bacterium Yersinia pseudotuberculosis.
Abstract:
This article analyzes the urban transformations that took place in the San Victorino and Santa Inés sector of Bogotá D.C. between 1948 and 2010, making use of a "genealogical methodology" of inquiry that makes it possible to contrast the commonly accepted visions of "progress" and "urban renewal" in the urban market context with the existence of an informal economy and a population living in conditions of marginality, which make up a good part of the "popular urban culture" of Bogotá in the 20th and 21st centuries. This perspective allows the changes that took place in this sector of the city to be observed from various angles, along with the impacts of the historical events of the period and, in particular, the real effects of an urban reordering process that began in 1998 and has continued to the present, leaving a significant mark on the urban and social physiognomy of the place.
Abstract:
This paper makes use of a short, sharp, unexpected health shock, in the form of the 2010 Colombian Dengue outbreak, to examine the direct and indirect impact of negative health shocks on the behaviour of households in affected areas. Our analysis combines data from several sources in order to obtain a comprehensive picture of the influence of the outbreak and, furthermore, to understand the underlying mechanisms driving the effects. Our initial analysis indicates that the outbreak had a substantial negative effect on the health status of adults and adversely affected their ability to function as usual in their daily lives. In our aggregated school data, in areas with high levels of haemorrhagic Dengue we observe a reduction in national exam attendance (last year of secondary school) and in enrolment rates in primary education. Further analysis aims to exploit detailed individual-level data to gain a more in-depth understanding of the precise channels through which this disease influenced the behaviour and outcomes of the poor in Colombia.
Abstract:
Measuring inequality of opportunity with the PISA databases involves several limitations: (i) the sample represents only a limited fraction of the cohorts of 15-year-olds in developing countries, and (ii) these fractions are not uniform across countries or across periods. This raises doubts about the reliability of these measurements when they are used for international comparisons: greater equity may be the result of a more restricted and more homogeneous sample. Unlike previous approaches based on reconstructing the samples, the approach of this paper is to provide a two-dimensional index that includes achievement and access as its dimensions. Several aggregation methods are used, and considerable changes are observed in the rankings of (in)equality of opportunity when only achievement is considered versus when both dimensions are considered in the PISA 2006/2009 tests. Finally, a generalization of the approach is proposed that allows additional dimensions and other aggregation weights.
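A toy illustration of aggregating an achievement dimension and an access dimension into a single index (the equal weights and the arithmetic/geometric means below are generic choices, not the paper's exact aggregation methods): rankings can change once access is brought in.

# Toy two-dimensional aggregation (generic weighted means; illustrative only).
countries = {
    # name: (achievement equity score, access/coverage score), both in [0, 1]
    "A": (0.80, 0.55),
    "B": (0.70, 0.90),
    "C": (0.75, 0.70),
}

w_achievement, w_access = 0.5, 0.5

for name, (ach, acc) in countries.items():
    arithmetic = w_achievement * ach + w_access * acc
    geometric = (ach ** w_achievement) * (acc ** w_access)
    print(f"{name}: achievement-only={ach:.2f}  "
          f"arithmetic={arithmetic:.2f}  geometric={geometric:.2f}")
# Country A leads on achievement alone but drops in the ranking once the
# access dimension is included.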
Abstract:
We present the results of stable carbon and nitrogen isotope analysis of bone collagen for 155 individuals buried at the Later Medieval (13th to early 16th century AD) Gilbertine priory of St. Andrew, Fishergate in the city of York (UK). The data show significant variation in the consumption of marine foods between males and females as well as between individuals buried in different areas of the priory. Specifically, individuals from the crossing of the church and the cloister garth had consumed significantly less marine protein than those from other locations. Isotope data for four individuals diagnosed with diffuse idiopathic skeletal hyperostosis (DISH) are consistent with a diet rich in animal protein. We also observe that isotopic signals of individuals with perimortem sharp force trauma are unusual in the context of the Fishergate dataset. We discuss possible explanations for these patterns and suggest that there may have been a specialist hospital or a local tradition of burying victims of violent conflict at the priory. The results demonstrate how the integration of archaeological, osteological, and isotopic data can provide novel information about Medieval burial and society.
Abstract:
Negative correlations between task performance in dynamic control tasks and verbalizable knowledge, as assessed by a post-task questionnaire, have been interpreted as dissociations that indicate two antagonistic modes of learning, one being “explicit”, the other “implicit”. This paper views the control tasks as finite-state automata and offers an alternative interpretation of these negative correlations. It is argued that “good controllers” observe fewer different state transitions and, consequently, can answer fewer post-task questions about system transitions than can “bad controllers”. Two experiments demonstrate the validity of the argument by showing the predicted negative relationship between control performance and the number of explored state transitions, and the predicted positive relationship between the number of explored state transitions and questionnaire scores. However, the experiments also elucidate important boundary conditions for the critical effects. We discuss the implications of these findings, and of other problems arising from the process control paradigm, for conclusions about implicit versus explicit learning processes.
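The paper's central argument can be sketched in code (a hypothetical random automaton, not the original control tasks): a controller that keeps the system in a narrow operating routine visits fewer distinct state transitions, and thus has less transition knowledge to report in a post-task questionnaire.

import random

# A hypothetical finite-state automaton: states 0..9, inputs 0..2.
random.seed(0)
STATES, INPUTS = range(10), range(3)
transition = {(s, a): random.choice(STATES) for s in STATES for a in INPUTS}

def run(policy, steps=200):
    """Run a control policy and count the distinct transitions it observes."""
    state, seen = 0, set()
    for _ in range(steps):
        action = policy(state)
        nxt = transition[(state, action)]
        seen.add((state, action, nxt))
        state = nxt
    return len(seen)

exploring_controller = lambda s: random.choice(list(INPUTS))  # probes broadly
routine_controller = lambda s: 0                              # sticks to one routine

print("transitions seen while exploring:", run(exploring_controller))
print("transitions seen on a fixed routine:", run(routine_controller))

The routine controller ends up with far fewer observed transitions, mirroring the claim that good control performance can coexist with low verbalizable transition knowledge without any appeal to separate learning systems.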
Abstract:
The spatial distribution of aerosol chemical composition and the evolution of the Organic Aerosol (OA) fraction is investigated based upon airborne measurements of aerosol chemical composition in the planetary boundary layer across Europe. Sub-micron aerosol chemical composition was measured using a compact Time-of-Flight Aerosol Mass Spectrometer (cToF-AMS). A range of sampling conditions were evaluated, including relatively clean background conditions, polluted conditions in North-Western Europe and the near-field to far-field outflow from such conditions. Ammonium nitrate and OA were found to be the dominant chemical components of the sub-micron aerosol burden, with mass fractions ranging from 20–50% each. Ammonium nitrate was found to dominate in North-Western Europe during episodes of high pollution, reflecting the enhanced NOx and ammonia sources in this region. OA was ubiquitous across Europe and concentrations generally exceeded sulphate by 30–160%. A factor analysis of the OA burden was performed in order to probe its evolution across this large range of spatial and temporal scales. Two separate Oxygenated Organic Aerosol (OOA) components were identified: one representing an aged OOA, termed Low-Volatility OOA (LV-OOA), and another representing a fresher OOA, termed Semi-Volatile OOA (SV-OOA), on the basis of their mass spectral similarity to previous studies. The factors derived from different flights were not chemically identical but rather reflect the range of OA composition sampled during a particular flight. Significant chemical processing of the OA was observed downwind of major sources in North-Western Europe, with the LV-OOA component becoming increasingly dominant as the distance from source and photochemical processing increased. The measurements suggest that the aging of OA can be viewed as a continuum, with a progression from a less oxidised, semi-volatile component to a highly oxidised, less-volatile component. Substantial amounts of pollution were observed far downwind of continental Europe, with OA and ammonium nitrate being the major constituents of the sub-micron aerosol burden. Such anthropogenically perturbed air masses can significantly perturb regional climate far downwind of major source regions.
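Factor analyses of AMS organic spectra of this kind are commonly performed with positive matrix factorization; as a rough stand-in, the sketch below applies scikit-learn's non-negative matrix factorization to a synthetic time-by-m/z matrix with two hidden components (synthetic data only, by loose analogy with LV-OOA and SV-OOA; this is not the study's actual analysis).

import numpy as np
from sklearn.decomposition import NMF

# Synthetic "time x m/z" matrix built from two hidden factor profiles (illustrative only).
rng = np.random.default_rng(0)
profiles = rng.random((2, 50))                 # 2 factor mass spectra, 50 m/z bins
contributions = rng.random((300, 2))           # factor time series, 300 samples
X = contributions @ profiles + 0.01 * rng.random((300, 50))

# Factor the matrix into non-negative contributions and profiles.
model = NMF(n_components=2, init="nndsvda", max_iter=500, random_state=0)
W = model.fit_transform(X)                     # recovered factor time series
H = model.components_                          # recovered factor profiles
print("reconstruction error:", round(model.reconstruction_err_, 3))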
Abstract:
By making use of TOVS Path-B satellite retrievals and ECMWF reanalyses, correlations between the bulk microphysical properties of large-scale semi-transparent cirrus (visible optical thickness between 0.7 and 3.8) and the thermodynamic and dynamic properties of the surrounding atmosphere have been studied on a global scale. These clouds constitute about half of all high clouds. The global averages (from 60°N to 60°S) of the mean ice crystal diameter, De, and the ice water path (IWP) of these clouds are 55 μm and 30 g m−2, respectively. The IWP of these cirrus increases slightly with cloud-top temperature, whereas De of cold cirrus does not depend on this parameter. Correlations between De and IWP of large-scale cirrus seem to be different in the midlatitudes and in the tropics. However, we observe in general stronger correlations between De and IWP and the atmospheric humidity and winds deduced from the ECMWF reanalyses: De and IWP both increase with increasing atmospheric water vapour. There is also a good distinction between different dynamical situations: in humid situations, IWP is on average about 10 g m−2 larger in regions with strong large-scale vertical updraft only than in regions with strong large-scale horizontal winds only, whereas the mean De of cold large-scale cirrus decreases by about 10 μm if both strong large-scale updraft and horizontal winds are present.
Abstract:
Aircraft OH and HO2 measurements made over West Africa during the AMMA field campaign in summer 2006 have been investigated using a box model constrained to observations of long-lived species and physical parameters. "Good" agreement was found for HO2 (modelled-to-observed gradient of 1.23 ± 0.11). However, the model significantly overpredicts OH concentrations. The reasons for this are not clear, but may reflect instrumental instabilities affecting the OH measurements. Within the model, HOx concentrations in West Africa are controlled by relatively simple photochemistry, with production dominated by ozone photolysis and the reaction of O(1D) with water vapour, and loss processes dominated by HO2 + HO2 and HO2 + RO2. Isoprene chemistry was found to influence forested regions. In contrast to several recent field studies in very low-NOx and high-isoprene environments, we do not observe any dependence of model success for HO2 on isoprene, and attribute this to efficient recycling of HOx through RO2 + NO reactions under the moderate NOx concentrations (5–300 ppt NO in the boundary layer, median 76 ppt) encountered during AMMA. This suggests that some of the problems with understanding the impact of isoprene on atmospheric composition may be limited to the extreme low range of NOx concentrations.
Abstract:
A full assessment of para-virtualization is important, because without knowledge of the various overheads users cannot judge whether using virtualization is a good idea or not. In this paper we are interested in assessing the overheads of running various benchmarks on bare metal as well as under para-virtualization. The idea is to see what the overheads of para-virtualization are, as well as to look at the overheads of turning on monitoring and logging. The knowledge gained from assessing various benchmarks on these different systems will help a range of users understand the use of virtualization systems. In this paper we assess the overheads of using Xen, VMware, KVM and Citrix (see Table 1). These virtualization systems are used extensively by cloud users. We use various Netlib benchmarks, which have been developed by the University of Tennessee at Knoxville (UTK) and Oak Ridge National Laboratory (ORNL). In order to assess these virtualization systems, we run the benchmarks on bare metal, then under para-virtualization, and finally with monitoring and logging turned on. The latter is important because users are interested in the Service Level Agreements (SLAs) offered by cloud providers, and logging is a means of assessing the services bought and used from commercial providers. We assess the virtualization systems on three different platforms: the Thamesblue supercomputer, the Hactar cluster and an IBM JS20 blade server (see Table 2), which are all servers available at the University of Reading. A functional virtualization system is multi-layered and is driven by its privileged components. Virtualization systems can host multiple guest operating systems, each of which runs in its own domain, and the system schedules virtual CPUs and memory within each Virtual Machine (VM) to make the best use of the available resources. The guest operating system schedules each application accordingly. Virtualization can be deployed as full virtualization or para-virtualization. Full virtualization provides a total abstraction of the underlying physical system and creates a new virtual system in which the guest operating systems can run. No modifications are needed in the guest OS or application; the guest OS or application is not aware of the virtualized environment and runs normally. Para-virtualization requires modification of the guest operating systems that run on the virtual machines; these guest operating systems are aware that they are running on a virtual machine, and provide near-native performance. Both para-virtualization and full virtualization can be deployed across various virtualized systems. Para-virtualization is an OS-assisted virtualization in which some modifications are made to the guest operating system to enable better performance. In this kind of virtualization, the guest operating system is aware that it is running on virtualized hardware and not on bare hardware. In para-virtualization, the device drivers in the guest operating system coordinate with the device drivers of the host operating system, reducing the performance overheads. The use of para-virtualization [0] is intended to avoid the bottleneck associated with slow hardware interrupts that exists when full virtualization is employed.
It has been shown [0] that para-virtualization does not impose significant performance overhead in high performance computing, and this in turn has implications for the use of cloud computing for hosting HPC applications. The "apparent" improvement in virtualization has led us to formulate the hypothesis that certain classes of HPC applications should be able to execute in a cloud environment with minimal performance degradation. In order to support this hypothesis, it is first necessary to define exactly what is meant by a "class" of application, and secondly it is necessary to observe application performance, both within a virtual machine and when executing on bare hardware. A further potential complication is associated with the need for cloud service providers to support Service Level Agreements (SLAs), so that system utilisation can be audited.
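A minimal sketch of how the kind of overhead discussed here can be computed once benchmark timings are in hand (the timings below are invented placeholders, not measurements from this paper): overhead is the relative slowdown of each virtualized configuration against the bare-metal baseline.

# Hypothetical wall-clock times (seconds) for one Netlib-style benchmark;
# the numbers are placeholders, not measurements from this study.
timings = {
    "bare-metal":                 100.0,
    "para-virtualized":           104.0,
    "para-virtualized + logging": 109.0,
}

baseline = timings["bare-metal"]
for config, seconds in timings.items():
    overhead = (seconds - baseline) / baseline * 100
    print(f"{config:28s} {seconds:7.1f} s   overhead {overhead:5.1f}%")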
Abstract:
Rationalizing non-participation as a resource deficiency in the household, this paper identifies strategies for milk-market development in the Ethiopian highlands. The additional amounts of covariates required for positive marketable surplus – 'distances-to-market' – are computed from a model in which production and sales are correlated; sales are left-censored at some unobserved thresholds; production efficiencies are heterogeneous; and the data are in the form of a panel. Incorporating these features into the modeling exercise is important because they are fundamental to the data-generating environment. There are four reasons. First, because production and sales decisions are enacted within the same household, both decisions are affected by the same exogenous shocks, and production and sales are therefore likely to be correlated. Second, because selling involves time, and time is arguably the most important resource available to a subsistence household, the minimum sales amount is not zero but, rather, some unobserved threshold that lies beyond zero. Third, the potential existence of heterogeneous abilities in management, ones that lie latent from the econometrician's perspective, suggests that production efficiencies should be permitted to vary across households. Fourth, we observe a single set of households during multiple visits in a single production year. The results convey clearly that institutional and production innovations alone are insufficient to encourage participation. Market-precipitating innovation requires complementary inputs, especially improvements in human capital and reductions in risk. Copyright (c) 2008 John Wiley & Sons, Ltd.
Abstract:
Genealogical data have been used very widely to construct indices with which to examine the contribution of plant breeding programmes to the maintenance and enhancement of genetic resources. In this paper we use such indices to examine changes in the genetic diversity of the winter wheat crop in England and Wales between 1923 and 1995. We find that, except for one period characterized by the dominance of imported varieties, the genetic diversity of the winter wheat crop has been remarkably stable. This agrees with many studies of plant breeding programmes elsewhere. However, underlying the stability of the winter wheat crop is accelerating varietal turnover without any significant diversification of the genetic resources used. Moreover, the changes we observe are more directly attributable to changes in the varietal shares of the area under winter wheat than to the genealogical relationship between the varieties sown. We argue, therefore, that while genealogical indices reflect how well plant breeders have retained and exploited the resources with which they started, these indices suffer from a critical limitation. They do not reflect the proportion of the available range of genetic resources which has been effectively utilized in the breeding programme: complex crosses of a given set of varieties can yield high indices, and yet disguise the loss (or non-utilization) of a large proportion of the available genetic diversity.
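A toy calculation of an area-share-weighted diversity measure (a simple Simpson-type index, not the genealogical coefficient-of-parentage indices the paper actually uses) illustrates how shifting varietal shares alone can move such an index even when the set of varieties sown is unchanged.

# Simpson-type diversity index weighted by varietal area shares (generic illustration).
def diversity(shares):
    assert abs(sum(shares) - 1.0) < 1e-9
    return 1.0 - sum(s * s for s in shares)

even_shares = [0.25, 0.25, 0.25, 0.25]     # four varieties, equal areas
skewed_shares = [0.70, 0.20, 0.05, 0.05]   # same varieties, one dominant

print("even shares:  ", round(diversity(even_shares), 3))
print("skewed shares:", round(diversity(skewed_shares), 3))
# The index falls when area concentrates on one variety, even though the
# underlying set of varieties (and their pedigrees) is unchanged.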