857 results for complexity of agents
Abstract:
Zinc selenide is a prospective material for optoelectronics. The fabrication of ZnSe-based light-emitting diodes is hindered by the complexity of p-type doping of the component materials. The interaction between native and impurity defects, the tendency of the doping impurity to form associative centres with native defects, and the tendency towards self-compensation are the main factors impeding effective control of the value and type of conductivity. The thesis is devoted to the study of the processes of interaction between native and impurity defects in zinc selenide. It is established that, among the Cu, Ag and Au impurities in ZnSe, Au has the most pronounced amphoteric properties, as it forms a great number of both interstitial Au_i donors and substitutional Au_Zn acceptors. Electrical measurements show that Ag and Au ions introduced into vacant sites of the Zn sublattice form simple singly charged Ag_Zn and Au_Zn states with a d10 electron configuration, while Cu ions can form both singly and doubly charged Cu_Zn centres (d9 and d10 configurations). Amphoteric behaviour of the Ag and Au transition metals that develops over time is found for the first time from both electrical and luminescence measurements. A model is proposed that explains the changes in electrical and luminescent parameters by the displacement of Ag ions into interstitial sites under lattice deformation forces. The formation of an Ag_i donor impurity band in ZnSe samples doped with Ag and stored at room temperature is also studied. Thus, the properties of the doped samples are modified by large lattice relaxation during ageing. This fact should be taken into account in optoelectronic applications of doped ZnSe and related compounds.
Abstract:
This paper analyzes repeated procurement of services as a four-stage game divided into two periods. In each period there is (1) a contest stage à la Tullock in which the principal selects an agent and (2) a service stage in which the selected agent provides a service. Since this service effort is non-verifiable, the principal faces a moral hazard problem at the service stages. This work considers how the principal should design the period-two contest to mitigate the moral hazard problem in the period-one service stage and to maximize total service and contest efforts. It is shown that the principal must take account of the agent's past service effort in the period-two contest success function. The results indicate that the optimal way to introduce this `bias' is to choose a certain degree of complementarity between past service and current contest efforts. This result shows that contests with `additive bias' (`multiplicative bias') are optimal in incentive problems when effort cost is low (high). Furthermore, it is shown that the severity of the moral hazard problem increases with the cost of service effort (compared to the cost of contest effort) and the number of agents. Finally, the results are extended to more general contest success functions. JEL classification: C72; D82 Key words: Biased contests; Moral Hazard; Repeated Game; Incentives.
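Purely as an illustration of the contest mechanics described above (the function names, the parameter `alpha`, and the specific additive and multiplicative forms are our own assumptions, not the paper's exact model), a two-player Tullock contest with a service-effort bias can be sketched as:

```python
def tullock_win_prob(x1, x2, r=1.0):
    """Probability that agent 1 wins a standard Tullock contest
    with efforts x1, x2 and decisiveness parameter r."""
    if x1 == 0 and x2 == 0:
        return 0.5  # tie-breaking convention when nobody exerts effort
    return x1**r / (x1**r + x2**r)

def biased_win_prob(x1, x2, s1, bias="additive", alpha=1.0):
    """Agent 1's win probability when their past service effort s1
    biases the period-two contest (alpha is an assumed bias weight).
    'additive': past service adds to current contest effort.
    'multiplicative': past service scales current contest effort."""
    if bias == "additive":
        y1 = x1 + alpha * s1
    else:  # multiplicative
        y1 = (1.0 + alpha * s1) * x1
    return tullock_win_prob(y1, x2)
```

With equal contest efforts, either form of bias shifts the win probability towards the agent who served well in period one, which is the incentive channel the paper analyzes.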
Abstract:
The complexity of the connexions within an economic system can only be reliably reflected in academic research if powerful methods are used. Researchers have used Structural Path Analysis (SPA) to capture not only the linkages within the production system but also the propagation of the effects into different channels of impacts. However, the SPA literature has restricted itself to showing the relations among sectors of production, while the connections between these sectors and final consumption have attracted little attention. In order to consider the complete set of channels involved, in this paper we propose a structural path method that endogenously incorporates not only sectors of production but also the final consumption of the economy. The empirical application comprises water usages, and analyses the dissemination of exogenous impacts into various channels of water consumption. The results show that the responsibility for water stress is imputed to different sectors and depends on the hypothesis used for the role played by final consumption in the model. This highlights the importance of consumers’ decisions in the determination of ecological impacts. Keywords: Input-Output Analysis, Structural Path Analysis, Final Consumption, Water uses.
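To make the decomposition concrete: structural path analysis rests on expanding the Leontief inverse into a power series, where the k-th term collects the influence transmitted along production paths of length k. A minimal sketch with an assumed 3-sector coefficient matrix (illustrative numbers only, not the paper's data):

```python
import numpy as np

# Illustrative 3-sector technical-coefficient matrix A (assumed data).
A = np.array([[0.1, 0.2, 0.0],
              [0.3, 0.1, 0.1],
              [0.0, 0.2, 0.2]])

# Leontief inverse: total (direct + indirect) requirements.
L = np.linalg.inv(np.eye(3) - A)

# Truncated power-series view underlying structural path analysis:
# A^k collects the influence transmitted along paths of length k.
series = sum(np.linalg.matrix_power(A, k) for k in range(50))

# The direct influence of one structural path i -> m -> j is the
# product of the coefficients along it, e.g. A[j, m] * A[m, i].
path_0_to_2_via_1 = A[2, 1] * A[1, 0]
```

The paper's contribution is to extend this kind of decomposition so that final consumption is endogenous rather than a purely exogenous terminal node.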
Abstract:
Shallow upland drains, grips, have been hypothesized as responsible for increased downstream flow magnitudes. Observations provide counterfactual evidence, often relating to the difficulty of inferring conclusions from statistical correlation and paired catchment comparisons, and the complexity of designing field experiments to test grip impacts at the catchment scale. Drainage should provide drier antecedent moisture conditions, providing more storage at the start of an event; however, grips have higher flow velocities than overland flow, thus potentially delivering flow more rapidly to the drainage network. We develop and apply a model for assessing the impacts of grips on flow hydrographs. The model was calibrated on the gripped case, and then the gripped case was compared with the intact case by removing all grips. This comparison showed that even given parameter uncertainty, the intact case had significantly higher flood peaks and lower baseflows, mirroring field observations of the hydrological response of intact peat. The simulations suggest that this is because delivery effects may not translate into catchment-scale impacts for three reasons. First, in our case, the proportions of flow path lengths that were hillslope were not changed significantly by gripping. Second, the structure of the grip network as compared with the structure of the drainage basin militated against grip-related increases in the concentration of runoff in the drainage network, although it did marginally reduce the mean timing of that concentration at the catchment outlet. Third, the effect of the latter upon downstream flow magnitudes can only be assessed by reference to the peak timing of other tributary basins, emphasizing that drain effects are both relative and scale dependent.
However, given the importance of hillslope flow paths, we show that if upland drainage causes significant changes in surface roughness on hillslopes, then critical and important feedbacks may impact upon the speed of hydrological response. Copyright (c) 2012 John Wiley & Sons, Ltd.
Abstract:
In order to evaluate the relationship between the apparent complexity of hillslope soil moisture and the emergent patterns of catchment hydrological behaviour and water quality, we need fine-resolution catchment-wide data on soil moisture characteristics. This study proposes a methodology whereby vegetation patterns obtained from high-resolution orthorectified aerial photographs are used as an indicator of soil moisture characteristics. This enables us to examine a set of hypotheses regarding what drives the spatial patterns of soil moisture at the catchment scale (material properties or topography). We find that the pattern of Juncus effusus vegetation is controlled largely by topography and mediated by the catchment's material properties. Characterizing topography using the topographic index adds value to the soil moisture predictions relative to slope or upslope contributing area (UCA). However, these predictions depart from the observed soil moisture patterns at very steep slopes or low UCAs. Copyright (c) 2012 John Wiley & Sons, Ltd.
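The topographic index referred to above is conventionally defined as ln(a / tan β): upslope contributing area per unit contour length divided by the local slope. A minimal helper (our own illustrative code, not the study's implementation):

```python
import math

def topographic_index(upslope_area, slope_deg):
    """Topographic (wetness) index ln(a / tan(beta)).

    upslope_area: upslope contributing area per unit contour length (a).
    slope_deg: local slope angle beta in degrees.
    Higher values indicate flatter, more convergent (wetter) locations.
    """
    tan_beta = math.tan(math.radians(slope_deg))
    return math.log(upslope_area / tan_beta)
```

Because tan β approaches zero on near-flat ground, the index grows without bound there, which is one reason predictions degrade at the extremes of slope and contributing area noted in the abstract.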
Abstract:
A Fundamentals of Computing Theory course involves different topics that are core to the Computer Science curricula and whose level of abstraction makes them difficult both to teach and to learn. Such difficulty stems from the complexity of the abstract notions involved and the required mathematical background. Surveys conducted among our students showed that many of them were applying some theoretical concepts mechanically rather than developing significant learning. This paper shows a number of didactic strategies that we introduced in the Fundamentals of Computing Theory curricula to cope with the above problem. The proposed strategies were based on a stronger use of technology and a constructivist approach. The final goal was to promote more significant learning of the course topics.
Abstract:
The concept of temporal 'plus' epilepsy (T+E) is not new, and a number of observations made by means of intracerebral electrodes have illustrated the complexity of the neuronal circuits that involve the temporal lobe. The term T+E was used to unify and better individualize these specific forms of multilobar epilepsies, which are characterized by electroclinical features primarily suggestive of temporal lobe epilepsy, MRI findings that are either unremarkable or show signs of hippocampal sclerosis, and intracranial recordings which demonstrate that seizures arise from a complex epileptogenic network including a combination of brain regions located within the temporal lobe and close neighbouring structures such as the orbitofrontal cortex, the insulo-opercular region, and the temporo-parieto-occipital junction. We will review here how the term T+E has emerged, what it means, and what practical considerations it raises.
Abstract:
For more than a decade, scientists have tried to develop methods capable of dating ink by monitoring the loss of phenoxyethanol (PE) over time. While many methods have been proposed in the literature, few have actually been used to solve practical cases, and they still raise much concern within the scientific community. In fact, due to the complexity of ink drying processes, it is particularly difficult to find a reliable ageing parameter that reproducibly follows ink ageing. Moreover, systematic experiments are required to evaluate how different factors actually influence the results over time. Therefore, this work aimed to evaluate the capacity of four different ageing parameters to reliably follow ink ageing over time: (1) the quantity of the solvent PE in an ink line; (2) the relative peak area (RPA), normalising the PE results using stable volatile compounds present in the ink formulation; (3) the solvent loss ratio (R%), calculated from PE results obtained by analysing naturally and artificially aged samples; (4) a modified solvent loss ratio (R%*), calculated from RPA results. After determining the limits of reliable measurement of the analytical method, the repeatability of the different ageing parameters was evaluated over time, as was the influence of ink composition, writing pressure and storage conditions on the results. Surprisingly, our results showed that R% was not the most reliable parameter, as it showed the highest standard deviation. Discussion of the results from an ink dating perspective suggests that other proposed parameters, such as RPA values, may be more adequate for following ink ageing over time.
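A plausible formalisation of parameters (2) and (3) above (our own sketch, assuming the conventional definitions of RPA and the solvent loss ratio; the paper's exact formulas may differ):

```python
def relative_peak_area(pe_area, stable_area):
    """RPA: PE peak area normalised by the peak area of stable
    volatile compounds present in the ink formulation (assumed form:
    PE / (PE + stable), so the value lies between 0 and 1)."""
    return pe_area / (pe_area + stable_area)

def solvent_loss_ratio(natural, artificially_aged):
    """R%: relative PE loss induced by artificial ageing, expressed as
    a percentage of the naturally aged value (conventional definition)."""
    return 100.0 * (natural - artificially_aged) / natural
```

Both parameters are ratios precisely so that absolute peak areas, which vary with ink quantity and writing pressure, cancel out; the abstract's finding is that this normalisation alone does not guarantee the lowest variance.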
Abstract:
Place branding is not a new phenomenon. The emphasis placed on place branding has recently become particularly strong and explicit to both practitioners and scholars, in the current context of growing mobility of capital and people. On the one hand, there is a need for practitioners to better understand place brands and better implement place branding strategies. In this respect, this domain of study can currently be seen as 'practitioner led', and many contributions assess specific cases in order to find success factors and best practices for place branding. On the other hand, at a more analytical level, recent studies show the complexity of the concept of place branding and argue that place branding works as a process including various stakeholders, in which culture and identity play a crucial role. In the literature, tourists, companies and residents represent the main target groups of place branding. The issues regarding tourists and companies have long been examined by place promoters, location branders, economists and other scholars. However, the analysis of residents' role in place branding was overlooked until recently and represents a new interest for researchers. The present research aims to further develop the concept of place branding, both theoretically and empirically. First, the paper presents a theoretical overview of place branding, from general basic questions (definitions of place, brand and place brand) to specific current debates in the literature. Subsequently, the empirical part consists of a case study of the Grand Genève (Great Geneva).
Abstract:
We present ACACIA, an agent-based program implemented in Java StarLogo 2.0 that simulates a two-dimensional microworld populated by agents, obstacles and goals. Our program simulates how agents can reach long-term goals by following sensorial-motor couplings (SMCs) that control how the agents interact with their environment and with other agents through a process of local categorization. Thus, while acting in accordance with this set of SMCs, the agents reach their goals through the emergence of global behaviors. This agent-based simulation program allows us to understand psychological processes such as planning behavior from the perspective that the complexity of these processes is the result of agent-environment interaction.
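ACACIA itself is written in StarLogo; purely as an illustration of the SMC idea, a greedy sense-and-move coupling on a grid can be sketched in Python (all names and rules here are our assumptions, not ACACIA's actual code):

```python
def step(agent, goal, obstacles):
    """One sensorial-motor coupling: sense the eight neighbouring cells
    and move to the free neighbour closest to the goal (Manhattan metric).
    Local rule only; no global path planning is involved."""
    x, y = agent
    neighbours = [(x + dx, y + dy)
                  for dx in (-1, 0, 1) for dy in (-1, 0, 1)
                  if (dx, dy) != (0, 0)]
    free = [n for n in neighbours if n not in obstacles]
    return min(free, key=lambda n: abs(n[0] - goal[0]) + abs(n[1] - goal[1]))

def run(agent, goal, obstacles, max_steps=100):
    """Iterate the local coupling; goal-reaching emerges (when it does)
    from repeated local decisions, not from an explicit plan."""
    for _ in range(max_steps):
        if agent == goal:
            break
        agent = step(agent, goal, obstacles)
    return agent
```

Such a purely local rule can get trapped by concave obstacles, which is exactly the kind of case where richer couplings and agent-agent interaction, as in ACACIA, become interesting.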
Abstract:
This Master's thesis belongs to the field of telecommunication network planning research and is fundamentally concerned with network modelling. Telecommunication network planning is a complex and demanding problem involving intricate and time-consuming tasks. This thesis introduces a "multilayer network model" intended to help network planners cope with the complexity of these problems and to reduce the time spent on network planning. The multilayer network model is based on generic objects that are common to all telecommunication networks. This makes the model applicable to arbitrary networks, regardless of network-specific features or the technologies used to implement the network. The model defines a precise terminology and employs three concepts: plane separation, layering and partitioning. These concepts are described in detail in this work. The internal structure and behaviour of the multilayer network model are specified using Unified Modelling Language (UML) notation. This work presents the model's use case, package and class diagrams. The thesis also presents the results obtained by comparing the multilayer network model with other network models. The results show that the multilayer network model has advantages over the other models.
Abstract:
The purpose of this Master's thesis was to evaluate the post-acquisition integration process. The purpose of integration is to adapt the acquired company into a functioning part of the group. The empirical problem of the work was the widely acknowledged complexity of integration management; likewise, the academic literature lacked a coherent model for evaluating integration. The case studied was an acquisition in which a large Finnish information technology company bought a majority stake in a medium-sized Czech software company. The study generated a model of integration management for a knowledge-based organisation. According to the model, integration consists of three distinct but mutually supporting areas: the convergence of organisational cultures, the levelling of intellectual capital, and the harmonisation of the group's internal processes. Of these, the latter two can be managed directly, whereas integration management can influence cultural convergence only as a catalyst: organisational culture spreads only through the interactions of the people involved. In addition, the study showed that an acquisition is a revolutionary phase in a company's development. The first period of integration is revolutionary: the largest and most visible managed changes are pursued then, so that the integration can proceed to evolutionary development. Revolutionary integration is driven by the integration management, whereas evolutionary integration advances through the actions and interactions of the participants (the members of the organisation) themselves.
Abstract:
We propose two generalizations of the Banzhaf value for partition function form games. In both cases, our approach is based on probability distributions over the set of possible coalition structures that may arise for any given set of agents. First, we introduce a family of values, one for each collection of the latter probability distributions, defined as the Banzhaf value of an expected coalitional game. Then, we provide two characterization results for this new family of values within the framework of all partition function games. Both results rely on a property of neutrality with respect to amalgamation of players. Second, as this collusion transformation fails to be meaningful for simple games in partition function form, we propose another generalization of the Banzhaf value which also builds on probability distributions of the above type. This latter family is characterized by means of a neutrality property which uses an amalgamation transformation of players for which simple games are closed.
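For orientation, the classical (non-partition-function) Banzhaf value that these families generalize averages each player's marginal contributions over all coalitions of the remaining players. A small sketch (illustrative code, not the paper's construction):

```python
from itertools import combinations

def banzhaf(n, value):
    """Classical Banzhaf value of an n-player coalitional game.

    value: characteristic function mapping a frozenset of players
    (indexed 0..n-1) to a real number.
    Each player's value is their average marginal contribution over
    the 2^(n-1) coalitions of the other players."""
    players = range(n)
    phi = []
    for i in players:
        others = [j for j in players if j != i]
        swing = 0.0
        for r in range(len(others) + 1):
            for S in combinations(others, r):
                S = frozenset(S)
                swing += value(S | {i}) - value(S)
        phi.append(swing / 2 ** (n - 1))
    return phi
```

For the three-player majority simple game (a coalition wins iff it has at least two members), every player is pivotal in exactly half of the coalitions of the others, so each receives 0.5; the paper's question is how to extend such a value when worth depends on the whole coalition structure.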