925 resultados para whether magistrate may order that parties be legally represented in QCAT


Relevância:

100.00% 100.00%

Publicador:

Resumo:

We consider systems that can be described in terms of two kinds of degree of freedom. The corresponding ordering modes may, under certain conditions, be coupled to each other. We may thus assume that the primary ordering mode gives rise to a diffusionless first-order phase transition. The change of its thermodynamic properties as a function of the secondary-ordering-mode state is then analyzed. Two specific examples are discussed. First, we study a three-state Potts model in a binary system. Using mean-field techniques, we obtain the phase diagram and different properties of the system as a function of the distribution of atoms on the different lattice sites. In the second case, the properties of a displacive structural phase transition of martensitic type in a binary alloy are studied as a function of atomic order. Because of the directional character of the martensitic-transition mechanism, we find only a very weak dependence of the entropy on atomic order. Experimental results are found to be in quite good agreement with theoretical predictions.

Relevância:

100.00% 100.00%

Publicador:

Resumo:

Purpose : Spirituality and religiousness have been shown to be highly prevalent in patients with schizophrenia. Religion can help instil a positive sense of self, decrease the impact of symptoms and provide social contacts. Religion may also be a source of suffering. In this context, this research explores whether religion remains stable over time. Methods : From an initial cohort of 115 out-patients, 80% completed the 3-years follow-up assessment. In order to study the evolution over time, a hierarchical cluster analysis using average linkage was performed on factorial scores at baseline and follow-up and their differences. A sensitivity analysis was secondarily performed to check if the outcome was influenced by other factors such as changes in mental states using mixed models. Results : Religion was stable over time for 63% patients; positive changes occurred for 20% (i.e., significant increase of religion as a resource or a transformation of negative religion to a positive one) and negative changes for 17% (i.e., decrease of religion as a resource or a transformation of positive religion to a negative one). Change in spirituality and/or religiousness was not associated with social or clinical status, but with reduced subjective quality of life and self-esteem; even after controlling for the influence of age, gender, quality of life and clinical factors at baseline. Conclusions : In this context of patients with chronic schizophrenia, religion appeared to be labile. Qualitative analyses showed that those changes expressed the struggles of patients and suggest that religious issues need to be discussed in clinical settings.

Relevância:

100.00% 100.00%

Publicador:

Resumo:

The transportation system is in demand 24/7 and 365 days a year irrespective of neither the weather nor the conditions. Iowa’s transportation system is an integral and essential part of society serving commerce and daily functions of all Iowans across the state. A high quality transportation system serves as the artery for economic activity and, the condition of the infrastructure is a key element for our future growth opportunities. A key component of Iowa’s transportation system is the public roadway system owned and maintained by the state, cities and counties. In order to regularly re-evaluate the conditions of Iowa’s public roadway infrastructure and assess the ability of existing revenues to meet the needs of the system, the Iowa Department of Transportation’s 2006 Road Use Tax Fund (RUTF) report to the legislature included a recommendation that a study be conducted every five years. That recommendation was included in legislation adopted in 2007 and signed into law. The law specifically requires the following (2011 Iowa Code Section 307.31): •“The department shall periodically review the current revenue levels of the road use tax fund and the sufficiency of those revenues for the projected construction and maintenance needs of city, county, and state governments in the future. The department shall submit a written report to the general assembly regarding its findings by December 31 every five years, beginning in 2011. The report may include recommendations concerning funding levels needed to support the future mobility and accessibility for users of Iowa's public road system.” •“The department shall evaluate alternative funding sources for road maintenance and construction and report to the general assembly at least every five years on the advantages and disadvantages and the viability of alternative funding mechanisms.” Consistent with this requirement, the Iowa Department of Transportation (DOT) has prepared this study. Recognizing the importance of actively engaging with the public and transportation stakeholders in any discussion of public roadway conditions and needs, Governor Terry E. Branstad announced on March 8, 2011, the creation of, and appointments to, the Governor’s Transportation 2020 Citizen Advisory Commission (CAC). The CAC was tasked with assisting the Iowa DOT as they assess the condition of Iowa’s roadway system and evaluate current and future funding available to best address system needs. In particular the CAC was directed to gather input from the public and stakeholders regarding the condition of Iowa’s public roadway system, the impact of that system, whether additional funding is needed to maintain/improve the system, and, if so, what funding mechanisms ought to be considered. With this input, the CAC prepared a report and recommendations that were presented to Governor Branstad and the Iowa DOT in November 2011 for use in the development of this study. The CAC’s report is available at www.iowadot.gov/transportation2020/pdfs/CAC%20REPORT%20FINAL%20110211.pdf. The CAC’s report was developed utilizing analysis and information from the Iowa DOT. Therefore, the report forms the basis for this study and the two documents are very similar. Iowa is fortunate to have an extensive public roadway system that provides access to all areas of the state and facilitates the efficient movement of goods and people. 
However, it is also a tremendous challenge for the state, cities and counties to maintain and improve this system given flattening revenue, lost buying power, changing demands on the system, severe weather, and an aging system. This challenge didn’t appear overnight and for the last decade many studies have been completed to look into the situation and the legislature has taken significant action to begin addressing the situation. In addition, the Iowa DOT and Iowa’s cities and counties have worked jointly and independently to increase efficiency and streamline operations. All of these actions have been successful and resulted in significant changes; however, it is apparent much more needs to be done. A well-maintained, high-quality transportation system reduces transportation costs and provides consistent and reliable service. These are all factors that are critical in the evaluation companies undertake when deciding where to expand or locate new developments. The CAC and Iowa DOT heard from many Iowans that additional investment in Iowa’s roadway system is vital to support existing jobs and continued job creation in the state of Iowa. Beginning June 2011, the CAC met regularly to review material and discuss potential recommendations to address Iowa’s roadway funding challenges. This effort included extensive public outreach with meetings held in seven locations across Iowa and through a Transportation 2020 website hosted by the Iowa DOT (www.iowadot.gov/transportation2020). Over 500 people attended the public meetings held through the months of August and September, with 198 providing verbal or written comment at the meetings or through the website. Comments were received from a wide array of individuals. The public comments demonstrated overwhelming support for increased funding for Iowa’s roads. Through the public input process, several guiding principles were established to guide the development of recommendations. Those guiding principles are: • Additional revenues are restricted for road and bridge improvements only, like 95 percent of the current state road revenue is currently. This includes the fuel tax and registration fees. • State and local governments continue to streamline and become more efficient, both individually and by looking for ways to do things collectively. • User fee concept is preserved, where those who use the roads pay for them, including non¬residents. • Revenue-generating methods equitable across users. • Increase revenue generating mechanisms that are viable now but begin to implement and set the stage for longer-term solutions that bring equity and stability to road funding. • Continue Iowa’s long standing tradition of state roadway financing coming from pay-as-you-go financing. Iowa must not fall into the situation that other states are currently facing where the majority of their new program dollars are utilized to pay the debt service of past bonding. Based on the analysis of Iowa’s public roadway needs and revenue and the extensive work of the Governor’s Transportation 2020 Citizen Advisory Commission, the Iowa DOT has identified specific recommendations. The recommendations follow very closely the recommendations of the CAC (CAC recommendations from their report are repeated in Appendix B). Following is a summary of the recommendations which are fully documented beginning on page 21. 1. 
Through a combination of efficiency savings and increased revenue, a minimum of $215 million of revenue per year should be generated to meet Iowa’s critical roadway needs. 2. The Code of Iowa should be changed to require the study of the sufficiency of the state’s road funds to meet the road system’s needs every two years instead of every five years to coincide with the biennial legislative budget appropriation schedule. 3.Modify the current registration fee for electric vehicles to be based on weight and value using the same formula that applies to most passenger vehicles. 4.Consistent with existing Code of Iowa requirements, new funding should go to the TIME-21 Fund up to the cap ($225 million) and remaining new funding should be distributed consistent with the Road Use Tax Fund distribution formula. 5.The CAC recommended the Iowa DOT at least annually convene meetings with cities and counties to review the operation, maintenance and improvement of Iowa’s public roadway system to identify ways to jointly increase efficiency. In direct response to this recommendation, Governor Branstad directed the Iowa DOT to begin this effort immediately with a target of identifying $50 million of efficiency savings that can be captured from the over $1 billion of state revenue already provided to the Iowa DOT and Iowa’s cities and counties to administer, maintain and improve Iowa’s public roadway system. This would build upon past joint and individual actions that have reduced administrative costs and resulted in increased funding for improvement of Iowa’s public roadway system. Efficiency actions should be quantified, measured and reported to the public on a regular basis. 6.By June 30, 2012, Iowa DOT should complete a study of vehicles and equipment that use Iowa’s public roadway system but pay no user fees or substantially lower user fees than other vehicles and equipment.

Relevância:

100.00% 100.00%

Publicador:

Resumo:

The adult mammalian forebrain contains neural stem/progenitor cells (NSCs) that generate neurons throughout life. As in other somatic stem cell systems, NSCs are proposed to be predominantly quiescent and proliferate only sporadically to produce more committed progeny. However, quiescence has recently been shown not to be an essential criterion for stem cells. It is not known whether NSCs show differences in molecular dependence based on their proliferation state. The subventricular zone (SVZ) of the adult mouse brain has a remarkable capacity for repair by activation of NSCs. The molecular interplay controlling adult NSCs during neurogenesis or regeneration is not clear but resolving these interactions is critical in order to understand brain homeostasis and repair. Using conditional genetics and fate mapping, we show that Notch signaling is essential for neurogenesis in the SVZ. By mosaic analysis, we uncovered a surprising difference in Notch dependence between active neurogenic and regenerative NSCs. While both active and regenerative NSCs depend upon canonical Notch signaling, Notch1-deletion results in a selective loss of active NSCs (aNSCs). In sharp contrast, quiescent NSCs (qNSCs) remain after Notch1 ablation until induced during regeneration or aging, whereupon they become Notch1-dependent and fail to fully reinstate neurogenesis. Our results suggest that Notch1 is a key component of the adult SVZ niche, promoting maintenance of aNSCs, and that this function is compensated in qNSCs. Therefore, we confirm the importance of Notch signaling for maintaining NSCs and neurogenesis in the adult SVZ and reveal that NSCs display a selective reliance on Notch1 that may be dictated by mitotic state.

Relevância:

100.00% 100.00%

Publicador:

Resumo:

Introduction Societies of ants, bees, wasps and termites dominate many terrestrial ecosystems (Wilson 1971). Their evolutionary and ecological success is based upon the regulation of internal conflicts (e.g. Ratnieks et al. 2006), control of diseases (e.g. Schmid-Hempel 1998) and individual skills and collective intelligence in resource acquisition, nest building and defence (e.g. Camazine 2001). Individuals in social species can pass on their genes not only directly trough their own offspring, but also indirectly by favouring the reproduction of relatives. The inclusive fitness theory of Hamilton (1963; 1964) provides a powerful explanation for the evolution of reproductive altruism and cooperation in groups with related individuals. The same theory also led to the realization that insect societies are subject to internal conflicts over reproduction. Relatedness of less-than-one is not sufficient to eliminate all incentive for individual selfishness. This would indeed require a relatedness of one, as found among cells of an organism (Hardin 1968; Keller 1999). The challenge for evolutionary biology is to understand how groups can prevent or reduce the selfish exploitation of resources by group members, and how societies with low relatedness are maintained. In social insects the evolutionary shift from single- to multiple queens colonies modified the relatedness structure, the dispersal, and the mode of colony founding (e.g. (Crozier & Pamilo 1996). In ants, the most common, and presumably ancestral mode of reproduction is the emission of winged males and females, which found a new colony independently after mating and dispersal flights (Hölldobler & Wilson 1990). The alternative reproductive tactic for ant queens in multiple-queen colonies (polygyne) is to seek to be re-accepted in their natal colonies, where they may remain as additional reproductives or subsequently disperse on foot with part of the colony (budding) (Bourke & Franks 1995; Crozier & Pamilo 1996; Hölldobler & Wilson 1990). Such ant colonies can contain up to several hundred reproductive queens with an even more numerous workforce (Cherix 1980; Cherix 1983). As a consequence in polygynous ants the relatedness among nestmates is very low, and workers raise brood of queens to which they are only distantly related (Crozier & Pamilo 1996; Queller & Strassmann 1998). Therefore workers could increase their inclusive fitness by preferentially caring for their closest relatives and discriminate against less related or foreign individuals (Keller 1997; Queller & Strassmann 2002; Tarpy et al. 2004). However, the bulk of the evidence suggests that social insects do not behave nepotistically, probably because of the costs entailed by decreased colony efficiency or discrimination errors (Keller 1997). Recently, the consensus that nepotistic behaviour does not occur in insect colonies was challenged by a study in the ant Formica fusca (Hannonen & Sundström 2003b) showing that the reproductive share of queens more closely related to workers increases during brood development. However, this pattern can be explained either by nepotism with workers preferentially rearing the brood of more closely related queens or intrinsic differences in the viability of eggs laid by queens. In the first chapter, we designed an experiment to disentangle nepotism and differences in brood viability. We tested if workers prefer to rear their kin when given the choice between highly related and unrelated brood in the ant F. exsecta. 
We also looked for differences in egg viability among queens and simulated if such differences in egg viability may mistakenly lead to the conclusion that workers behave nepotistically. The acceptance of queens in polygnous ants raises the question whether the varying degree of relatedness affects their share in reproduction. In such colonies workers should favour nestmate queens over foreign queens. Numerous studies have investigated reproductive skew and partitioning of reproduction among queens (Bourke et al. 1997; Fournier et al. 2004; Fournier & Keller 2001; Hammond et al. 2006; Hannonen & Sundström 2003a; Heinze et al. 2001; Kümmerli & Keller 2007; Langer et al. 2004; Pamilo & Seppä 1994; Ross 1988; Ross 1993; Rüppell et al. 2002), yet almost no information is available on whether differences among queens in their relatedness to other colony members affects their share in reproduction. Such data are necessary to compare the relative reproductive success of dispersing and non-dispersing individuals. Moreover, information on whether there is a difference in reproductive success between resident and dispersing queens is also important for our understanding of the genetic structure of ant colonies and the dynamics of within group conflicts. In chapter two, we created single-queen colonies and then introduced a foreign queens originating from another colony kept under similar conditions in order to estimate the rate of queen acceptance into foreign established colonies, and to quantify the reproductive share of resident and introduced queens. An increasing number of studies have investigated the discrimination ability between ant workers (e.g. Holzer et al. 2006; Pedersen et al. 2006), but few have addressed the recognition and discrimination behaviour of workers towards reproductive individuals entering colonies (Bennett 1988; Brown et al. 2003; Evans 1996; Fortelius et al. 1993; Kikuchi et al. 2007; Rosengren & Pamilo 1986; Stuart et al. 1993; Sundström 1997; Vásquez & Silverman in press). These studies are important, because accepting new queens will generally have a large impact on colony kin structure and inclusive fitness of workers (Heinze & Keller 2000). In chapter three, we examined whether resident workers reject young foreign queens that enter into their nest. We introduced mated queens into their natal nest, a foreign-female producing nest, or a foreign male-producing nest and measured their survival. In addition, we also introduced young virgin and mated queens into their natal nest to examine whether the mating status of the queens influences their survival and acceptance by workers. On top of polgyny, some ant species have evolved an extraordinary social organization called 'unicoloniality' (Hölldobler & Wilson 1977; Pedersen et al. 2006). In unicolonial ants, intercolony borders are absent and workers and queens mix among the physically separated nests, such that nests form one large supercolony. Super-colonies can become very large, so that direct cooperative interactions are impossible between individuals of distant nests. Unicoloniality is an evolutionary paradox and a potential problem for kin selection theory because the mixing of queens and workers between nests leads to extremely low relatedness among nestmates (Bourke & Franks 1995; Crozier & Pamilo 1996; Keller 1995). A better understanding of the evolution and maintenance of unicoloniality requests detailed information on the discrimination behavior, dispersal, population structure, and the scale of competition. 
Cryptic genetic population structure may provide important information on the relevant scale to be considered when measuring relatedness and the role of kin selection. Theoretical studies have shown that relatedness should be measured at the level of the `economic neighborhood', which is the scale at which intraspecific competition generally takes place (Griffin & West 2002; Kelly 1994; Queller 1994; Taylor 1992). In chapter four, we conducted alarge-scale study to determine whether the unicolonial ant Formica paralugubris forms populations that are organised in discrete supercolonies or whether there is a continuous gradation in the level of aggression that may correlate with genetic isolation by distance and/or spatial distance between nests. In chapter five, we investigated the fine-scale population structure in three populations of F. paralugubris. We have developed mitochondria) markers, which together with the nuclear markers allowed us to detect cryptic genetic clusters of nests, to obtain more precise information on the genetic differentiation within populations, and to separate male and female gene flow. These new data provide important information on the scale to be considered when measuring relatedness in native unicolonial populations.

Relevância:

100.00% 100.00%

Publicador:

Resumo:

The interaction of tunneling with groundwater is a problem both from an environmental and an engineering point of view. In fact, tunnel drilling may cause a drawdown of piezometric levels and water inflows into tunnels that may cause problems during excavation of the tunnel. While the influence of tunneling on the regional groundwater systems may be adequately predicted in porous media using analytical solutions, such an approach is difficult to apply in fractured rocks. Numerical solutions are preferable and various conceptual approaches have been proposed to describe and model groundwater flow through fractured rock masses, ranging from equivalent continuum models to discrete fracture network simulation models. However, their application needs many preliminary investigations on the behavior of the groundwater system based on hydrochemical and structural data. To study large scale flow systems in fractured rocks of mountainous terrains, a comprehensive study was conducted in southern Switzerland, using as case studies two infrastructures actually under construction: (i) the Monte Ceneri base railway tunnel (Ticino), and the (ii) San Fedele highway tunnel (Roveredo, Graubiinden). The chosen approach in this study combines the temporal and spatial variation of geochemical and geophysical measurements. About 60 localities from both surface and underlying tunnels were temporarily and spatially monitored during more than one year. At first, the project was focused on the collection of hydrochemical and structural data. A number of springs, selected in the area surrounding the infrastructures, were monitored for discharge, electric conductivity, pH, and temperature. Water samples (springs, tunnel inflows and rains) were taken for isotopic analysis; in particular the stable isotope composition (δ2Η, δ180 values) can reflect the origin of the water, because of spatial (recharge altitude, topography, etc.) and temporal (seasonal) effects on precipitation which in turn strongly influence the isotopic composition of groundwater. Tunnel inflows in the accessible parts of the tunnels were also sampled and, if possible, monitored with time. Noble-gas concentrations and their isotope ratios were used in selected locations to better understand the origin and the circulation of the groundwater. In addition, electrical resistivity and VLF-type electromagnetic surveys were performed to identify water bearing fractures and/or weathered areas that could be intersected at depth during tunnel construction. The main goal of this work was to demonstrate that these hydrogeological data and geophysical methods, combined with structural and hydrogeological information, can be successfully used in order to develop hydrogeological conceptual models of the groundwater flow in regions to be exploited for tunnels. The main results of the project are: (i) to have successfully tested the application of electrical resistivity and VLF-electromagnetic surveys to asses water-bearing zones during tunnel drilling; (ii) to have verified the usefulness of noble gas, major ion and stable isotope compositions as proxies for the detection of faults and to understand the origin of the groundwater and its flow regimes (direct rain water infiltration or groundwater of long residence time); and (iii) to have convincingly tested the combined application of a geochemical and geophysical approach to assess and predict the vulnerability of springs to tunnel drilling. 
- L'interférence entre eaux souterraines et des tunnels pose des problèmes environnementaux et de génie civile. En fait, la construction d'un tunnel peut faire abaisser le niveau des nappes piézométriques et faire infiltrer de l'eau dans le tunnel et ainsi créer des problème pendant l'excavation. Alors que l'influence de la construction d'un tunnel sur la circulation régionale de l'eau souterraine dans des milieux poreux peut être prédite relativement facilement par des solution analytiques de modèles, ceci devient difficile dans des milieux fissurés. Dans ce cas-là, des solutions numériques sont préférables et plusieurs approches conceptuelles ont été proposées pour décrire et modéliser la circulation d'eau souterraine à travers les roches fissurées, en allant de modèles d'équivalence continue à des modèles de simulation de réseaux de fissures discrètes. Par contre, leur application demande des investigations importantes concernant le comportement du système d'eau souterraine basées sur des données hydrochimiques et structurales. Dans le but d'étudier des grands systèmes de circulation d'eau souterraine dans une région de montagnes, une étude complète a été fait en Suisse italienne, basée sur deux grandes infrastructures actuellement en construction: (i) Le tunnel ferroviaire de base du Monte Ceneri (Tessin) et (ii) le tunnel routière de San Fedele (Roveredo, Grisons). L'approche choisie dans cette étude est la combinaison de variations temporelles et spatiales des mesures géochimiques et géophysiques. Environs 60 localités situées à la surface ainsi que dans les tunnels soujacents ont été suiviès du point de vue temporel et spatial pendant plus de un an. Dans un premier temps le projet se focalisait sur la collecte de données hydrochimiques et structurales. Un certain nombre de sources, sélectionnées dans les environs des infrastructures étudiées ont été suivies pour le débit, la conductivité électrique, le pH et la température. De l'eau (sources, infiltration d'eau de tunnel et pluie) a été échantillonnés pour des analyses isotopiques; ce sont surtout les isotopes stables (δ2Η, δ180) qui peuvent indiquer l'origine d'une eaux, à cause de la dépendance d'effets spatiaux (altitude de recharge, topographie etc.) ainsi que temporels (saisonaux) sur les précipitations météoriques , qui de suite influencent ainsi la composition isotopique de l'eau souterraine. Les infiltrations d'eau dans les tunnels dans les parties accessibles ont également été échantillonnées et si possible suivies au cours du temps. La concentration de gaz nobles et leurs rapports isotopiques ont également été utilisées pour quelques localités pour mieux comprendre l'origine et la circulation de l'eau souterraine. En plus, des campagnes de mesures de la résistivité électrique et électromagnétique de type VLF ont été menées afin d'identifier des zone de fractures ou d'altération qui pourraient interférer avec les tunnels en profondeur pendant la construction. Le but principal de cette étude était de démontrer que ces données hydrogéologiques et géophysiques peuvent être utilisées avec succès pour développer des modèles hydrogéologiques conceptionels de tunnels. 
Les résultats principaux de ce travail sont : i) d'avoir testé avec succès l'application de méthodes de la tomographie électrique et des campagnes de mesures électromagnétiques de type VLF afin de trouver des zones riches en eau pendant l'excavation d'un tunnel ; ii) d'avoir prouvé l'utilité des gaz nobles, des analyses ioniques et d'isotopes stables pour déterminer l'origine de l'eau infiltrée (de la pluie par le haut ou ascendant de l'eau remontant des profondeurs) et leur flux et pour déterminer la position de failles ; et iii) d'avoir testé d'une manière convainquant l'application combinée de méthodes géochimiques et géophysiques pour juger et prédire la vulnérabilité de sources lors de la construction de tunnels. - L'interazione dei tunnel con il circuito idrico sotterraneo costituisce un problema sia dal punto di vista ambientale che ingegneristico. Lo scavo di un tunnel puô infatti causare abbassamenti dei livelli piezometrici, inoltre le venute d'acqua in galleria sono un notevole problema sia in fase costruttiva che di esercizio. Nel caso di acquiferi in materiale sciolto, l'influenza dello scavo di un tunnel sul circuito idrico sotterraneo, in genere, puô essere adeguatamente predetta attraverso l'applicazione di soluzioni analitiche; al contrario un approccio di questo tipo appare inadeguato nel caso di scavo in roccia. Per gli ammassi rocciosi fratturati sono piuttosto preferibili soluzioni numeriche e, a tal proposito, sono stati proposti diversi approcci concettuali; nella fattispecie l'ammasso roccioso puô essere modellato come un mezzo discreto ο continuo équivalente. Tuttavia, una corretta applicazione di qualsiasi modello numerico richiede necessariamente indagini preliminari sul comportamento del sistema idrico sotterraneo basate su dati idrogeochimici e geologico strutturali. Per approfondire il tema dell'idrogeologia in ammassi rocciosi fratturati tipici di ambienti montani, è stato condotto uno studio multidisciplinare nel sud della Svizzera sfruttando come casi studio due infrastrutture attualmente in costruzione: (i) il tunnel di base del Monte Ceneri (canton Ticino) e (ii) il tunnel autostradale di San Fedele (Roveredo, canton Grigioni). L'approccio di studio scelto ha cercato di integrare misure idrogeochimiche sulla qualité e quantité delle acque e indagini geofisiche. Nella fattispecie sono state campionate le acque in circa 60 punti spazialmente distribuiti sia in superficie che in sotterraneo; laddove possibile il monitoraggio si è temporalmente prolungato per più di un anno. In una prima fase, il progetto di ricerca si è concentrato sull'acquisizione dati. Diverse sorgenti, selezionate nelle aree di possibile influenza attorno allé infrastrutture esaminate, sono state monitorate per quel che concerne i parametri fisico-chimici: portata, conduttività elettrica, pH e temperatura. Campioni d'acqua sono stati prelevati mensilmente su sorgenti, venute d'acqua e precipitazioni, per analisi isotopiche; nella fattispecie, la composizione in isotopi stabili (δ2Η, δ180) tende a riflettere l'origine delle acque, in quanto, variazioni sia spaziali (altitudine di ricarica, topografia, etc.) che temporali (variazioni stagionali) della composizione isotopica delle precipitazioni influenzano anche le acque sotterranee. Laddove possibile, sono state campionate le venute d'acqua in galleria sia puntualmente che al variare del tempo. 
Le concentrazioni dei gas nobili disciolti nell'acqua e i loro rapporti isotopici sono stati altresi utilizzati in alcuni casi specifici per meglio spiegare l'origine delle acque e le tipologie di circuiti idrici sotterranei. Inoltre, diverse indagini geofisiche di resistività elettrica ed elettromagnetiche a bassissima frequenza (VLF) sono state condotte al fine di individuare le acque sotterranee circolanti attraverso fratture dell'ammasso roccioso. Principale obiettivo di questo lavoro è stato dimostrare come misure idrogeochimiche ed indagini geofisiche possano essere integrate alio scopo di sviluppare opportuni modelli idrogeologici concettuali utili per lo scavo di opere sotterranee. I principali risultati ottenuti al termine di questa ricerca sono stati: (i) aver testato con successo indagini geofisiche (ERT e VLF-EM) per l'individuazione di acque sotterranee circolanti attraverso fratture dell'ammasso roccioso e che possano essere causa di venute d'acqua in galleria durante lo scavo di tunnel; (ii) aver provato l'utilità di analisi su gas nobili, ioni maggiori e isotopi stabili per l'individuazione di faglie e per comprendere l'origine delle acque sotterranee (acque di recente infiltrazione ο provenienti da circolazioni profonde); (iii) aver testato in maniera convincente l'integrazione delle indagini geofisiche e di misure geochimiche per la valutazione della vulnérabilité delle sorgenti durante lo scavo di nuovi tunnel. - "La NLFA (Nouvelle Ligne Ferroviaire à travers les Alpes) axe du Saint-Gothard est le plus important projet de construction de Suisse. En bâtissant la nouvelle ligne du Saint-Gothard, la Suisse réalise un des plus grands projets de protection de l'environnement d'Europe". Cette phrase, qu'on lit comme présentation du projet Alptransit est particulièrement éloquente pour expliquer l'utilité des nouvelles lignes ferroviaires transeuropéens pour le développement durable. Toutefois, comme toutes grandes infrastructures, la construction de nouveaux tunnels ont des impacts inévitables sur l'environnement. En particulier, le possible drainage des eaux souterraines réalisées par le tunnel peut provoquer un abaissement du niveau des nappes piézométriques. De plus, l'écoulement de l'eau à l'intérieur du tunnel, conduit souvent à des problèmes d'ingénierie. Par exemple, d'importantes infiltrations d'eau dans le tunnel peuvent compliquer les phases d'excavation, provoquant un retard dans l'avancement et dans le pire des cas, peuvent mettre en danger la sécurité des travailleurs. Enfin, l'infiltration d'eau peut être un gros problème pendant le fonctionnement du tunnel. Du point de vue de la science, avoir accès à des infrastructures souterraines représente une occasion unique d'obtenir des informations géologiques en profondeur et pour échantillonner des eaux autrement inaccessibles. Dans ce travail, nous avons utilisé une approche pluridisciplinaire qui intègre des mesures d'étude hydrogéochimiques effectués sur les eaux de surface et des investigations géophysiques indirects, tels que la tomographic de résistivité électrique (TRE) et les mesures électromagnétiques de type VLF. L'étude complète a été fait en Suisse italienne, basée sur deux grandes infrastructures actuellement en construction, qui sont le tunnel ferroviaire de base du Monte Ceneri, une partie du susmentionné projet Alptransit, situé entièrement dans le canton Tessin, et le tunnel routière de San Fedele, situé a Roveredo dans le canton des Grisons. 
Le principal objectif était de montrer comment il était possible d'intégrer les deux approches, géophysiques et géochimiques, afin de répondre à la question de ce que pourraient être les effets possibles dû au drainage causés par les travaux souterrains. L'accès aux galeries ci-dessus a permis une validation adéquate des enquêtes menées confirmant, dans chaque cas, les hypothèses proposées. A cette fin, nous avons fait environ 50 profils géophysiques (28 imageries électrique bidimensionnels et 23 électromagnétiques) dans les zones de possible influence par le tunnel, dans le but d'identifier les fractures et les discontinuités dans lesquelles l'eau souterraine peut circuler. De plus, des eaux ont été échantillonnés dans 60 localités situées la surface ainsi que dans les tunnels subjacents, le suivi mensuelle a duré plus d'un an. Nous avons mesurés tous les principaux paramètres physiques et chimiques: débit, conductivité électrique, pH et température. De plus, des échantillons d'eaux ont été prélevés pour l'analyse mensuelle des isotopes stables de l'hydrogène et de l'oxygène (δ2Η, δ180). Avec ces analyses, ainsi que par la mesure des concentrations des gaz rares dissous dans les eaux et de leurs rapports isotopiques que nous avons effectués dans certains cas spécifiques, il était possible d'expliquer l'origine des différents eaux souterraines, les divers modes de recharge des nappes souterraines, la présence de possible phénomènes de mélange et, en général, de mieux expliquer les circulations d'eaux dans le sous-sol. Le travail, même en constituant qu'une réponse partielle à une question très complexe, a permis d'atteindre certains importants objectifs. D'abord, nous avons testé avec succès l'applicabilité des méthodes géophysiques indirectes (TRE et électromagnétiques de type VLF) pour prédire la présence d'eaux souterraines dans le sous-sol des massifs rocheux. De plus, nous avons démontré l'utilité de l'analyse des gaz rares, des isotopes stables et de l'analyses des ions majeurs pour la détection de failles et pour comprendre l'origine des eaux souterraines (eau de pluie par le haut ou eau remontant des profondeurs). En conclusion, avec cette recherche, on a montré que l'intégration des ces informations (géophysiques et géochimiques) permet le développement de modèles conceptuels appropriés, qui permettant d'expliquer comment l'eau souterraine circule. Ces modèles permettent de prévoir les infiltrations d'eau dans les tunnels et de prédire la vulnérabilité de sources et des autres ressources en eau lors de construction de tunnels.

Relevância:

100.00% 100.00%

Publicador:

Resumo:

EXECUTIVE SUMMARY : Evaluating Information Security Posture within an organization is becoming a very complex task. Currently, the evaluation and assessment of Information Security are commonly performed using frameworks, methodologies and standards which often consider the various aspects of security independently. Unfortunately this is ineffective because it does not take into consideration the necessity of having a global and systemic multidimensional approach to Information Security evaluation. At the same time the overall security level is globally considered to be only as strong as its weakest link. This thesis proposes a model aiming to holistically assess all dimensions of security in order to minimize the likelihood that a given threat will exploit the weakest link. A formalized structure taking into account all security elements is presented; this is based on a methodological evaluation framework in which Information Security is evaluated from a global perspective. This dissertation is divided into three parts. Part One: Information Security Evaluation issues consists of four chapters. Chapter 1 is an introduction to the purpose of this research purpose and the Model that will be proposed. In this chapter we raise some questions with respect to "traditional evaluation methods" as well as identifying the principal elements to be addressed in this direction. Then we introduce the baseline attributes of our model and set out the expected result of evaluations according to our model. Chapter 2 is focused on the definition of Information Security to be used as a reference point for our evaluation model. The inherent concepts of the contents of a holistic and baseline Information Security Program are defined. Based on this, the most common roots-of-trust in Information Security are identified. Chapter 3 focuses on an analysis of the difference and the relationship between the concepts of Information Risk and Security Management. Comparing these two concepts allows us to identify the most relevant elements to be included within our evaluation model, while clearing situating these two notions within a defined framework is of the utmost importance for the results that will be obtained from the evaluation process. Chapter 4 sets out our evaluation model and the way it addresses issues relating to the evaluation of Information Security. Within this Chapter the underlying concepts of assurance and trust are discussed. Based on these two concepts, the structure of the model is developed in order to provide an assurance related platform as well as three evaluation attributes: "assurance structure", "quality issues", and "requirements achievement". Issues relating to each of these evaluation attributes are analysed with reference to sources such as methodologies, standards and published research papers. Then the operation of the model is discussed. Assurance levels, quality levels and maturity levels are defined in order to perform the evaluation according to the model. Part Two: Implementation of the Information Security Assurance Assessment Model (ISAAM) according to the Information Security Domains consists of four chapters. This is the section where our evaluation model is put into a welldefined context with respect to the four pre-defined Information Security dimensions: the Organizational dimension, Functional dimension, Human dimension, and Legal dimension. Each Information Security dimension is discussed in a separate chapter. 
For each dimension, the following two-phase evaluation path is followed. The first phase concerns the identification of the elements which will constitute the basis of the evaluation: ? Identification of the key elements within the dimension; ? Identification of the Focus Areas for each dimension, consisting of the security issues identified for each dimension; ? Identification of the Specific Factors for each dimension, consisting of the security measures or control addressing the security issues identified for each dimension. The second phase concerns the evaluation of each Information Security dimension by: ? The implementation of the evaluation model, based on the elements identified for each dimension within the first phase, by identifying the security tasks, processes, procedures, and actions that should have been performed by the organization to reach the desired level of protection; ? The maturity model for each dimension as a basis for reliance on security. For each dimension we propose a generic maturity model that could be used by every organization in order to define its own security requirements. Part three of this dissertation contains the Final Remarks, Supporting Resources and Annexes. With reference to the objectives of our thesis, the Final Remarks briefly analyse whether these objectives were achieved and suggest directions for future related research. Supporting resources comprise the bibliographic resources that were used to elaborate and justify our approach. Annexes include all the relevant topics identified within the literature to illustrate certain aspects of our approach. Our Information Security evaluation model is based on and integrates different Information Security best practices, standards, methodologies and research expertise which can be combined in order to define an reliable categorization of Information Security. After the definition of terms and requirements, an evaluation process should be performed in order to obtain evidence that the Information Security within the organization in question is adequately managed. We have specifically integrated into our model the most useful elements of these sources of information in order to provide a generic model able to be implemented in all kinds of organizations. The value added by our evaluation model is that it is easy to implement and operate and answers concrete needs in terms of reliance upon an efficient and dynamic evaluation tool through a coherent evaluation system. On that basis, our model could be implemented internally within organizations, allowing them to govern better their Information Security. RÉSUMÉ : Contexte général de la thèse L'évaluation de la sécurité en général, et plus particulièrement, celle de la sécurité de l'information, est devenue pour les organisations non seulement une mission cruciale à réaliser, mais aussi de plus en plus complexe. A l'heure actuelle, cette évaluation se base principalement sur des méthodologies, des bonnes pratiques, des normes ou des standards qui appréhendent séparément les différents aspects qui composent la sécurité de l'information. Nous pensons que cette manière d'évaluer la sécurité est inefficiente, car elle ne tient pas compte de l'interaction des différentes dimensions et composantes de la sécurité entre elles, bien qu'il soit admis depuis longtemps que le niveau de sécurité globale d'une organisation est toujours celui du maillon le plus faible de la chaîne sécuritaire. 
Nous avons identifié le besoin d'une approche globale, intégrée, systémique et multidimensionnelle de l'évaluation de la sécurité de l'information. En effet, et c'est le point de départ de notre thèse, nous démontrons que seule une prise en compte globale de la sécurité permettra de répondre aux exigences de sécurité optimale ainsi qu'aux besoins de protection spécifiques d'une organisation. Ainsi, notre thèse propose un nouveau paradigme d'évaluation de la sécurité afin de satisfaire aux besoins d'efficacité et d'efficience d'une organisation donnée. Nous proposons alors un modèle qui vise à évaluer d'une manière holistique toutes les dimensions de la sécurité, afin de minimiser la probabilité qu'une menace potentielle puisse exploiter des vulnérabilités et engendrer des dommages directs ou indirects. Ce modèle se base sur une structure formalisée qui prend en compte tous les éléments d'un système ou programme de sécurité. Ainsi, nous proposons un cadre méthodologique d'évaluation qui considère la sécurité de l'information à partir d'une perspective globale. Structure de la thèse et thèmes abordés Notre document est structuré en trois parties. La première intitulée : « La problématique de l'évaluation de la sécurité de l'information » est composée de quatre chapitres. Le chapitre 1 introduit l'objet de la recherche ainsi que les concepts de base du modèle d'évaluation proposé. La maniéré traditionnelle de l'évaluation de la sécurité fait l'objet d'une analyse critique pour identifier les éléments principaux et invariants à prendre en compte dans notre approche holistique. Les éléments de base de notre modèle d'évaluation ainsi que son fonctionnement attendu sont ensuite présentés pour pouvoir tracer les résultats attendus de ce modèle. Le chapitre 2 se focalise sur la définition de la notion de Sécurité de l'Information. Il ne s'agit pas d'une redéfinition de la notion de la sécurité, mais d'une mise en perspectives des dimensions, critères, indicateurs à utiliser comme base de référence, afin de déterminer l'objet de l'évaluation qui sera utilisé tout au long de notre travail. Les concepts inhérents de ce qui constitue le caractère holistique de la sécurité ainsi que les éléments constitutifs d'un niveau de référence de sécurité sont définis en conséquence. Ceci permet d'identifier ceux que nous avons dénommés « les racines de confiance ». Le chapitre 3 présente et analyse la différence et les relations qui existent entre les processus de la Gestion des Risques et de la Gestion de la Sécurité, afin d'identifier les éléments constitutifs du cadre de protection à inclure dans notre modèle d'évaluation. Le chapitre 4 est consacré à la présentation de notre modèle d'évaluation Information Security Assurance Assessment Model (ISAAM) et la manière dont il répond aux exigences de l'évaluation telle que nous les avons préalablement présentées. Dans ce chapitre les concepts sous-jacents relatifs aux notions d'assurance et de confiance sont analysés. En se basant sur ces deux concepts, la structure du modèle d'évaluation est développée pour obtenir une plateforme qui offre un certain niveau de garantie en s'appuyant sur trois attributs d'évaluation, à savoir : « la structure de confiance », « la qualité du processus », et « la réalisation des exigences et des objectifs ». 
Les problématiques liées à chacun de ces attributs d'évaluation sont analysées en se basant sur l'état de l'art de la recherche et de la littérature, sur les différentes méthodes existantes ainsi que sur les normes et les standards les plus courants dans le domaine de la sécurité. Sur cette base, trois différents niveaux d'évaluation sont construits, à savoir : le niveau d'assurance, le niveau de qualité et le niveau de maturité qui constituent la base de l'évaluation de l'état global de la sécurité d'une organisation. La deuxième partie: « L'application du Modèle d'évaluation de l'assurance de la sécurité de l'information par domaine de sécurité » est elle aussi composée de quatre chapitres. Le modèle d'évaluation déjà construit et analysé est, dans cette partie, mis dans un contexte spécifique selon les quatre dimensions prédéfinies de sécurité qui sont: la dimension Organisationnelle, la dimension Fonctionnelle, la dimension Humaine, et la dimension Légale. Chacune de ces dimensions et son évaluation spécifique fait l'objet d'un chapitre distinct. Pour chacune des dimensions, une évaluation en deux phases est construite comme suit. La première phase concerne l'identification des éléments qui constituent la base de l'évaluation: ? Identification des éléments clés de l'évaluation ; ? Identification des « Focus Area » pour chaque dimension qui représentent les problématiques se trouvant dans la dimension ; ? Identification des « Specific Factors » pour chaque Focus Area qui représentent les mesures de sécurité et de contrôle qui contribuent à résoudre ou à diminuer les impacts des risques. La deuxième phase concerne l'évaluation de chaque dimension précédemment présentées. Elle est constituée d'une part, de l'implémentation du modèle général d'évaluation à la dimension concernée en : ? Se basant sur les éléments spécifiés lors de la première phase ; ? Identifiant les taches sécuritaires spécifiques, les processus, les procédures qui auraient dû être effectués pour atteindre le niveau de protection souhaité. D'autre part, l'évaluation de chaque dimension est complétée par la proposition d'un modèle de maturité spécifique à chaque dimension, qui est à considérer comme une base de référence pour le niveau global de sécurité. Pour chaque dimension nous proposons un modèle de maturité générique qui peut être utilisé par chaque organisation, afin de spécifier ses propres exigences en matière de sécurité. Cela constitue une innovation dans le domaine de l'évaluation, que nous justifions pour chaque dimension et dont nous mettons systématiquement en avant la plus value apportée. La troisième partie de notre document est relative à la validation globale de notre proposition et contient en guise de conclusion, une mise en perspective critique de notre travail et des remarques finales. Cette dernière partie est complétée par une bibliographie et des annexes. Notre modèle d'évaluation de la sécurité intègre et se base sur de nombreuses sources d'expertise, telles que les bonnes pratiques, les normes, les standards, les méthodes et l'expertise de la recherche scientifique du domaine. Notre proposition constructive répond à un véritable problème non encore résolu, auquel doivent faire face toutes les organisations, indépendamment de la taille et du profil. 
Cela permettrait à ces dernières de spécifier leurs exigences particulières en matière du niveau de sécurité à satisfaire, d'instancier un processus d'évaluation spécifique à leurs besoins afin qu'elles puissent s'assurer que leur sécurité de l'information soit gérée d'une manière appropriée, offrant ainsi un certain niveau de confiance dans le degré de protection fourni. Nous avons intégré dans notre modèle le meilleur du savoir faire, de l'expérience et de l'expertise disponible actuellement au niveau international, dans le but de fournir un modèle d'évaluation simple, générique et applicable à un grand nombre d'organisations publiques ou privées. La valeur ajoutée de notre modèle d'évaluation réside précisément dans le fait qu'il est suffisamment générique et facile à implémenter tout en apportant des réponses sur les besoins concrets des organisations. Ainsi notre proposition constitue un outil d'évaluation fiable, efficient et dynamique découlant d'une approche d'évaluation cohérente. De ce fait, notre système d'évaluation peut être implémenté à l'interne par l'entreprise elle-même, sans recourir à des ressources supplémentaires et lui donne également ainsi la possibilité de mieux gouverner sa sécurité de l'information.

Relevância:

100.00% 100.00%

Publicador:

Resumo:

According to molecular epidemiology theory, two isolates belong to the same chain of transmission if they are similar according to a highly discriminatory molecular typing method. This has been demonstrated in outbreaks, but is rarely studied in endemic situations. Person-to-person transmission cannot be established when isolates of meticillin-resistant Staphylococcus aureus (MRSA) belong to endemically predominant genotypes. By contrast, isolates of infrequent genotypes might be more suitable for epidemiological tracking. The objective of the present study was to determine, in newly identified patients harbouring non-predominant MRSA genotypes, whether putative epidemiological links inferred from molecular typing could replace classical epidemiology in the context of a regional surveillance programme. MRSA genotypes were defined using double-locus sequence typing (DLST) combining clfB and spa genes. A total of 1,268 non-repetitive MRSA isolates recovered between 2005 and 2006 in Western Switzerland were typed: 897 isolates (71%) belonged to four predominant genotypes, 231 (18%) to 55 non-predominant genotypes, and 140 (11%) were unique. Obvious epidemiological links were found in only 106/231 (46%) patients carrying isolates with non-predominant genotypes suggesting that molecular surveillance identified twice as many clusters as those that may have been suspected with classical epidemiological links. However, not all of these molecular clusters represented person-to-person transmission. Thus, molecular typing cannot replace classical epidemiology but is complementary. A prospective surveillance of MRSA genotypes could help to target epidemiological tracking in order to recognise new risk factors in hospital and community settings, or emergence of new epidemic clones.

Relevância:

100.00% 100.00%

Publicador:

Resumo:

ABSTRACT (English)An accurate processing of the order between sensory events at the millisecond time scale is crucial for both sensori-motor and cognitive functions. Temporal order judgment (TOJ) tasks, is the ability of discriminating the order of presentation of several stimuli presented in a rapid succession. The aim of the present thesis is to further investigate the spatio-temporal brain mechanisms supporting TOJ. In three studies we focus on the dependency of TOJ accuracy on the brain states preceding the presentation of TOJ stimuli, the neural correlates of accurate vs. inaccurate TOJ and whether and how TOJ performance can be improved with training.In "Pre-stimulus beta oscillations within left posterior sylvian regions impact auditory temporal order judgment accuracy" (Bernasconi et al., 2011), we investigated if the brain activity immediately preceding the presentation of the stimuli modulates TOJ performance. By contrasting the electrophysiological activity before the stimulus presentation as a function of TOJ accuracy we observed a stronger pre-stimulus beta (20Hz) oscillatory activity within the left posterior sylvian region (PSR) before accurate than inaccurate TOJ trials.In "Interhemispheric coupling between the posterior sylvian regions impacts successful auditory temporal order judgment" (Bernasconi et al., 2010a), and "Plastic brain mechanisms for attaining auditory temporal order judgment proficiency" (Bernasconi et al., 2010b), we investigated the spatio-temporal brain dynamics underlying auditory TOJ. In both studies we observed a topographic modulation as a function of TOJ performance at ~40ms after the onset of the first sound, indicating the engagement of distinct configurations of intracranial generators. Source estimations in the first study revealed a bilateral PSR activity for both accurate and inaccurate TOJ trials. Moreover, activity within left, but not right, PSR correlated with TOJ performance. Source estimations in the second study revealed a training-induced left lateralization of the initial bilateral (i.e. PSR) brain response. Moreover, the activity within the left PSR region correlated with TOJ performance.Based on these results, we suggest that a "temporal stamp" is established within left PSR on the first sound within the pair at early stages (i.e. ~40ms) of cortical processes, but is critically modulated by inputs from right PSR (Bernasconi et al., 2010a; b). The "temporal stamp" on the first sound may be established via a sensory gating or prior entry mechanism.Behavioral and brain responses to identical stimuli can vary due to attention modulation, vary with experimental and task parameters or "internal noise". In a fourth experiment (Bernasconi et al., 2011b) we investigated where and when "neural noise" manifest during the stimulus processing. Contrasting the AEPs of identical sound perceived as High vs. Low pitch, a topographic modulation occurred at ca. 100ms after the onset of the sound. Source estimation revealed activity within regions compatible with pitch discrimination. Thus, we provided neurophysiological evidence for the variation in perception induced by "neural noise".ABSTRACT (French)Un traitement précis de l'ordre des événements sensoriels sur une échelle de temps de milliseconde est crucial pour les fonctions sensori-motrices et cognitives. 
Temporal order judgment (TOJ) tasks, in which several stimuli are presented in rapid succession, are traditionally used to study the neural mechanisms supporting the processing of rapidly varying sensory information. The aim of this thesis is to study the brain mechanisms supporting TOJ. In the three studies presented, we focused on the brain states preceding the presentation of the TOJ stimuli, the neural bases of accurate vs. inaccurate TOJ, and the possibility of, and means for, improving TOJ performance through training. In "Pre-stimulus beta oscillations within left posterior sylvian regions impact auditory temporal order judgment accuracy" (Bernasconi et al., 2011), we asked whether pre-stimulus oscillatory brain activity modulates TOJ performance. We contrasted the electrophysiological activity as a function of TOJ performance and measured stronger pre-stimulus beta oscillatory activity within the left posterior sylvian region (PSR) associated with accurate TOJ. In "Interhemispheric coupling between the posterior sylvian regions impacts successful auditory temporal order judgment" (Bernasconi et al., 2010a) and "Plastic brain mechanisms for attaining auditory temporal order judgment proficiency" (Bernasconi et al., 2010b), we studied the spatio-temporal brain dynamics involved in the processing of auditory TOJ. In both studies, we observed a topographic modulation as a function of TOJ performance at ~40 ms after the onset of the first sound, indicating the engagement of distinct configurations of intracranial generators. Source estimation in the first study indicated bilateral PSR activity for accurate vs. inaccurate TOJ. Moreover, activity in the left PSR, but not the right, correlated with TOJ performance. Source estimation in the second study indicated a training-induced left lateralization of an initially bilateral brain response. Here too, activity in the left PSR correlated with TOJ performance. Based on these results, we propose that a "temporal stamp" is established very early (i.e. at ~40 ms) on the first sound by the left PSR, but is modulated by activity of the right PSR (Bernasconi et al., 2010a; b). The "temporal stamp" on the first sound may be established by a neural mechanism of the "sensory gating" or "prior entry" type. Behavioral and brain responses to identical stimuli can vary owing to modulations of attention, to variations in task parameters, or to the brain's internal noise. In a fourth experiment (Bernasconi et al., 2011b), we studied where and when "neural noise" manifests during stimulus processing. By contrasting the AEPs to identical sounds perceived as high vs. low in pitch, we measured a topographic modulation at approximately 100 ms after sound onset. Source estimation revealed activity in regions compatible with pitch discrimination. Thus, we provide neurophysiological evidence for the variation in perception induced by "neural noise".
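The core analysis described above, contrasting pre-stimulus beta power between accurate and inaccurate trials, can be illustrated with a minimal Python sketch; the sampling rate, beta-band limits, single-channel treatment and array shapes are assumptions for illustration and do not reproduce the published pipeline.

import numpy as np
from scipy.signal import welch
from scipy.stats import ttest_ind

def beta_power(trials, fs, band=(15.0, 25.0)):
    """Mean spectral power in the beta band for each trial.
    `trials` is assumed to be an array of shape (n_trials, n_samples)
    holding the pre-stimulus window of one EEG channel."""
    freqs, psd = welch(trials, fs=fs, nperseg=min(256, trials.shape[1]), axis=-1)
    mask = (freqs >= band[0]) & (freqs <= band[1])
    return psd[:, mask].mean(axis=-1)

def contrast_beta(accurate, inaccurate, fs=512.0):
    """Welch's t-test on single-trial pre-stimulus beta power,
    accurate vs. inaccurate TOJ trials."""
    acc = beta_power(accurate, fs)
    inacc = beta_power(inaccurate, fs)
    return ttest_ind(acc, inacc, equal_var=False)

# Example with synthetic data (placeholder, 40 trials of 1 s at 512 Hz):
# rng = np.random.default_rng(0)
# t, p = contrast_beta(rng.standard_normal((40, 512)), rng.standard_normal((40, 512)))

In the studies themselves the contrast was computed across the whole electrode montage and localized with source estimation; the sketch only shows the single-channel band-power comparison.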

Relevância:

100.00% 100.00%

Publicador:

Resumo:

1 Summary This dissertation deals with two major aspects of corporate governance that have grown in importance during the last years: the internal audit function and financial accounting education. In three essays, I contribute to research on these topics, which are embedded in the broader corporate governance literature. The first two essays consist of experimental investigations of internal auditors' judgments. They deal with two research issues for which accounting research lacks evidence: the effectiveness of internal controls and the potentially conflicting role of the internal audit function between management and the audit committee. The findings of the first two essays contribute to the literature on internal auditors' judgment and the role of the internal audit function as a major cornerstone of corporate governance. The third essay theoretically examines a broader issue but also relates to the overall research question of this dissertation: what contributes to effective corporate governance? This last essay takes the perspective that the root of quality corporate governance is appropriate financial accounting education. I develop a public interest approach to accounting education that contributes to the literature on adequate accounting education with respect to corporate governance and accounting harmonization. The increasing importance of both the internal audit function and accounting education for corporate governance can be explained by the same recent fundamental changes that still affect accounting research and practice. First, the Sarbanes-Oxley Act of 2002 (SOX, 2002) and the 8th EU Directive (EU, 2006) have led to a bigger role for the internal audit function in corporate governance. Their implications regarding the implementation of audit committees and their oversight over internal controls are extensive. As a consequence, the internal audit function has become increasingly important for corporate governance and serves a new master (i.e. the audit committee) within the company in addition to management. Second, SOX (2002) and the 8th EU Directive introduced additional internal control mechanisms that are expected to contribute to the reliability of financial information. As a consequence, the internal audit function is expected to contribute to a greater extent to the reliability of financial statements. Therefore, effective internal control mechanisms that strengthen objective judgments and independence become important. This is especially true when external auditors rely on the work of internal auditors in the context of the International Standard on Auditing (ISA) 610 and the equivalent US Statement on Auditing Standards (SAS) 65 (see IFAC, 2009 and AICPA, 1990). Third, the harmonization of international reporting standards is increasingly promoted by means of a principles-based approach. It has been the leading approach since a study by the SEC (2003), required by SOX (2002) in section 108(d), came out in its favor. As a result, the Financial Accounting Standards Board (FASB) and the International Accounting Standards Board (IASB) have committed themselves to the development of compatible accounting standards based on a principles-based approach. Moreover, since the Norwalk Agreement of 2002, the two standard setters have developed exposure drafts for a common conceptual framework that will be the basis for accounting harmonization. The new framework will favor fair value measurement and accounting for real-world economic phenomena.
These changes in standard setting lead to a trend towards more professional judgment in the accounting process. They affect internal and external auditors, accountants, and managers in general. As a consequence, a new competency set for preparers and users of financial statements is required. The basis for this new competency set is adequate accounting education (Schipper, 2003). These three issues affecting corporate governance are the starting point of this dissertation and constitute its motivation. Two broad questions motivated a scientific examination in three essays: 1) What are the major aspects to be examined regarding the new role of the internal audit function? 2) How should major changes in standard setting affect financial accounting education? The first question became apparent through two published literature reviews by Gramling et al. (2004) and Cohen, Krishnamoorthy & Wright (2004). These studies raise various questions for future research that are still relevant and which motivate the first two essays of my dissertation. In the first essay, I focus on the role of the internal audit function as one cornerstone of corporate governance and its potentially conflicting role of serving both management and the audit committee (IIA, 2003). In an experimental study, I provide evidence on the challenges for internal auditors in their role as servants of two masters, the audit committee and management, and on how this influences internal auditors' judgment (Gramling et al., 2004; Cohen, Krishnamoorthy & Wright, 2004). I ask whether there is an expectation gap between what internal auditors should provide for corporate governance in theory and what they are able to provide in practice. In particular, I focus on the effect of serving two masters on the internal auditor's independence. I argue that independence is hardly achievable if the internal audit function serves two masters with conflicting priorities. The second essay provides evidence on the effectiveness of accountability as an internal control mechanism. In general, internal control mechanisms based on accountability were enforced by SOX (2002) and the 8th EU Directive. Subsequently, many companies introduced sub-certification processes intended to contribute to an objective judgment process. Thus, these mechanisms are important for strengthening the reliability of financial statements. Based on the need for evidence on the effectiveness of internal control mechanisms (Brennan & Solomon, 2008; Gramling et al., 2004; Cohen, Krishnamoorthy & Wright, 2004; Solomon & Trotman, 2003), I designed an experiment to examine the joint effect of accountability and obedience pressure in an internal audit setting. I argue that obedience pressure can potentially have a negative influence on accountants' objectivity (e.g. DeZoort & Lord, 1997), whereas accountability can mitigate this negative effect. My second main research question - how should major changes in standard setting affect financial accounting education? - is investigated in the third essay. It is motivated by the observation, made during my PhD, that many conferences deal with the topic of accounting education but very little is published about what needs to be done. Moreover, the findings in the first two essays of this thesis and their literature review suggest that financial accounting education can contribute significantly to quality corporate governance, as argued elsewhere (Schipper, 2003; Boyce, 2004; Ghoshal, 2005).
In the third essay of this thesis, I therefore focus on approaches to financial accounting education that account for the changes in standard setting and also contribute to corporate governance and accounting harmonization. I argue that the competency set required in practice changes as a result of major changes in standard setting. As the major contribution of the third article, I develop a public interest approach for financial accounting education. The major findings of this dissertation can be summarized as follows. The first essay provides evidence on an important research question raised by Gramling et al. (2004, p. 240): "If the audit committee and management have different visions for the corporate governance role of the IAF, which vision will dominate?" According to the results of the first essay, internal auditors follow the priorities of either management or the audit committee based on the guidance provided by the Chief Audit Executive. The study's results question whether the independence of the internal audit function is actually achievable. My findings contribute to research on internal auditors' judgment and the internal audit function's independence in the broader frame of corporate governance. The results are also important for practice because independence is a major justification for a positive contribution of the internal audit function to corporate governance. The major findings of the second essay indicate that the duty to sign work results - a means of holding people accountable - mitigates the negative effect of obedience pressure on reliability. Hence, I found evidence that control mechanisms relying on certifications may enhance the reliability of financial information. These findings contribute to the literature on the effectiveness of internal control mechanisms. They are also important in the light of the sub-certification processes that resulted from the Sarbanes-Oxley Act and the 8th EU Directive. The third essay contributes to the literature by developing a measurement framework that accounts for the consequences of major trends in standard setting. Moreover, it shows how these trends affect the required competency set of people dealing with accounting issues. Based on this work, my main contribution is the development of a public interest approach for the design of adequate financial accounting curricula. 2 Serving two masters: Experimental evidence on the independence of internal auditors Abstract Twenty-nine internal auditors participated in a study that examines the independence of internal auditors in their potentially competing roles of serving two masters: the audit committee and management. Our main hypothesis suggests that internal auditors' independence is not achievable in an institutional setting in which internal auditors are accountable to two different parties with potentially differing priorities. We test our hypothesis in an experiment in which the treatment consisted of two different instructions from the Chief Audit Executive, one stressing the priority of management (cost reduction) and one stressing the priority of the audit committee (effectiveness). Internal auditors had to evaluate the internal controls, and their inherent costs, of different processes that varied in their degree of task complexity. Our main results indicate that internal auditors' evaluations of the processes differ significantly when task complexity is high.
Our findings suggest that internal auditors do follow the priorities of either management or the audit committee depending on the instructions of a superior internal auditor. The study's results question whether the independence of the internal audit function is actually achievable. With our findings, we contribute to research on internal auditors' judgment and the internal audit function's independence in the frame of corporate governance.

Relevância:

100.00% 100.00%

Publicador:

Resumo:

In this study, a model for the unsteady dynamic behaviour of a once-through counter-flow boiler that uses an organic working fluid is presented. The boiler is a compact waste-heat boiler without a furnace and it has a preheater, a vaporiser and a superheater. The relative lengths of the boiler parts vary with the operating conditions, since they are all parts of a single tube. The present research is a part of a study on the unsteady dynamics of an organic Rankine cycle power plant and it will be a part of a dynamic process model. The boiler model is presented using a selected example case that uses toluene as the process fluid and flue gas from natural gas combustion as the heat source. The dynamic behaviour of the boiler means transition from the steady initial state towards another steady state that corresponds to the changed process conditions. The solution method chosen was to find, using the finite difference method, such a pressure of the process fluid that the mass of the process fluid in the boiler equals the mass calculated from the mass flows into and out of the boiler during a time step. A special method of fast calculation of the thermal properties has been used, because most of the calculation time is spent in calculating the fluid properties. The boiler was divided into elements. The values of the thermodynamic properties and mass flows were calculated in the nodes that connect the elements. Dynamic behaviour was limited to the process fluid and tube wall, and the heat source was regarded as steady. The elements that connect the preheater to the vaporiser and the vaporiser to the superheater were treated in a special way that takes into account a flexible change from one part to the other. The model consists of the calculation of the steady-state initial distribution of the variables in the nodes, and of the calculation of these nodal values in a dynamic state. The initial state of the boiler was obtained from a steady process model that is not a part of the boiler model. The known boundary values that may vary during the dynamic calculation were the inlet temperature and mass flow rates of both the heat source and the process fluid. A brief examination of the oscillation around a steady state, the so-called Ledinegg instability, was carried out. This examination showed that the pressure drop in the boiler is a third-degree polynomial of the mass flow rate, and the stability criterion is a second-degree polynomial of the enthalpy change in the preheater. The numerical examination showed that oscillations did not exist in the example case. The dynamic boiler model was analysed for linear and step changes of the entering fluid temperatures and flow rates. The problem in verifying the correctness of the achieved results was that there was no possibility to compare them with measurements. This is why the only way was to determine whether the obtained results were intuitively reasonable and whether the results changed logically when the boundary conditions were changed. The numerical stability was checked in a test run in which there was no change in input values. The differences compared with the initial values were so small that the effects of numerical oscillations were negligible. The heat source side tests showed that the model gives results that are logical in the directions of the changes, and the order of magnitude of the timescale of changes is also as expected.
The results of the tests on the process fluid side showed that the model gives reasonable results both for temperature changes that cause small alterations in the process state and for mass flow rate changes causing very great alterations. The test runs showed that the dynamic model has no problems in calculating cases in which the temperature of the entering heat source suddenly drops below that of the tube wall or the process fluid.
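As a rough illustration of the pressure-iteration idea described above, the following Python sketch finds, for one time step, the pressure at which the process-fluid mass stored in the boiler matches the mass balance implied by the inlet and outlet flows; the element representation, the density function and the bracketing interval are placeholder assumptions, not the thesis code.

from scipy.optimize import brentq

def boiler_mass(p, elements, density_of):
    """Process-fluid mass held in the boiler at pressure p: the sum over the
    discretized elements of element volume times fluid density, where
    `elements` is a list of (volume, state) pairs and `density_of(p, state)`
    is a placeholder property routine supplied by the caller."""
    return sum(vol * density_of(p, state) for vol, state in elements)

def step_pressure(p_prev, m_prev, m_dot_in, m_dot_out, dt, elements, density_of):
    """Find the pressure at which the stored mass equals
    m_prev + (m_dot_in - m_dot_out) * dt over one time step."""
    target = m_prev + (m_dot_in - m_dot_out) * dt

    def residual(p):
        return boiler_mass(p, elements, density_of) - target

    # Bracket the root around the previous pressure; the stored mass is
    # assumed to vary monotonically with pressure within this interval.
    return brentq(residual, 0.5 * p_prev, 2.0 * p_prev)

In the model itself the fluid properties come from the fast property-calculation routine mentioned above; here `density_of` simply stands in for that evaluation.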

Relevância:

100.00% 100.00%

Publicador:

Resumo:

The patent system was created for the purpose of promoting innovation by granting the inventors a legally defined right to exclude others in return for public disclosure. Today, patents are being applied for and granted in greater numbers than ever, particularly in new areas such as biotechnology and information and communications technology (ICT), in which research and development (R&D) investments are also high. At the same time, the patent system has been heavily criticized. It has been claimed that it discourages rather than encourages the introduction of new products and processes, particularly in areas that develop quickly, lack a one-product-one-patent correlation, and in which the emergence of patent thickets is characteristic. A further concern, which is particularly acute in the U.S., is the granting of so-called 'bad patents', i.e. patents that do not factually fulfil the patentability criteria. From the perspective of technology-intensive companies, patents could, irrespective of the above, be described as the most significant intellectual property right (IPR), having the potential of being used to protect products and processes from imitation, to limit competitors' freedom-to-operate, to provide such freedom to the company in question, and to exchange ideas with others. In fact, patents define the boundaries of ownership in relation to certain technologies. They may be sold or licensed on their own or they may be components of all sorts of technology acquisition and licensing arrangements. Moreover, with the possibility of patenting business-method inventions in the U.S., patents are becoming increasingly important for companies basing their businesses on services. The value of a patent depends on the value of the invention it claims and on how that invention is commercialized. Thus, most patents are worth very little, and most inventions are not worth patenting: it may be possible to protect them in other ways, and the costs of protection may exceed the benefits. Moreover, instead of making all inventions proprietary and seeking to appropriate as high returns on investments as possible through patent enforcement, it is sometimes better to allow some of them to be disseminated freely in order to maximize market penetration. In fact, the ideology of openness is well established in the software sector, which has been the breeding ground for the open-source movement, for instance. Furthermore, industries, such as ICT, that benefit from network effects do not shun the idea of setting open standards or opening up their proprietary interfaces to allow everyone to design products and services that are interoperable with theirs. The problem is that even though patents do not, strictly speaking, prevent access to protected technologies, they have the potential of doing so, and conflicts of interest are not rare. The primary aim of this dissertation is to increase understanding of the dynamics and controversies of the U.S. and European patent systems, with the focus on the ICT sector. The study consists of three parts. The first part introduces the research topic and the overall results of the dissertation. The second part comprises a publication in which academic, political, legal and business developments that concern software and business-method patents are investigated, and contentious areas are identified. The third part examines the problems with patents and open standards, both of which carry significant economic weight in the ICT sector. Here, the focus is on so-called submarine patents, i.e.
patents that remain unnoticed during the standardization process and then emerge after the standard has been set. The factors that contribute to the problems are documented and the practical and juridical options for alleviating them are assessed. In total, the dissertation provides a good overview of the challenges and pressures for change the patent system is facing, and of how these challenges are reflected in standard setting.

Relevância:

100.00% 100.00%

Publicador:

Resumo:

This thesis is composed of three main parts. The first consists of a state of the art of the different notions that are significant for understanding the elements surrounding art authentication in general, and signatures in particular, and that the author deemed necessary to fully grasp the microcosm that makes up this particular market. Individuals with a solid knowledge of the art and expertise area who are particularly interested in the present study are advised to advance directly to the fourth chapter. The expertise of the signature, its reliability, and the factors impacting the expert's conclusions are brought forward. The final aim of the state of the art is to offer a general list of recommendations based on an exhaustive review of the current literature and given in light of all of the exposed issues. These guidelines are specifically formulated for the expertise of signatures on paintings, but can also be applied to wider themes in the area of signature examination. The second part of this thesis covers the experimental stages of the research. It consists of the method developed to authenticate painted signatures on works of art. This method is articulated around several main objectives: defining measurable features on painted signatures and establishing their relevance in order to determine the separation capacity between groups of authentic and simulated signatures. For the first time, numerical analyses of painted signatures have been obtained and are used to attribute their authorship to given artists. An in-depth discussion of the developed method constitutes the third and final part of this study. It evaluates the opportunities and constraints of the method when applied by signature and handwriting experts in forensic science. The outlines presented below give a rapid overview of the study and summarize the aims and main themes addressed in each chapter. Part I - Theory. Chapter 1 presents the legal aspects surrounding the authentication of works of art by art experts. The definition of what is legally authentic, the quality and types of experts that can express an opinion concerning the authorship of a specific painting, and standard deontological rules are addressed. The practices applied in Switzerland are specifically dealt with. Chapter 2 presents an overview of the different scientific analyses that can be carried out on paintings (from the canvas to the top coat). Scientific examinations of works of art have become more common as more and more museums equip themselves with laboratories; an understanding of their role in the art authentication process is therefore vital. The added value that a signature expertise can have in comparison to other scientific techniques is also addressed. Chapter 3 provides a historical overview of the signature on paintings throughout the ages, in order to offer the reader an understanding of the origin of the signature on works of art and its evolution through time. An explanation is given of the transitions that the signature went through from the 15th century onwards and of how it progressively took on its widely known modern form. Both this chapter and Chapter 2 are presented to show the reader the rich sources of information that can be used to describe a painting, and how the signature is one of these sources.
Chapter 4 focuses on the different hypotheses the forensic handwriting examiner (FHE) must keep in mind when examining a painted signature, since a number of scenarios can be encountered when dealing with signatures on works of art. The different forms of signatures, as well as the variables that may have an influence on painted signatures, are also presented. Finally, the current state of knowledge of the examination procedure of signatures in forensic science in general, and for painted signatures in particular, is exposed. The state of the art of the assessment of the authorship of signatures on paintings is established and discussed in light of the theoretical facets mentioned previously. Chapter 5 considers key elements that can have an impact on the FHE during his or her examinations. This includes a discussion of elements such as the skill, confidence and competence of an expert, as well as the potential bias effects he or she might encounter. A better understanding of the elements surrounding handwriting examinations, in order to better communicate results and conclusions to an audience, is also sought. Chapter 6 reviews the judicial acceptance of signature analysis in courts and closes the state of the art section of this thesis. This chapter brings forward the current issues pertaining to the appreciation of this expertise by the non-forensic community, and discusses the increasing number of claims about the unscientific nature of signature authentication. The necessity of aiming for more scientific, comprehensive and transparent authentication methods is discussed. The theoretical part of this thesis is concluded by a series of general recommendations for forensic handwriting examiners, specifically for the expertise of signatures on paintings. These recommendations stem from the exhaustive review of the literature and the issues exposed by this review, and can also be applied to the traditional examination of signatures (on paper). Part II - Experimental part. Chapter 7 describes and defines the sampling, extraction and analysis phases of the research. The sampling stage of artists' signatures and their respective simulations is presented, followed by the steps undertaken to extract and determine sets of characteristics, specific to each artist, that describe their signatures. The method is based on a study of five artists and a group of individuals acting as forgers for the sake of this study. Finally, the analysis procedure applied to these characteristics to assess the strength of evidence, based on a Bayesian reasoning process, is presented. Chapter 8 outlines the results concerning both the artist and simulation corpuses after their optical observation, followed by the results of the analysis phase of the research. The feature selection process and the likelihood ratio evaluation are the main themes addressed. The discrimination power between the two corpuses is illustrated through multivariate analysis. Part III - Discussion. Chapter 9 discusses the materials, the methods and the obtained results of the research. The opportunities, but also the constraints and limits, of the developed method are exposed. Future work that can be carried out following the results of the study is also presented. Chapter 10, the last chapter of this thesis, proposes a strategy to incorporate the model developed in the preceding chapters into the traditional signature expertise procedure.
Thus, the strength of this expertise is discussed in conjunction with the traditional conclusions reached by forensic handwriting examiners. Finally, this chapter summarizes and advocates a list of formal recommendations for good practice for handwriting examiners. In conclusion, the research highlights the interdisciplinary nature of the examination of signatures on paintings. The current state of knowledge of the judicial quality of art experts, along with the scientific and historical analysis of paintings and signatures, is reviewed to give the reader a feel for the different factors that have an impact on this particular subject. The uneven acceptance of forensic signature analysis in court, also presented in the state of the art, explicitly demonstrates the necessity of a better recognition of signature expertise by courts of law. This general acceptance, however, can only be achieved by producing high-quality results through a well-defined examination process. This research offers an original approach to attributing a painted signature to a certain artist: for the first time, a probabilistic model used to measure the discriminative potential between authentic and simulated painted signatures is studied. The opportunities and limits of this method of scientifically establishing the authorship of signatures on works of art are thus presented. In addition, as the second key contribution of this work, a procedure is proposed to combine the developed method with that traditionally used by signature experts in forensic science. Such an implementation into holistic traditional signature examination casework is a large step towards providing the forensic, judicial and art communities with a solid reasoning framework for the examination of signatures on paintings. The framework and preliminary results associated with this research have been published (Montani, 2009a) and presented at international forensic science conferences (Montani, 2009b; Montani, 2012).
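A minimal sketch of a feature-based, Bayesian-style likelihood-ratio evaluation in the spirit of the approach described above; the Gaussian feature model, the reference corpora and the variable names are assumptions made for illustration and do not reproduce the features or the statistical model actually developed in the thesis.

import numpy as np
from scipy.stats import multivariate_normal

def fit_gaussian(features):
    """Fit a multivariate normal to a reference set of signature features
    (rows = signatures, columns = measured characteristics)."""
    mean = features.mean(axis=0)
    cov = np.cov(features, rowvar=False)
    return multivariate_normal(mean=mean, cov=cov, allow_singular=True)

def likelihood_ratio(questioned, authentic_ref, simulated_ref):
    """LR = p(questioned features | authentic model) /
            p(questioned features | simulation model)."""
    h1 = fit_gaussian(authentic_ref).pdf(questioned)
    h2 = fit_gaussian(simulated_ref).pdf(questioned)
    return h1 / h2

# Hypothetical usage: `authentic_ref` and `simulated_ref` are arrays of
# measured characteristics for reference signatures of one artist and for
# simulations of that artist's signature, and `questioned` is the feature
# vector of the disputed signature.

Likelihood-ratio values above 1 support authorship by the artist, values below 1 support simulation, and the further the value lies from 1 the stronger the support in either direction.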

Relevância:

100.00% 100.00%

Publicador:

Resumo:

The aim of this work was to model the additional costs caused by a new product feature and to design a decision-making tool for the management team of forwarder production at Timberjack Oy. The intention was to create a coarse-level model suitable for determining the costs of different types of product features. The effect of a new product feature on the company's various functions was investigated through interviews, supported by a questionnaire. The goal of the interviews was to identify the processes, activities and resources that are necessary for bringing a new product feature into production and for producing it. The model was designed on the basis of the interviews and data obtained from the company's information system. The backbone of the model consists of the processes and activities that a new product feature affects. Only resources consumed by the new product feature, either directly or indirectly, were taken into account, and only additional costs were included in the analysis. Overhead costs that are independent of the implementation of the new product feature and would be incurred in any case were excluded. The model is a generalization of the additional costs caused by a new product feature, since it is intended to be applicable to determining the costs of different types of product features. In addition, the model is suitable for mapping the costs of other relatively small product changes.
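A hypothetical Python sketch of the kind of incremental cost roll-up such a model performs; the activity names, rates, quantities and the flag for costs that would be incurred anyway are invented for illustration and are not taken from the actual model.

from dataclasses import dataclass

@dataclass
class Activity:
    name: str
    driver_quantity: float    # e.g. extra assembly hours caused by the feature
    rate: float               # cost per driver unit
    incremental: bool = True  # False = would be incurred anyway (excluded)

def feature_cost(activities):
    """Sum only the additional costs that the new product feature causes."""
    return sum(a.driver_quantity * a.rate for a in activities if a.incremental)

activities = [
    Activity("design engineering", 120, 55.0),
    Activity("extra assembly work", 1.5, 48.0),
    Activity("added purchased parts", 1.0, 310.0),
    Activity("general factory overhead", 1.0, 900.0, incremental=False),
]
# print(feature_cost(activities))  # only the incremental activities contribute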

Relevância:

100.00% 100.00%

Publicador:

Resumo:

Cooperation and coordination are desirable behaviors that are fundamental for the harmonious development of society. People need to rely on cooperation with other individuals in many aspects of everyday life, such as teamwork and economic exchange in anonymous markets. However, cooperation may easily fall prey to exploitation by selfish individuals who only care about short-term gain. For cooperation to evolve, specific conditions and mechanisms are required, such as kinship, direct and indirect reciprocity through repeated interactions, or external interventions such as punishment. In this dissertation we investigate the effect of the network structure of the population on the evolution of cooperation and coordination. We consider several kinds of static and dynamical network topologies, such as Barabási-Albert networks, social network models and spatial networks. We perform numerical simulations and laboratory experiments using the Prisoner's Dilemma and coordination games in order to contrast human behavior with theoretical results. We show by numerical simulations that even a moderate amount of random noise on the Barabási-Albert scale-free network links causes a significant loss of cooperation, to the point that cooperation almost vanishes altogether in the Prisoner's Dilemma when the noise rate is high enough. Moreover, when we consider fixed social-like networks we find that current models of social networks may allow cooperation to emerge and to be at least as robust as in scale-free networks. In the framework of spatial networks, we investigate whether cooperation can evolve and be stable when agents move randomly or perform Lévy flights in a continuous space. We also consider discrete space, adopting purposeful mobility and a binary birth-death process, to discover emergent cooperative patterns. The fundamental result is that cooperation may be enhanced when this migration is opportunistic or even when agents follow very simple heuristics. In the experimental laboratory, we investigate the issue of social coordination between individuals located on networks of contacts. In contrast to simulations, we find that human players' dynamics do not converge to the efficient outcome more often in a social-like network than in a random network. In another experiment, we study the behavior of people who play a pure coordination game in a spatial environment in which they can move around and in which changing convention is costly. We find that each convention forms homogeneous clusters and is adopted by approximately half of the individuals. When we provide them with global information, i.e., the number of subjects currently adopting one of the conventions, global consensus is reached in most, but not all, cases. Our results allow us to extract the heuristics used by the participants and to build a numerical simulation model that agrees very well with the experiments. Our findings have important implications for policymakers intending to promote specific, desired behaviors in a mobile population. Furthermore, we carry out an experiment with human subjects playing the Prisoner's Dilemma game in a diluted grid where people are able to move around. In contrast to previous results on purposeful rewiring in relational networks, we find no noticeable effect of mobility in space on the level of cooperation. Clusters of cooperators form momentarily but within a few rounds they dissolve, as cooperators at the boundaries stop tolerating being cheated upon.
Our results highlight the difficulties that mobile agents have in establishing a cooperative environment in a spatial setting without a device such as reputation or the possibility of retaliation, i.e. punishment. Finally, we test experimentally the evolution of cooperation in social networks in a setting where we allow people to make or break links at will. In this work we give particular attention to whether or not information on an individual's actions is freely available to potential partners. Studying the role of information is relevant because information on other people's actions is often not available for free: a recruiting firm may need to call a job candidate's references, a bank may need to find out about the credit history of a new client, etc. We find that people cooperate almost fully when information on their actions is freely available to their potential partners. Cooperation is less likely, however, if people have to pay about half of what they gain from cooperating with a cooperator. Cooperation declines even further if people have to pay a cost that is almost equivalent to the gain from cooperating with a cooperator. Thus, costly information on potential neighbors' actions can undermine the incentive to cooperate in dynamical networks.
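A minimal Python sketch of the kind of simulation described above: the Prisoner's Dilemma on a Barabási-Albert graph with an "imitate the best neighbour" update rule and random rewiring noise on the links. The payoff values, the update rule and the noise mechanism are illustrative assumptions, not the dissertation's actual model.

import random
import networkx as nx

# Weak Prisoner's Dilemma payoffs (T > R > P = S), chosen for illustration.
T, R, P, S = 1.8, 1.0, 0.0, 0.0

def payoff(a, b):
    """Payoff to player a against player b; True = cooperate, False = defect."""
    return R if a and b else S if a else T if b else P

def run(n=500, m=4, rounds=200, noise=0.01, seed=0):
    random.seed(seed)
    g = nx.barabasi_albert_graph(n, m, seed=seed)
    coop = {v: random.random() < 0.5 for v in g}   # random initial strategies
    for _ in range(rounds):
        # Accumulate payoffs against all neighbours.
        score = {v: sum(payoff(coop[v], coop[u]) for u in g[v]) for v in g}
        # Each agent imitates the highest-scoring agent in its closed neighbourhood.
        coop = {v: coop[max([v, *g[v]], key=score.get)] for v in g}
        # Random noise on the links: rewire a fraction of edges to random targets.
        for u, v in list(g.edges()):
            if random.random() < noise:
                w = random.randrange(n)
                if w != u and not g.has_edge(u, w):
                    g.remove_edge(u, v)
                    g.add_edge(u, w)
    return sum(coop.values()) / n   # final fraction of cooperators

# print(run())

Varying the `noise` parameter in this toy model gives a qualitative way to explore how random perturbation of the scale-free network links affects the final fraction of cooperators.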