27 results for Technologies of power

at Université de Lausanne, Switzerland


Relevance: 100.00%

Abstract:

Switzerland appears to be a privileged place to investigate the urban political ecology of tap water, given the specificities of its political culture and organization and the relative abundance of drinking water in the country. In this paper, we draw on a Foucauldian theorization of power that is increasingly employed in the social sciences, including human geography and political ecology, and we implement a Foucauldian methodology: in particular, we propose an archaeo-genealogical analysis of discourse to apprehend the links between urban water and forms of governmentality in Switzerland between 1850 and 1950. Results show that two forms of governmentality, namely biopower and neoliberal governmentality, were present in the water sector in the selected period. Nonetheless, they deviate from the models proposed by Foucault, as their periodization and the classification of the technologies of power related to them prove far more blurred than Foucault's work, based mainly on the French case, might suggest.

Relevance: 100.00%

Abstract:

We have devised a program that allows computation of the power of the F-test, and hence determination of appropriate sample and subsample sizes, in the context of the one-way hierarchical analysis of variance with fixed effects. The power at a fixed alternative is an increasing function of both the sample size and the subsample size. The program makes it easy to obtain the power of the F-test over a range of sample and subsample sizes, and therefore to choose appropriate sizes for a desired power. The program can be used for the 'ordinary' case of one-way analysis of variance as well as for hierarchical analysis of variance with two stages of sampling. Examples of the practical use of the program are given.
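
The program itself is not reproduced in the abstract; as a minimal sketch of the computation it describes for the 'ordinary' one-way case, the power at a fixed alternative can be read off the noncentral F distribution. The function name, the equal-allocation noncentrality formula, and the example means and variance below are illustrative assumptions, not details from the paper.

```python
import numpy as np
from scipy.stats import f, ncf

def anova_f_power(group_means, sigma2, n_per_group, alpha=0.05):
    """Power of the one-way fixed-effects ANOVA F-test.

    group_means : hypothesized group means under the alternative
    sigma2      : error variance
    n_per_group : observations per group (equal allocation assumed)
    """
    means = np.asarray(group_means, dtype=float)
    a = means.size
    # noncentrality: lambda = n * sum_i (mu_i - mu_bar)^2 / sigma^2
    ncp = n_per_group * np.sum((means - means.mean()) ** 2) / sigma2
    df1, df2 = a - 1, a * (n_per_group - 1)
    f_crit = f.ppf(1 - alpha, df1, df2)   # critical value under H0
    return ncf.sf(f_crit, df1, df2, ncp)  # P(F > f_crit) under H1

# Scan a range of sample sizes, as the program does, to find the
# smallest n per group reaching a desired power:
for n in range(2, 50):
    power = anova_f_power([10, 12, 14], sigma2=9.0, n_per_group=n)
    if power >= 0.80:
        print(f"n = {n} per group gives power {power:.3f}")
        break
```

For the two-stage hierarchical design, the same logic applies with the degrees of freedom and the noncentrality parameter adjusted for the subsampling structure.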

Relevance: 100.00%

Abstract:

PURPOSE: To use measurements from cycling power meters (Pmes) to evaluate the accuracy of commonly used models for estimating uphill cycling power (Pest). Experiments were designed to explore the influence of wind speed and steepness of climb on the accuracy of Pest. The authors hypothesized that the random error in Pest would be largely driven by wind conditions, that bias would diminish on steeper climbs, and that windy conditions would induce larger bias in Pest. METHODS: Sixteen well-trained cyclists performed 15 uphill-cycling trials (length 1.3-6.3 km, slope 4.4-10.7%) in random order. Trials included different riding positions within a group (leading or following) and different wind speeds. Pmes was quantified using a power meter, and Pest was calculated with the methodology used by journalists reporting on the Tour de France. RESULTS: Overall, the difference between Pmes and Pest was -0.95% (95% CI: -10.4%, +8.5%) across all trials and 0.24% (-6.1%, +6.6%) in conditions without wind (<2 m/s). The relationship between percent slope and the error between Pest and Pmes was trivial. CONCLUSIONS: Aerodynamic drag (affected by wind velocity and orientation, frontal area, drafting, and speed) is the most confounding factor. The mean estimated values are close to the power outputs measured by power meters, but the random error is between ±6% and ±10%. Moreover, at the power outputs (>400 W) produced by professional riders, this error is likely to be even higher. This observation calls into question the validity of releasing individual values without reporting the range of random error.
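
The abstract does not spell out the estimation model, but estimates of this kind typically sum gravitational, rolling-resistance, and aerodynamic components computed from ascent speed, total mass, and gradient. The sketch below illustrates such a physics-based estimate; the function and every default parameter (CdA, Crr, air density, drivetrain efficiency) are illustrative assumptions, not values from the study.

```python
import math

def estimate_uphill_power(total_mass_kg, speed_ms, gradient,
                          cda=0.35, crr=0.004, rho=1.2,
                          drivetrain_eff=0.975, wind_ms=0.0):
    """Physics-based estimate of road-cycling power on a climb.

    gradient is the slope as a fraction (0.08 = 8%); wind_ms is the
    wind component along the road (positive = headwind).
    """
    g = 9.81
    angle = math.atan(gradient)
    air_speed = speed_ms + wind_ms
    p_gravity = total_mass_kg * g * math.sin(angle) * speed_ms
    p_rolling = total_mass_kg * g * math.cos(angle) * crr * speed_ms
    p_aero = 0.5 * rho * cda * air_speed ** 2 * speed_ms
    return (p_gravity + p_rolling + p_aero) / drivetrain_eff

# e.g. 75 kg of rider plus bike at 5 m/s up an 8% grade, no wind:
print(f"{estimate_uphill_power(75, 5.0, 0.08):.0f} W")
```

The ±6-10% random error reported above stems mainly from the aerodynamic term, whose inputs (wind, drafting, frontal area) are the hardest to observe from outside the race.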

Relevance: 100.00%

Abstract:

The motivation for this research originated in the abrupt rise and fall of minicomputers, which were initially used for both industrial automation and business applications because they cost significantly less than their predecessors, the mainframes. Industrial automation later developed its own vertically integrated hardware and software to address the application needs of uninterrupted operation, real-time control, and resilience to harsh environmental conditions. This led to the creation of an independent industry, namely industrial automation, used in PLC, DCS, SCADA, and robot control systems. This industry today employs over 200,000 people in a profitable, slow-clockspeed context, in contrast to the two mainstream computing industries: information technology (IT), focused on business applications, and telecommunications, focused on communications networks and hand-held devices. Already in the 1990s it was foreseen that IT and telecommunications would merge into a single information and communication technology (ICT) industry. The fundamental question of the thesis is: could industrial automation leverage a common technology platform with the newly formed ICT industry?
Computer systems dominated by complex instruction set computers (CISC) were challenged during the 1990s by higher-performance reduced instruction set computers (RISC). RISC evolved in parallel with the constant advance of Moore's law, and these developments produced the high-performance, low-energy-consumption System-on-Chip (SoC) architecture. Unlike in the CISC world, RISC processor architecture is an industry separate from RISC chip manufacturing. It also has several hardware-independent software platforms, each consisting of an integrated operating system, development environment, user interface, and application market, which give customers more choice through hardware-independent, real-time-capable software applications. An architecture disruption emerged, and the smartphone and tablet markets were formed with new rules and new key players in the ICT industry. Today there are more RISC computer systems running Linux (or other Unix variants) than computer systems of any other kind. The astonishing rise of SoC-based technologies and the related software platforms in smartphones created, in unit terms, the largest installed base ever seen in the history of computing, and it is now being extended further by tablets. An additional underlying element of this transition is the increasing role of open-source technologies in both software and hardware. This has driven the microprocessor-based personal computer industry, with its few dominant closed operating-system platforms, into steep decline. A significant factor in this process has been the separation of processor architecture from processor chip production, and the merger of operating systems and application development platforms into integrated software platforms with proprietary application markets. Furthermore, pay-by-click marketing has changed the way application development is compensated: freeware, ad-based, or licensed, all at a lower price and with a wider customer base than ever before. Moreover, the concept of a software maintenance contract is very remote in the app world. However, as a slow-clockspeed industry, industrial automation has remained intact through the SoC and software-platform disruptions in the ICT industries.
Industrial automation incumbents continue to supply vertically integrated systems built from proprietary software and proprietary, mainly microprocessor-based, hardware. They enjoy admirable profitability on a very narrow customer base thanks to strong technology-enabled customer lock-in and to customers' high risk exposure, as customers' production depends on fault-free operation of the industrial automation systems. When will this balance of power be disrupted? The thesis suggests how industrial automation could join the mainstream ICT industry and create an information, communication, and automation technology (ICAT) industry. Lately the Internet of Things (IoT) and Weightless networks, a new standard leveraging frequency channels earlier occupied by TV broadcasting, have gradually started to change the rigid world of machine-to-machine (M2M) interaction. It is foreseeable that enough momentum will build for the industrial automation market to face, in due course, an architecture disruption driven by these new trends. The thesis examines the current state of industrial automation and the competition between the incumbents, first through research on cost-competitiveness efforts in captive outsourcing of engineering, research, and development, and second through research on process re-engineering in the case of global software support for complex systems. Third, we investigate the views of the industry's actors, namely customers, incumbents, and newcomers, on the future direction of industrial automation, and we conclude with our assessment of the possible routes industrial automation could take given the looming rise of IoT and Weightless networks. Industrial automation is dominated by a handful of global players, each focused on maintaining its own proprietary solutions. The rise of de facto standards such as the IBM PC, Unix, Linux, and SoC, leveraged by IBM, Compaq, Dell, HP, ARM, Apple, Google, Samsung, and others, created the new markets for personal computers, smartphones, and tablets, and will eventually also reshape industrial automation through game-changing commoditization and the related changes in control points and business models. This trend will inevitably continue, but the transition to a commoditized industrial automation will not happen in the near future.

Relevance: 100.00%

Abstract:

Positive selection is widely estimated from protein-coding sequence alignments by the nonsynonymous-to-synonymous rate ratio omega. Increasingly elaborate codon models are used in a likelihood framework for this estimation. Although there is widespread concern about the robustness of the estimation of the omega ratio, more effort is needed to assess this robustness, especially in the context of complex models. Here, we focused on the branch-site codon model and investigated its robustness on a large set of simulated data. First, we investigated the impact of sequence divergence. We found evidence of underestimation of the synonymous substitution rate (dS) for true values as small as 0.5, with a slight increase in false positives for the branch-site test. As dS increases further, the underestimation of dS worsens, but false positives decrease. Interestingly, the detection of true positives follows a similar distribution, with a maximum at intermediate values of dS. Thus, high dS is more of a concern for loss of power (false negatives) than for false positives of the test. Second, we investigated the impact of GC content. We show that there is no significant difference in false positives between high-GC (up to ~80%) and low-GC (~30%) genes. Moreover, neither shifts of GC content on a specific branch nor major shifts in GC along the gene sequence generate many false positives. Our results confirm that the branch-site test is very conservative.
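
For context, the test whose robustness is evaluated here is a likelihood-ratio test: the branch-site model (with a class of sites allowed omega > 1 on the foreground branch) is compared with a null in which that omega is fixed at 1, and twice the log-likelihood difference is conventionally referred to a chi-square with one degree of freedom, a conservative approximation to the 50:50 mixture of 0 and chi2_1. A minimal sketch, with illustrative log-likelihoods rather than values from the paper:

```python
from scipy.stats import chi2

def branch_site_lrt(lnl_alt, lnl_null, df=1):
    """Likelihood-ratio statistic and p-value for the branch-site test.

    lnl_alt, lnl_null: maximized log-likelihoods of the alternative
    and null branch-site models, e.g. as reported by codeml.
    """
    stat = 2.0 * (lnl_alt - lnl_null)
    return stat, chi2.sf(stat, df)

stat, p = branch_site_lrt(lnl_alt=-4231.07, lnl_null=-4234.89)
print(f"2*delta lnL = {stat:.2f}, p = {p:.4f}")
```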

Relevance: 90.00%

Abstract:

Well-established examples of genetic epistasis between a pair of loci typically show characteristic patterns of phenotypic distributions in joint genotype tables. However, inferring epistasis from such data is difficult because commonly used approaches lack power: they decompose the epistatic pattern into main plus interaction effects and then test the interaction term. Testing additive-only or all terms may have more power, but these tests are sensitive to nonepistatic patterns. Alternatively, the epistatic patterns of interest can be enumerated and the best-matching one found by searching through the possibilities. Although this approach requires multiple-testing correction over the possible patterns, each pattern can be fitted with a regression model with just one degree of freedom, so the overall power can still be high if the number of possible patterns is limited. Here we compare the power of the linear decomposition and pattern-search methods by applying them to simulated data generated under several patterns of joint genotype effects with simple biological interpretations. Interaction-only tests are the least powerful, while the pattern-search approach is the most powerful when the range of possibilities is restricted but still includes the true pattern.
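
As a sketch of the comparison being made, the snippet below simulates one such pattern (phenotype elevated only in the double-homozygote cell of a 3 x 3 joint genotype table) and contrasts the usual 4-df interaction test with a 1-df regression on the hypothesized pattern; the simulation design and effect size are illustrative assumptions, not the paper's.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(1)
n = 500
dat = pd.DataFrame({
    "g1": rng.integers(0, 3, n),   # genotypes at locus 1, coded 0/1/2
    "g2": rng.integers(0, 3, n),   # genotypes at locus 2
})
# Epistatic pattern: only the (2, 2) cell shifts the phenotype.
dat["y"] = 0.8 * ((dat.g1 == 2) & (dat.g2 == 2)) + rng.normal(0, 1, n)

# (a) Linear decomposition: main effects plus a 4-df interaction term.
full = smf.ols("y ~ C(g1) * C(g2)", dat).fit()
main = smf.ols("y ~ C(g1) + C(g2)", dat).fit()
print("interaction test p =", full.compare_f_test(main)[1])

# (b) Pattern search: each candidate pattern is one 0/1 regressor,
# so the matching pattern costs only one degree of freedom.
dat["pattern"] = ((dat.g1 == 2) & (dat.g2 == 2)).astype(int)
print("pattern test p =",
      smf.ols("y ~ pattern", dat).fit().pvalues["pattern"])
```

In real data the true pattern is unknown, so the 1-df p-value must be corrected over the enumerated patterns, which is the trade-off the abstract describes.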

Relevance: 90.00%

Abstract:

Power is a fundamental force in social relationships and is pervasive throughout various types of interactions. Although research has shown that the possession of power can change the powerholder, the full extent of power's consequences for individuals' decision-making capabilities and social interactions within organizations is not fully understood. The goal of this paper is to review, synthesize, and critique the literature on power, with a focus on its organizational and managerial implications. Specifically, we propose a definition of power that takes into account its three defining characteristics, having the discretion and the means to enforce one's will, and summarize the extant literature on how power influences individuals' thoughts, emotions, and actions, in terms of both prosocial and antisocial outcomes. In addition, we highlight important moderators of power and describe ways in which it can be studied more rigorously, examining methodological issues and pitfalls in its measurement and manipulation. We also provide future research directions to motivate and guide the study of power by management scholars. Our aim is to present a thorough and parsimonious account of power's influence on individuals within an organizational context, as well as a foundation that scholars can build upon as they continue to make consequential contributions to the study of power.

Relevance: 90.00%

Abstract:

Errors in the inferred multiple sequence alignment may lead to false prediction of positive selection. Recently, methods for detecting unreliable alignment regions were developed and shown to identify incorrectly aligned regions accurately. While removing unreliable alignment regions is expected to increase the accuracy of positive selection inference, such filtering may also significantly decrease the power of the test, since positively selected regions evolve fast, and those are often the very regions that are difficult to align. Here, we used realistic simulations that mimic the sequence evolution of HIV-1 genes to test the hypothesis that the performance of positive selection inference using codon models can be improved by removing unreliable alignment regions. Our study shows that the benefit of removing unreliable regions exceeds the loss of power due to the removal of some true positively selected sites.
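
The filtering step itself is not shown in the abstract; in practice, per-column reliability scores (for example from GUIDANCE) are used to remove low-scoring codon columns before fitting the codon models. A minimal sketch under that assumption; the helper function, the toy alignment, and the 0.93 cutoff are all illustrative:

```python
def mask_unreliable_codons(aligned_seqs, codon_scores, cutoff=0.93):
    """Keep only codon columns whose reliability score reaches the cutoff.

    aligned_seqs : dict of name -> aligned coding sequence, length a
                   multiple of 3 (codon models need whole codons removed)
    codon_scores : one reliability score per codon column
    """
    keep = [i for i, s in enumerate(codon_scores) if s >= cutoff]
    return {name: "".join(seq[3 * i:3 * i + 3] for i in keep)
            for name, seq in aligned_seqs.items()}

toy = {"seq1": "ATGAAATTT", "seq2": "ATGAGATTC"}
print(mask_unreliable_codons(toy, codon_scores=[0.99, 0.60, 0.95]))
# -> the middle codon column is dropped before running the selection test
```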