Abstract:
All holes drilled during Leg 114 contained ice-rafted debris. Analysis of samples from Hole 699A, Site 701, and Hole 704A yielded a nearly complete history of ice-rafting episodes. The first influx of ice-rafted debris at Site 699, on the northeastern slope of the Northeast Georgia Rise, occurred at a depth of 69.94 m below seafloor (mbsf) in sediments of early Miocene age (23.54 Ma). This material is of the same type as later ice-rafted debris, but represents only a small percentage of the coarse fraction. Significant ice-rafting episodes occurred during Chron 5. Minor amounts of ice-rafted debris first reached Site 701, on the western flank of the Mid-Atlantic Ridge (8.78 Ma at 200.92 mbsf), and more arrived in the late Miocene (5.88 Ma). The first significant quantity of sand and gravel appeared at a depth of 107.76 mbsf (4.42 Ma). Site 704, on the southern part of the Meteor Rise, received very little or no ice-rafted debris prior to 2.46 Ma. At this time, however, the greatest influx of ice-rafted debris occurred at this site. This time of maximum ice rafting correlates reasonably well with influxes of ice-rafted debris at Sites 701 (2.24 Ma) and 699 (2.38 Ma), in consideration of sample spacing at these two sites. These peaks of ice rafting may be Sirius till equivalents, if the proposed Pliocene age of Sirius tills can be confirmed. After about 1.67 Ma, the apparent mass-accumulation rate of the sediments at Site 704 declined, but with major fluctuations. This decline may be the result of a decrease in the rate of delivery of detritus from Antarctica due to reduced erosive power of the glaciers or a northward shift in the Polar Front Zone, a change in the path taken by the icebergs, or any combination of these factors.
Abstract:
The ecological theory of adaptive radiation predicts that the evolution of phenotypic diversity within species is generated by divergent natural selection arising from different environments and competition between species. Genetic connectivity among populations is likely also to have an important role in both the origin and maintenance of adaptive genetic diversity. Our goal was to evaluate the potential roles of genetic connectivity and natural selection in the maintenance of adaptive phenotypic differences among morphs of Arctic charr, Salvelinus alpinus, in Iceland. At a large spatial scale, we tested the predictive power of geographic structure and phenotypic variation for patterns of neutral genetic variation among populations throughout Iceland. At a smaller scale, we evaluated the genetic differentiation between two morphs in Lake Thingvallavatn relative to historically explicit, coalescent-based null models of the evolutionary history of these lineages. At the large spatial scale, populations are highly differentiated, but weakly structured, both geographically and with respect to patterns of phenotypic variation. At the intralacustrine scale, we observe modest genetic differentiation between two morphs, but this level of differentiation is nonetheless consistent with strong reproductive isolation throughout the Holocene. Rather than a result of the homogenizing effect of gene flow in a system at migration-drift equilibrium, the modest level of genetic differentiation could equally be a result of slow neutral divergence by drift in large populations. We conclude that contemporary and recent patterns of restricted gene flow have been highly conducive to the evolution and maintenance of adaptive genetic variation in Icelandic Arctic charr.
Abstract:
Detailed information about sediment properties and microstructure can be provided through the analysis of digital ultrasonic P wave seismograms recorded automatically during full waveform core logging. The physical parameter which predominantly affects elastic wave propagation in water-saturated sediments is the P wave attenuation coefficient. The related sedimentological parameter is the grain size distribution. A set of high-resolution ultrasonic transmission seismograms (~50-500 kHz), which indicate downcore variations in grain size by their signal shape and frequency content, is presented. Layers of coarse-grained foraminiferal ooze can be identified by highly attenuated P waves, whereas almost unattenuated waves are recorded in fine-grained areas of nannofossil ooze. Color-encoded pixel graphics of the seismograms and instantaneous frequencies present full waveform images of the lithology and attenuation. A modified spectral difference method is introduced to determine the attenuation coefficient and its power law α = k·f^n. Applied to synthetic seismograms derived using a "constant Q" model, even low attenuation coefficients can be quantified. A downcore analysis gives an attenuation log which ranges from ~700 dB/m at 400 kHz and a power of n = 1-2 in coarse-grained sands to a few decibels per meter and n ≤ 0.5 in fine-grained clays. A least squares fit of a second-degree polynomial describes the mutual relationship between the mean grain size and the attenuation coefficient. When it is used to predict the mean grain size, an almost perfect coincidence with the values derived from sedimentological measurements is achieved.
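To make the power-law fit concrete, here is a minimal Python sketch assuming hypothetical frequency/attenuation pairs rather than the paper's actual spectral-difference output; all variable names and values are illustrative only.

```python
import numpy as np

# Hypothetical attenuation estimates alpha(f) in dB/m at a set of frequencies (kHz),
# e.g. as could be obtained from spectral ratios of windowed P-wave transmission seismograms.
freq_khz = np.array([100.0, 150.0, 200.0, 250.0, 300.0, 350.0, 400.0])
alpha_db_m = np.array([90.0, 160.0, 230.0, 310.0, 400.0, 480.0, 570.0])

# Fit the power law alpha = k * f**n by linear least squares in log-log space:
# log(alpha) = log(k) + n * log(f)
n, log_k = np.polyfit(np.log(freq_khz), np.log(alpha_db_m), 1)
k = np.exp(log_k)
print(f"power-law exponent n = {n:.2f}, coefficient k = {k:.3g}")

# Second-degree polynomial relating mean grain size (phi units) to attenuation at 400 kHz,
# fitted to hypothetical calibration pairs, then used to predict grain size from attenuation.
grain_size_phi = np.array([2.0, 3.5, 5.0, 6.5, 8.0])
alpha_400 = np.array([650.0, 420.0, 180.0, 60.0, 10.0])
coeffs = np.polyfit(alpha_400, grain_size_phi, 2)
predicted_phi = np.polyval(coeffs, 300.0)
print(f"predicted mean grain size at alpha = 300 dB/m: {predicted_phi:.2f} phi")
```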
Abstract:
BACKGROUND Zebrafish is a clinically-relevant model of heart regeneration. Unlike mammals, it has a remarkable heart repair capacity after injury, and promises novel translational applications. Amputation and cryoinjury models are key research tools for understanding injury response and regeneration in vivo. An understanding of the transcriptional responses following injury is needed to identify key players of heart tissue repair, as well as potential targets for boosting this property in humans. RESULTS We investigated amputation and cryoinjury in vivo models of heart damage in the zebrafish through unbiased, integrative analyses of independent molecular datasets. To detect genes with potential biological roles, we derived computational prediction models with microarray data from heart amputation experiments. We focused on a top-ranked set of genes highly activated in the early post-injury stage, whose activity was further verified in independent microarray datasets. Next, we performed independent validations of expression responses with qPCR in a cryoinjury model. Across in vivo models, the top candidates showed highly concordant responses at 1 and 3 days post-injury, which highlights the predictive power of our analysis strategies and the possible biological relevance of these genes. Top candidates are significantly involved in cell fate specification and differentiation, and include heart failure markers such as periostin, as well as potential new targets for heart regeneration. For example, ptgis and ca2 were overexpressed, while usp2a, a regulator of the p53 pathway, was down-regulated in our in vivo models. Interestingly, a high activity of ptgis and ca2 has been previously observed in failing hearts from rats and humans. CONCLUSIONS We identified genes with potential critical roles in the response to cardiac damage in the zebrafish. Their transcriptional activities are reproducible in different in vivo models of cardiac injury.
Abstract:
Before 1982, Mexico's welfare state regime was a limited conservative one that put priority on the social security of organized labor. But following the country's debt crisis in 1982, this regime changed to a hybrid liberal model. The Ernesto Zedillo government (1995-2000) in particular pushed ahead with liberal reform of the social security system. This paper examines the characteristics and the policy making of the social security reforms in the 1990s. The results suggest that underlying these reforms were the restructuring of the economy and the need to cope with the cost of this restructuring. The paper also points out that among the main factors making the rapid execution of the reforms possible were the weakened political clout of the officialist labor unions, owing to their steady decline during the 1990s, and the increase in the monopolistic power of the state vis-à-vis labor during the negotiations on social security reform.
Abstract:
Executive-legislative relations in the Philippines have been described in two contrasting stories: the "strong president" story and the "strong congress" story. This paper consolidates the existing arguments and proposes a new perspective focusing on the "compromise exchange" between the president and the congress across different policy areas. It argues that policy outcomes are not brought about by the unilateral power of either the president or the congress, but are formed as the product of such an exchange. The interaction of powers and their complementary functions are addressed. Furthermore, aside from constitutional powers, weak party discipline is identified as a key factor in making this exchange possible.
Abstract:
This paper explores intra-state disparity in access to electricity and examines the determinants of electrification at the village level in Bihar, one of the underdeveloped states in India. Our field survey of 80 villages in 5 districts, conducted in 2008-09, found that 48 villages (60%) are electrified when using the definition that a village is electrified if at least one household in the village is connected to electricity. The degree of electrification, in terms of the proportion of households connected and the hours of electricity available, remains by and large low, and at the same time differs across districts, villages and seasons. Approximately 40% of the villages have been electrified in recent years. Based on the basic findings of the survey, this paper examines the electrification process and how it has changed in recent years. The econometric analyses demonstrate that location is the most important determinant of a village's electricity connection. Another important finding is that, with the rapid progress of rural electrification under the recent government programme and the tendency to connect the villages which are easily accessible, the collective bargaining power of the village, which used to significantly affect the process of electrification, has lost influence. This adversely affects remote villages. In order to extend electricity supplies to remote and geographically disadvantaged villages, the government needs to seriously consider other options for sustainable electricity supply, such as decentralized distribution of electricity rather than conventional connection through the national/local grids.
Abstract:
All meta-analyses should include a heterogeneity analysis. Even so, it is not easy to decide whether a set of studies is homogeneous or heterogeneous because of the low statistical power of the statistics used (usually the Q test). Objective: Determine a set of rules enabling software engineering (SE) researchers to find out, based on the characteristics of the experiments to be aggregated, whether or not it is feasible to accurately detect heterogeneity. Method: Evaluate the statistical power of heterogeneity detection methods using a Monte Carlo simulation process. Results: The Q test is not powerful when the meta-analysis contains up to a total of about 200 experimental subjects and the effect size difference is less than 1. Conclusions: The Q test cannot be used as a decision-making criterion for meta-analysis in small-sample settings like SE. Random-effects models should be used instead of fixed-effects models. Caution should be exercised when applying Q test-mediated decomposition into subgroups.
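As an illustration of the kind of power evaluation described above, here is a minimal Monte Carlo sketch using Cochran's Q statistic on simulated two-group experiments. The number of studies, sample sizes, effect sizes and replication count are illustrative assumptions, not the paper's actual simulation settings.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

def q_test_heterogeneity(effects, variances):
    """Cochran's Q statistic and its p-value under the chi-square(k-1) null."""
    w = 1.0 / variances
    pooled = np.sum(w * effects) / np.sum(w)
    q = np.sum(w * (effects - pooled) ** 2)
    p = stats.chi2.sf(q, df=len(effects) - 1)
    return q, p

def simulate_power(k=5, n_per_group=20, deltas=(0.0, 1.0), reps=2000, alpha=0.05):
    """Fraction of heterogeneous meta-analyses (studies alternating between the true
    standardized effects in `deltas`) in which the Q test rejects homogeneity."""
    rejections = 0
    for _ in range(reps):
        effects, variances = [], []
        for i in range(k):
            delta = deltas[i % len(deltas)]
            a = rng.normal(0.0, 1.0, n_per_group)
            b = rng.normal(delta, 1.0, n_per_group)
            d = (b.mean() - a.mean()) / np.sqrt((a.var(ddof=1) + b.var(ddof=1)) / 2)
            v = 2 / n_per_group + d ** 2 / (4 * n_per_group)  # approximate variance of Cohen's d
            effects.append(d)
            variances.append(v)
        _, p = q_test_heterogeneity(np.array(effects), np.array(variances))
        rejections += p < alpha
    return rejections / reps

print("estimated power of the Q test:", simulate_power())
```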
Abstract:
Background: Several meta-analysis methods can be used to quantitatively combine the results of a group of experiments, including the weighted mean difference (WMD), statistical vote counting (SVC), the parametric response ratio (RR) and the non-parametric response ratio (NPRR). The software engineering community has focused on the weighted mean difference method. However, other meta-analysis methods have distinct strengths, such as being usable when variances are not reported. There are as yet no guidelines to indicate which method is best suited to each case. Aim: Compile a set of rules that SE researchers can use to ascertain which aggregation method is best for use in the synthesis phase of a systematic review. Method: Monte Carlo simulation varying the number of experiments in the meta-analyses, the number of subjects that they include, their variance and effect size. We empirically calculated the reliability and statistical power in each case. Results: WMD is generally reliable if the variance is low, whereas its power depends on the effect size and number of subjects per meta-analysis; the reliability of RR is generally unaffected by changes in variance, but it does require more subjects than WMD to be powerful; NPRR is the most reliable method, but it is not very powerful; SVC behaves well when the effect size is moderate, but is less reliable with other effect sizes. Detailed tables of results are annexed. Conclusions: Before undertaking statistical aggregation in software engineering, it is worthwhile checking whether there is any appreciable difference in the reliability and power of the methods. If there is, software engineers should select the method that optimizes both parameters.
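For orientation, the sketch below computes two of the aggregation methods named above, the weighted mean difference and the parametric response ratio, from hypothetical per-experiment summary statistics. The formulas follow standard inverse-variance weighting and are not taken from the paper itself.

```python
import math

# Hypothetical per-experiment summaries: (mean_treatment, mean_control, pooled_sd, n_per_group)
experiments = [
    (12.4, 10.1, 3.2, 15),
    (11.8, 10.5, 2.9, 20),
    (13.0, 10.0, 3.5, 12),
]

def weighted_mean_difference(studies):
    """Inverse-variance weighted mean of the raw mean differences (WMD)."""
    num = den = 0.0
    for mt, mc, sd, n in studies:
        diff = mt - mc
        var = sd ** 2 * (2.0 / n)   # variance of the difference of two group means
        w = 1.0 / var
        num += w * diff
        den += w
    return num / den

def parametric_response_ratio(studies):
    """Inverse-variance weighted mean of log response ratios, back-transformed."""
    num = den = 0.0
    for mt, mc, sd, n in studies:
        lnrr = math.log(mt / mc)
        var = sd ** 2 / (n * mt ** 2) + sd ** 2 / (n * mc ** 2)
        w = 1.0 / var
        num += w * lnrr
        den += w
    return math.exp(num / den)

print("WMD:", round(weighted_mean_difference(experiments), 3))
print("Response ratio:", round(parametric_response_ratio(experiments), 3))
```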
Abstract:
The efficiency of power optimization tools depends on the information about design power provided by power estimation models. Power models targeting different power groups can enable fast identification of the most power-consuming parts of a design and their properties. The accuracy of these estimation models is highly dependent on the accuracy of the method used for their characterization. The highest precision is achieved by using physical on-board measurements. In this paper, we present a measurement methodology that is primarily aimed at calibrating and validating high-level dynamic power estimation models. The measurements have been carefully designed to enable the separation of the interconnect power from the logic power and the power of the clock circuitry, so that each of these power groups can be used for the corresponding model validation. The standard measurement uncertainty is lower than 2% of the measured value even with a very small number of repeated measurements. Additionally, the accuracy of a commercial low-level power estimation tool has also been assessed for comparison purposes. The results indicate that the tool is not suitable for power estimation of data path-oriented designs.
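As a simple illustration of the uncertainty figure quoted above, the following sketch computes the standard uncertainty of the mean for a small set of repeated power readings. The readings, their unit, and the power group they represent are hypothetical.

```python
import statistics

# Hypothetical repeated readings of dynamic power (mW) for one power group,
# e.g. an interconnect contribution isolated by differential measurements.
readings = [124.6, 125.1, 124.9, 125.4, 124.7]

mean_power = statistics.mean(readings)
# Standard uncertainty of the mean: sample standard deviation / sqrt(number of repeats)
std_uncertainty = statistics.stdev(readings) / len(readings) ** 0.5
relative_uncertainty = 100.0 * std_uncertainty / mean_power

print(f"mean power: {mean_power:.2f} mW")
print(f"standard uncertainty: {std_uncertainty:.3f} mW ({relative_uncertainty:.2f}% of the measured value)")
```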
Abstract:
The main objective of this paper is to review the state of the art of residential PV systems in Belgium by analyzing the operational data of 993 installations. For that, three main questions are posed: How much energy do they produce? What level of performance is associated with their production? Which are the key parameters that most influence their quality? This work brings answers to these questions. An average commercial PV system, optimally oriented, produces a mean annual energy of 892 kWh/kWp. As a whole, the orientation of PV generators causes energy production to be some 6% inferior to that of optimally oriented PV systems. The mean performance ratio is 78% and the mean performance index is 85%. That is to say, the energy produced by a typical PV system in Belgium is 15% inferior to the energy produced by a very high quality PV system. Finally, on average, the real power of the PV modules falls 5% below the nominal power announced on the manufacturer's datasheet. Differences between real and nominal power of up to 16% have been detected.
Abstract:
The main objective of this paper is to review the state of the art of residential PV systems in France. This is done by analyzing the operational data of 6868 installations. Three main questions are posed: How much energy do they produce? What level of performance is associated with their production? Which are the key parameters that most influence their quality? During the year 2010, the PV systems in France produced a mean annual energy of 1163 kWh/kWp. As a whole, the orientation of PV generators causes energy production to be some 7% inferior to that of optimally oriented PV systems. The mean Performance Ratio is 76% and the mean Performance Index is 85%. That is to say, the energy produced by a typical PV system in France is 15% inferior to the energy produced by a very high quality PV system. On average, the real power of the PV modules falls 4.9% below the nominal power announced on the manufacturer's datasheet. A brief analysis by PV module technology has led to relevant observations about two technologies in particular. On the one hand, the PV systems equipped with heterojunction with intrinsic thin layer (HIT) modules show performances higher than average. On the other hand, the systems equipped with copper indium (di)selenide (CIS) modules show a real power that is 16% lower than their nominal value.
Abstract:
The main objective of this paper is to review the state of the art of residential PV systems in France and Belgium. This is done by analyzing the operational data of 10650 PV systems (9657 located in France and 993 in Belgium). Three main questions are posed: How much energy do they produce? What level of performance is associated with their production? Which are the key parameters that most influence their quality? During the year 2010, the PV systems produced a mean annual energy of 1163 kWh/kWp in France and 852 kWh/kWp in Belgium. As a whole, the orientation of PV generators causes energy production to be some 7% inferior to that of optimally oriented PV systems. The mean Performance Ratio is 76% in France and 78% in Belgium, and the mean Performance Index is 85% in both countries. On average, the real power of the PV modules falls 4.9% below the nominal power announced on the manufacturer's datasheet. A brief analysis by PV module technology has led to relevant observations about two technologies in particular. On the one hand, the PV systems equipped with heterojunction with intrinsic thin layer (HIT) modules show performances higher than average. On the other hand, the systems equipped with copper indium (di)selenide (CIS) modules show a real power that is 16% lower than their nominal value.
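For reference, the specific yield and Performance Ratio quoted in these PV abstracts follow from standard definitions. Below is a minimal sketch with hypothetical annual totals, chosen only so that the result lands near the French averages cited above; the irradiation value in particular is an assumption.

```python
# Standard PV performance figures, illustrated with hypothetical annual totals.
E_ac_kwh = 3490.0        # AC energy delivered over the year (kWh)
P_nominal_kwp = 3.0      # nameplate DC power of the array (kWp)
H_poa_kwh_m2 = 1530.0    # in-plane irradiation over the year (kWh/m^2), assumed value
G_STC_kw_m2 = 1.0        # reference irradiance at Standard Test Conditions (kW/m^2)

specific_yield = E_ac_kwh / P_nominal_kwp             # final yield Y_f in kWh/kWp
reference_yield = H_poa_kwh_m2 / G_STC_kw_m2          # reference yield Y_r in hours
performance_ratio = specific_yield / reference_yield  # PR = Y_f / Y_r

print(f"specific yield: {specific_yield:.0f} kWh/kWp")
print(f"performance ratio: {100 * performance_ratio:.1f}%")
```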
Abstract:
Energy management has always been recognized as a challenge in mobile systems, especially in modern OS-based mobile systems where multitasking is widely supported. Nowadays, it is common for a mobile system user to run multiple applications simultaneously while having a target battery lifetime in mind for a specific application. Traditional OS-level power management (PM) policies make their best effort to save energy under performance constraints, but fail to guarantee a target lifetime, leaving the painful trade-off between the overall performance of the applications and the target lifetime to the user. This thesis provides a new way to deal with the problem. It advocates that a strong energy-aware PM scheme should first guarantee a user-specified battery lifetime for a target application by restricting the average power of the less important applications, and, in addition, maximize the total performance of the applications without harming the lifetime guarantee. To support this, energy, rather than CPU time or transmission bandwidth, should be managed globally by the OS as the first-class resource. As the first stage of a complete PM scheme, this thesis presents energy-based fair queuing scheduling, a novel class of energy-aware scheduling algorithms which, in combination with a mechanism restricting the battery discharge rate, systematically manage energy as the first-class resource with the objective of guaranteeing a user-specified battery lifetime for a target application in OS-based mobile systems. Energy-based fair queuing carries traditional fair queuing over to the energy management domain. It assigns a power share to each task and serves energy to tasks in proportion to their assigned power shares. This proportional energy use establishes a proportional share of the system power among tasks, which guarantees a minimum power for each task and thus avoids energy starvation of any task. Energy-based fair queuing treats all tasks equally as one type and supports periodic time-sensitive tasks by allocating each of them a share of system power adequate to meet its highest energy demand over all periods. However, an overly conservative power share is usually required to guarantee that all time constraints are met. To provide more effective and flexible support for various types of time-sensitive tasks in general-purpose operating systems, an extra real-time-friendly mechanism is introduced to combine priority-based scheduling with energy-based fair queuing. Since a method is available to control the maximum time a time-sensitive task can run with priority, power control and the meeting of time constraints can be flexibly traded off. A SystemC-based test bench is designed to assess the algorithms. Simulation results show the success of energy-based fair queuing in achieving proportional energy use, meeting time constraints, and striking a proper trade-off between the two.
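As a conceptual illustration only (not the thesis implementation), the sketch below shows how an energy-based fair queuing scheduler might serve energy quanta in proportion to per-task power shares using virtual finish times, analogous to weighted fair queuing. Task names, power shares, and quantum sizes are assumptions.

```python
import heapq
from dataclasses import dataclass, field

@dataclass(order=True)
class Task:
    virtual_finish: float                        # virtual time when the task's next energy quantum finishes
    name: str = field(compare=False)
    power_share: float = field(compare=False)    # weight: fraction of system power granted to the task
    energy_used: float = field(compare=False, default=0.0)

def energy_fair_schedule(tasks, quantum_mj=10.0, total_budget_mj=200.0):
    """Serve energy quanta to tasks in order of smallest virtual finish time,
    so that cumulative energy use stays roughly proportional to each power share."""
    heap = []
    for t in tasks:
        t.virtual_finish = quantum_mj / t.power_share
        heapq.heappush(heap, t)
    spent = 0.0
    while spent < total_budget_mj:
        task = heapq.heappop(heap)
        task.energy_used += quantum_mj                        # charge one energy quantum to this task
        spent += quantum_mj
        task.virtual_finish += quantum_mj / task.power_share  # advance its virtual finish time
        heapq.heappush(heap, task)
    return tasks

tasks = [Task(0.0, "video", 0.5), Task(0.0, "sync", 0.3), Task(0.0, "logger", 0.2)]
for t in energy_fair_schedule(tasks):
    print(t.name, t.energy_used, "mJ")
```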
Abstract:
Abstract interpretation-based data-flow analysis of logic programs is, at this point, relatively well understood from the point of view of general frameworks and abstract domains. On the other hand, comparatively little attention has been given to the problems which arise when analysis of a full, practical dialect of the Prolog language is attempted, and only a few solutions to these problems have been proposed to date. Existing proposals generally restrict, in one way or another, the classes of programs that can be analyzed. This paper attempts to fill this gap by considering a full dialect of Prolog, essentially the recent ISO standard, pointing out the problems that may arise in the analysis of such a dialect, and proposing a combination of known and novel solutions that together allow the correct analysis of arbitrary programs which use the full power of the language.