928 results for Uniform coverage
Abstract:
The furious pace of Moore's Law is driving computer architecture into a realm where the speed of light is the dominant factor in system latencies. The number of clock cycles required to span a chip is increasing, while the number of bits that can be accessed within a clock cycle is decreasing. Hence, it is becoming more difficult to hide latency. One alternative is to reduce latency by migrating threads and data, but the overhead of existing implementations has so far made migration an impractical solution. I present an architecture, implementation, and mechanisms that reduce the overhead of migration to the point where migration is a viable supplement to other latency-hiding mechanisms, such as multithreading. The architecture is abstract, and presents programmers with a simple, uniform, fine-grained multithreaded parallel programming model with implicit memory management. In other words, the spatial nature and implementation details (such as the number of processors) of a parallel machine are entirely hidden from the programmer. Compiler writers are encouraged to devise programming languages for the machine that guide programmers to express their ideas in terms of objects, since objects exhibit an inherent physical locality of data and code. The machine implementation can then leverage this locality to automatically distribute data and threads across the physical machine using a set of high-performance migration mechanisms. An implementation of this architecture could migrate a null thread in 66 cycles -- over a factor of 1000 improvement over previous work. Performance also scales well; the time required to move a typical thread is only 4 to 5 times that of a null thread. Data migration performance is similar, and scales linearly with data block size. Since the performance of the migration mechanism is on par with that of an L2 cache, the implementation simulated in my work has no data caches and relies instead on multithreading and the migration mechanism to hide and reduce access latencies.
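As a rough illustration of the trade-off this abstract describes, the sketch below compares staying put and paying repeated remote-access latency against migrating the thread once and paying local latency afterwards. Only the 66-cycle null-thread cost and the 4-to-5x typical-thread factor come from the abstract; the remote and local access latencies are invented for illustration.

    # Back-of-envelope migrate-vs-stay model (Python); latencies below are assumptions.
    NULL_THREAD_MIGRATION = 66                              # cycles, from the abstract
    TYPICAL_THREAD_MIGRATION = 5 * NULL_THREAD_MIGRATION    # "4 to 5 times" a null thread

    REMOTE_ACCESS_CYCLES = 100   # assumed latency of one remote memory access
    LOCAL_ACCESS_CYCLES = 10     # assumed latency after migrating next to the data

    def best_strategy(n_accesses):
        stay = n_accesses * REMOTE_ACCESS_CYCLES
        migrate = TYPICAL_THREAD_MIGRATION + n_accesses * LOCAL_ACCESS_CYCLES
        return "migrate" if migrate < stay else "stay"

    for n in (1, 3, 5, 10):
        print(n, best_strategy(n))

Under these assumed latencies the break-even point is about four accesses, which is the sense in which a roughly 300-cycle migration can supplement multithreading rather than replace it.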
Abstract:
The study examines how we have moved from the traditional work of the independent artisan to autonomous work integrated within networks of specialized businesses. This shift is due not only to the way labor is organized today, to government incentives, and to the actions of multilateral organizations, but also to unemployment. With the purpose of humanizing independent work and rationalizing business costs, an intermediate category of autonomous worker has been created: the semi-dependent, who moves between legal freedom and economic dependence. The administration, for its part, focuses on broadening social coverage, not always achieved for bureaucratic reasons, which is connected to the low density of contributions from autonomous workers. The challenge put forward is that of pension coverage for independent workers, which is possible whenever citizens participate in resolving the social inequality that results from the lack of job opportunities, low purchasing power, and low educational level.
Abstract:
The purpose of this article is to analyze the coverage by CNN and Al Jazeera (in Arabic) of Operation Cast Lead and the Goldstone Report during 2008 and 2009. The investigation is based on the theory of qualitative content analysis of Wildemuth and Zhang. The methodology follows the one proposed by those authors, complemented with Gamson and Modigliani's framing theory. The thesis of the article is that this methodology reveals differences in how the coverage developed, determined by geopolitical influences: CNN is more shaped by a Western, pro-US and pro-Israeli discourse, while Al Jazeera is more prone to support the Palestinian cause. Over the course of the investigation, the thesis proved to be only partially accurate: CNN was not completely supportive of the Israeli arguments in its coverage, but Al Jazeera did show a preferential discourse for the Palestinian cause.
Abstract:
The response of a uniform horizontal temperature gradient to prescribed fixed heating is calculated in the context of an extended version of surface quasigeostrophic dynamics. It is found that for zero mean surface flow and weak cross-gradient structure the prescribed heating induces a mean temperature anomaly proportional to the spatial Hilbert transform of the heating. The interior potential vorticity generated by the heating enhances this surface response. The time-varying part is independent of the heating and satisfies the usual linearized surface quasigeostrophic dynamics. It is shown that the surface temperature tendency is a spatial Hilbert transform of the temperature anomaly itself. It then follows that the temperature anomaly is periodically modulated with a frequency proportional to the vertical wind shear. A strong local bound on wave energy is also found. Reanalysis diagnostics are presented that indicate consistency with key findings from this theory.
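The closing chain of reasoning can be made concrete with one line of algebra. The abstract states that the surface temperature tendency is a spatial Hilbert transform of the anomaly; writing that as an equation (the constant c and the symbol \Lambda for the vertical wind shear are my notation, a sketch rather than the paper's formulation):

    \[ \partial_t \theta = c\,\Lambda\,\mathcal{H}[\theta] \]

and using the Hilbert-transform identity \mathcal{H}[\mathcal{H}[\theta]] = -\theta gives

    \[ \partial_{tt} \theta = c^2 \Lambda^2\, \mathcal{H}[\mathcal{H}[\theta]] = -\,c^2 \Lambda^2\, \theta, \]

so the anomaly oscillates harmonically with frequency \omega = c\,\Lambda, i.e. proportional to the vertical wind shear, as claimed.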
Abstract:
There is great interest in using amplified fragment length polymorphism (AFLP) markers because they are inexpensive and easy to produce. It is, therefore, possible to generate a large number of markers that have a wide coverage of species genomes. Several statistical methods have been proposed to study genetic structure using AFLPs, but they assume Hardy-Weinberg equilibrium and do not estimate the inbreeding coefficient, F_IS. A Bayesian method proposed by Holsinger and colleagues relaxes these simplifying assumptions, but we have identified two sources of bias that can influence estimates based on these markers: (i) the use of a uniform prior on ancestral allele frequencies and (ii) the ascertainment bias of AFLP markers. We present a new Bayesian method that avoids these biases by using an implementation based on the approximate Bayesian computation (ABC) algorithm. This new method estimates population-specific F_IS and F_ST values and offers users the possibility of taking into account the criteria used to select the markers included in the analyses. The software is available at our web site (http://www-leca.ujf-grenoble.fr/logiciels.htm). Finally, we provide advice on how to avoid the effects of ascertainment bias.
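The ABC ingredient the method relies on can be sketched generically. The rejection loop below is a minimal, textbook version and not the authors' implementation; the prior, the simulator, and the tolerance are placeholder assumptions.

    import random

    def abc_rejection(observed_stat, prior_draw, simulate, distance, eps, n_draws=100_000):
        """Keep parameter draws whose simulated summary statistic lands
        within eps of the observed one; the survivors form an
        approximate posterior sample."""
        accepted = []
        for _ in range(n_draws):
            theta = prior_draw()          # e.g. an (F_IS, F_ST) pair from a non-uniform prior
            if distance(simulate(theta), observed_stat) <= eps:
                accepted.append(theta)
        return accepted

    # Illustrative call with made-up pieces (all assumptions):
    posterior = abc_rejection(
        observed_stat=0.42,
        prior_draw=lambda: (random.random(), random.random()),
        simulate=lambda th: random.betavariate(1 + 5 * th[0], 1 + 5 * th[1]),
        distance=lambda a, b: abs(a - b),
        eps=0.01,
    )

The authors' contribution lies in the pieces this sketch leaves abstract: a prior that avoids the uniform-ancestral-frequency bias and a simulator that reproduces the marker-selection (ascertainment) step.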
Abstract:
Stable isotope labeling combined with MS is a powerful method for measuring relative protein abundances, for instance, by differential metabolic labeling of some or all amino acids with 14N and 15N in cell culture or hydroponic media. These and most other types of quantitative proteomics experiments using high-throughput technologies, such as LC-MS/MS, generate large amounts of raw MS data. This data needs to be processed efficiently and automatically, from the mass spectrometer to statistically evaluated protein identifications and abundance ratios. This paper describes in detail an approach to the automated analysis of uniformly 14N/15N-labeled proteins using MASCOT peptide identification in conjunction with the trans-proteomic pipeline (TPP) and a few scripts to integrate the analysis workflow. Two large proteomic datasets from uniformly labeled Arabidopsis thaliana were used to illustrate the analysis pipeline. The pipeline can be fully automated and uses only common or freely available software.
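The quantification step at the end of such a pipeline reduces to pairing light (14N) and heavy (15N) signals and summarizing per-peptide ratios per protein. The sketch below shows that arithmetic only; the identifiers and intensities are invented, and this is not the TPP's actual code path.

    from collections import defaultdict
    from statistics import median

    # (protein, peptide) -> (14N intensity, 15N intensity); invented example values
    measurements = {
        ("AT1G01090", "PEPTIDEK"):   (1.2e6, 6.0e5),
        ("AT1G01090", "ANOTHERK"):   (8.0e5, 4.1e5),
        ("AT3G52930", "SAMPLEPEPR"): (3.0e5, 3.2e5),
    }

    ratios = defaultdict(list)
    for (protein, _peptide), (light, heavy) in measurements.items():
        if heavy > 0:                          # skip missing heavy channels
            ratios[protein].append(light / heavy)

    # One robust abundance ratio per protein: the median over its peptides.
    protein_ratio = {p: median(r) for p, r in ratios.items()}
    print(protein_ratio)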
Abstract:
Two polymeric azido-bridged complexes, [Ni2L2(N3)3]n(ClO4)n (1) and [Cu(bpds)2(N3)]n(ClO4)n(H2O)2.5n (2) [L = Schiff base obtained from the condensation of pyridine-2-aldehyde with N,N,2,2-tetramethyl-1,3-propanediamine; bpds = 4,4'-bipyridyl disulfide], have been synthesized and their crystal structures determined. Complex 1, C26H42ClN15Ni2O4, crystallizes in the triclinic system, space group P1, with a = 8.089(13), b = 9.392(14), c = 12.267(18) Å, α = 107.28(1)°, β = 95.95(1)°, γ = 96.92(1)° and Z = 2; complex 2, C20H21ClCuN7O6.5S4, crystallizes in the orthorhombic system, space group Pnna, with a = 10.839(14), b = 13.208(17), c = 19.75(2) Å and Z = 4. The crystal structure of 1 consists of 1D polymers of Ni(L) units, alternately connected by single and double bridging μ-(1,3-N3) ligands, with isolated perchlorate anions. Variable-temperature magnetic susceptibility data of the complex have been measured, and the fitting of the magnetic data was carried out applying the Borrás-Almenar formula for such alternating one-dimensional S = 1 systems, based on the Hamiltonian H = -J Σ (S_2i·S_2i-1 + α S_2i·S_2i+1). The best-fit parameters obtained are J = -106.7 ± 2 cm^-1; α = 0.82 ± 0.02; g = 2.21 ± 0.02. Complex 2 is a 2D network of (4,4) topology with the nodes occupied by the Cu(II) ions and the edges formed by single azide and double bpds connectors. The perchlorate anions are located between pairs of bpds ligands. The magnetic data have been fitted considering the complex as a pseudo-one-dimensional system, with all copper(II) atoms linked by μ-(1,3-azido) bridging ligands at axial positions (long Cu...N3 distances), since the coupling through the long bpds is almost nil. The best-fit parameters obtained with this model are J = -1.21 ± 0.2 cm^-1, g = 2.14 ± 0.02. © Wiley-VCH Verlag GmbH & Co. KGaA, 69451 Weinheim, Germany, 2005.
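Fits like the ones reported (J, α, g against variable-temperature susceptibility data) are typically done by nonlinear least squares. The sketch below shows only the fitting mechanics: the model function is a deliberately crude Curie-Weiss-style stand-in, not the actual Borrás-Almenar expression for alternating S = 1 chains, and the "data" are synthesized from the paper's best-fit values.

    import numpy as np
    from scipy.optimize import curve_fit

    def chi_model(T, J, alpha, g):
        """Placeholder susceptibility model (assumption), standing in for
        the Borras-Almenar expression: a two-term Curie-Weiss-like form
        chosen only so that J, alpha, and g are separately identifiable."""
        C = g**2 * 1 * (1 + 1) / 8.0               # Curie constant g^2 S(S+1)/8 with S = 1
        return C / (T - J / 2.0) + C * alpha / (T - J)

    T = np.linspace(20, 300, 60)                   # temperature grid in K (invented)
    rng = np.random.default_rng(0)
    chi_obs = chi_model(T, -106.7, 0.82, 2.21) + rng.normal(0, 1e-4, T.size)

    popt, pcov = curve_fit(chi_model, T, chi_obs, p0=(-100.0, 0.8, 2.2))
    print("J, alpha, g =", popt)

With the real Borrás-Almenar polynomial in place of chi_model, the same curve_fit call is the operation that yields the quoted J, α, and g, with uncertainties available from the covariance matrix it also returns.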