993 results for Polynomial time hierarchy
Abstract:
Transcription by RNA polymerase I in Saccharomyces cerevisiae requires a series of transcription factors that have been genetically and biochemically identified. In particular, the core factor (CF) and the upstream activation factor (UAF) have been shown in vitro to bind the core element and the upstream promoter element, respectively. We have analyzed in vivo the DNase I footprinting of the 35S promoter in wild-type and mutant strains lacking one specific transcription factor at a time. In this way we were able to unambiguously attribute the protections by the CF and the UAF to their respective putative binding sites. In addition, we have found that a binding hierarchy exists in vivo, the UAF being necessary for CF binding. Because the CF footprint is lost in mutants lacking a functional RNA polymerase I, we also conclude that the final step of preinitiation-complex assembly affects binding of the CF, stabilizing its contact with DNA. Thus, in vivo, the CF is recruited to the core element by the UAF and stabilized on DNA by the presence of a functional RNA polymerase I.
Abstract:
The article analyzes the images of time that emerged from a qualitative study conducted in Spain on the relationship between working time and family/personal time. The analysis focuses on three widespread time metaphors used in the everyday speech of social agents. The first is the metaphor of time as a resource for action. Its value is at once economic, moral and political. Used in different contexts of action, it may denote something that can be invested, donated generously to others, appropriated for caring for oneself, or spent aimlessly with others. The second metaphor represents time as an external environment to which action must adapt. This metaphor has many variants that depict time as a dynamic/static, repetitive/innovative, ordered/chaotic environment. In this external environment, agents must resolve the problems of temporal embeddedness, hierarchy and synchronization of their actions. The third metaphor presents time as a horizon of intentionality for action, in which agents try to construct the meaning of their action and identity. Within this horizon it becomes possible to construct a meaningful narrative connecting past and present experiences with future expectations.
Abstract:
Efficient hardware implementations of arithmetic operations in the Galois field are highly desirable for several applications, such as coding theory, computer algebra and cryptography. Among these operations, multiplication is of special interest because it is considered the most important building block. High-speed algorithms and hardware architectures for computing multiplication are therefore in high demand. In this paper, bit-parallel polynomial basis multipliers over the binary field GF(2^m) generated using type II irreducible pentanomials are considered. The multiplier presented here has the lowest time complexity known to date among similar multipliers based on this type of irreducible pentanomial.
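As a software sketch of the arithmetic such multipliers implement in hardware (not the paper's bit-parallel architecture), the routine below multiplies two elements of GF(2^m) in polynomial basis, reducing modulo an irreducible pentanomial. The NIST B-163 pentanomial is used purely for illustration and is an assumption, not necessarily one of the type II pentanomials the paper targets.

```python
def gf2m_mul(a: int, b: int, f: int, m: int) -> int:
    """Multiply a and b in GF(2^m), polynomials encoded as bit masks,
    reducing modulo the degree-m irreducible polynomial f."""
    r = 0
    while b:                      # carry-less schoolbook multiplication
        if b & 1:
            r ^= a
        a <<= 1
        b >>= 1
    for i in range(r.bit_length() - 1, m - 1, -1):   # reduce mod f
        if (r >> i) & 1:
            r ^= f << (i - m)
    return r

# Illustrative modulus: the NIST B-163 pentanomial x^163 + x^7 + x^6 + x^3 + 1.
M = 163
F = (1 << 163) | (1 << 7) | (1 << 6) | (1 << 3) | 1

# x^162 * x = x^163, which reduces to x^7 + x^6 + x^3 + 1 = 0xc9.
print(hex(gf2m_mul(1 << 162, 1 << 1, F, M)))  # → 0xc9
```

A hardware bit-parallel multiplier computes the same product-and-reduce in one combinational pass; the sequential loop here only mirrors the mathematics.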
Abstract:
This paper explores the potential for the RAMpage memory hierarchy to use a microkernel with a small memory footprint, held in a specialized cache-speed static RAM (tightly-coupled memory, TCM). Dreamy memory is DRAM kept in low-power mode unless referenced. Simulations show that a small microkernel suits RAMpage well, in that adding TCM achieves significantly better speed and energy gains than in a standard hierarchy. RAMpage, in its best 128KB L2 case, gained 11% speed from TCM and reduced energy by 14%; equivalent gains for a conventional hierarchy were under 1%. While a 1MB L2 was significantly faster than the lower-energy configurations with the smaller L2, the larger SRAM's energy cost does not justify the speed gain. Using a 128KB L2 cache in a conventional architecture resulted in a best-case overall run time of 2.58s, compared with the best dreamy-mode run time (RAMpage without context switches on misses) of 3.34s, a speed penalty of 29%. Energy in the fastest 128KB L2 case was 2.18J vs. 1.50J, a reduction of 31%. The same RAMpage configuration without dreamy mode took 2.83s as simulated and used 2.39J, an acceptable trade-off (penalty under 10%) for being able to switch easily to a lower-energy mode.
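The quoted percentages follow directly from the reported run times and energies; a short sketch checking that arithmetic:

```python
# Reproducing the abstract's percentages from its reported figures.
def pct_change(base: float, other: float) -> float:
    """Percentage change of `other` relative to `base`."""
    return 100.0 * (other - base) / base

speed_penalty = pct_change(2.58, 3.34)    # dreamy-mode slowdown vs. fastest case
energy_saving = -pct_change(2.18, 1.50)   # energy reduction of dreamy mode
print(round(speed_penalty), round(energy_saving))  # → 29 31
```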
Abstract:
Time, cost and quality achievements on large-scale construction projects are uncertain because of technological constraints, involvement of many stakeholders, long durations, large capital requirements and improper scope definitions. Projects exposed to such an uncertain environment can be managed effectively with the application of risk management throughout the project life cycle. Risk is by nature subjective. However, managing risk subjectively poses the danger of non-achievement of project goals. Moreover, risk analysis of the overall project also poses the danger of developing inappropriate responses. This article demonstrates a quantitative approach to construction risk management through the analytic hierarchy process (AHP) and decision tree analysis (DTA). The entire project is divided into a few work packages. With the involvement of project stakeholders, risky work packages are identified. Once all the risk factors are identified, their effects are quantified by determining probability (using AHP) and severity (guesstimate). Various alternative responses are generated, listing the cost implications of mitigating the quantified risks. The expected monetary values are derived for each alternative in a decision tree framework, and subsequent probability analysis helps to make the right decision in managing risks. The entire methodology is explained using a case application of a cross-country petroleum pipeline project in India. The case study demonstrates the project management effectiveness of using AHP and DTA.
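As a minimal illustration of the two techniques combined here (with invented numbers, not the case study's data), the sketch below derives AHP priorities from a pairwise comparison matrix via the common row geometric-mean approximation and computes the expected monetary value of a hypothetical decision-tree alternative:

```python
import math

def ahp_weights(pairwise):
    """Approximate AHP priority vector via the row geometric-mean method."""
    n = len(pairwise)
    gm = [math.prod(row) ** (1.0 / n) for row in pairwise]
    total = sum(gm)
    return [g / total for g in gm]

# Hypothetical 3x3 comparison of risk factors on Saaty's 1-9 scale.
A = [[1,   3,   5],
     [1/3, 1,   3],
     [1/5, 1/3, 1]]
w = ahp_weights(A)

def emv(branches):
    """Expected monetary value: sum of probability * payoff over branches."""
    return sum(p * v for p, v in branches)

# Hypothetical response alternative: 70% chance of a small loss, 30% of a large one.
print(emv([(0.7, -10.0), (0.3, -50.0)]))  # → -22.0
```

The exact eigenvector method gives slightly different weights; the geometric-mean approximation is used here only for brevity.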
Abstract:
The existing method of pipeline health monitoring, which requires an entire pipeline to be inspected periodically, is both time-consuming and expensive. A risk-based model that reduces the amount of time spent on inspection is presented. This model not only reduces the cost of maintaining petroleum pipelines, but also suggests an efficient design and operation philosophy, construction methodology, and logical insurance plans. The risk-based model uses the analytic hierarchy process (AHP), a multiple-attribute decision-making technique, to identify the factors that influence failure on specific segments and to analyze their effects by determining the probability of each risk factor. The severity of failure is determined through consequence analysis. From this, the effect of a failure caused by each risk factor can be expressed in terms of cost, and the cumulative effect of failure is determined through probability analysis. The technique does not totally eliminate subjectivity, but it is an improvement over the existing inspection method.
Abstract:
We have investigated how optimal coding for neural systems changes with the time available for decoding. Optimization was in terms of maximizing information transmission. We have estimated the parameters for Poisson neurons that optimize Shannon transinformation under the assumption of rate coding. We observed a hierarchy of phase transitions from binary coding, for small decoding times, toward discrete (M-ary) coding with two, three and more quantization levels for larger decoding times. We postulate that the presence of subpopulations with specific neural characteristics could be a signature of an optimal population coding scheme, and we use the mammalian auditory system as an example.
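A minimal sketch of the quantity being optimized: the Shannon mutual information between equiprobable stimuli and Poisson spike counts observed in a decoding window T. The rates and window lengths below are illustrative assumptions, not the study's fitted parameters.

```python
import math

def poisson_pmf(k: int, lam: float) -> float:
    if lam == 0.0:
        return 1.0 if k == 0 else 0.0
    # log-space evaluation avoids overflow in lam**k / k!
    return math.exp(k * math.log(lam) - lam - math.lgamma(k + 1))

def mutual_information(rates, T, kmax=150):
    """Shannon mutual information (bits) between equiprobable stimuli and
    the Poisson spike count observed in a decoding window of length T."""
    p_s = 1.0 / len(rates)
    info = 0.0
    for k in range(kmax):
        p_k = sum(p_s * poisson_pmf(k, r * T) for r in rates)
        for r in rates:
            p_ks = poisson_pmf(k, r * T)
            if p_ks > 0.0 and p_k > 0.0:
                info += p_s * p_ks * math.log2(p_ks / p_k)
    return info

# A binary (two-level) rate code conveys more information as the decoding
# window grows -- the trend behind the hierarchy of phase transitions.
print(mutual_information([5.0, 50.0], T=0.02)
      < mutual_information([5.0, 50.0], T=0.2))  # → True
```

With two equiprobable stimuli the information is bounded by 1 bit; optimizing the rate levels (and their number) for each T is what produces the binary-to-M-ary transitions described above.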
Abstract:
Membrane systems are computationally equivalent to Turing machines. However, their distributed and massively parallel nature yields polynomial solutions to problems whose traditional solutions are non-polynomial. To date, implementation efforts for membrane systems have not attained the massively parallel character of this computational model. The best published approaches achieve a distributed architecture called "partially parallel evolution with partially parallel communication", in which several membranes are allocated to each processor, proxies are used to communicate with membranes allocated to different processors, and a policy of access control to the communications is mandatory. These approaches obtain processor-level parallelism in the application of evolution rules and in the internal communication among membranes allocated within each processor. External communications, however, which share a common communication line needed for communication among membranes placed on different processors, remain sequential. In this work, we present a new hierarchical architecture that achieves parallel external communication among processors and substantially increases parallelization both in the application of evolution rules and in internal communications. Consequently, the time needed for each evolution step is reduced. With all of that, this new distributed hierarchical architecture comes closer to the massively parallel character required by the model.
Abstract:
ACM Computing Classification System (1998): F.2.1, G.1.5, I.1.2.
Abstract:
Limited literature regarding parameter estimation of dynamic systems has been identified as the central reason for the absence of parametric bounds in chaotic time series. The literature suggests that a chaotic system displays a sensitive dependence on initial conditions, and our study reveals that the behavior of a chaotic system is also sensitive to changes in parameter values. Therefore, a parameter estimation technique could make it possible to establish parametric bounds on the nonlinear dynamic system underlying a given time series, which in turn can improve predictability. By extracting the relationship between parametric bounds and predictability, we implemented chaos-based models for improving prediction in time series. This study describes work done to establish bounds on a set of unknown parameters. Our results reveal that by establishing parametric bounds it is possible to improve the predictability of any time series, even though the dynamics or the mathematical model of that series is not known a priori. In our attempt to improve the predictability of various time series, we have established bounds for a set of unknown parameters. These are: (i) the embedding dimension to unfold a set of observations in the phase space; (ii) the time delay to use for a series; (iii) the number of neighborhood points to use to avoid detecting false neighborhoods; and (iv) the local polynomial to build numerical interpolation functions from one region to another. Using these bounds, we are able to obtain better predictability in chaotic time series than previously reported. In addition, the developments of this dissertation establish a theoretical framework to investigate predictability in time series from the system-dynamics point of view. In closing, our procedure significantly reduces computer resource usage, as the search method is refined and efficient. Finally, the uniqueness of our method lies in its ability to extract the chaotic dynamics inherent in a nonlinear time series by observing its values.
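The first two bounds listed above (embedding dimension and time delay) enter through delay-coordinate reconstruction of the phase space. A minimal sketch, using the logistic map as a stand-in chaotic series (the dimension and delay chosen here are illustrative, not estimated bounds):

```python
def logistic_series(n, x0=0.2, r=4.0):
    """A chaotic test series from the logistic map x -> r*x*(1-x)."""
    xs = [x0]
    for _ in range(n - 1):
        xs.append(r * xs[-1] * (1.0 - xs[-1]))
    return xs

def delay_embed(series, dim, tau):
    """Takens-style delay coordinates: reconstruct phase-space vectors
    with embedding dimension `dim` and time delay `tau`."""
    n = len(series) - (dim - 1) * tau
    return [tuple(series[i + j * tau] for j in range(dim)) for i in range(n)]

xs = logistic_series(100)
pts = delay_embed(xs, dim=3, tau=1)
print(len(pts), len(pts[0]))  # → 98 3
```

False-nearest-neighbor counting and local polynomial interpolation then operate on these reconstructed points; bounding `dim` and `tau` restricts that search space.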
Abstract:
An array of Bio-Argo floats equipped with radiometric sensors has recently been deployed in various open-ocean areas representative of the diversity of trophic and bio-optical conditions prevailing in the so-called Case 1 waters. Around solar noon and almost every day, each float acquires 0-250 m vertical profiles of Photosynthetically Available Radiation and downward irradiance at three wavelengths (380, 412 and 490 nm). To date, more than 6500 profiles have been acquired for each radiometric channel. As these radiometric data are collected without operator control and regardless of meteorological conditions, specific and automatic data-processing protocols have to be developed. Here, we present a data quality-control procedure aimed at verifying profile shapes and providing near-real-time data distribution. This procedure is specifically developed to: 1) identify the main measurement issues (i.e. dark signal, atmospheric clouds, spikes and wave-focusing occurrences); 2) validate the final data with a hierarchy of tests to ensure their scientific utility. The procedure, adapted to each of the four radiometric channels, is designed to flag each profile in a way compliant with the data-management procedure used by the Argo program. The main perturbations in the light field are identified by the new protocols with good performance over the whole dataset, which highlights their potential applicability at the global scale. Finally, comparison with modeled surface irradiances allows assessing the accuracy of quality-controlled measured irradiance values and identifying any possible evolution over the float lifetime due to biofouling and instrumental drift.
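As an illustration of one test in such a hierarchy (a generic running-median spike check, not the procedure's actual tests or thresholds), the sketch below flags spike-like points in a profile with Argo-style quality flags:

```python
import statistics

def flag_spikes(values, window=5, k=4.0):
    """Flag points deviating from a running median by more than k times the
    local median absolute deviation (Argo-style flags: 1 = good, 4 = bad)."""
    flags = [1] * len(values)
    h = window // 2
    for i in range(len(values)):
        nb = values[max(0, i - h): i + h + 1]
        med = statistics.median(nb)
        mad = statistics.median(abs(v - med) for v in nb) or 1e-12
        if abs(values[i] - med) > k * mad:
            flags[i] = 4
    return flags

# A smooth irradiance-like profile with one injected spike at index 5.
profile = [100, 90, 81, 73, 66, 300, 53, 48, 43, 39]
print(flag_spikes(profile))  # → [1, 1, 1, 1, 1, 4, 1, 1, 1, 1]
```

The window length and the factor `k` are illustrative; an operational procedure would tune them per channel and combine this test with dark-signal, cloud and wave-focusing checks.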
Abstract:
The Slot and van Emde Boas Invariance Thesis states that a time (respectively, space) cost model is reasonable for a computational model C if there are mutual simulations between Turing machines and C such that the overhead is polynomial in time (respectively, linear in space). The rationale is that under the Invariance Thesis, complexity classes such as LOGSPACE, P and PSPACE become robust, i.e. machine-independent. In this dissertation, we ask whether it is possible to define a reasonable space cost model for the lambda-calculus, the paradigmatic model for functional programming languages. We start by considering an unusual evaluation mechanism for the lambda-calculus, based on Girard's Geometry of Interaction, that was conjectured to be the key ingredient for obtaining a space-reasonable cost model. By a fine-grained complexity analysis of this scheme, based on new variants of non-idempotent intersection types, we disprove this conjecture. Then, we change the target of our analysis. We consider a variant of Krivine's abstract machine, a standard evaluation mechanism for the call-by-name lambda-calculus, optimized for space complexity and implemented without any pointers. A fine-grained analysis of the execution of (a refined version of) the encoding of Turing machines into the lambda-calculus allows us to conclude that the space consumed by this machine is indeed a reasonable space cost model. In particular, for the first time we are able to measure sub-linear space complexities as well. Moreover, we transfer this result to the call-by-value case. Finally, we also provide an intersection type system that compositionally characterizes this new reasonable space measure. This is done through a minimal, yet nontrivial, modification of the original de Carvalho type system.
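For readers unfamiliar with the machine analyzed here, a didactic Krivine machine for the call-by-name lambda-calculus can be written in a few lines (de Bruijn indices, closures as term/environment pairs; this is the textbook machine, not the space-optimized pointer-free variant studied in the dissertation):

```python
from dataclasses import dataclass

# Lambda-terms with de Bruijn indices.
@dataclass
class Var:
    n: int

@dataclass
class Lam:
    body: object

@dataclass
class App:
    f: object
    a: object

def krivine(term):
    """Weak head reduction with a Krivine machine. A state is
    (term, environment, stack); environments and stack entries are
    closures, i.e. (term, environment) pairs."""
    env, stack = [], []
    while True:
        if isinstance(term, App):              # push the argument closure
            stack.append((term.a, env))
            term = term.f
        elif isinstance(term, Lam) and stack:  # bind the top of the stack
            env = [stack.pop()] + env
            term = term.body
        elif isinstance(term, Var):            # jump to the stored closure
            term, env = env[term.n]
        else:
            return term                        # weak head normal form

# (\x.\y. x) id w: call-by-name returns id without ever evaluating w.
ident = Lam(Var(0))
w = Lam(App(Var(0), Var(0)))
result = krivine(App(App(Lam(Lam(Var(1))), ident), w))
print(result)  # → Lam(body=Var(n=0))
```

The space question studied above is precisely how much memory states like `(term, env, stack)` must occupy: naive closure sharing uses pointers, whose log-sized addresses already obstruct sub-linear space bounds.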
Abstract:
Corynebacterium species (spp.) are among the most frequently isolated pathogens associated with subclinical mastitis in dairy cows. However, simple, fast, and reliable methods for the identification of species of the genus Corynebacterium are not currently available. This study aimed to evaluate the usefulness of matrix-assisted laser desorption/ionization time-of-flight mass spectrometry (MALDI-TOF MS) for identifying Corynebacterium spp. isolated from the mammary glands of dairy cows. Corynebacterium spp. were isolated from milk samples via microbiological culture (n=180) and were analyzed by MALDI-TOF MS and 16S rRNA gene sequencing. Using the MALDI-TOF MS methodology, 161 Corynebacterium spp. isolates (89.4%) were correctly identified at the species level, whereas 12 isolates (6.7%) were identified at the genus level. Most isolates identified by 16S rRNA gene sequencing as Corynebacterium bovis (n=156; 86.7%) were also identified as C. bovis with MALDI-TOF MS. Five Corynebacterium spp. isolates (2.8%) were not correctly identified at the species level with MALDI-TOF MS, and 2 isolates (1.1%) were considered unidentified because, despite having MALDI-TOF MS scores >2, only the genus level was correctly identified. Therefore, MALDI-TOF MS could serve as an alternative method for species-level diagnosis of bovine intramammary infections caused by Corynebacterium spp.
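As a sketch of the score-based decision logic referenced above (the exact cut-offs below are the commonly cited Bruker-style thresholds, assumed here rather than taken from this study):

```python
def classify_score(score: float) -> str:
    """Map a MALDI-TOF MS log(score) to an identification confidence level.
    Cut-offs (>=2.0 species, 1.7-2.0 genus) are an assumed convention,
    not values reported by the study."""
    if score >= 2.0:
        return "species"
    if score >= 1.7:
        return "genus"
    return "unreliable"

print([classify_score(s) for s in (2.31, 1.85, 1.42)])  # → ['species', 'genus', 'unreliable']
```

The study's two "unidentified" isolates with scores >2 illustrate why score thresholds alone are not sufficient: the best database match must also name the correct species.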
Abstract:
Matrix-assisted laser desorption/ionization time-of-flight mass spectrometry (MALDI-TOF MS) has been widely used for the identification and classification of microorganisms based on their proteomic fingerprints. However, the use of MALDI-TOF MS in plant research has been very limited. In the present study, a first protocol is proposed for metabolic fingerprinting by MALDI-TOF MS using three different MALDI matrices, with subsequent multivariate data analysis by in-house algorithms implemented in the R environment, for the taxonomic classification of plants from different genera, families and orders. By merging the data acquired with different matrices and different ionization modes, and through careful algorithm and parameter selection, we demonstrate that a close taxonomic classification can be achieved based on plant metabolic fingerprints, with 92% similarity to the taxonomic classifications found in the literature. The present work therefore highlights the great potential of applying MALDI-TOF MS to the taxonomic classification of plants and, furthermore, provides a preliminary foundation for future research.