953 results for supported intermediates
Abstract:
The aim of this work is to solve a question raised in average sampling in shift-invariant spaces by using the well-known matrix pencil theory. In many common situations in sampling theory, the available data are samples of some convolution operator acting on the function itself: this leads to the problem of average sampling, also known as generalized sampling. In this paper we deal with the existence of a sampling formula involving these samples and having reconstruction functions with compact support. Thus, low computational complexity is involved and truncation errors are avoided. In practice, this is accomplished by means of an FIR filter bank. An answer is given in the light of generalized sampling theory by using the oversampling technique: more samples than strictly necessary are used. The original problem reduces to finding a polynomial left inverse of a polynomial matrix intimately related to the sampling problem which, for a suitable choice of the sampling period, becomes a matrix pencil. This matrix pencil approach allows us to obtain a practical method for computing the compactly supported reconstruction functions for the important case where the oversampling rate is minimum. Moreover, the optimality of the obtained solution is established.
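The core computational step, finding a left inverse of a tall (oversampled) matrix, can be sketched numerically. This is only an illustration: the matrix below is a stand-in, not the paper's sampling matrix, and the Moore-Penrose pseudoinverse is just one convenient left inverse when the matrix has full column rank.

```python
import numpy as np

# Stand-in for the (tall) sampling matrix evaluated at one point:
# oversampling means more rows (samples) than columns (unknowns).
G = np.array([[1.0, 0.0],
              [0.5, 1.0],
              [0.0, 2.0]])   # 3 samples, 2 unknowns

# The Moore-Penrose pseudoinverse is one left inverse of G
# whenever G has full column rank.
H = np.linalg.pinv(G)        # shape (2, 3)

# Defining property of a left inverse: H @ G = I.
assert np.allclose(H @ G, np.eye(2))
```

In the paper's setting the entries are polynomials in a complex variable and the left inverse must itself be polynomial, which is the harder structural requirement that the matrix-pencil machinery addresses; the numerical sketch above only shows the pointwise left-inverse property.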
Abstract:
Problem-based learning has been applied over the last three decades to a diverse range of learning environments. In this educational approach, different problems are posed to the learners so that they can develop different solutions while learning about the problem domain. When applied to conceptual modelling, and particularly to Qualitative Reasoning, the solutions to problems are models that represent the behaviour of a dynamic system. The learner's task then is to bridge the gap between their initial model, their first attempt to represent the system, and the target models that provide solutions to that problem. We propose the use of semantic technologies and resources to help bridge that gap by providing links to terminology and formal definitions, and matching techniques that allow learners to benefit from existing models.
Abstract:
The response of high-speed bridges at resonance, particularly under flexural vibrations, is currently a subject of research for many scientists and engineers. The topic is of great interest because such behaviour is not unlikely to occur given the elevated operating speeds of modern trains, which in many cases equal or even exceed 300 km/h [1,2]. The present paper addresses the evolution of the wheel-rail contact forces during resonance situations in simply supported bridges. Based on a dimensionless formulation of the equations of motion presented in [4], very similar to the one introduced by Klasztorny and Langer in [3], a parametric study is conducted and the contact forces in realistic situations are analysed in detail. The effects of rail and wheel irregularities are not included in the model. The bridge is idealised as an Euler-Bernoulli beam, while the train is simulated by a system consisting of rigid bodies, springs and dampers. The situations in which a severe reduction of the contact force could take place are identified and compared with typical situations in actual bridges. To this end, the simply supported bridge is excited at resonance by means of a theoretical train consisting of 15 equidistant axles. The mechanical characteristics of all axles (unsprung mass, semi-sprung mass, and primary suspension system) are identical. This theoretical train permits the identification of the key parameters influencing the wheel-rail contact forces. In addition, a real case of a 17.5 m bridge traversed by the Eurostar train is analysed and checked against the theoretical results.
The influence of three fundamental parameters is investigated in great detail: (a) the ratio of the fundamental frequency of the bridge to the natural frequency of the primary suspension of the vehicle; (b) the ratio of the total mass of the bridge to the semi-sprung mass of the vehicle; and (c) the ratio between the length of the bridge and the characteristic distance between consecutive axles. The main conclusions derived from the investigation are as follows. The wheel-rail contact forces undergo oscillations during the passage of the axles over the bridge. During resonance, these oscillations are more severe for the rear wheels than for the front ones. The lower the ratio of the span of a simply supported bridge to the characteristic distance between consecutive groups of loads, the greater the oscillations of the contact forces at resonance; above a certain value of this ratio, no likelihood of loss of wheel-rail contact has been detected. The ratio between the frequency of the primary suspension of the vehicle and the fundamental frequency of the bridge is termed the frequency ratio, and the ratio of the semi-sprung mass of the vehicle (the mass of the bogie) to the total mass of the bridge is termed the mass ratio. For any given frequency ratio, the greater the mass ratio, the greater the oscillations of the contact forces at resonance. The oscillations of the contact forces at resonance, and therefore the likelihood of loss of wheel-rail contact, present a minimum for frequency ratios approximately between 0.5 and 1; for lower or higher values of the frequency ratio the oscillations of the contact forces increase. Neglecting the possible effects of torsional vibrations, metal or composite bridges with a low linear mass have been found to be the ones where the contact forces may suffer the most severe oscillations.
If single-track, simply supported, composite or metal bridges were used in high-speed lines, and damping ratios below 1% were expected, the minimum contact forces at resonance could drop to dangerous values. Nevertheless, such structures are very unusual in modern high-speed railway lines.
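The three dimensionless parameters of the study, and the resonant speeds they govern, can be sketched as follows. All numerical values below are illustrative placeholders, not data from the paper; the resonant-speed condition used is the standard one, in which resonance occurs when the axle passage rate (speed divided by axle spacing) or an integer fraction of it matches the bridge frequency.

```python
# Illustrative values only (not the paper's case study).
bridge_span_m       = 17.5    # span of the simply supported bridge
axle_spacing_m      = 18.7    # characteristic distance between load groups
bridge_freq_hz      = 5.0     # fundamental bending frequency of the bridge
suspension_freq_hz  = 4.0     # natural frequency of the primary suspension
bridge_mass_kg      = 175e3   # total mass of the bridge
semi_sprung_mass_kg = 5e3     # semi-sprung (bogie) mass of the vehicle

frequency_ratio = suspension_freq_hz / bridge_freq_hz   # minimum oscillations for ~0.5-1
mass_ratio      = semi_sprung_mass_kg / bridge_mass_kg  # larger -> stronger oscillations
span_ratio      = bridge_span_m / axle_spacing_m        # smaller -> stronger oscillations

# Resonance: v = f * d / i for integer i; convert m/s to km/h with 3.6.
resonant_speeds_kmh = [3.6 * bridge_freq_hz * axle_spacing_m / i for i in (1, 2, 3)]
```

With these placeholder values the first resonant speed comes out near 337 km/h, i.e. within the operating range of modern high-speed trains, which is why the parametric study above matters in practice.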
Abstract:
Ambient Assisted Living (AAL) services are emerging as context-aware solutions to support elderly people's autonomy. The context-aware paradigm makes applications more user-adaptive: context and user models expressed in ontologies are employed by applications to describe user and environment characteristics. The rapid advance of technology allows the creation of context servers that relieve applications of context-reasoning tasks. Specifically, Next Generation Networks (NGN) provide, by means of the presence service, a framework to manage the user's current state as well as the user's profile information extracted from Internet and mobile contexts. This paper proposes a user-modelling ontology for AAL services that can be deployed in an NGN environment, with the aim of adapting their functionality to the elderly user's context information and state.
Abstract:
Residents learning nontechnical skills in Europe face two problems: (1) the difficulty of fitting learning time into their overloaded schedules; and (2) the lack of pedagogical models standardized across countries. Online video-based repositories such as WeBSurg or WebOP provide ubiquitous access to surgical content. However, their pedagogical facets have not been fully exploited and they are often seen as quick-reference repositories rather than full e-learning alternatives. We present a new pedagogically supported Technology Enhanced Learning (TEL) solution, MISTELA, designed by surgeons, pedagogical experts and engineers. MISTELA aims at building a common European pedagogical model supported by ICT and e-learning. The solution proposes a pedagogical model based on a framework for pedagogically informed design of e-learning platforms. It is composed of (1) an authoring tool for editing and augmenting videos; (2) a media asset management system; and (3) a virtual learning environment. Support from the European Association for Endoscopic Surgery (EAES) and validation of the solution will help determine its full potential.
Abstract:
Transactional systems such as Enterprise Resource Planning (ERP) systems have been implemented widely, while analytical software such as Supply Chain Management (SCM) add-ons is adopted less by manufacturing companies. Although significant benefits are reported to stem from SCM software implementations, companies are reluctant to invest in such systems. On the one hand this is due to the lack of methods able to detect the benefits from the use of SCM software, and on the other hand the associated costs are not identified, detailed and quantified sufficiently. Coordination schemes based only on ERP systems are valid alternatives in industrial practice because significant investment in IT can be avoided. Therefore, the evaluation of these coordination procedures, in particular the cost due to iterations, is of high managerial interest, and corresponding methods are comprehensive tools for strategic IT decision making. The purpose of this research is to provide evaluation methods that allow the comparison of different organizational forms and software support levels.
The research begins with a comprehensive introduction dealing with the business environment that industrial networks are facing, and concludes by highlighting the challenges for the supply chain software industry. Afterwards, the central terminology is addressed, focusing on organization theory, IT investment peculiarities and supply chain management software typology. The literature review classifies recent supply chain management research referring to organizational design and its software support. The classification encompasses criteria related to research methodology and content. Empirical studies from management science focus on network types and organizational fit. Novel planning algorithms and innovative coordination schemes are developed mostly in the field of operations research in order to propose new software features. Operations and production management researchers carry out cost-benefit analyses of IT software implementations. The literature review reveals that the success of software solutions for network coordination depends strongly on the fit of three dimensions: network configuration, coordination scheme and software functionality. Reviewed literature is mostly centered on the benefits of SCM software implementations. However, ERP-system-based supply chain coordination is still widespread industrial practice, but the associated coordination cost has not been addressed by researchers. Fundamentals of efficient organizational design are explained in detail as far as required for the understanding of the synthesis of different organizational forms. Several coordination schemes have been shaped through the variation of the following design parameters: organizational structuring, coordination mechanisms and software support. The different organizational proposals are evaluated using a heuristic approach and a simulation-based method. For both cases, the principles of organization theory are respected.
A lack of performance is due to dependencies between activities which are not managed properly. Therefore, within the heuristic method, dependencies are classified and their intensity is measured based on contextual factors. Afterwards the suitability of each organizational design element for the management of a specific dependency is determined. Finally, each organizational form is evaluated based on the contribution of the sum of design elements to coordination benefit and to coordination cost. Coordination benefit refers to improvement in logistic performance – this is the core concept of most supply chain evaluation models. Unfortunately, coordination cost which must be incurred to achieve benefits is usually not considered in detail. Iterative processes are costly when manually executed. This is the case when SCM software is not implemented and the ERP system is the only available coordination instrument. The heuristic model provides a simplified procedure for the classification of dependencies, quantification of influence factors and systematic search for adequate organizational forms and IT support. Discrete event simulation is applied in the second evaluation model using the software package ‘Plant Simulation’. On the one hand logistic performance is measured by manufacturing, inventory and transportation cost and penalties for lost sales. On the other hand coordination cost is explicitly considered taking into account iterative coordination cycles. The method is applied to an exemplary supply chain configuration considering various parameter settings. The simulation results confirm that, in most cases, benefit increases when coordination is intensified. However, in some situations when manual, iterative planning cycles are applied, additional coordination cost does not always lead to improved logistic performance. These unexpected results cannot be attributed to any particular parameter. 
The research confirms the great importance of dimensions disregarded until now when evaluating SCM concepts and IT tools. The heuristic method provides a quick, but only approximate, comparison of coordination efficiency for different organizational forms. In contrast, the more complex simulation method delivers detailed results, taking into consideration specific parameter settings of network context and organizational design.
Abstract:
Hydrogen–deuterium exchange experiments have been used previously to investigate the structures of well defined states of a given protein. These include the native state, the unfolded state, and any intermediates that can be stably populated at equilibrium. More recently, the hydrogen–deuterium exchange technique has been applied in kinetic labeling experiments to probe the structures of transiently formed intermediates on the kinetic folding pathway of a given protein. From these equilibrium and nonequilibrium studies, protection factors are usually obtained. These protection factors are defined as the ratio of the rate of exchange of a given backbone amide when it is in a fully solvent-exposed state (usually obtained from model peptides) to the rate of exchange of that amide in some state of the protein or in some intermediate on the folding pathway of the protein. This definition is straightforward for the case of equilibrium studies; however, it is less clear-cut for the case of transient kinetic intermediates. To clarify the concept for the case of burst-phase intermediates, we have introduced and mathematically defined two different types of protection factors: one is Pstruc, which is more related to the structure of the intermediate, and the other is Papp, which is more related to the stability of the intermediate. Kinetic hydrogen–deuterium exchange data from disulfide-intact ribonuclease A and from cytochrome c are discussed to explain the use and implications of these two definitions.
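The equilibrium definition of a protection factor, and its usual conversion to a free energy of structural opening, can be written out directly. This is a minimal sketch of the textbook relations only (the rate values are made up for illustration); the paper's contribution, the distinction between the structural and apparent protection factors for burst-phase intermediates, is not reproduced here.

```python
import math

def protection_factor(k_intrinsic, k_observed):
    """Ratio of the exchange rate of a fully solvent-exposed amide
    (from model peptides) to its observed rate in the protein state."""
    return k_intrinsic / k_observed

def opening_free_energy_kcal(p_factor, temp_k=298.0):
    """Free energy of the opening reaction implied by a protection
    factor at equilibrium: dG = R * T * ln(P)."""
    R = 1.987e-3  # gas constant in kcal/(mol*K)
    return R * temp_k * math.log(p_factor)

# Illustrative numbers: a protection factor of 2 corresponds to only
# ~0.4 kcal/mol of stabilization, i.e. marginal protection.
p  = protection_factor(k_intrinsic=10.0, k_observed=5.0)   # -> 2.0
dg = opening_free_energy_kcal(p)                           # ~0.41 kcal/mol
```

The same arithmetic underlies the barnase reexamination later in this list, where protection factors near 2 are read as essentially zero stability for the putative intermediate.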
Abstract:
Abf2p is a high mobility group (HMG) protein found in yeast mitochondria that is required for the maintenance of wild-type (ρ+) mtDNA in cells grown on fermentable carbon sources, and for efficient recombination of mtDNA markers in crosses. Here, we show by two-dimensional gel electrophoresis that Abf2p promotes or stabilizes Holliday recombination junction intermediates in ρ+ mtDNA in vivo but does not influence the high levels of recombination intermediates readily detected in the mtDNA of petite mutants (ρ−). mtDNA recombination junctions are not observed in ρ+ mtDNA of wild-type cells but are elevated to detectable levels in cells with a null allele of the MGT1 gene (Δmgt1), which codes for a mitochondrial cruciform-cutting endonuclease. The level of recombination intermediates in ρ+ mtDNA of Δmgt1 cells is decreased about 10-fold if those cells contain a null allele of the ABF2 gene. Overproduction of Abf2p by ≥ 10-fold in wild-type ρ+ cells, which leads to mtDNA instability, results in a dramatic increase in mtDNA recombination intermediates. Specific mutations in the two Abf2p HMG boxes required for DNA binding diminish these responses. We conclude that Abf2p functions in the recombination of ρ+ mtDNA.
Abstract:
Reactive oxygen intermediates (ROI) play a critical role in the defense of plants against invading pathogens. Produced during the “oxidative burst,” they are thought to activate programmed cell death (PCD) and induce antimicrobial defenses such as pathogenesis-related proteins. It was shown recently that during the interaction of plants with pathogens, the expression of ROI-detoxifying enzymes such as ascorbate peroxidase (APX) and catalase (CAT) is suppressed. It was suggested that this suppression, occurring upon pathogen recognition and coinciding with an enhanced rate of ROI production, plays a key role in elevating cellular ROI levels, thereby potentiating the induction of PCD and other defenses. To examine the relationship between the suppression of antioxidative mechanisms and the induction of PCD and other defenses during pathogen attack, we studied the interaction between transgenic antisense tobacco plants with reduced APX or CAT and a bacterial pathogen that triggers the hypersensitive response. Transgenic plants with reduced capability to detoxify ROI (i.e., antisense APX or CAT) were found to be hyperresponsive to pathogen attack. They activated PCD in response to low amounts of pathogens that did not trigger the activation of PCD in control plants. Our findings support the hypothesis that suppression of ROI-scavenging enzymes during the hypersensitive response plays an important role in enhancing pathogen-induced PCD.
Abstract:
Nucleotide excision repair proteins have been implicated in genetic recombination by experiments in Saccharomyces cerevisiae and Drosophila melanogaster, but their role, if any, in mammalian cells is undefined. To investigate the role of the nucleotide excision repair gene ERCC1, the hamster homologue to the S. cerevisiae RAD10 gene, we disabled the gene by targeted knockout. Partial tandem duplications of the adenine phosphoribosyltransferase (APRT) gene then were constructed at the endogenous APRT locus in ERCC1− and ERCC1+ cells. To detect the full spectrum of gene-altering events, we used a loss-of-function assay in which the parental APRT+ tandem duplication could give rise to APRT− cells by homologous recombination, gene rearrangement, or point mutation. Measurement of rates and analysis of individual APRT− products indicated that gene rearrangements (principally deletions) were increased at least 50-fold, whereas homologous recombination was affected little. The formation of deletions is not caused by a general effect of the ERCC1 deficiency on gene stability, because ERCC1− cell lines with a single wild-type copy of the APRT gene yielded no increase in deletions. Thus, deletion formation is dependent on the tandem duplication, and presumably the process of homologous recombination. Recombination-dependent deletion formation in ERCC1− cells is supported by a significant decrease in a particular class of crossover products that are thought to arise by repair of a heteroduplex intermediate in recombination. We suggest that the ERCC1 gene product in mammalian cells is involved in the processing of heteroduplex intermediates in recombination and that the misprocessed intermediates in ERCC1− cells are repaired by illegitimate recombination.
Abstract:
To determine the dynamics of transcript extrusion from Escherichia coli RNA polymerase (RNAP), we used degradation of the RNA by RNases T1 and A in a series of consecutive elongation complexes (ECs). In intact ECs, even extremely high doses of the RNases were unable to cut the RNA closer than 14–16 nt from the 3′ end. Our results prove that all of the cuts detected within the 14-nt zone are derived from the EC that is denatured during inactivation of the RNases. The protected zone translocates monotonically along the RNA after addition of new nucleotides to the transcript. The upstream region of the RNA heading toward the 5′ end is cleaved and dissociated from the EC, with no effect on the stability and activity of the EC. Most of the current data suggest that an 8- to 10-nt RNA⋅DNA hybrid is formed in the EC. Here, we show that an 8- to 10-nt RNA obtained by truncating the RNase-generated products further with either GreB or pyrophosphate is sufficient for the high stability and activity of the EC. This result suggests that the transcript–RNAP interaction that is required for holding the EC together can be limited to the RNA region involved in the 8- to 10-nt RNA⋅DNA hybrid.
Abstract:
Polyhydroxyalkanoate (PHA) is a family of polymers composed primarily of R-3-hydroxyalkanoic acids. These polymers have properties of biodegradable thermoplastics and elastomers. Medium-chain-length PHAs (MCL-PHAs) are synthesized in bacteria by using intermediates of the β-oxidation of alkanoic acids. To assess the feasibility of producing MCL-PHAs in plants, Arabidopsis thaliana was transformed with the PhaC1 synthase from Pseudomonas aeruginosa modified for peroxisome targeting by addition of the carboxyl 34 amino acids from the Brassica napus isocitrate lyase. Immunocytochemistry demonstrated that the modified PHA synthase was appropriately targeted to leaf-type peroxisomes in light-grown plants and glyoxysomes in dark-grown plants. Plants expressing the PHA synthase accumulated electron-lucent inclusions in the glyoxysomes and leaf-type peroxisomes, as well as in the vacuole. These inclusions were similar to bacterial PHA inclusions. Analysis of plant extracts by GC and mass spectrometry demonstrated the presence of MCL-PHA in transgenic plants at levels of approximately 4 mg per g of dry weight. The plant PHA contained saturated and unsaturated 3-hydroxyalkanoic acids ranging from six to 16 carbons, with 41% of the monomers being 3-hydroxyoctanoic acid and 3-hydroxyoctenoic acid. These results indicate that the β-oxidation of plant fatty acids can generate a broad range of R-3-hydroxyacyl-CoA intermediates that can be used to synthesize MCL-PHAs.
Abstract:
Rab2 immunolocalizes to pre-Golgi intermediates (vesicular-tubular clusters [VTCs]) that are the first site of segregation of anterograde- and retrograde-transported proteins and a major peripheral site for COPI recruitment. Our previous work showed that Rab2 Q65L (equivalent to Ras Q61L) inhibited endoplasmic reticulum (ER)-to-Golgi transport in vivo. In this study, the biochemical properties of Rab2 Q65L were analyzed. The mutant protein binds GDP and GTP and has a low GTP hydrolysis rate that suggests that Rab2 Q65L is predominantly in the GTP-bound–activated form. The purified protein arrests vesicular stomatitis virus glycoprotein transport from VTCs in an assay that reconstitutes ER-to-Golgi traffic. A quantitative binding assay was used to measure membrane binding of β-COP when incubated with the mutant. Unlike Rab2, which stimulates recruitment, Rab2 Q65L showed a dose-dependent decrease in membrane-associated β-COP when incubated with rapidly sedimenting membranes (ER, pre-Golgi, and Golgi). The mutant protein does not interfere with β-COP binding but stimulates the release of slowly sedimenting vesicles containing Rab2, β-COP, and p53/gp58 but lacking anterograde-directed cargo. To complement the biochemical results, we observed in a morphological assay that Rab2 Q65L caused vesiculation of VTCs that accumulated at 15°C. These data suggest that the Rab2 protein plays a role in the low-temperature–sensitive step that regulates membrane flow from VTCs to the Golgi complex and back to the ER.
Abstract:
In this communication, we report our femtosecond real-time observation of the dynamics for the three didehydrobenzene molecules (p-, m-, and o-benzyne) generated from 1,4-, 1,3-, and 1,2-dibromobenzene, respectively, in a molecular beam, by using femtosecond time-resolved mass spectrometry. The time required for the first and the second C-Br bond breakage is less than 100 fs; the benzyne molecules are produced within 100 fs and then decay with a lifetime of 400 ps or more. Density functional theory and high-level ab initio calculations are also reported herein to elucidate the energetics along the reaction path. We discuss the dynamics and possible reaction mechanisms for the disappearance of benzyne intermediates. Our effort focuses on the isolated molecule dynamics of the three isomers on the femtosecond time scale.
Abstract:
Barnase is one of the few protein models that has been studied extensively for protein folding. Previous studies led to the conclusion that barnase folds through a very stable submillisecond intermediate (≈3 kcal/mol). The structure of this intermediate was characterized intensively by using a protein engineering approach. This intermediate has now been reexamined with three direct and independent methods. (i) Hydrogen exchange experiments show very small protection factors (≈2) for the putative intermediate, indicating a stability of ≈0.0 kcal/mol. (ii) Denaturant-dependent unfolding of the putative intermediate is noncooperative and indicates a stability less than 0.0 kcal/mol. (iii) The logarithm of the unfolding rate constant of native barnase vs. denaturant concentrations is not linear. Together with the measured rate (“I” to N), this nonlinear behavior accounts for almost all of the protein stability, leaving only about 0.3 kcal/mol that could be attributed to the rapidly formed intermediate. Other observations previously interpreted to support the presence of an intermediate are now known to have alternative explanations. These results cast doubts on the previous conclusions on the nature of the early folding state in barnase and therefore should have important implications in understanding the early folding events of barnase and other proteins in general.