926 results for Scaling Of Chf
Abstract:
The rate- and state-dependent constitutive formulation for fault slip characterizes an exceptional variety of materials over a wide range of sliding conditions. This formulation provides a unified representation of diverse sliding phenomena, including slip weakening over a characteristic sliding distance Dc, apparent fracture energy at a rupture front, time-dependent healing after rapid slip, and various other transient and slip-rate effects. Laboratory observations and theoretical models both indicate that earthquake nucleation is accompanied by long intervals of accelerating slip. Strains from the nucleation process on buried faults generally could not be detected if laboratory values of Dc apply to faults in nature. However, the scaling of Dc is presently an open question, and the possibility exists that measurable premonitory creep may precede some earthquakes. Earthquake activity is modeled as a sequence of earthquake nucleation events. In this model, earthquake clustering arises from the sensitivity of nucleation times to the stress changes induced by prior earthquakes. The model gives the characteristic Omori aftershock decay law and assigns physical interpretation to aftershock parameters. The seismicity formulation predicts that large changes of earthquake probabilities result from stress changes. Two mechanisms for foreshocks are proposed that describe the observed frequency of occurrence of foreshock-mainshock pairs as a function of time and magnitude. In the first mechanism, foreshocks represent a manifestation of earthquake clustering in which the stress change at the time of the foreshock increases the probability of earthquakes at all magnitudes, including the eventual mainshock. In the second mechanism, accelerating fault slip on the mainshock nucleation zone triggers foreshocks.
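The abstract does not reproduce the constitutive law itself; for orientation, a commonly quoted Dieterich-Ruina form with the aging law for the state variable (standard parameter symbols, not taken from this work) is

\[
\mu(V,\theta) = \mu_0 + a\,\ln\!\left(\frac{V}{V_0}\right) + b\,\ln\!\left(\frac{V_0\,\theta}{D_c}\right),
\qquad
\frac{d\theta}{dt} = 1 - \frac{V\theta}{D_c},
\]

where V is slip rate, θ the state variable, and a, b, μ0, V0 are laboratory-derived constants. Slip weakening over the characteristic distance D_c and time-dependent healing both follow from this pair: θ relaxes toward D_c/V during fast sliding and grows roughly linearly with time when the fault is nearly stationary.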
Abstract:
The cytosolic phosphorylation ratio ([ATP]/([ADP][Pi])) in the mammalian heart was found to be inversely related to body mass with an exponent of -0.30 (r = 0.999). This exponent is similar to the -0.25 calculated for mass-specific O2 consumption. The inverse of cytosolic free [ADP], the Gibbs energy of ATP hydrolysis (ΔG′ATP), and the efficiency of ATP production (the energy captured in forming 3 mol of ATP per cycle along the mitochondrial respiratory chain from NADH to 1/2 O2) were all found to scale with body mass with a negative exponent. On the basis of the scaling of the phosphorylation ratio and free cytosolic [ADP], we propose that the myocardium and other tissues of small mammals represent a metabolic system with a higher driving potential (a higher ΔG′ATP from the higher [ATP]/([ADP][Pi])) and a higher kinetic gain ((ΔV/Vmax)/Δ[ADP]), where small changes in free [ADP] produce large changes in steady-state rates of O2 consumption. From the inverse relationship between mitochondrial efficiency and body size we calculate that tissues of small mammals are more efficient than those of large mammals in converting energy from the oxidation of foodstuffs into the bond energy of ATP. A higher efficiency also indicates that mitochondrial electron transport is not the major site of the higher heat production in small mammals. We further propose that the lower limit of about 2 g for adult endotherm body size (bumblebee bat, Etruscan shrew, and hummingbird) may be set by the thermodynamics of the electron transport chain. The upper limit for body size (the 100,000-kg adult blue whale) may relate to a minimum ΔG′ATP of approximately 55 kJ/mol for a cytoplasmic phosphorylation ratio of 12,000 M⁻¹.
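The ≈55 kJ/mol figure quoted for the largest body size follows from the standard relation between ΔG′ATP and the phosphorylation ratio. A worked check, assuming a standard transformed free energy of hydrolysis ΔG°′ ≈ -30.5 kJ/mol and T ≈ 310 K (assumed values, not stated in the abstract), gives

\[
\Delta G'_{\mathrm{ATP}} = \Delta G^{\circ\prime} - RT\,\ln\!\frac{[\mathrm{ATP}]}{[\mathrm{ADP}][\mathrm{P_i}]}
\approx -30.5\ \mathrm{kJ/mol} - (8.314\times10^{-3}\ \mathrm{kJ\,mol^{-1}K^{-1}})(310\ \mathrm{K})\,\ln(12{,}000)
\approx -55\ \mathrm{kJ/mol},
\]

i.e. a magnitude of roughly 55 kJ/mol for a phosphorylation ratio of 12,000 M⁻¹.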
Abstract:
Climatic changes are most pronounced in northern high-latitude regions. Yet there is a paucity of observational data, both spatially and temporally, such that regional-scale dynamics are not fully captured, limiting our ability to make reliable projections. In this study, a group of dynamical downscaling products was created for the period 1950 to 2100 to better understand climate change and its impacts on hydrology, permafrost, and ecosystems at a resolution suitable for northern Alaska. The ERA-Interim reanalysis dataset and the Community Earth System Model (CESM) served as the forcing data in this dynamical downscaling framework, and the Weather Research and Forecasting (WRF) model, optimized for the Arctic (Polar WRF), served as the Regional Climate Model (RCM). The downscaled output consists of multiple climatic variables (precipitation, temperature, wind speed, dew point temperature, and surface air pressure) on a 10 km grid at three-hour intervals. The modeling products were evaluated and calibrated using a bias-correction approach. The ERA-Interim-forced WRF (ERA-WRF) produced reasonable climatic variables, yielding a more closely correlated temperature field than precipitation field when long-term monthly climatology was compared with the forcing and observational data. A linear scaling method then further corrected the bias based on ERA-Interim monthly climatology, and the bias-corrected ERA-WRF fields served as the reference for calibrating both the historical and the projected CESM-forced WRF (CESM-WRF) products. Biases that CESM holds over northern Alaska, such as a cold temperature bias during summer, a warm temperature bias during winter, and a wet bias in annual precipitation, persisted in the CESM-WRF runs. Linear scaling of CESM-WRF, together with the calibrated ERA-WRF run, ultimately produced high-resolution downscaled products for the Alaskan North Slope for hydrological and ecological research, and their applicability extends well beyond that. Other climatic research has been proposed, including exploration of historical and projected climatic extreme events and their possible connections to low-frequency sea-atmospheric oscillations, as well as near-surface permafrost degradation and ice-regime shifts of lakes. These dynamically downscaled, bias-corrected climatic datasets provide the improved spatial and temporal resolution necessary for ongoing modeling efforts in northern Alaska focused on reconstructing and projecting hydrologic changes, ecosystem processes and responses, and permafrost thermal regimes. The dynamical downscaling methods presented in this study can also be used to create more suitable model input datasets for other sub-regions of the Arctic.
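A minimal sketch of the monthly linear-scaling correction described above (additive for temperature, multiplicative for precipitation); the function, variable names, and use of pandas are illustrative assumptions, not the authors' code.

```python
import pandas as pd

def linear_scaling(model: pd.Series, model_clim: pd.Series, ref_clim: pd.Series,
                   multiplicative: bool = False) -> pd.Series:
    """Monthly linear-scaling bias correction.

    model      : time series to correct (DatetimeIndex).
    model_clim : monthly climatology of the model (Series indexed by month 1-12).
    ref_clim   : monthly climatology of the reference (e.g. bias-corrected ERA-WRF).
    multiplicative=True is typically used for precipitation, False for temperature.
    """
    months = model.index.month
    if multiplicative:
        factors = (ref_clim / model_clim).reindex(range(1, 13))
        return model * factors.loc[months].to_numpy()
    offsets = (ref_clim - model_clim).reindex(range(1, 13))
    return model + offsets.loc[months].to_numpy()

# Illustrative use: correct a CESM-WRF temperature series against an ERA-WRF reference
# climatology (series names are placeholders).
# t_corrected = linear_scaling(t_cesm_wrf, t_cesm_wrf_clim, t_era_wrf_clim, multiplicative=False)
```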
Abstract:
Thesis (Ph.D.)--University of Washington, 2016-06
Abstract:
Human urotensin-II (hU-II) is the most potent endogenous cardiostimulant identified to date. We therefore determined whether hU-II has a possible pathological role by investigating its levels in patients with congestive heart failure (CHF). Blood samples were obtained from the aortic root, femoral artery, femoral vein, and pulmonary artery from CHF patients undergoing cardiac catheterization and the aortic root from patients undergoing investigative angiography for chest pain who were not in heart failure. Immunoreactive hU-II (hU-II-ir) levels were determined with radioimmunoassay. hU-II-ir was elevated in the aortic root of CHF patients (230.9 +/- 68.7 pg/ml, n = 21; P < 0.001) vs. patients with nonfailing hearts (22.7 +/- 6.1 pg/ml, n = 18). This increase was attributed to cardiopulmonary production of hU-II-ir because levels were lower in the pulmonary artery (38.2 +/- 6.1 pg/ml, n = 21; P < 0.001) than in the aortic root. hU-II-ir was elevated in the aortic root of CHF patients with nonischemic cardiomyopathy (142.1 +/- 51.5 pg/ml, n = 10; P < 0.05) vs. patients with nonfailing hearts without coronary artery disease (27.3 +/- 12.4 pg/ml, n = 7) and CHF patients with ischemic cardiomyopathy (311.6 +/- 120.4 pg/ml, n = 11; P < 0.001) vs. patients with nonfailing hearts and coronary artery disease (19.8 +/- 6.6 pg/ml, n = 11). hU-II-ir was significantly higher in the aortic root than in the pulmonary artery and femoral vein, with a nonsignificant trend for higher levels in the aortic root than in the femoral artery. The findings indicated that hU-II-ir is elevated in the aortic root of CHF patients and that hU-II-ir is cleared at least in part from the microcirculation.
Abstract:
We provide here a detailed theoretical explanation of the floating-molecule, or levitation, effect for molecules diffusing through nanopores, using the oscillator model theory (Phys. Rev. Lett. 2003, 91, 126102) recently developed in this laboratory. It is shown that, as pore size is reduced, the effect arises at a critical pore size from a decrease in the frequency of wall collisions of the diffusing particles. The effect is, however, absent at high temperatures, where the ratio of kinetic energy to the solid-fluid interaction strength is sufficiently large. It is shown that the transport diffusivities scale with this ratio. Scaling of transport diffusivities with respect to mass is also observed, even in the presence of interactions.
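The abstract does not give the functional form of the mass scaling; as a point of reference (an assumption based on standard kinetic theory, not on the oscillator model itself), the low-density Knudsen diffusivity in a pore of diameter d_p already displays an inverse-square-root dependence on molecular mass:

\[
D_K = \frac{d_p}{3}\sqrt{\frac{8 k_B T}{\pi m}} \;\propto\; \sqrt{\frac{T}{m}} .
\]

The oscillator-model treatment referenced above refines this picture by accounting for the solid-fluid interaction strength and the reduced wall-collision frequency near the critical pore size.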
Abstract:
* Chronic heart failure (CHF) is found in 1.5%–2.0% of Australians. Considered rare in people aged less than 45 years, its prevalence increases to over 10% in people aged ≥ 65 years.
* CHF is one of the most common reasons for hospital admission and general practitioner consultation in the elderly (≥ 70 years).
* Common causes of CHF are ischaemic heart disease (present in > 50% of new cases), hypertension (about two-thirds of cases) and idiopathic dilated cardiomyopathy (around 5%–10% of cases).
* Diagnosis is based on clinical features, chest x-ray and objective measurement of ventricular function (eg, echocardiography). Plasma levels of B-type natriuretic peptide (BNP) may have a role in diagnosis, primarily as a test for exclusion. Diagnosis may be strengthened by a beneficial clinical response to treatment(s) directed towards amelioration of symptoms.
* Management involves prevention, early detection, amelioration of disease progression, relief of symptoms, minimisation of exacerbations, and prolongation of survival.
Abstract:
The dynamics of drop formation and pinch-off have been investigated for a series of low viscosity elastic fluids possessing similar shear viscosities but differing substantially in elastic properties. On initial approach to the pinch region, the viscoelastic fluids all exhibit the same global necking behavior that is observed for a Newtonian fluid of equivalent shear viscosity. For these low viscosity dilute polymer solutions, inertial and capillary forces form the dominant balance in this potential flow regime, with the viscous force being negligible. The approach to the pinch point, which corresponds to the point of rupture for a Newtonian fluid, is extremely rapid in such solutions, with the sudden increase in curvature producing very large extension rates at this location. In this region the polymer molecules are significantly extended, causing a localized increase in the elastic stresses, which grow to balance the capillary pressure. This prevents the necked fluid from breaking off, as would occur in the equivalent Newtonian fluid. Instead, a cylindrical filament forms in which elastic stresses and capillary pressure balance, and the radius decreases exponentially with time. A (0+1)-dimensional finitely extensible nonlinear elastic dumbbell theory incorporating inertial, capillary, and elastic stresses is able to capture the basic features of the experimental observations. Before the critical "pinch time" t_p, an inertial-capillary balance leads to the expected 2/3-power scaling of the minimum radius with time: R_min ∼ (t_p − t)^(2/3). However, the diverging deformation rate results in large molecular deformations and a rapid crossover to an elastocapillary balance for times t > t_p. In this region, the filament radius decreases exponentially with time, R_min ∼ exp[(t_p − t)/λ_1], where λ_1 is the characteristic time constant of the polymer molecules. Measurements of the relaxation times of polyethylene oxide solutions of varying concentrations and molecular weights, obtained from high speed imaging of the rate of change of filament radius, are significantly higher than the relaxation times estimated from Rouse-Zimm theory, even though the solutions are within the dilute concentration region as determined using intrinsic viscosity measurements. The effective relaxation times exhibit the expected scaling with molecular weight but with an additional dependence on the concentration of the polymer in solution. This is consistent with the expectation that the polymer molecules are in fact highly extended during the approach to the pinch region (i.e., prior to the elastocapillary filament thinning regime) and that subsequently, as the filament is formed, they are further extended by filament stretching at a constant rate until full extension of the polymer coil is achieved. In this highly extended state, intermolecular interactions become significant, producing relaxation times far above theoretical predictions for dilute polymer solutions under equilibrium conditions. © 2006 American Institute of Physics.
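A minimal sketch of how a relaxation time can be extracted from high-speed-imaging data in the exponential thinning regime, by a linear fit of ln R_min versus time. The array names are placeholders, and the fitted constant is identified with λ_1 under the form quoted in the abstract (some treatments include an extra factor of 3).

```python
import numpy as np

def relaxation_time_from_thinning(t: np.ndarray, r_min: np.ndarray) -> float:
    """Fit ln(R_min) = ln(R_0) - t/lambda_1 over the exponential-thinning window
    and return lambda_1 (same time units as t). Assumes the data passed in are
    already restricted to the elastocapillary regime (t > t_p)."""
    slope, _ = np.polyfit(t, np.log(r_min), 1)   # slope = -1/lambda_1
    return -1.0 / slope

# Illustrative use with synthetic data generated with lambda_1 = 2 ms.
t = np.linspace(0.0, 10.0e-3, 200)
r = 1.0e-3 * np.exp(-t / 2.0e-3)
print(relaxation_time_from_thinning(t, r))       # ~0.002 s
```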
Abstract:
The recurrence interval statistics of regional seismicity follow a universal distribution function, independent of the tectonic setting or average rate of activity (Corral, 2004). The universal function is a modified gamma distribution with power-law scaling for recurrence intervals shorter than the mean recurrence interval and exponential decay for longer intervals. We employ the method of Corral (2004) to examine the recurrence statistics of a range of cellular automaton earthquake models. The majority of the models have an exponential distribution of recurrence intervals, the same as that of a Poisson process. One model, the Olami-Feder-Christensen automaton, has recurrence statistics consistent with regional seismicity for a certain range of the conservation parameter of that model. For conservation parameters in this range, the event size statistics are also consistent with regional seismicity. Models whose dynamics are dominated by characteristic earthquakes do not appear to display universality of recurrence statistics.
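A minimal sketch of the recurrence-interval analysis described above: intervals are rescaled by the mean event rate and their distribution is compared with a gamma form. The synthetic catalogue and the fitting call are placeholders for illustration, not values or code from the paper.

```python
import numpy as np
from scipy.stats import gamma

def rescaled_intervals(event_times: np.ndarray) -> np.ndarray:
    """Recurrence intervals rescaled by the mean rate, theta = tau * R."""
    tau = np.diff(np.sort(event_times))
    rate = 1.0 / tau.mean()
    return tau * rate

# Compare the empirical distribution with a gamma form. For a Poisson process the
# rescaled intervals are exponential, i.e. gamma-distributed with shape parameter 1;
# Corral-type universality corresponds to a shape parameter below 1.
events = np.cumsum(np.random.exponential(1.0, 10_000))   # synthetic Poisson catalogue
theta = rescaled_intervals(events)
shape, loc, scale = gamma.fit(theta, floc=0.0)
print(f"fitted shape parameter: {shape:.2f} (Poisson-like seismicity gives ~1)")
```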
Abstract:
Today, portable devices have become the driving force of the consumer market, and new challenges are emerging to increase their performance while maintaining reasonable battery life. The digital domain is the best solution for implementing signal-processing functions, thanks to the scalability of CMOS technology, which pushes toward sub-micrometer integration. Indeed, the reduction of the supply voltage introduces severe limitations on achieving an acceptable dynamic range in the analog domain. Lower cost, lower power consumption, higher yield, and greater reconfigurability are the main advantages of signal processing in the digital domain. For more than a decade, several purely analog functions have been moved into the digital domain. This means that analog-to-digital converters (ADCs) are becoming the key components in many electronic systems. They are, in fact, the bridge between the analog and digital worlds and, consequently, their efficiency and accuracy often determine the overall performance of the system. Sigma-Delta converters are the key interface block in high-resolution, low-power mixed-signal circuits. Modeling and simulation tools are effective and essential instruments in the design flow. Although transistor-level simulations give more precise and accurate results, this approach is extremely time-consuming because of the oversampling nature of this type of converter. For this reason, high-level behavioral models of the modulator are essential for the designer to run fast simulations that identify the specifications the converter must meet to achieve the required performance. The objective of this thesis is the behavioral modeling of the Sigma-Delta modulator, taking into account several non-idealities such as the integrator dynamics and its thermal noise. Transistor-level simulation results and experimental data show that the proposed behavioral model is precise and accurate.
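A minimal behavioral sketch in the spirit described above (an illustrative assumption, not the thesis model): a first-order discrete-time Sigma-Delta modulator with a leaky integrator representing finite DC gain and with input-referred thermal noise. Parameter values are invented for illustration.

```python
import numpy as np

def first_order_sdm(x, leak=0.999, noise_rms=1e-4, rng=None):
    """Behavioral model of a first-order Sigma-Delta modulator.

    x         : input samples, assumed to lie within (-1, 1).
    leak      : integrator leakage modeling finite DC gain (1.0 = ideal integrator).
    noise_rms : rms value of the input-referred thermal (kT/C) noise.
    Returns the 1-bit output stream (+1 / -1).
    """
    rng = rng or np.random.default_rng(0)
    acc, v = 0.0, 1.0
    bits = np.empty_like(x)
    for n, xn in enumerate(x):
        noisy = xn + rng.normal(0.0, noise_rms)   # thermal noise at the input
        acc = leak * acc + (noisy - v)            # leaky discrete-time integrator
        v = 1.0 if acc >= 0.0 else -1.0           # 1-bit quantizer / DAC feedback
        bits[n] = v
    return bits

# Illustrative use: an oversampled sine input; the bitstream can then be low-pass
# filtered and decimated to recover the input and estimate SNR.
fs, fin = 1.0e6, 1.0e3
t = np.arange(100_000) / fs
bitstream = first_order_sdm(0.5 * np.sin(2 * np.pi * fin * t))
```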
Abstract:
A set of full-color images of objects is described for use in experiments investigating the effects of in-depth rotation on the identification of three-dimensional objects. The corpus contains up to 11 perspective views of 70 nameable objects. We also provide ratings of the "goodness" of each view, based on Thurstonian scaling of subjects' preferences in a paired-comparison experiment. An exploratory cluster analysis on the scaling solutions indicates that the amount of information available in a given view generally is the major determinant of the goodness of the view. For instance, objects with an elongated front-back axis tend to cluster together, and the front and back views of these objects, which do not reveal the object's major surfaces and features, are evaluated as the worst views.
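A minimal sketch of Thurstone Case V scaling as applied to paired-comparison preference data like those described above; the example matrix is fabricated for illustration and is not data from the corpus.

```python
import numpy as np
from scipy.stats import norm

def thurstone_case_v(prefs: np.ndarray) -> np.ndarray:
    """Thurstone Case V scale values from a paired-comparison matrix.

    prefs[i, j] = proportion of subjects preferring view j over view i.
    Returns one scale value ("goodness") per view, approximately mean-centered.
    """
    p = np.clip(prefs, 0.01, 0.99)   # avoid infinite z-scores for unanimous choices
    z = norm.ppf(p)                  # unit-normal transform of the proportions
    return z.mean(axis=0)            # column means give the Case V scale values

# Illustrative 3-view example.
p = np.array([[0.5, 0.8, 0.6],
              [0.2, 0.5, 0.3],
              [0.4, 0.7, 0.5]])
print(thurstone_case_v(p))
```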
Abstract:
Comprehensive coverage is crucial for communication, supply, and transportation networks, yet it is limited by the requirement of extensive infrastructure and heavy energy consumption. Here, we draw an analogy between spins in an antiferromagnet and outlets in supply networks, and apply techniques from the study of disordered systems to elucidate the effects of balancing coverage and supply costs on network behavior. A readily applicable coverage optimization algorithm is derived. Simulation results show that magnetized and antiferromagnetic domains emerge and coexist to balance the need for coverage and energy saving. The scaling of parameters with system size agrees with the continuum approximation in two dimensions and the tree approximation in random graphs. Due to frustration caused by the competition between coverage and supply cost, a transition between easy and hard computation regimes is observed. We further suggest a local expansion approach that greatly simplifies the message updates and sheds light on simplifications in other problems. © 2014 American Physical Society.
Abstract:
Recent technological developments in the field of experimental quantum annealing have made prototypical annealing optimizers with hundreds of qubits commercially available. The experimental demonstration of a quantum speedup for optimization problems has since become a coveted, albeit elusive, goal. Recent studies have shown that the so far inconclusive results regarding a quantum enhancement may have been partly due to the benchmark problems used being unsuitable. In particular, these problems had an inherently too simple structure, allowing both traditional resources and quantum annealers to solve them with no special effort. The need has therefore arisen for the generation of harder benchmarks that would hopefully possess the discriminative power to separate classical scaling of performance with problem size from quantum scaling. We introduce here a practical technique for the engineering of extremely hard spin-glass Ising-type problem instances that does not require "cherry picking" from large ensembles of randomly generated instances. We accomplish this by treating the generation of hard optimization problems itself as an optimization problem, for which we offer a heuristic algorithm. We demonstrate the genuine thermal hardness of our generated instances by examining them thermodynamically and analyzing their energy landscapes, as well as by testing the performance of various state-of-the-art algorithms on them. We argue that a proper characterization of the generated instances offers a practical, efficient way to properly benchmark experimental quantum annealers, as well as any other optimization algorithm.
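A schematic sketch of the general idea of treating instance generation itself as an optimization problem: couplings of an Ising instance are perturbed, and a perturbation is kept whenever a crude hardness proxy increases. The proxy used here (failure rate of random greedy descent) is only an illustrative stand-in for the paper's thermodynamic characterization, and all sizes and counts are invented.

```python
import numpy as np

rng = np.random.default_rng(1)

def energy(J, s):
    return -0.5 * s @ J @ s                       # Ising energy with zero local fields

def greedy_descent(J, s):
    improved = True
    while improved:
        improved = False
        for i in range(len(s)):
            if s[i] * (J[i] @ s) < 0:             # flipping spin i lowers the energy
                s[i] = -s[i]
                improved = True
    return energy(J, s)

def hardness(J, restarts=30):
    """Fraction of random greedy restarts that miss the best energy found."""
    n = J.shape[0]
    finals = [greedy_descent(J, rng.choice([-1, 1], n)) for _ in range(restarts)]
    best = min(finals)
    return float(np.mean([e > best + 1e-9 for e in finals]))

# Start from a random +/-1 spin-glass instance and hill-climb in "instance space".
n = 24
J = np.triu(rng.choice([-1.0, 1.0], (n, n)), 1)
J = J + J.T
score = hardness(J)
for _ in range(100):
    i, j = rng.choice(n, 2, replace=False)
    J_new = J.copy()
    J_new[i, j] = J_new[j, i] = -J_new[i, j]      # flip the sign of one coupling
    new_score = hardness(J_new)
    if new_score >= score:                        # keep perturbations that look harder
        J, score = J_new, new_score
print("estimated hardness proxy:", score)
```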
Abstract:
The unprecedented and relentless growth in the electronics industry is feeding the demand for integrated circuits (ICs) with increasing functionality and performance at minimum cost and power consumption. As predicted by Moore's law, ICs are being aggressively scaled to meet this demand. While the continuous scaling of process technology is reducing gate delays, the performance of ICs is being increasingly dominated by interconnect delays. In an effort to improve submicrometer interconnect performance, to increase packing density, and to reduce chip area and power consumption, the semiconductor industry is focusing on three-dimensional (3D) integration. However, volume production and commercial exploitation of 3D integration are not feasible yet due to significant technical hurdles.
At the present time, interposer-based 2.5D integration is emerging as a precursor to stacked 3D integration. All the dies and the interposer in a 2.5D IC must be adequately tested for product qualification. However, since the structure of 2.5D ICs is different from the traditional 2D ICs, new challenges have emerged: (1) pre-bond interposer testing, (2) lack of test access, (3) limited ability for at-speed testing, (4) high density I/O ports and interconnects, (5) reduced number of test pins, and (6) high power consumption. This research targets the above challenges and effective solutions have been developed to test both dies and the interposer.
The dissertation first introduces the basic concepts of 3D ICs and 2.5D ICs. Prior work on testing of 2.5D ICs is studied. An efficient method is presented to locate defects in a passive interposer before stacking. The proposed test architecture uses e-fuses that can be programmed to connect or disconnect functional paths inside the interposer. The concept of a die footprint is utilized for interconnect testing, and the overall assembly and test flow is described. Moreover, the concept of weighted critical area is defined and utilized to reduce test time. In order to fully determine the location of each e-fuse and the order of functional interconnects in a test path, we also present a test-path design algorithm. The proposed algorithm can generate all test paths for interconnect testing.
In order to test for opens, shorts, and interconnect delay defects in the interposer, a test architecture is proposed that is fully compatible with the IEEE 1149.1 standard and relies on an enhancement of the standard test access port (TAP) controller. To reduce test cost, a test-path design and scheduling technique is also presented that minimizes a composite cost function based on test time and the design-for-test (DfT) overhead in terms of additional through silicon vias (TSVs) and micro-bumps needed for test access. The locations of the dies on the interposer are taken into consideration in order to determine the order of dies in a test path.
To address the scenario of high density of I/O ports and interconnects, an efficient built-in self-test (BIST) technique is presented that targets the dies and the interposer interconnects. The proposed BIST architecture can be enabled by the standard TAP controller in the IEEE 1149.1 standard. The area overhead introduced by this BIST architecture is negligible; it includes two simple BIST controllers, a linear-feedback shift register (LFSR), a multiple-input signature register (MISR), and some extensions to the boundary-scan cells in the dies on the interposer. With these extensions, all boundary-scan cells can be used for self-configuration and self-diagnosis during interconnect testing. To reduce the overall test cost, a test scheduling and optimization technique under power constraints is described.
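A minimal sketch of the two standard building blocks named above: a pattern-generating LFSR and a MISR that compacts interconnect responses into a signature. The register width and tap positions are illustrative assumptions, not values from the dissertation.

```python
from typing import List

def lfsr_patterns(seed: int, taps: List[int], width: int, count: int) -> List[int]:
    """Fibonacci LFSR: generate `count` pseudo-random test patterns of `width` bits."""
    state, out = seed & ((1 << width) - 1), []
    for _ in range(count):
        out.append(state)
        fb = 0
        for t in taps:                                  # XOR of the tap positions
            fb ^= (state >> t) & 1
        state = ((state << 1) | fb) & ((1 << width) - 1)
    return out

def misr_signature(responses: List[int], taps: List[int], width: int) -> int:
    """Multiple-input signature register: compact a response stream into one signature."""
    sig = 0
    for r in responses:
        fb = 0
        for t in taps:
            fb ^= (sig >> t) & 1
        sig = (((sig << 1) | fb) ^ r) & ((1 << width) - 1)
    return sig

# Illustrative use: drive interconnects with LFSR patterns, compact the (here, fault-free
# loopback) responses, and compare the signature against a golden value.
patterns = lfsr_patterns(seed=0b1011, taps=[3, 2], width=4, count=15)
print(hex(misr_signature(patterns, taps=[3, 2], width=4)))
```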
In order to accomplish testing with a small number of test pins, the dissertation presents two efficient ExTest scheduling strategies that implement interconnect testing between tiles inside a system-on-chip (SoC) die on the interposer while satisfying the practical constraint that the number of required test pins cannot exceed the number of available pins at the chip level. The tiles in the SoC are divided into groups based on the manner in which they are interconnected. In order to minimize the test time, two optimization solutions are introduced. The first solution minimizes the number of input test pins, and the second minimizes the number of output test pins. In addition, two subgroup configuration methods are proposed to generate subgroups inside each test group.
Finally, the dissertation presents a programmable method for shift-clock stagger assignment to reduce power supply noise during SoC die testing in 2.5D ICs. An SoC die in a 2.5D IC is typically composed of several blocks, and two neighboring blocks that share the same power rails should not be toggled at the same time during shift. Therefore, the proposed programmable method does not assign the same stagger value to neighboring blocks. The positions of all blocks are first analyzed and the shared boundary length between blocks is then calculated. Based on the position relationships between the blocks, a mathematical model is presented to derive optimal results for small-to-medium-sized problems. For larger designs, a heuristic algorithm is proposed and evaluated.
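A minimal sketch of the flavor of heuristic described above: blocks are modeled as rectangles, shared boundary length defines adjacency, and a greedy pass assigns each block the smallest stagger value not used by its neighbors. The floorplan data and the number of available stagger values are invented for illustration; this is not the dissertation's algorithm or model.

```python
from typing import Dict, Tuple

Rect = Tuple[float, float, float, float]          # (x0, y0, x1, y1)

def shared_boundary(a: Rect, b: Rect) -> float:
    """Length of the boundary shared by two abutting rectangular blocks."""
    ax0, ay0, ax1, ay1 = a
    bx0, by0, bx1, by1 = b
    x_overlap = max(0.0, min(ax1, bx1) - max(ax0, bx0))
    y_overlap = max(0.0, min(ay1, by1) - max(ay0, by0))
    if ax1 == bx0 or bx1 == ax0:                  # vertically abutting edges
        return y_overlap
    if ay1 == by0 or by1 == ay0:                  # horizontally abutting edges
        return x_overlap
    return 0.0

def assign_staggers(blocks: Dict[str, Rect], num_staggers: int) -> Dict[str, int]:
    """Greedy shift-clock stagger assignment: blocks sharing a boundary (and hence,
    by assumption, power rails) never receive the same stagger value."""
    names = sorted(blocks, key=lambda n: -(blocks[n][2] - blocks[n][0]))  # wider blocks first
    stagger: Dict[str, int] = {}
    for n in names:
        used = {stagger[m] for m in stagger if shared_boundary(blocks[n], blocks[m]) > 0}
        stagger[n] = min(v for v in range(num_staggers) if v not in used)
    return stagger

# Illustrative four-block floorplan of an SoC die (coordinates are made up).
blocks = {"cpu": (0, 0, 4, 4), "gpu": (4, 0, 8, 4), "dsp": (0, 4, 4, 8), "io": (4, 4, 8, 8)}
print(assign_staggers(blocks, num_staggers=4))
```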
In summary, the dissertation targets important design and optimization problems related to testing of interposer-based 2.5D ICs. The proposed research has led to theoretical insights, experimental results, and a set of test and design-for-test methods to make testing effective and feasible from a cost perspective.
Abstract:
Organic Functionalisation, Doping and Characterisation of Semiconductor Surfaces for Future CMOS Device Applications
Semiconductor materials have been the driving force for the advancement of technology since their inception in the mid-20th century. Traditionally, micro-electronic devices based upon these materials have scaled down in size and doubled in transistor density in accordance with the well-known Moore's law, enabling consumer products with outstanding computational power at lower costs and with smaller footprints. According to the International Technology Roadmap for Semiconductors (ITRS), the scaling of metal-oxide-semiconductor field-effect transistors (MOSFETs) is proceeding at a rapid pace and will reach sub-10 nm dimensions in the coming years. This scaling presents many challenges, not only in terms of metrology but also in terms of material preparation, especially with respect to doping, leading to the moniker "More-than-Moore". Current transistor technologies are based on semiconductor junctions formed by the introduction of dopant atoms into the material using various methodologies, and at device sizes below 10 nm, high concentration gradients become a necessity. Doping, the controlled and purposeful addition of impurities to a semiconductor, is one of the most important steps in material preparation, with uniform and confined doping to form ultra-shallow junctions at source and drain extension regions being one of the key enablers for the continued scaling of devices. Monolayer doping (MLD) has shown promise in satisfying the need for conformal doping at such small feature sizes, and has been shown to meet the requirements for extended-defect-free, conformal and controllable doping on many materials, ranging from traditional silicon and germanium devices to emerging replacement materials such as III-V compounds. This thesis aims to investigate the potential of monolayer doping to complement or replace conventional doping technologies currently in use in CMOS fabrication facilities across the world.