965 results for Cure rate models
Abstract:
Substantial retreat or disintegration of numerous ice shelves has been observed on the Antarctic Peninsula. The ice shelf in the Prince Gustav Channel retreated gradually from the late 1980s and broke up in 1995. Tributary glaciers reacted with speed-up, surface lowering, and increased ice discharge, consequently contributing to sea level rise. We present a detailed long-term study (1993-2014) of the dynamic response of the Sjögren Inlet glaciers to the disintegration of the Prince Gustav Ice Shelf. We analyzed various remote sensing datasets to observe the reactions of the glaciers to the loss of the buttressing ice shelf. A strong increase in ice surface velocities was observed, with maximum flow speeds reaching 2.82±0.48 m/d in 2007 and 1.50±0.32 m/d in 2004 at Sjögren and Boydell glaciers, respectively. Subsequently, the flow velocities decelerated; however, in late 2014 we still measured about twice the values of our first measurements in 1996. The tributary glaciers retreated 61.7±3.1 km² behind the former grounding line of the ice shelf. In regions below 1000 m a.s.l., a mean surface lowering of -68±10 m (-3.1 m/a) was observed over the period 1993-2014. The lowering rate decreased to -2.2 m/a in recent years. Based on the surface lowering rates, geodetic mass balances of the glaciers were derived for different time steps. A high mass loss rate of -1.21±0.36 Gt/a was found in the earliest period (1993-2001). Due to the dynamic adjustment of the glaciers to the new boundary conditions, the ice mass loss reduced to -0.59±0.11 Gt/a in the period 2012-2014, resulting in an average mass loss rate of -0.89±0.16 Gt/a (1993-2014). Including the retreat of the ice front and grounding line, a total mass change of -38.5±7.7 Gt and a contribution to sea level rise of 0.061±0.013 mm were computed. Analysis of the ice flux revealed that available bedrock elevation estimates at Sjögren Inlet are too shallow and are the major source of uncertainty in ice flux computations. This temporally dense time series analysis of the Sjögren Inlet glaciers shows that the adjustment of tributary glaciers to ice shelf disintegration is still ongoing, and it provides detailed information on the changes in glacier dynamics.
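A geodetic mass balance of the kind described above converts a mean surface-lowering rate over a basin area into a mass change rate. Below is a minimal sketch of that conversion, assuming the usual volume-to-mass density factor; the density value and the basin area are illustrative assumptions, not figures from the study.

```python
# Minimal sketch of a geodetic mass balance, assuming the standard
# volume-to-mass conversion. The density and the basin area below are
# illustrative assumptions, not figures taken from the study.

ICE_DENSITY = 900.0  # kg/m^3, a common assumption for glacier ice

def geodetic_mass_balance(mean_lowering_m_per_a, area_km2, density=ICE_DENSITY):
    """Convert a mean surface-lowering rate (m/a, negative = lowering)
    over a basin area (km^2) into a mass change rate in Gt/a."""
    volume_change = mean_lowering_m_per_a * area_km2 * 1e6  # m^3/a
    return volume_change * density / 1e12                   # kg/a -> Gt/a

# Example with assumed numbers: -3.1 m/a over a hypothetical 430 km^2 basin
print(f"{geodetic_mass_balance(-3.1, 430.0):.2f} Gt/a")
```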
Abstract:
Late Pleistocene signals of calcium carbonate, organic carbon, and opaline silica concentration and accumulation are documented in a series of cores from a zonal/meridional/depth transect in the equatorial Atlantic Ocean to reconstruct the regional sedimentary history. Spectral analysis reveals that maxima and minima in biogenous sedimentation occur with glacial-interglacial cyclicity as a function of both (1) primary production at the sea surface, modulated by orbitally forced variation in trade wind zonality, and (2) destruction at the seafloor, driven by variation in the chemical character of advected intermediate and deep water from high latitudes and modulated by high-latitude ice volume. From these results a pattern emerges in which the relative proportion of signal variance from the productivity signal, centered on the precessional (23 kyr) band, decreases, while that of the destruction signal, centered on the obliquity (41 kyr) and eccentricity (100 kyr) periods, increases below ~3600 m ocean depth.
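The band-by-band variance attribution described above rests on standard spectral analysis of proxy records. The following is a minimal sketch of that step on a fabricated record, assuming even 2 kyr sampling; the signal amplitudes and noise level are invented for illustration.

```python
# Minimal sketch of the spectral step described above, on a fabricated
# proxy record sampled every 2 kyr (all amplitudes and the noise level
# are invented for illustration).
import numpy as np

dt = 2.0                       # sample spacing, kyr (assumption)
t = np.arange(0.0, 800.0, dt)  # 800 kyr synthetic record
rng = np.random.default_rng(0)
x = (1.0 * np.sin(2 * np.pi * t / 100)   # eccentricity band
     + 0.7 * np.sin(2 * np.pi * t / 41)  # obliquity band
     + 0.5 * np.sin(2 * np.pi * t / 23)  # precession band
     + 0.3 * rng.standard_normal(t.size))

freqs = np.fft.rfftfreq(t.size, d=dt)            # cycles per kyr
power = np.abs(np.fft.rfft(x - x.mean())) ** 2   # raw periodogram

for period in (100, 41, 23):
    i = np.argmin(np.abs(freqs - 1.0 / period))
    print(f"{period:>3} kyr band: fraction of variance {power[i] / power.sum():.3f}")
```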
Abstract:
"April 1981."
Abstract:
Thesis (Ph.D.)--University of Washington, 2016-06
Abstract:
Modelling of froth transportation, as part of modelling froth recovery, provides a scale-up procedure for flotation cell design. It can also assist in improving the control of flotation operations. Mathematical models of froth velocity at the surface and of the froth residence time distribution in a cylindrical tank flotation cell are proposed, based on a mass balance of the air entering the froth. The models take into account factors such as cell size, concentrate launder configuration, use of a froth crowder, cell operating conditions including froth height and air rate, and bubble bursting at the surface.
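The simplest consequence of an air mass balance on the froth is that the mean froth residence time scales as froth volume over volumetric air rate. A minimal sketch of that estimate for a cylindrical cell follows; every parameter value is an illustrative assumption, not a value from the paper.

```python
# Minimal sketch of the air mass-balance estimate of mean froth residence
# time in a cylindrical cell; every parameter value here is an
# illustrative assumption, not a value from the paper.
import math

def mean_froth_residence_time(cell_diameter_m, froth_height_m, gas_rate_m_per_s):
    """Mean residence time (s) ~ froth volume / volumetric air rate,
    i.e. froth height / superficial gas velocity for a uniform column."""
    area = math.pi * (cell_diameter_m / 2.0) ** 2
    froth_volume = area * froth_height_m          # m^3
    air_flow = gas_rate_m_per_s * area            # m^3/s entering the froth
    return froth_volume / air_flow

# Example: 5 m diameter cell, 0.30 m froth depth, Jg = 1 cm/s (all assumed)
print(f"{mean_froth_residence_time(5.0, 0.30, 0.01):.0f} s")
```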
Abstract:
An important and common problem in microarray experiments is the detection of genes that are differentially expressed in a given number of classes. As this problem concerns the selection of significant genes from a large pool of candidate genes, it needs to be carried out within the framework of multiple hypothesis testing. In this paper, we focus on the use of mixture models to handle the multiplicity issue. With this approach, a measure of the local FDR (false discovery rate) is provided for each gene. An attractive feature of the mixture model approach is that it provides a framework for the estimation of the prior probability that a gene is not differentially expressed, and this probability can subsequently be used in forming a decision rule. The rule can also be formed to take the false negative rate into account. We apply this approach to a well-known publicly available data set on breast cancer, and discuss our findings with reference to other approaches.
Abstract:
An important and common problem in microarray experiments is the detection of genes that are differentially expressed in a given number of classes. As this problem concerns the selection of significant genes from a large pool of candidate genes, it needs to be carried out within the framework of multiple hypothesis testing. In this paper, we focus on the use of mixture models to handle the multiplicity issue. With this approach, a measure of the local false discovery rate is provided for each gene, and it can be implemented so that the implied global false discovery rate is bounded as with the Benjamini-Hochberg methodology based on tail areas. The latter procedure is too conservative, unless it is modified according to the prior probability that a gene is not differentially expressed. An attractive feature of the mixture model approach is that it provides a framework for the estimation of this probability and its subsequent use in forming a decision rule. The rule can also be formed to take the false negative rate into account.
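The local false discovery rate in the two-component mixture framework described above is pi0*f0(z)/f(z), the posterior probability that a gene is null given its test statistic. Below is a minimal sketch with a standard normal null and a shifted normal alternative on synthetic z-scores; the component choices and pi0 are illustrative assumptions, not the paper's fitted model.

```python
# Minimal sketch of the two-component mixture behind the local FDR:
# f(z) = pi0*f0(z) + (1 - pi0)*f1(z), local FDR(z) = pi0*f0(z)/f(z).
# The N(0,1) null, N(3,1) alternative, and pi0 = 0.9 are illustrative
# assumptions on synthetic z-scores, not the paper's fitted model.
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
z = np.concatenate([rng.normal(0.0, 1.0, 900),   # null genes
                    rng.normal(3.0, 1.0, 100)])  # differentially expressed

pi0, mu1 = 0.9, 3.0          # taken as known here; in practice estimated (e.g. by EM)
f0 = stats.norm.pdf(z, 0.0, 1.0)
f1 = stats.norm.pdf(z, mu1, 1.0)
f = pi0 * f0 + (1.0 - pi0) * f1

local_fdr = pi0 * f0 / f     # posterior probability the gene is null
flagged = local_fdr < 0.2    # a simple decision rule
print(f"{flagged.sum()} of {z.size} genes flagged at local FDR < 0.2")
```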
Abstract:
Pulse oximetry is commonly used as an arterial blood oxygen saturation (SaO2) measure. However, its other serial output, the photoplethysmography (PPG) signal, is not as well studied. Raw PPG signals can be used to estimate cardiovascular measures like pulse transit time (PTT) and possibly heart rate (HR). These timing-related measurements depend heavily on there being minimal variability in the phase delay of the PPG signals. Masimo SET® Rad-9™ and Novametrix Oxypleth oximeters were investigated for their PPG phase characteristics on nine healthy adults. To facilitate comparison, PPG signals were acquired from fingers on the same hand in a random fashion. Results showed that mean PTT variations acquired from the Masimo oximeter (37.89 ms) were much greater than those from the Novametrix (5.66 ms). Documented evidence suggests that a 1 ms variation in PTT is equivalent to a 1 mmHg change in blood pressure. Moreover, the PTT trend derived from the Masimo oximeter can be mistaken for obstructive sleep apnoeas based on the known criteria. HR estimates were evaluated against those obtained from an electrocardiogram (ECG): the Novametrix differed from the ECG by 0.71±0.58% (p < 0.05), while the Masimo differed by 4.51±3.66% (p > 0.05). Modern oximeters can be attractive for their improved SaO2 measurement. However, using raw PPG signals obtained directly from these oximeters for timing-related measurements warrants further investigation.
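PTT is typically taken as the delay from an ECG R-peak to a fiducial point (for example, the foot) of the following PPG pulse. The sketch below shows that timing computation with deliberately naive peak detection; the sampling rate and detection thresholds are assumptions, not the study's processing pipeline.

```python
# Minimal sketch of a PTT computation: delay from each ECG R-peak to the
# foot of the following PPG pulse. The sampling rate and the naive
# peak/foot detection are assumptions, not the study's pipeline.
import numpy as np
from scipy.signal import find_peaks

FS = 500  # Hz, assumed sampling rate

def pulse_transit_times(ecg, ppg, fs=FS):
    """Return PTT values (ms), one per R-peak that has a following PPG foot."""
    r_peaks, _ = find_peaks(ecg, distance=int(0.4 * fs),
                            height=np.percentile(ecg, 95))
    feet, _ = find_peaks(-ppg, distance=int(0.4 * fs))  # minima of the PPG
    ptts = []
    for r in r_peaks:
        later = feet[feet > r]
        if later.size:
            ptts.append((later[0] - r) / fs * 1000.0)   # samples -> ms
    return np.array(ptts)
```

The beat-to-beat standard deviation of the returned values is the kind of PTT variation the two oximeters are being compared on.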
Abstract:
Thermosetting blends of an aliphatic epoxy resin and a hydroxyl-functionalized hyperbranched polymer (HBP), the aliphatic hyperbranched polyester Boltorn H40, were prepared using 4,4'-diaminodiphenylmethane (DDM) as the curing agent. The phase behavior and morphology of the DDM-cured epoxy/HBP blends with HBP contents up to 40 wt% were investigated by differential scanning calorimetry (DSC), dynamic mechanical analysis (DMA), and scanning electron microscopy (SEM). The cured epoxy/HBP blends are immiscible and exhibit two separate glass transitions, as revealed by DMA. SEM observation showed two phases in the cured blends, an epoxy-rich phase and an HBP-rich phase, which accounts for the two separate glass transitions. The phase morphology was observed to depend on the blend composition: for blends with HBP contents up to 10 wt%, discrete HBP domains are dispersed in the continuous cured epoxy matrix, whereas the cured blend with 40 wt% HBP exhibits a combined morphology of connected globules and a bicontinuous phase structure. Porous epoxy thermosets with continuous open structures on the order of 100-300 nm were formed after the HBP-rich phase was extracted with solvent from the cured blend with 40 wt% HBP. The DSC study showed that the curing rate is not noticeably affected in the epoxy/HBP blends with HBP contents up to 40 wt%. The activation energy values obtained are not remarkably changed in the blends; the addition of HBP to the epoxy resin thus does not change the mechanism of the cure reaction of the epoxy resin with DDM.
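Cure activation energies are commonly extracted from multi-heating-rate DSC data via a Kissinger-type analysis, regressing ln(beta/Tp^2) against 1/Tp; whether this particular method was used here is an assumption, and the data below are fabricated for illustration.

```python
# Minimal sketch of a Kissinger-type extraction of the cure activation
# energy from multi-heating-rate DSC data: regress ln(beta/Tp^2) on 1/Tp.
# Whether the study used this method is an assumption, and the peak
# temperatures below are fabricated for illustration.
import numpy as np

R = 8.314  # gas constant, J/(mol K)
heating_rates = np.array([5.0, 10.0, 15.0, 20.0])    # beta, K/min (assumed)
peak_temps = np.array([430.0, 442.0, 450.0, 456.0])  # Tp, K (assumed)

slope, _ = np.polyfit(1.0 / peak_temps, np.log(heating_rates / peak_temps**2), 1)
print(f"Ea ~ {-slope * R / 1000.0:.0f} kJ/mol")  # slope = -Ea/R
```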
Abstract:
Traditional sensitivity and elasticity analyses of matrix population models have been used to inform management decisions, but they ignore the economic costs of manipulating vital rates. For example, the growth rate of a population is often most sensitive to changes in adult survival rate, but this does not mean that increasing that rate is the best option for managing the population, because it may be much more expensive than other options. To explore how managers should optimize their manipulation of vital rates, we incorporated the cost of changing those rates into matrix population models. We derived analytic expressions for the locations in parameter space where managers should shift between management of fecundity and survival, for the balance between fecundity and survival management at those boundaries, and for the allocation of management resources to sustain that optimal balance. For simple matrices, the optimal budget allocation can often be expressed as a simple function of vital rates and the relative costs of changing them. We applied our method to management of the Helmeted Honeyeater (Lichenostomus melanops cassidix; an endangered Australian bird) and the koala (Phascolarctos cinereus) as examples. Our method showed that cost-efficient management of the Helmeted Honeyeater should focus on increasing fecundity via nest protection, whereas optimal koala management should focus on manipulating both fecundity and survival simultaneously. These findings are contrary to the cost-negligent recommendations of elasticity analysis, which would suggest focusing on managing survival in both cases. A further investigation of Helmeted Honeyeater management options, based on an individual-based model incorporating density dependence, spatial structure, and environmental stochasticity, confirmed that fecundity management was the most cost-effective strategy. Our results demonstrate that decisions that ignore economic factors will reduce management efficiency.
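The elasticity analysis the abstract argues should not be used cost-blind starts from the dominant eigenvalue lambda of the projection matrix A, with sensitivities s_ij = v_i*w_j/<v,w> built from the left and right eigenvectors. A minimal sketch on an assumed 2x2 matrix (all entries invented for illustration):

```python
# Minimal sketch of the eigenvalue machinery behind sensitivity/elasticity
# analysis: lambda is the dominant eigenvalue of the projection matrix A,
# sensitivities are s_ij = v_i*w_j/<v,w>, elasticities e_ij = s_ij*a_ij/lambda.
# The 2x2 matrix (fecundity and survival rates) is an invented illustration.
import numpy as np

A = np.array([[0.0, 1.6],    # row 1: fecundities (assumed)
              [0.4, 0.8]])   # row 2: juvenile and adult survival (assumed)

vals, W = np.linalg.eig(A)
k = np.argmax(vals.real)
lam, w = vals.real[k], W[:, k].real         # growth rate, stable structure

vals_T, V = np.linalg.eig(A.T)
v = V[:, np.argmax(vals_T.real)].real       # reproductive values

sensitivity = np.outer(v, w) / (v @ w)      # d(lambda)/d(a_ij)
elasticity = sensitivity * A / lam          # proportional sensitivities
print(f"lambda = {lam:.3f}")
print("elasticities:\n", np.round(elasticity, 3))
```

The paper's point is that these elasticities alone are not a management recipe: each entry must be weighed against the cost of changing the corresponding vital rate.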
Abstract:
We have developed a way to represent Mohr-Coulomb failure within a mantle-convection fluid dynamics code. We use a viscous model of deformation with an orthotropic viscoplasticity (a different viscosity is used for pure shear than for simple shear) to define a preferred plane for slip to occur given the local stress field. The simple-shear viscosity and the deformation can then be iterated to ensure that the yield criterion is always satisfied. We again assume the Boussinesq approximation, neglecting any effect of dilatancy on the stress field. An additional criterion is required to ensure that deformation occurs along the plane aligned with the maximum shear strain rate rather than the perpendicular plane, which is formally equivalent in any symmetric formulation. We also allow for strain weakening of the material: the material can remember both the accumulated failure history and the direction of failure. We have included this capability in a Lagrangian-integration-point finite element code and show a number of examples of extension and compression of a crustal block with a Mohr-Coulomb failure criterion. The formulation itself is general and applies to 2- and 3-dimensional problems.
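The yield criterion being enforced is Mohr-Coulomb: slip occurs when the shear stress on the critical plane reaches C + mu*sigma_n, or equivalently, in principal stresses with compression positive, when (s1 - s3)/2 >= C*cos(phi) + ((s1 + s3)/2)*sin(phi). A minimal sketch of that check, with illustrative stress values:

```python
# Minimal sketch of the Mohr-Coulomb check the formulation enforces. In
# principal stresses (compression positive, s1 >= s3), yield occurs when
# (s1 - s3)/2 >= C*cos(phi) + ((s1 + s3)/2)*sin(phi). Values are illustrative.
import math

def mohr_coulomb_yield(s1, s3, cohesion, friction_angle_deg):
    """Yield function F: F >= 0 means slip; the slip plane lies at
    45 deg - phi/2 from the s1 axis."""
    phi = math.radians(friction_angle_deg)
    return (0.5 * (s1 - s3)
            - cohesion * math.cos(phi)
            - 0.5 * (s1 + s3) * math.sin(phi))

# Example: s1 = 30 MPa, s3 = 10 MPa, C = 5 MPa, phi = 30 deg (all assumed)
F = mohr_coulomb_yield(30.0, 10.0, 5.0, 30.0)
print("yielding" if F >= 0 else f"stable (F = {F:.2f} MPa)")
```

In the iterative scheme the abstract describes, a positive F would trigger a reduction of the simple-shear viscosity until the stress state returns to the yield surface.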
Abstract:
The recurrence interval statistics for regional seismicity follow a universal distribution function, independent of the tectonic setting or average rate of activity (Corral, 2004). The universal function is a modified gamma distribution, with power-law scaling for recurrence intervals shorter than the mean interval set by the average rate of activity and exponential decay for longer intervals. We employ the method of Corral (2004) to examine the recurrence statistics of a range of cellular automaton earthquake models. The majority of the models have an exponential distribution of recurrence intervals, the same as that of a Poisson process. One model, the Olami-Feder-Christensen automaton, has recurrence statistics consistent with regional seismicity for a certain range of that model's conservation parameter. For conservation parameters in this range, the event size statistics are also consistent with regional seismicity. Models whose dynamics are dominated by characteristic earthquakes do not appear to display universality of recurrence statistics.
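Corral's analysis rescales each catalog's recurrence intervals by their mean and examines the distribution of the resulting dimensionless intervals: a Poisson process gives an exponential (gamma shape near 1), while the universal law is a modified gamma. A minimal sketch on a synthetic Poisson catalog:

```python
# Minimal sketch of the Corral (2004)-style analysis: rescale recurrence
# intervals by their mean and examine the distribution of the dimensionless
# intervals. A Poisson process gives an exponential (gamma shape ~ 1);
# the universal law is a modified gamma. Synthetic catalog, for illustration.
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)
event_times = np.cumsum(rng.exponential(scale=10.0, size=5000))  # Poisson toy

tau = np.diff(event_times)
theta = tau / tau.mean()                 # dimensionless recurrence intervals

shape, _, _ = stats.gamma.fit(theta, floc=0.0)
print(f"fitted gamma shape: {shape:.2f} (1.0 = exponential / Poisson)")
```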
Abstract:
This paper describes how modern machine learning techniques can be used in conjunction with statistical methods to forecast short-term movements in exchange rates, producing models suitable for use in trading. It compares the results achieved by two different techniques and shows how they can be used in a complementary fashion. The paper draws on experience of both inter-day and intra-day forecasting from earlier studies conducted by Logica, and on the Chemical Bank Quantitative Research and Trading (QRT) group's experience in developing trading models.