965 results for Cure rate models


Relevance: 30.00%

Abstract:

In the context of cell signaling, kinetic proofreading was introduced to explain how cells can discriminate among ligands based on a kinetic parameter, the ligand-receptor dissociation rate constant. In the kinetic proofreading model of cell signaling, responses occur only when a bound receptor undergoes a complete series of modifications. If the ligand dissociates prematurely, the receptor returns to its basal state and signaling is frustrated. We extend the model to deal with systems where aggregation of receptors is essential to signal transduction, and present a version of the model for systems where signaling depends on an extrinsic kinase. We also investigate the kinetics of signaling molecules, “messengers,” that are generated by aggregated receptors but do not remain associated with the receptor complex. We show that the extended model predicts modes of signaling that exhibit kinetic discrimination for some range of parameters but for other parameter values show little or no discrimination and thus escape kinetic proofreading. We compare model predictions with experimental data.
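As a back-of-the-envelope illustration of the basic proofreading scheme summarized above (not the extended aggregation or messenger models of the paper), the probability that a bound receptor completes all N modification steps before the ligand dissociates can be sketched as follows; the rate constants are hypothetical values chosen only to show the effect.

```python
def completion_probability(k_p, k_off, n_steps):
    """Probability that a bound receptor completes n_steps modifications
    (each occurring at rate k_p) before the ligand dissociates (rate k_off).
    Each step is a race between modification and dissociation, so
    P = (k_p / (k_p + k_off)) ** n_steps."""
    return (k_p / (k_p + k_off)) ** n_steps

# Two hypothetical ligands that differ only in their dissociation rate.
k_p = 1.0            # modification rate, 1/s (assumed)
k_off_strong = 0.1   # slowly dissociating ligand, 1/s (assumed)
k_off_weak = 1.0     # rapidly dissociating ligand, 1/s (assumed)

for n in (1, 4, 8):
    p_strong = completion_probability(k_p, k_off_strong, n)
    p_weak = completion_probability(k_p, k_off_weak, n)
    print(f"N={n}: P(strong)={p_strong:.3f}  P(weak)={p_weak:.3f}  "
          f"ratio={p_strong / p_weak:.0f}")
```

The discrimination ratio grows geometrically with the number of required modifications, which is the essence of kinetic proofreading; signaling species that decouple from the receptor complex early, such as the "messengers" discussed above, effectively see a small N and can therefore escape this discrimination.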

Relevance: 30.00%

Abstract:

Retinitis pigmentosa (RP) is a group of inherited blinding diseases caused by mutations in multiple genes including RDS. RDS encodes rds/peripherin (rds), a 36-kDa glycoprotein in the rims of rod and cone outer-segment (OS) discs. Rom1 is related to rds with similar membrane topology and the identical distribution in OS. In contrast to RDS, no mutations in ROM1 alone have been associated with retinal disease. However, an unusual digenic form of RP has been described. Affected individuals in several families were doubly heterozygous for a mutation in RDS causing a leucine 185 to proline substitution in rds (L185P) and a null mutation in ROM1. Neither mutation alone caused clinical abnormalities. Here, we generated transgenic/knockout mice that duplicate the amino acid substitutions and predicted levels of rds and rom1 in patients with RDS-mediated digenic and dominant RP. Photoreceptor degeneration in the mouse model of digenic RP was faster than in the wild-type and monogenic controls by histological, electroretinographic, and biochemical analysis. We observed a positive correlation between the rate of photoreceptor loss and the extent of OS disorganization in mice of several genotypes. Photoreceptor degeneration in RDS-mediated RP appears to be caused by a simple deficiency of rds and rom1. The critical threshold for the combined abundance of rds and rom1 is ≈60% of wild type. Below this value, the extent of OS disorganization results in clinically significant photoreceptor degeneration.

Relevance: 30.00%

Abstract:

Recent experiments have measured the rate of replication of DNA catalyzed by a single enzyme moving along a stretched template strand. The dependence on tension was interpreted as evidence that T7 and related DNA polymerases convert two (n = 2) or more single-stranded template bases to double helix geometry in the polymerization site during each catalytic cycle. However, we find structural data on the T7 enzyme–template complex indicate n = 1. We also present a model for the “tuning” of replication rate by mechanical tension. This model considers only local interactions in the neighborhood of the enzyme, unlike previous models that use stretching curves for the entire polymer chain. Our results, with n = 1, reconcile force-dependent replication rate studies with structural data on DNA polymerase complexes.
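To see how the number n of template bases converted per catalytic cycle enters the force dependence, a generic Bell/Arrhenius-type suppression factor can be sketched as below. This is an illustration only, not the local-interaction model of the paper; the zero-force rate k0 and the extension change per base delta_x are assumed values.

```python
import numpy as np

KT = 4.1  # thermal energy at room temperature, pN*nm

def replication_rate(force_pn, n_bases, delta_x_nm, k0=100.0):
    """Generic Bell-type force dependence of the polymerization rate:
    converting n_bases of single-stranded template to double-helix geometry
    against a tension F costs roughly n*F*delta_x of work per cycle, so the
    rate is suppressed by exp(-n*F*delta_x / kT)."""
    return k0 * np.exp(-n_bases * force_pn * delta_x_nm / KT)

forces = np.array([0.0, 5.0, 10.0, 15.0])  # pN
for n in (1, 2):
    rates = replication_rate(forces, n, delta_x_nm=0.2)  # delta_x assumed
    print(f"n={n}:", np.round(rates, 1))
```

Because the exponent is proportional to n, fits of the logarithm of the rate against tension are sensitive to whether one or two bases change geometry per cycle, which is why the structural evidence for n = 1 matters when interpreting the pulling data.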

Relevance: 30.00%

Abstract:

The friction of rocks in the laboratory is a function of time, velocity of sliding, and displacement. Although the processes responsible for these dependencies are unknown, constitutive equations have been developed that do a reasonable job of describing the laboratory behavior. These constitutive laws have been used to create a model of earthquakes at Parkfield, CA, by using boundary conditions appropriate for the section of the fault that slips in magnitude 6 earthquakes every 20-30 years. The behavior of this model prior to the earthquakes is investigated to determine whether or not the model earthquakes could be predicted in the real world by using realistic instruments and instrument locations. Premonitory slip does occur in the model, but it is relatively restricted in time and space and detecting it from the surface may be difficult. The magnitude of the strain rate at the earth's surface due to this accelerating slip seems lower than the detectability limit of instruments in the presence of earth noise. Although not specifically modeled, microseismicity related to the accelerating creep and to creep events in the model should be detectable. In fact the logarithm of the moment rate on the hypocentral cell of the fault due to slip increases linearly with minus the logarithm of the time to the earthquake. This could conceivably be used to determine when the earthquake was going to occur. An unresolved question is whether this pattern of accelerating slip could be recognized from the microseismicity, given the discrete nature of seismic events. Nevertheless, the model results suggest that the most likely solution to earthquake prediction is to look for a pattern of acceleration in microseismicity and thereby identify the microearthquakes as foreshocks.
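The time, velocity, and displacement dependence of laboratory friction mentioned above is commonly described by rate-and-state constitutive laws of the Dieterich-Ruina type; a minimal sketch of one standard form, with the ageing state-evolution law and illustrative parameter values (the abstract does not give the parameters used in the Parkfield model), is shown below.

```python
import numpy as np

def rate_state_friction(v, theta, mu0=0.6, a=0.010, b=0.015,
                        v0=1e-6, d_c=1e-5):
    """Dieterich-Ruina rate-and-state friction coefficient:
    mu = mu0 + a*ln(V/V0) + b*ln(V0*theta/Dc)."""
    return mu0 + a * np.log(v / v0) + b * np.log(v0 * theta / d_c)

def theta_dot(v, theta, d_c=1e-5):
    """Ageing (Dieterich) state evolution: d(theta)/dt = 1 - V*theta/Dc."""
    return 1.0 - v * theta / d_c

# At steady state theta_ss = Dc / V, so mu_ss = mu0 + (a - b)*ln(V/V0);
# with a < b the fault is velocity weakening, the condition needed for
# stick-slip (earthquake-like) behaviour in such models.
for v in (1e-7, 1e-6, 1e-5):  # illustrative sliding velocities, m/s
    theta_ss = 1e-5 / v
    print(f"V={v:.0e} m/s  mu_ss={rate_state_friction(v, theta_ss):.4f}")
```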

Relevance: 30.00%

Abstract:

In recent years VAR models have become the main econometric tool for testing whether a relationship between variables can exist and for evaluating the effects of economic policies. This thesis studies three different identification approaches starting from reduced-form VAR models (including the sample period, the set of endogenous variables, and the deterministic terms). We use the Granger causality test within VAR models to verify the ability of one variable to predict another; in the case of cointegration we use VECM models to jointly estimate the long-run and short-run coefficients; and in the case of small data sets and overfitting problems we use Bayesian VAR models with impulse response functions and variance decompositions to analyse the effect of shocks on macroeconomic variables. To this end, the empirical studies are carried out using specific time-series data and formulating different hypotheses. Three VAR models were used: first, to study monetary policy decisions and discriminate among the various post-Keynesian theories of monetary policy, in particular the so-called "solvency rule" (Brancaccio and Fontana 2013, 2015) and the nominal GDP rule in the Euro Area (paper 1); second, to extend the evidence on the money endogeneity hypothesis by evaluating the effects of bank securitization on the monetary policy transmission mechanism in the United States (paper 2); third, to evaluate the effects of ageing on health expenditure in Italy in terms of economic policy implications (paper 3). The thesis is introduced by Chapter 1, which outlines the context, motivation, and aim of this research, while the structure and summary, as well as the main results, are described in the remaining chapters. Chapter 2 examines, using a first-difference VAR model with quarterly Euro Area data, whether monetary policy decisions can be interpreted in terms of a "monetary policy rule", with specific reference to the so-called nominal GDP targeting rule (McCallum 1988; Hall and Mankiw 1994; Woodford 2012). The results highlight a causal relationship running from the gap between the growth rates of nominal GDP and target GDP to changes in the three-month market interest rate. The same analysis does not appear to confirm the existence of a significant causal relationship in the opposite direction, from changes in the market interest rate to the gap between the growth rates of nominal GDP and target GDP. Similar results were obtained by replacing the market interest rate with the ECB refinancing rate. This confirmation of only one of the two directions of causality does not support an interpretation of monetary policy based on the nominal GDP targeting rule and raises doubts, in more general terms, about the applicability of the Taylor rule and of all conventional monetary policy rules to the case in question. The results appear instead to be more in line with other possible approaches, such as those based on certain post-Keynesian and Marxist analyses of monetary theory, and more particularly the so-called "solvency rule" (Brancaccio and Fontana 2013, 2015).
These lines of research contest the simplistic thesis that the scope of monetary policy is the stabilization of inflation, real GDP, or nominal income around a "natural" equilibrium level. Rather, they suggest that central banks actually pursue a more complex aim, namely the regulation of the financial system, with particular reference to the relationships between creditors and debtors and the relative solvency of economic units. Chapter 3 analyses the supply of loans, considering the endogeneity of money arising from banks' securitization activity over the period 1999-2012. Although much of the literature investigates the endogeneity of the money supply, this approach has rarely been adopted to investigate money endogeneity in the short and long run with a study of the United States during the two main crises: the bursting of the dot-com bubble (1998-1999) and the sub-prime mortgage crisis (2008-2009). In particular, we consider the effects of financial innovation on the lending channel, using the loan series adjusted for securitization in order to verify whether the American banking system is encouraged to seek cheaper sources of funding, such as securitization, under a restrictive monetary policy (Altunbas et al., 2009). The analysis is based on the monetary aggregates M1 and M2. Using VECM models, we examine a long-run relationship between the variables in levels and assess the effects of the money supply by analysing how much monetary policy affects short-run deviations from the long-run relationship. The results show that securitization influences the impact of loans on M1 and M2. This implies that the money supply is endogenous, confirming the structuralist approach and highlighting that economic agents are motivated to increase securitization as a precautionary hedge against monetary policy shocks. Chapter 4 investigates the relationship between per capita health expenditure, per capita GDP, the ageing index, and life expectancy in Italy over the period 1990-2013, using Bayesian VAR models and annual data drawn from the OECD and Eurostat databases. The impulse response functions and the variance decomposition highlight a positive relationship: from per capita GDP to per capita health expenditure, from life expectancy to health expenditure, and from the ageing index to per capita health expenditure. The impact of ageing on health expenditure is more significant than that of the other variables. Overall, our results suggest that disabilities closely connected with ageing may be the main driver of health expenditure in the short-to-medium run. Good health-care management helps to improve patient well-being without increasing total health expenditure. However, policies that improve the health status of older people may be needed to lower per capita demand for health and social services.
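As a minimal illustration of the reduced-form VAR and Granger-causality workflow described for Chapter 2 (the actual Euro Area quarterly series are not reproduced here), the sketch below applies the statsmodels VAR implementation to simulated data; the variable names gdp_gap and rate_change are placeholders for the nominal-GDP deviation and the change in the three-month interest rate.

```python
import numpy as np
import pandas as pd
from statsmodels.tsa.api import VAR

rng = np.random.default_rng(0)

# Simulated stand-ins for the two series used in Chapter 2.
n = 120
gdp_gap = rng.normal(size=n).cumsum() * 0.1
rate_change = 0.3 * np.roll(gdp_gap, 1) + rng.normal(scale=0.5, size=n)
data = pd.DataFrame({"gdp_gap": np.diff(gdp_gap),
                     "rate_change": np.diff(rate_change)})

# Reduced-form VAR in first differences, lag order chosen by AIC.
results = VAR(data).fit(maxlags=8, ic="aic")

# Granger causality tests in both directions, as in the thesis.
print(results.test_causality("rate_change", ["gdp_gap"], kind="f").summary())
print(results.test_causality("gdp_gap", ["rate_change"], kind="f").summary())

# Impulse responses and forecast-error variance decomposition.
irf = results.irf(10)
fevd = results.fevd(10)
```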

Relevance: 30.00%

Abstract:

In recent years, fractionally differenced processes have received a great deal of attention due to their flexibility in financial applications with long memory. This paper considers a class of models generated by Gegenbauer polynomials, incorporating long memory in the stochastic volatility (SV) components in order to develop the General Long Memory SV (GLMSV) model. We examine the statistical properties of the new model, suggest using spectral likelihood estimation for long-memory processes, and investigate the finite-sample properties via Monte Carlo experiments. We apply the model to three exchange rate return series. Overall, the results of the out-of-sample forecasts show the adequacy of the new GLMSV model.
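To make the Gegenbauer long-memory component concrete, the sketch below simulates a pure Gegenbauer process (1 - 2uB + B^2)^d y_t = eps_t through its truncated MA(infinity) representation, whose weights are the Gegenbauer polynomial values C_j^(d)(u); the GLMSV model of the paper embeds such a component in the volatility equation, which is not reproduced here, and the parameter values are illustrative.

```python
import numpy as np
from scipy.special import eval_gegenbauer

def simulate_gegenbauer(n, d, u, trunc=2000, seed=0):
    """Simulate (1 - 2uB + B^2)^d y_t = eps_t through the truncated MA(inf)
    form y_t = sum_j psi_j * eps_{t-j}, where the weights psi_j = C_j^(d)(u)
    come from the generating function of the Gegenbauer polynomials."""
    rng = np.random.default_rng(seed)
    psi = eval_gegenbauer(np.arange(trunc), d, u)   # MA(inf) weights
    eps = rng.standard_normal(n + trunc)
    return np.convolve(eps, psi)[trunc:trunc + n]   # drop the burn-in part

# Long memory at the Gegenbauer frequency omega0 = arccos(u), with 0 < d < 0.5.
y = simulate_gegenbauer(2000, d=0.3, u=np.cos(np.pi / 6))
print(y[:5])
```

For u = 1 this reduces to standard fractional differencing with long memory at frequency zero; for |u| < 1 the long memory appears at the Gegenbauer frequency arccos(u), which is what the GLMSV specification exploits.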

Relevance: 30.00%

Abstract:

This letter presents a method to model propagation channels for estimation, in which the sampling scheme can be arbitrary. Additionally, the method yields accurate models, with a size that converges to the channel duration, measured in Nyquist periods. It can be viewed as an improvement on the usual discretization based on regular sampling at the Nyquist rate. The method is introduced in the context of multiple delay estimation using the MUSIC estimator, and is assessed through a numerical example.
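The letter's channel-modelling contribution is not reproduced here, but the multiple-delay MUSIC estimator it is applied to can be sketched in its standard frequency-domain form: snapshots of the channel transfer function are collected, the noise subspace is extracted from their sample covariance, and the delays are read off the peaks of the pseudospectrum. All signal parameters below are illustrative.

```python
import numpy as np
from scipy.signal import find_peaks

rng = np.random.default_rng(1)

# Illustrative two-path channel observed on a uniform frequency grid.
true_delays = np.array([0.30e-6, 0.65e-6])   # seconds
freqs = np.arange(64) * 1e6                  # 64 tones, 1 MHz spacing
n_snapshots = 200

def steering(tau):
    """Frequency-domain delay signature a(tau) = exp(-j*2*pi*f*tau)."""
    return np.exp(-2j * np.pi * np.outer(freqs, tau))

A = steering(true_delays)                    # 64 x 2 steering matrix
gains = 10.0 * (rng.standard_normal((2, n_snapshots)) +
                1j * rng.standard_normal((2, n_snapshots)))
noise = (rng.standard_normal((64, n_snapshots)) +
         1j * rng.standard_normal((64, n_snapshots)))
X = A @ gains + noise                        # observed snapshots

# MUSIC: noise subspace of the sample covariance, then a delay scan over
# the unambiguous range 1/(tone spacing) = 1 microsecond.
R = X @ X.conj().T / n_snapshots
eigval, eigvec = np.linalg.eigh(R)           # eigenvalues in ascending order
En = eigvec[:, :-2]                          # noise subspace (2 paths assumed)
tau_grid = np.arange(0.0, 1e-6, 1e-9)
proj = En.conj().T @ steering(tau_grid)
pseudo = 1.0 / np.sum(np.abs(proj) ** 2, axis=0)

peak_idx, _ = find_peaks(pseudo)
top2 = peak_idx[np.argsort(pseudo[peak_idx])[-2:]]
print("estimated delays (us):", np.sort(tau_grid[top2]) * 1e6)
```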

Relevance: 30.00%

Abstract:

This paper describes a study and analysis of surface-normal-based descriptors for 3D object recognition. Specifically, we evaluate the behaviour of the descriptors in the recognition process using virtual models of objects created with CAD software. Later, we test them in real scenes using synthetic objects created with a 3D printer from the virtual models. In both cases, the same virtual models are used in the matching process to find similarity. The difference between the two experiments lies in the type of views used in the tests. Our analysis evaluates three aspects: the effectiveness of the 3D descriptors depending on the camera viewpoint and the geometric complexity of the model, the runtime of the recognition process, and the success rate in recognizing a view of an object among the models stored in the database.
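As a rough sketch of a descriptor-based recognition step of the kind evaluated in the paper, the example below uses the Open3D library and the FPFH descriptor (one common surface-normal histogram descriptor) as a stand-in for the descriptors actually studied; the point-cloud file names are placeholders.

```python
import numpy as np
import open3d as o3d
from scipy.spatial import cKDTree

def fpfh_descriptors(pcd, normal_radius=0.01, feature_radius=0.025):
    """Estimate surface normals, then compute FPFH descriptors (33-D
    histograms built from those normals) for every point. The radii are
    scale dependent and would need tuning for a real model."""
    pcd.estimate_normals(
        o3d.geometry.KDTreeSearchParamHybrid(radius=normal_radius, max_nn=30))
    fpfh = o3d.pipelines.registration.compute_fpfh_feature(
        pcd, o3d.geometry.KDTreeSearchParamHybrid(radius=feature_radius,
                                                  max_nn=100))
    return np.asarray(fpfh.data).T           # one descriptor row per point

# Placeholder file names: a captured view and a virtual CAD model.
scene_view = o3d.io.read_point_cloud("scene_view.pcd")
cad_model = o3d.io.read_point_cloud("cad_model.pcd")

scene_desc = fpfh_descriptors(scene_view)
model_desc = fpfh_descriptors(cad_model)

# Simple similarity score: mean nearest-neighbour descriptor distance.
# Ranking several stored models by this score gives a basic recognition step.
dist, _ = cKDTree(model_desc).query(scene_desc, k=1)
print("mean descriptor distance:", dist.mean())
```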

Relevance: 30.00%

Abstract:

We investigated whether children’s inhibitory control is associated with their ability to produce irregular verb forms as well as to learn from corrective feedback following their use of an over-regularized form. Forty-eight 3.5- to 4.5-year-old children were tested on the irregular past tense and provided with adult corrective input via models of correct use or recasts of errors following ungrammatical responses. Inhibitory control was assessed with a three-item battery of tasks that required suppressing a prepotent response in favor of a non-canonical one. Results showed that inhibitory control was predictive of children’s initial production of irregular forms but not associated with their post-feedback production of irregulars. These findings show that children’s executive functioning skills may be a rate-limiting factor on their ability to produce correct forms, but might not interact with their ability to learn from input in this domain. Findings are discussed in terms of current theories of past-tense acquisition and of learning from input more broadly.

Relevance: 30.00%

Abstract:

Implant failures and postoperative complications are often associated with bone drilling. Estimation and control of the drilling parameters are critical to prevent mechanical damage to the bone tissues. For better performance of the drilling procedures, it is essential to understand the mechanical behaviour of bones that leads to their failure and, consequently, to improve the cutting conditions. This paper investigates the effect of drill speed and feed-rate on mechanical damage during drilling of solid rigid foam materials with mechanical properties similar to those of human bone. Experimental tests were conducted on biomechanical blocks instrumented with strain gauges to assess the influence of drill speed and feed-rate. A three-dimensional dynamic finite element model to predict the bone stresses, as a function of drilling conditions, drill geometry and bone model, was developed. These simulations incorporate the dynamic characteristics involved in the drilling process. An element removal scheme is taken into account and allows advanced simulation of tool penetration and material removal. Experimental and numerical results show that the stresses generated in the material tend to increase with tool penetration. A higher drill speed leads to an increase in von Mises stresses and strains in the solid rigid foams, whereas a higher feed-rate lowers the stresses and strains. The numerical normal stresses and strains are in good agreement with the experimental results. The models could be an accurate analysis tool for simulating the stress distribution in the bone during the drilling process.
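For reference, the von Mises equivalent stress reported by the strain-gauge experiments and the finite element simulations is computed from the Cauchy stress components as sketched below; the sample values are arbitrary and only illustrate the formula.

```python
import numpy as np

def von_mises(s_xx, s_yy, s_zz, t_xy, t_yz, t_zx):
    """Von Mises equivalent stress from the Cauchy stress components:
    sigma_vm = sqrt(0.5*((sxx-syy)^2 + (syy-szz)^2 + (szz-sxx)^2)
                    + 3*(txy^2 + tyz^2 + tzx^2))."""
    return np.sqrt(0.5 * ((s_xx - s_yy) ** 2 + (s_yy - s_zz) ** 2 +
                          (s_zz - s_xx) ** 2) +
                   3.0 * (t_xy ** 2 + t_yz ** 2 + t_zx ** 2))

# Arbitrary stress state in MPa, purely for illustration.
print(von_mises(12.0, -3.0, 0.5, 4.0, 1.5, 0.0))
```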

Relevance: 30.00%

Abstract:

Wood is a natural and traditional building material, as popular today as ever, and it offers several advantages. Physically, wood is strong and stiff, yet compared with other materials such as steel it is light and flexible. Wood absorbs sound very effectively and is a relatively good heat insulator. However, dry wood burns quite easily and produces a great deal of heat energy. The main disadvantage is its combustibility when exposed to fire: wood ignites at roughly 200-400°C. After fire exposure, it is necessary to determine whether the charred wooden structures are safe for future use. Design methods require the use of computer modelling to predict the fire exposure and the capacity of structures to resist those actions. In addition, large- or small-scale experimental tests are necessary to calibrate and verify the numerical models. The thermal model is essential for wood structures exposed to fire because it predicts the charring rate as a function of fire exposure. For most structural wood elements the charring rate can be obtained with simple calculations, but the situation is more complicated when the fire exposure is non-standard and when wood elements are protected with other materials. In this work, the authors present different case studies using numerical models that will help professionals analyse wood elements, and discuss the type of information needed to decide whether charred structures are adequate for further use or not. Different thermal models representing wooden cellular slabs, used in building construction for ceiling or flooring compartments, will be analysed and submitted to different fire scenarios (with the standard fire curve exposure). The same numerical models, considering insulation material inside the wooden cellular slabs, will be tested to compare and determine the fire resistance time and the charring rate.
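For orientation, the standard fire curve mentioned above and the simplified Eurocode 5 notional charring-depth rule can be written down directly, as sketched below; the charring rate beta_n = 0.8 mm/min is the EN 1995-1-2 value for solid softwood timber, while the exposure times are arbitrary examples.

```python
import numpy as np

def standard_fire_curve(t_min):
    """ISO 834 / EN 1991-1-2 standard fire gas temperature in deg C
    after t_min minutes: 20 + 345*log10(8*t + 1)."""
    return 20.0 + 345.0 * np.log10(8.0 * t_min + 1.0)

def notional_charring_depth(t_min, beta_n=0.8):
    """Simplified EN 1995-1-2 notional charring depth d_char,n = beta_n * t,
    in mm, for standard fire exposure (beta_n in mm/min)."""
    return beta_n * t_min

for t in (15, 30, 60):  # minutes of standard fire exposure
    print(f"t={t:3d} min  gas temp={standard_fire_curve(t):6.1f} C  "
          f"char depth={notional_charring_depth(t):4.1f} mm")
```

The numerical models discussed in this work refine this hand calculation for non-standard fire exposures and for protected or cellular cross-sections, where a constant charring rate no longer applies.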

Relevance: 30.00%

Abstract:

Wood is a natural and traditional building material, as popular today as ever, and it offers several advantages. Physically, wood is strong and stiff, yet compared with other materials such as steel it is light and flexible. Wood absorbs sound very effectively and is a relatively good heat insulator. However, dry wood burns quite easily and produces a great deal of heat energy. The main disadvantage is its combustibility when exposed to fire: wood ignites at roughly 200-400°C. After fire exposure, it is necessary to determine whether the charred wooden structures are safe for future use. Design methods require the use of computer modelling to predict the fire exposure and the capacity of structures to resist those actions. Also, large- or small-scale experimental tests are necessary to calibrate and verify the numerical models. The thermal model is essential for wood structures exposed to fire because it predicts the charring rate as a function of fire exposure. For most structural wood elements the charring rate can be obtained with simple calculations, but the situation is more complicated when the fire exposure is non-standard and when wood elements are protected with other materials.

Relevance: 30.00%

Abstract:

This paper employs fifteen dynamic macroeconomic models maintained within the European System of Central Banks to assess the size of fiscal multipliers in European countries. Using a set of common simulations, we consider transitory and permanent shocks to government expenditures and different taxes. We investigate how the baseline multipliers change when monetary policy is transitorily constrained by the zero nominal interest rate bound, certain crisis-related structural features of the economy such as the share of liquidity-constrained households change, and the endogenous fiscal rule that ensures fiscal sustainability in the long run is specified in terms of labour income taxes instead of lump-sum taxes.
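The multipliers compared across the models are typically summarized as impact and cumulative (present-value) multipliers; a small sketch of those two standard definitions, applied to made-up impulse responses of output and government spending, is given below.

```python
import numpy as np

def impact_multiplier(dy, dg):
    """Impact multiplier: output response over spending response on impact."""
    return dy[0] / dg[0]

def cumulative_multiplier(dy, dg, r=0.01):
    """Present-value cumulative multiplier up to the horizon of the arrays:
    sum of discounted output responses over sum of discounted spending
    responses, with per-period discount rate r."""
    disc = (1.0 + r) ** -np.arange(len(dy))
    return np.sum(disc * dy) / np.sum(disc * dg)

# Made-up quarterly responses to a transitory spending shock (percent of GDP).
dy = np.array([0.9, 0.7, 0.5, 0.3, 0.15, 0.05])
dg = np.array([1.0, 0.8, 0.5, 0.2, 0.0, 0.0])

print("impact multiplier:    ", round(impact_multiplier(dy, dg), 2))
print("cumulative multiplier:", round(cumulative_multiplier(dy, dg), 2))
```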

Relevance: 30.00%

Abstract:

We performed a quantitative comparison of brittle thrust wedge experiments to evaluate the variability among analogue models and to appraise the reproducibility and limits of model interpretation. Fifteen analogue modeling laboratories participated in this benchmark initiative. Each laboratory received a shipment of the same type of quartz and corundum sand, and all laboratories adhered to a stringent model building protocol and used the same type of foil to cover base and sidewalls of the sandbox. Sieve structure, sifting height, filling rate, and details on off-scraping of excess sand followed prescribed procedures. Our analogue benchmark shows that even for simple plane-strain experiments with prescribed stringent model construction techniques, quantitative model results show variability, most notably for surface slope, thrust spacing, and the number of forward thrusts and backthrusts. One of the sources of the variability in model results is related to slight variations in how sand is deposited in the sandbox. Small changes in sifting height, sifting rate, and scraping will result in slightly heterogeneous material bulk densities, which will affect the mechanical properties of the sand, and will result in lateral and vertical differences in peak and boundary friction angles, as well as cohesion values once the model is constructed. Initial variations in basal friction are inferred to play the most important role in causing model variability. Our comparison shows that the human factor plays a decisive role, and even when one modeler repeats the same experiment, quantitative model results still show variability. Our observations highlight the limits of up-scaling quantitative analogue model results to nature and of making comparisons with numerical models. The frictional behavior of sand is highly sensitive to small variations in material state or experimental set-up, and hence, it will remain difficult to scale quantitative results such as number of thrusts, thrust spacing, and pop-up width from model to nature.

Relevance: 30.00%

Abstract:

Bulk sediment accumulation rates and carbonate and carbonate-free accumulation rates corrected for tectonic tilting have been calculated for Leg 78A sediments. These rates are uniformly low, ranging from 0.1 to 6.8 g/(cm**2 x 10**3 yr.), reflecting the pelagic-hemipelagic nature of all the sediments drilled in the northern Lesser Antilles forearc. Rates calculated for Sites 541 and 542 [0.6-6.8 g/(cm**2 x 10**3 yr.)], located on the lower slope of the accretionary prism, are significantly greater than the Neogene rates calculated for oceanic reference Site 543 [0.1-2.4 g/(cm**2 x 10**3)]. This difference could be the result of (1) tectonic thickening of accretionary prism sediments due to folding, small-scale faulting, and layer-parallel shortening; (2) deposition in shallower water farther above the CCD (carbonate compensation depth) resulting in preservation of a greater percentage of calcareous microfossils; or (3) a greater percentage of foraminiferal sediment gravity flows. Terrigenous turbidites are not documented in the Leg 78A area because of (1) great distance from South American sources; (2) damming effects of east-west trending tectonic elements; and (3) location on the Tiburon Rise (Site 543). This lack of terrigenous material, characteristic of intraoceanic convergent margins, suggests that published sedimentation models for active continental convergent margins with abundant terrigenous influxes are not applicable to intraoceanic convergent margin settings.
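The bulk, carbonate, and carbonate-free rates quoted above follow the standard mass-accumulation-rate bookkeeping, sketched below with made-up input values; the Leg 78A measurements themselves are not reproduced here.

```python
def accumulation_rates(lsr_cm_per_kyr, dry_bulk_density_g_cm3, carbonate_pct):
    """Standard mass-accumulation-rate calculation:
    bulk MAR            = linear sedimentation rate * dry bulk density
                          [g/(cm^2 * 10^3 yr)]
    carbonate MAR       = bulk MAR * %CaCO3 / 100
    carbonate-free MAR  = bulk MAR * (1 - %CaCO3 / 100)."""
    bulk = lsr_cm_per_kyr * dry_bulk_density_g_cm3
    carb = bulk * carbonate_pct / 100.0
    return bulk, carb, bulk - carb

# Made-up pelagic values of the right order of magnitude for illustration.
bulk, carb, carb_free = accumulation_rates(1.5, 0.8, 40.0)
print(f"bulk={bulk:.2f}  carbonate={carb:.2f}  carbonate-free={carb_free:.2f} "
      "g/(cm^2 * 10^3 yr)")
```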