Abstract:
Positive-stranded RNA viruses synthesize their RNA in membrane-bound organelles, but it is not clear how this benefits the virus or the host. For coronaviruses, these organelles take the form of double-membrane vesicles (DMVs) interconnected by a convoluted membrane network. We used electron microscopy to identify murine coronaviruses with mutations in nsp3 and nsp14 that replicated normally while producing only half the normal number of DMVs. Viruses with mutations in nsp5 and nsp16 produced small DMVs but also replicated normally. Quantitative RT-PCR confirmed that the most strongly affected of these, the nsp3 mutant, produced more viral RNA than wild-type virus. Competitive growth assays were carried out in both continuous and primary cells to better understand the contribution of DMVs to viral fitness. Surprisingly, several viruses that produced fewer or smaller DMVs showed higher relative fitness than wild-type virus, suggesting that larger and more numerous DMVs do not necessarily confer a competitive advantage in primary or continuous cell culture. This demonstrates directly, for the first time, that replication and organelle formation can, at least in part, be studied separately during positive-stranded RNA virus infection.
Abstract:
People are often exposed to more information than they can actually remember. Despite this frequent form of information overload, little is known about how much information people choose to remember. Using a novel “stop” paradigm, the current research examined whether and how people choose to stop receiving new, potentially overwhelming, information with the intent of maximizing memory performance. Participants were presented with a long list of items and were rewarded for the number of words correctly remembered in a subsequent free-recall test. Critically, participants in a stop condition were given the option to stop the presentation of the remaining words at any time during the list, whereas participants in a control condition were presented with all items. Across five experiments, we found that participants tended to stop the presentation of the items in order to maximize the number of recalled items, but this decision ironically led to poorer memory performance relative to the control group. This pattern held even after controlling for possible confounding factors (e.g., task demands). The results indicate a general, false belief that we can remember more items if we restrict the quantity of learning materials. These findings suggest that people have an incomplete understanding of how we remember excessive amounts of information.
Abstract:
The C-type lectin receptor CLEC-2 is expressed primarily on the surface of platelets, where it is present as a dimer, and is found at low levels on a subpopulation of other hematopoietic cells, including mouse neutrophils [1–4]. Clustering of CLEC-2 by the snake venom toxin rhodocytin, by specific antibodies or by its endogenous ligand, podoplanin, elicits powerful activation of platelets through a pathway similar to that used by the collagen receptor glycoprotein VI (GPVI) [4–6]. The cytosolic tail of CLEC-2 contains a conserved YxxL sequence preceded by three upstream acidic amino acid residues, which together form a novel motif known as a hemITAM. Ligand engagement induces tyrosine phosphorylation of the hemITAM sequence, providing docking sites for the tandem SH2 domains of the tyrosine kinase Syk across a CLEC-2 receptor dimer [3]. Tyrosine phosphorylation of Syk by Src family kinases and through autophosphorylation stimulates a downstream signaling cascade that culminates in activation of phospholipase Cγ2 (PLCγ2) [4,6]. Recently, CLEC-2 has been proposed to play a major role in supporting activation of platelets at arteriolar rates of flow [1]. Injection of a CLEC-2 antibody into mice causes a sustained depletion of the C-type lectin receptor from the platelet surface [1]. The CLEC-2-depleted platelets were unresponsive to rhodocytin but underwent normal aggregation and secretion responses after stimulation of other platelet receptors, including GPVI [1]. In contrast, there was a marked decrease in aggregate formation relative to controls when CLEC-2-depleted blood was flowed over collagen at arteriolar rates of shear (1000 s−1 and 1700 s−1) [1]. Furthermore, antibody treatment significantly increased tail bleeding times, and treated mice were unable to occlude their vessels after ferric chloride injury [1]. These data provide evidence for a critical role for CLEC-2 in supporting platelet aggregation at arteriolar rates of flow. The underlying mechanism is unclear, as platelets do not express podoplanin, the only known endogenous ligand of CLEC-2. In the present study, we have investigated the role of CLEC-2 in platelet aggregation and thrombus formation using platelets from a novel mutant mouse model that lacks functional CLEC-2.
Abstract:
The Eph receptor tyrosine kinases interact with their ephrin ligands on adjacent cells to facilitate contact-dependent cell communication. Ephrin B ligands are expressed on T cells and have been suggested to act as co-stimulatory molecules during T cell activation. There are no detailed reports of the expression and modulation of EphB receptors on dendritic cells, the main antigen-presenting cells that interact with T cells. Here we show that mouse splenic dendritic cells (DCs) and bone-marrow-derived DCs (BMDCs) express EphB2, a member of the EphB family. EphB2 expression is modulated by ligation of TLR4 and TLR9 and also by interaction with ephrin B ligands. Co-localization of EphB2 with MHC-II is also consistent with a potential role in T cell activation. However, BMDCs derived from EphB2-deficient mice were able to present antigen in the context of MHC-II and to produce T cell-activating cytokines to the same extent as intact DCs. Collectively, our data suggest that EphB2 may contribute to DC responses but is not required for T cell activation. This may be because DCs express other members of the EphB receptor family, EphB3, EphB4 and EphB6, all of which can interact with ephrin B ligands, or because EphB2 plays a role in another aspect of DC biology, such as migration.
Abstract:
The topography of many floodplains in the developed world has now been surveyed with high-resolution sensors such as airborne LiDAR (Light Detection and Ranging), giving accurate Digital Elevation Models (DEMs) that facilitate accurate flood inundation modelling. This is not always the case for remote rivers in developing countries. However, the accuracy of DEMs produced for modelling studies on such rivers should be enhanced in the near future by the high-resolution TanDEM-X WorldDEM. In a parallel development, increasing use is now being made of flood extents derived from high-resolution Synthetic Aperture Radar (SAR) images for calibrating, validating and assimilating observations into flood inundation models in order to improve them. This paper discusses an additional use of SAR flood extents, namely to improve the accuracy of the TanDEM-X DEM in the floodplain covered by the flood extents, thereby permanently improving this DEM for future flood modelling and other studies. The method is based on the fact that for larger rivers the water elevation generally changes only slowly along a reach, so that the boundary of the flood extent (the waterline) can be regarded locally as a quasi-contour. As a result, the heights of adjacent pixels along a small section of waterline can be regarded as samples with a common population mean. The height of the central pixel in the section can then be replaced with the average of these heights, giving a more accurate estimate. While this reduces the height errors along a waterline, the waterline is only a linear feature in a two-dimensional space. However, improvements to the DEM heights between adjacent pairs of waterlines can also be made, because DEM heights enclosed by the higher waterline of a pair must in general be no higher than the corrected heights along the higher waterline, whereas DEM heights not enclosed by the lower waterline must in general be no lower than the corrected heights along the lower waterline. In addition, DEM heights between the higher and lower waterlines can be assigned smaller errors because of the reduced errors on the corrected waterline heights. The method was tested on a section of the TanDEM-X Intermediate DEM (IDEM) covering an 11 km reach of the Warwickshire Avon, England. Flood extents from four COSMO-SkyMed images were available at various stages of a flood in November 2012, and a LiDAR DEM was available for validation. In the area covered by the flood extents, the original IDEM heights had a mean difference from the corresponding LiDAR heights of 0.5 m with a standard deviation of 2.0 m, while the corrected heights had a mean difference of 0.3 m with a standard deviation of 1.2 m. These figures show that significant reductions in IDEM height bias and error can be made using the method, the corrected error being only 60% of the original. Even if only a single SAR image obtained near the peak of the flood was used, the corrected error was only 66% of the original. The method should also be capable of improving the final TanDEM-X DEM and other DEMs, and may also be of use with data from the SWOT (Surface Water and Ocean Topography) satellite.
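The waterline-correction step described above lends itself to a compact implementation. The following Python sketch is a minimal illustration, not the authors' code: it assumes a waterline has already been extracted as an ordered 1-D array of DEM heights, uses a simple odd-width moving window as the "small section", and, for the between-waterline constraints, represents each corrected waterline by a single representative height. All function names are hypothetical.

```python
import numpy as np

def correct_waterline_heights(heights, window=5):
    """Replace each waterline pixel height with the mean over a small
    section of neighbouring pixels, which the quasi-contour assumption
    treats as samples sharing a common population mean."""
    if window % 2 == 0:
        raise ValueError("window must be odd")
    half = window // 2
    padded = np.pad(heights, half, mode="edge")
    kernel = np.ones(window) / window
    return np.convolve(padded, kernel, mode="valid")

def constrain_between_waterlines(dem, inside_high, z_high, inside_low, z_low):
    """Enforce the ordering constraints between a pair of waterlines:
    pixels enclosed by the higher waterline can be no higher than its
    corrected height, and pixels outside the lower waterline can be no
    lower than its corrected height."""
    out = dem.copy()
    out[inside_high] = np.minimum(out[inside_high], z_high)
    out[~inside_low] = np.maximum(out[~inside_low], z_low)
    return out

# Example: smooth a noisy quasi-contour of waterline heights.
waterline = 10.0 + 0.5 * np.random.default_rng(1).standard_normal(200)
corrected = correct_waterline_heights(waterline, window=9)
print(waterline.std(), corrected.std())  # averaging shrinks the scatter
```

If the height errors within a section are roughly independent with common standard deviation sigma, averaging N pixels shrinks the error of the central estimate towards sigma/sqrt(N), which is the statistical basis for the reported reduction in waterline height error.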
Abstract:
This paper integrates research on child simultaneous bilingual (2L1) acquisition more directly into the heritage language (HL) acquisition literature. The 2L1 literature mostly focuses on development in childhood, whereas heritage speakers (HSs) are often tested at an endstate in adulthood. However, insights from child 2L1 acquisition must be considered in HL acquisition theorizing precisely because many HSs are the adult outcomes of child 2L1 acquisition. Data from 2L1 acquisition raise serious questions for the construct of incomplete acquisition, a term broadly used in HL acquisition studies to describe almost any difference HSs display from baseline controls (usually monolinguals). We offer an epistemological discussion related to incomplete acquisition, highlighting the descriptive and theoretical inaccuracy of the term. We focus our discussion on two of several possible causal factors that contribute to variable competence outcomes in adult HSs: input (e.g., Sorace, 2004; Rothman, 2007; Pascual y Cabo & Rothman, 2012) and formal instruction in the HL (e.g., Kupisch, 2013; Kupisch et al., 2014). We conclude by offering alternative terminology for HS outcomes.
Abstract:
We suggest that climate variability in Europe for the “pre-industrial” period 1500–1900 is fundamentally a consequence of internal fluctuations of the climate system. This is because a model simulation using fixed pre-industrial forcing is consistent, in several important respects, with recent observational reconstructions at high temporal resolution. This includes extreme warm and cold seasonal events as well as different measures of the decadal to multi-decadal variance. Significant trends of 50-year duration can be seen in the model simulation. While the global temperature is highly correlated with ENSO (El Niño–Southern Oscillation), European seasonal temperature is only weakly correlated with the global temperature, broadly consistent with data from ERA-40 reanalyses. Seasonal temperature anomalies of the European land area are largely controlled by the position of the North Atlantic storm tracks. We believe the result is highly relevant for the interpretation of past observational records, as it suggests that the effect of external forcing is of secondary importance. The suggestion in some previous studies that variations in solar irradiance were a credible cause of climate variations during the last centuries is presumably due to the models used in those studies having underestimated the internal variability of the climate. The general interpretation from this study is that the past climate is just one of many possible realizations and thus in many respects not reproducible in its time evolution with a general circulation model, but only reproducible in a statistical sense.
Abstract:
This paper discusses the risks of a shutdown of the thermohaline circulation (THC) for the climate system, for ecosystems in and around the North Atlantic, and for fisheries and agriculture by way of an Integrated Assessment. The climate model simulations are based on greenhouse gas scenarios for the 21st century and beyond. A shutdown of the THC, complete by 2150, is triggered if increased freshwater input from inland ice melt or enhanced runoff is assumed. The shutdown retards the greenhouse gas-induced atmospheric warming trend in the Northern Hemisphere but does not lead to a persistent net cooling. Due to the simulated THC shutdown, the sea level at North Atlantic shores rises by up to 80 cm by 2150, in addition to the global sea level rise. This could be a serious impact requiring expensive coastal protection measures. A reduction in marine net primary productivity is associated with the impacts of warming rather than with a THC shutdown. Regional shifts in the currents of the Nordic Seas could strongly deteriorate the survival chances of cod larvae and juveniles, which could render cod fisheries unprofitable by the end of the 21st century. While regional socioeconomic impacts might be large, damages would probably be small relative to the respective gross national products. Terrestrial ecosystem productivity is affected much more by fertilization from the increasing CO2 concentration than by a THC shutdown. In addition, the level of warming in the 22nd to 24th centuries substantially favours crop production in northern Europe, whether or not the THC shuts down. CO2 emission corridors aimed at limiting the risk of a THC breakdown to 10% or less are narrow, requiring departure from business-as-usual within the next few decades. The uncertainty about THC risks remains high; this is evident in the model analyses as well as in the expert views that were elicited. The overview of results presented here is the outcome of the Integrated Assessment project INTEGRATION.
Abstract:
Carbonate rocks are important hydrocarbon reservoir rocks with complex textures and petrophysical properties (porosity and permeability), mainly resulting from various diagenetic processes (compaction, dissolution, precipitation, cementation, etc.). These complexities make prediction of reservoir characteristics (e.g. porosity and permeability) from seismic properties very difficult. To explore the relationship between the seismic, petrophysical and geological properties, ultrasonic compressional- and shear-wave velocity measurements were made under a simulated in situ condition of pressure (50 MPa hydrostatic effective pressure) at frequencies of approximately 0.85 MHz and 0.7 MHz, respectively, using a pulse-echo method. The measurements were made in both vacuum-dry and fully saturated conditions in oolitic limestones of the Great Oolite Formation of southern England. Some of the rocks were fully saturated with oil. The acoustic measurements were supplemented by porosity and permeability measurements, petrological and pore-geometry studies of resin-impregnated polished thin sections, X-ray diffraction analyses and scanning electron microscope studies to investigate submicroscopic textures and micropores. It is shown that the compressional- and shear-wave velocities (Vp and Vs, respectively) decrease with increasing porosity, and that Vp decreases approximately twice as fast as Vs. Systematic differences in the pore structures (e.g. the aspect ratio) of the limestones produce large residuals in the velocity versus porosity relationship. It is demonstrated that this relationship can be improved by removing the pore-structure-dependent variations from the residuals. The introduction of water into the pore space decreases the shear moduli of the rocks by about 2 GPa, suggesting that a fluid/matrix interaction at grain contacts reduces the rigidity. The predicted Biot-Gassmann velocities are greater than the measured velocities because of this rock-fluid interaction, which is not accounted for in the Biot-Gassmann velocity models, and because of velocity dispersion due to a local-flow mechanism. The velocities predicted by the Raymer and time-average relationships overestimate the measured velocities even more than the Biot model does.
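As a concrete reference for the fluid-substitution comparison above, the short Python sketch below computes Gassmann-saturated velocities from dry-rock moduli. The moduli and density are illustrative round numbers for a calcite-framed oolitic limestone, not values from the paper. Note that Gassmann theory leaves the shear modulus unchanged by the pore fluid, which is precisely the assumption contradicted by the measured ~2 GPa drop in shear modulus.

```python
import numpy as np

def gassmann_saturated_moduli(k_dry, mu_dry, k_min, k_fluid, phi):
    """Gassmann fluid substitution: saturated bulk modulus from the
    dry-rock bulk modulus. The shear modulus is returned unchanged,
    as Gassmann theory assumes the pore fluid does not affect rigidity."""
    b = 1.0 - k_dry / k_min  # Biot coefficient
    k_sat = k_dry + b**2 / (phi / k_fluid
                            + (1.0 - phi) / k_min
                            - k_dry / k_min**2)
    return k_sat, mu_dry

def velocities(k, mu, rho):
    """Isotropic P- and S-wave velocities from elastic moduli and density."""
    vp = np.sqrt((k + 4.0 * mu / 3.0) / rho)
    vs = np.sqrt(mu / rho)
    return vp, vs

# Illustrative values: moduli in Pa, density in kg/m^3 (not from the paper).
k_sat, mu_sat = gassmann_saturated_moduli(
    k_dry=30e9, mu_dry=25e9,
    k_min=77e9,      # calcite mineral modulus
    k_fluid=2.25e9,  # water
    phi=0.15)
vp, vs = velocities(k_sat, mu_sat, rho=2500.0)
print(f"Gassmann-predicted Vp = {vp:.0f} m/s, Vs = {vs:.0f} m/s")
```

A measured saturated shear modulus roughly 2 GPa below mu_dry would lower both Vs and Vp relative to these predictions, producing exactly the sign of misfit the abstract reports.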
Abstract:
For the very large nonlinear dynamical systems that arise in a wide range of physical, biological and environmental problems, the data needed to initialize a numerical forecasting model are seldom available. To generate accurate estimates of the expected states of the system, both current and future, the technique of ‘data assimilation’ is used to combine the numerical model predictions with observations of the system measured over time. Assimilation of data is an inverse problem that for very large-scale systems is generally ill-posed. In four-dimensional variational assimilation schemes, the dynamical model equations provide constraints that act to spread information into data-sparse regions, enabling the state of the system to be reconstructed accurately. The mechanism for this is not well understood. Singular value decomposition techniques are applied here to the observability matrix of the system in order to analyse the critical features of this process. Simplified models are used to demonstrate how information is propagated from observed regions into unobserved areas. The impact of the size of the observational noise and of the temporal position of the observations is examined. The signal-to-noise ratio needed to extract the most information from the observations is estimated using Tikhonov regularization theory.
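To make the observability analysis concrete, here is a small Python sketch that assembles the observability matrix for a toy linear system and inspects its singular values. It is a schematic stand-in for the simplified models referred to above: the circular-advection dynamics, the observation operator and the noise level are all invented for illustration.

```python
import numpy as np

# Toy linear dynamics: circular advection of a 1-D periodic state by one
# grid cell per time step (x_{k+1} = M x_k).
n = 20
M = np.roll(np.eye(n), 1, axis=0)

# Observations: only the first three grid points are measured (y_k = H x_k),
# leaving the rest of the domain data-sparse.
H = np.eye(n)[:3]

# Observability matrix over an assimilation window of T steps:
# rows are H, HM, HM^2, ..., HM^(T-1).
T = 10
blocks, Mk = [], np.eye(n)
for _ in range(T):
    blocks.append(H @ Mk)
    Mk = M @ Mk
O = np.vstack(blocks)

# The singular values of O show which directions in state space the
# observations constrain; directions with singular values at or below
# the noise level are effectively unobservable, and reconstructing them
# requires regularization (e.g. Tikhonov).
s = np.linalg.svd(O, compute_uv=False)
noise_std = 0.1
print("state dimension:", n)
print("modes constrained above the noise:", int(np.sum(s > noise_std)))
```

Because the dynamics sweep unobserved grid points past the observed ones during the window, the rank of O exceeds the number of instantaneously observed components; in this toy setting that is the information-spreading mechanism the paper analyses.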
Abstract:
Fixed transaction costs that prohibit exchange engender bias in supply analysis due to censoring of the sample observations. The associated bias in conventional regression procedures applied to censored data, and the construction of robust methods for mitigating that bias, have been preoccupations of applied economists since Tobin [Econometrica 26 (1958) 24]. This literature assumes that the true point of censoring in the data is zero; when this is not the case, that assumption imparts a bias to parameter estimates of the censored regression model. We conjecture that this bias can be significant; affirm this from experiments; and suggest techniques for mitigating the bias using Bayesian procedures. The bias-mitigating procedures are based on modifications of the key step that facilitates Bayesian estimation of the censored regression model; are easy to implement; work well in both small and large samples; and lead to significantly improved inference in the censored regression model. These findings are important in light of the widespread use of the zero-censored Tobit regression, and we investigate their consequences using data on milk-market participation in the Ethiopian highlands.
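For readers unfamiliar with the key step referred to above, the Python sketch below implements a generic Gibbs sampler for a left-censored regression via data augmentation: latent outcomes for censored observations are drawn from a truncated normal, after which the conditional posteriors for the coefficients and variance are standard. This is a textbook baseline under flat/Jeffreys priors, not the authors' modified procedure; the function name and defaults are invented for illustration.

```python
import numpy as np
from scipy import stats

def gibbs_tobit(y, X, c=0.0, n_iter=2000, seed=0):
    """Gibbs sampler for y_i = max(x_i' beta + e_i, c), e_i ~ N(0, s2).

    Data augmentation step: for censored observations, impute the latent
    outcome from a normal truncated above at the censoring point c.
    Flat prior on beta, Jeffreys prior on s2. Illustrative only.
    """
    rng = np.random.default_rng(seed)
    n, k = X.shape
    censored = y <= c
    beta = np.linalg.lstsq(X, y, rcond=None)[0]   # OLS starting values
    s2 = float(np.var(y - X @ beta))
    XtX_inv = np.linalg.inv(X.T @ X)
    ystar = y.astype(float)
    draws = []
    for _ in range(n_iter):
        # 1. Impute latent outcomes for censored cases (truncated normal).
        mu_c = X[censored] @ beta
        b_std = (c - mu_c) / np.sqrt(s2)          # standardized upper bound
        ystar[censored] = stats.truncnorm.rvs(
            -np.inf, b_std, loc=mu_c, scale=np.sqrt(s2), random_state=rng)
        # 2. Draw beta | ystar, s2 from its normal conditional posterior.
        beta_hat = XtX_inv @ X.T @ ystar
        beta = rng.multivariate_normal(beta_hat, s2 * XtX_inv)
        # 3. Draw s2 | ystar, beta from its inverse-gamma conditional.
        resid = ystar - X @ beta
        s2 = 1.0 / rng.gamma(n / 2.0, 2.0 / float(resid @ resid))
        draws.append(beta)
    return np.array(draws)
```

Running the sampler with the censoring point deliberately misspecified (e.g. c=0.0 when the data were actually censored elsewhere) reproduces the bias the abstract describes; the paper's modifications presumably target the augmentation step, where the censoring point enters.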