Abstract:
Murine cytolytic T cell lines have been analyzed for the expression of two surface glycoproteins called T145 and T130. T145, known to be expressed by activated cytolytic T cells, is also expressed by these lines, but T130, which has been described as a universal T cell marker, is not. Our results suggest a structural relationship between T145 and T130. Vicia villosa lectin, which binds selectively to T145 of activated T cells and which is cytotoxic for cytolytic T cell lines, has been used to select lectin-resistant mutants from these lines. Five independent lectin-resistant mutants have been obtained. All of them are cytolytically active and bind up to 100-fold less lectin than the parental lines, but still express T145 or a closely related glycoprotein.
Abstract:
BACKGROUND: Few studies have evaluated the influence of colectomy on antineutrophil cytoplasmic antibody (ANCA) positivity in ulcerative colitis (UC). Small series of patients have suggested that ANCA positivity in UC might be predictive of the development of pouchitis after colectomy. AIMS: To assess the prevalence of ANCA in UC patients treated by colectomy with either a Brooke's ileostomy (UC-BI) or an ileal pouch anal anastomosis (UC-IPAA), and the relation between the presence of ANCA, the type of surgery, and the presence of pouchitis. SUBJECTS: 63 UC patients treated by colectomy (32 with UC-BI and 31 with UC-IPAA), 54 non-operated UC patients, and 24 controls. METHODS: Samples were obtained at least two years after colectomy. ANCA were detected by indirect immunofluorescence assay. RESULTS: There were no differences between patients with (36.3%) or without pouchitis (35.0%), or between patients with UC (55%), UC-BI (40.6%), and UC-IPAA (35.4%). However, ANCA prevalence decreased significantly in the whole group of operated patients (38.0%) compared with non-operated UC (p = 0.044). CONCLUSIONS: The prevalence of ANCA in operated patients was significantly lower than in non-operated UC, suggesting that ANCA positivity might be related to the presence of inflamed or diseased tissue. ANCA persistence is not related to the surgical procedure and should not be used as a marker for predicting the development of pouchitis.
Abstract:
Empirical studies have shown little evidence to support the presence of all the unit roots present in the $\Delta_4$ filter in quarterly seasonal time series. This paper analyses the performance of the Hylleberg, Engle, Granger and Yoo (1990) (HEGY) procedure when the roots under the null are not all present. We exploit the Vector of Quarters representation and the cointegration relationships between the quarters when the factors $(1-L)$, $(1+L)$, $(1+L^2)$, $(1-L^2)$ and $(1+L+L^2+L^3)$ are a source of nonstationarity in a process, in order to obtain the distribution of the tests of the HEGY procedure when the underlying processes have a root at the zero frequency, at the Nyquist frequency, a pair of complex conjugate roots at frequency $\pi/2$, and two combinations of the previous cases. We show, both theoretically and through a Monte Carlo analysis, that the t-ratios $t_{\hat\pi_1}$ and $t_{\hat\pi_2}$ and the F-type tests used in the HEGY procedure have the same distribution as under the null of a seasonal random walk when the corresponding root(s) is/are present, although this is not the case for the t-ratio tests associated with the unit roots at frequency $\pi/2$.
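For orientation, the HEGY tests are based on an auxiliary regression of the following standard form for quarterly data (the textbook specification, with deterministic terms and lag augmentation omitted; not quoted from the paper itself):

```latex
% Standard HEGY auxiliary regression for quarterly series (textbook form).
\begin{align*}
  \Delta_4 y_t &= \pi_1 y_{1,t-1} + \pi_2 y_{2,t-1} + \pi_3 y_{3,t-2} + \pi_4 y_{3,t-1} + \varepsilon_t, \\
  y_{1,t} &= (1 + L + L^2 + L^3)\, y_t, && \text{(zero frequency)} \\
  y_{2,t} &= -(1 - L + L^2 - L^3)\, y_t, && \text{(Nyquist frequency)} \\
  y_{3,t} &= -(1 - L^2)\, y_t. && \text{(frequency } \pi/2\text{)}
\end{align*}
```

Here $t_{\hat\pi_1}$ and $t_{\hat\pi_2}$ test the unit roots at the zero and Nyquist frequencies, while $\pi_3$ and $\pi_4$ are associated with the complex conjugate roots at frequency $\pi/2$.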
Abstract:
Soil science has sought to develop better techniques for the classification of soils, one of which is the use of remote sensing applications. The use of ground sensors to obtain soil spectral data has enabled the characterization of these data and the advancement of techniques for the quantification of soil attributes. To do this, the creation of a soil spectral library is necessary, and a spectral library should be representative of the variability of the soils in a region. The objective of this study was to create a spectral library of distinct soils from several agricultural regions of Brazil. Spectral data were collected (using a Fieldspec sensor, 350-2,500 nm) for the horizons of 223 soil profiles from the regions of Matão, Paraguaçu Paulista, Andradina, Ipaussu, Mirandópolis, Piracicaba, São Carlos, Araraquara, Guararapes, Valparaíso (SP); Naviraí, Maracajú, Rio Brilhante, Três Lagoas (MS); Goianésia (GO); and Uberaba and Lagoa da Prata (MG). A Principal Component Analysis (PCA) of the data was then performed and a graphic representation of the spectral curve was created for each profile. The reflectance intensity of the curves was influenced principally by the levels of Fe2O3, clay, organic matter and the presence of opaque minerals. There was no change in the spectral curves among the horizons of the Latossolos, Nitossolos, and Neossolos Quartzarênicos. Argissolos had superficial horizon curves with the greatest intensity of reflection above 2,200 nm. Cambissolos and Neossolos Litólicos had curves with greater reflectance intensity in poorly developed horizons. Gleissolos showed a convex curve in the region of 350-400 nm. The PCA was able to separate different data collection areas according to the region of source material. Principal component one (PC1) was correlated with the reflectance intensity of the samples and PC2 with the slope of the spectra between the visible and infrared regions. The use of the spectral library as an indicator of possible soil classes proved to be an important tool in profile classification.
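As a rough illustration of the PCA step described above, here is a minimal sketch; it is not the authors' code, and the file name, column layout, and use of scikit-learn are assumptions for illustration:

```python
# Minimal sketch of a PCA over a soil spectral library.
# Assumptions: "spectra.csv" holds one row per horizon sample, with a
# "profile" identifier column followed by reflectance values at 350-2500 nm.
import pandas as pd
from sklearn.decomposition import PCA
from sklearn.preprocessing import StandardScaler

spectra = pd.read_csv("spectra.csv")           # hypothetical input file
X = spectra.drop(columns=["profile"]).values   # reflectance matrix (samples x wavelengths)

X_std = StandardScaler().fit_transform(X)      # center and scale each wavelength
pca = PCA(n_components=2)
scores = pca.fit_transform(X_std)              # PC1 and PC2 scores per sample

# PC1 would then be inspected against overall reflectance intensity and
# PC2 against the visible/infrared slope, following the study's interpretation.
print(pca.explained_variance_ratio_)
```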
Abstract:
BACKGROUND: Qualitative frameworks, especially those based on the logical discrete formalism, are increasingly used to model regulatory and signalling networks. A major advantage of these frameworks is that they do not require precise quantitative data, and that they are well-suited for studies of large networks. While numerous groups have developed specific computational tools that provide original methods to analyse qualitative models, a standard format to exchange qualitative models has been missing. RESULTS: We present the Systems Biology Markup Language (SBML) Qualitative Models Package ("qual"), an extension of the SBML Level 3 standard designed for computer representation of qualitative models of biological networks. We demonstrate the interoperability of models via SBML qual through the analysis of a specific signalling network by three independent software tools. Furthermore, the collective effort to define the SBML qual format paved the way for the development of LogicalModel, an open-source model library, which will facilitate the adoption of the format as well as the collaborative development of algorithms to analyse qualitative models. CONCLUSIONS: SBML qual allows the exchange of qualitative models among a number of complementary software tools. SBML qual has the potential to promote collaborative work on the development of novel computational approaches, as well as on the specification and the analysis of comprehensive qualitative models of regulatory and signalling networks.
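As a small illustration of what an SBML qual document exposes, the sketch below lists the qualitative species declared in a model file using only the Python standard library; the file name is hypothetical, and element matching is deliberately done on local tag names rather than through a specific SBML library API:

```python
# Sketch: list qualitative species declared in an SBML Level 3 "qual" file.
# "model.sbml" is a hypothetical input; tags and attributes are matched by
# local name so the exact qual namespace version need not be hard-coded.
import xml.etree.ElementTree as ET

tree = ET.parse("model.sbml")
for elem in tree.iter():
    tag = elem.tag.rsplit("}", 1)[-1]          # strip the XML namespace prefix
    if tag == "qualitativeSpecies":
        attrs = {k.rsplit("}", 1)[-1]: v for k, v in elem.attrib.items()}
        print(attrs.get("id"), "max level:", attrs.get("maxLevel"))
```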
Abstract:
Front and domain growth of a binary mixture in the presence of a gravitational field is studied. The interplay of bulk- and surface-diffusion mechanisms is analyzed. An equation for the evolution of interfaces is derived from a time-dependent Ginzburg-Landau equation with a concentration-dependent diffusion coefficient. Scaling arguments on this equation give the exponents of a power-law growth. Numerical integrations of the Ginzburg-Landau equation corroborate the theoretical analysis.
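A generic form of the type of evolution equation referred to here is sketched below; this is only the standard conserved Ginzburg-Landau structure with a concentration-dependent mobility and a gravitational term, and the specific free energy and mobility used in the paper may differ:

```latex
% Generic conserved time-dependent Ginzburg-Landau equation with
% concentration-dependent mobility M(phi) and a gravitational contribution g z.
\begin{equation*}
  \frac{\partial \phi}{\partial t}
    = \nabla \cdot \left[ M(\phi)\, \nabla \left( \frac{\delta F[\phi]}{\delta \phi} + g z \right) \right],
  \qquad
  \frac{\delta F}{\delta \phi} = -\phi + \phi^{3} - \xi^{2} \nabla^{2} \phi .
\end{equation*}
```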
Abstract:
1. Identifying the boundary of a species' niche from observational and environmental data is a common problem in ecology and conservation biology, and a variety of techniques have been developed or applied to model niches and predict distributions. Here, we examine the performance of some pattern-recognition methods as ecological niche models (ENMs). In particular, one-class pattern recognition is a flexible but seldom used methodology for modelling ecological niches and distributions from presence-only data. The development of one-class methods that perform comparably to two-class methods (for presence/absence data) would remove modelling decisions about sampling pseudo-absences or background data points when absence points are unavailable. 2. We studied nine methods for one-class classification and seven methods for two-class classification (five common to both), all primarily used in pattern recognition and therefore not common in species distribution and ecological niche modelling, across a set of 106 mountain plant species for which presence-absence data were available. We assessed accuracy using standard metrics and compared trade-offs in omission and commission errors between classification groups, as well as effects of prevalence and spatial autocorrelation on accuracy. 3. One-class models fit to presence-only data were comparable to two-class models fit to presence-absence data when performance was evaluated with a measure weighting omission and commission errors equally. One-class models were superior for reducing omission errors (i.e. yielding higher sensitivity), and two-class models were superior for reducing commission errors (i.e. yielding higher specificity). For these methods, spatial autocorrelation was only influential when prevalence was low. 4. These results differ from previous efforts to evaluate alternative modelling approaches to build ENMs and are particularly noteworthy because the data are from exhaustively sampled populations, minimizing false absence records. Accurate, transferable models of species' ecological niches and distributions are needed to advance ecological research and are crucial for effective environmental planning and conservation; the pattern-recognition approaches studied here show good potential for future modelling studies. This study also provides an introduction to promising methods for ecological modelling inherited from the pattern-recognition discipline.
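A minimal example of the one-class approach on presence-only data, sketched with scikit-learn; the data files, predictor layout, and the choice of a one-class SVM are illustrative assumptions rather than the exact methods compared in the study:

```python
# Sketch: fit a one-class model on presence-only records and score a grid of cells.
# Assumptions: "presences.csv" holds environmental predictors for presence sites,
# and "grid.csv" holds the same predictors for every cell to be scored.
import pandas as pd
from sklearn.preprocessing import StandardScaler
from sklearn.svm import OneClassSVM

presence = pd.read_csv("presences.csv")        # hypothetical presence-only data
grid = pd.read_csv("grid.csv")                 # hypothetical prediction grid

scaler = StandardScaler().fit(presence)
model = OneClassSVM(kernel="rbf", nu=0.1, gamma="scale")
model.fit(scaler.transform(presence))

# decision_function > 0 marks cells inside the learned niche envelope
scores = model.decision_function(scaler.transform(grid))
grid["predicted_presence"] = (scores > 0).astype(int)
```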
Abstract:
The phenomenon of anomalous fluctuations associated with the decay of an unstable state is analyzed in the presence of multiplicative noise. A theory is presented and compared with a numerical simulation. Our results allow us to distinguish the roles of additive and multiplicative noise in the nonlinear relaxation process. We suggest the use of experiments on transient dynamics to understand the effect of these two sources of noise in problems in which parametric noise is thought to be important, such as dye lasers.
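As a point of reference for the distinction drawn here between the two noise sources, a generic Langevin model containing both can be written as follows (a schematic form, not the specific model of the paper):

```latex
% Schematic Langevin equation for the decay of an unstable state (a > 0),
% with multiplicative noise xi_m(t) coupled to the state and additive noise xi_a(t).
\begin{equation*}
  \dot{x} = a\,x - b\,x^{3} + x\,\xi_{m}(t) + \xi_{a}(t),
  \qquad
  \langle \xi_{i}(t)\,\xi_{i}(t') \rangle = 2 D_{i}\,\delta(t - t'), \quad i \in \{m, a\}.
\end{equation*}
```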
Abstract:
The general theory of nonlinear relaxation times is developed for the case of Gaussian colored noise. General expressions are obtained and applied to the study of the characteristic decay time of unstable states in different situations, including white and colored noise, with emphasis on the distributed initial conditions. Universal effects of the coupling between colored noise and random initial conditions are predicted.
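For context, the nonlinear relaxation time associated with a moment such as the second moment, and the correlation function of Ornstein-Uhlenbeck (Gaussian colored) noise, are conventionally defined as follows; these are standard definitions, not expressions reproduced from the paper:

```latex
% Conventional nonlinear relaxation time for the second moment, and the
% correlation function of Ornstein-Uhlenbeck (Gaussian colored) noise.
\begin{equation*}
  T = \int_{0}^{\infty}
      \frac{\langle x^{2}(t)\rangle - \langle x^{2}(\infty)\rangle}
           {\langle x^{2}(0)\rangle - \langle x^{2}(\infty)\rangle}\, dt,
  \qquad
  \langle \xi(t)\,\xi(t') \rangle = \frac{D}{\tau}\, e^{-|t - t'|/\tau}.
\end{equation*}
```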
Abstract:
The decay of an unstable state under the influence of external colored noise has been studied by means of analog experiments and digital simulations. For both fixed and random initial conditions, the time evolution of the second moment $\langle x^2(t)\rangle$ of the system variable was determined and then used to evaluate the nonlinear relaxation time. The results obtained are found to be in excellent agreement with the theoretical predictions of the immediately preceding paper [Casademunt, Jiménez-Aquino, and Sancho, Phys. Rev. A 40, 5905 (1989)].
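The kind of digital simulation described can be sketched as follows; the parameter values, the cubic nonlinearity, and the Ornstein-Uhlenbeck representation of the colored noise are illustrative assumptions, not the settings of the original experiments:

```python
# Sketch: estimate <x^2(t)> for the decay of an unstable state driven by
# Ornstein-Uhlenbeck (colored) noise, then integrate the nonlinear relaxation time.
import numpy as np

rng = np.random.default_rng(0)
a, b = 1.0, 1.0            # unstable linear term and cubic saturation (illustrative)
D, tau = 0.01, 0.5         # noise intensity and correlation time (illustrative)
dt, n_steps, n_traj = 1e-3, 20000, 2000

x = np.full(n_traj, 0.01)                            # fixed initial condition
eps = rng.normal(0.0, np.sqrt(D / tau), n_traj)      # stationary OU start
x2 = np.empty(n_steps)

for k in range(n_steps):
    x2[k] = np.mean(x**2)
    # Euler-Maruyama update for the OU noise and the driven variable
    eps += -(eps / tau) * dt + np.sqrt(2.0 * D * dt) / tau * rng.normal(size=n_traj)
    x += (a * x - b * x**3 + eps) * dt

x2_inf = x2[-1]
T = np.sum((x2 - x2_inf) / (x2[0] - x2_inf)) * dt    # nonlinear relaxation time
print("estimated nonlinear relaxation time:", T)
```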
Abstract:
We calculate the production of two b-quark pairs in hadron collisions. Sources of multiple pairs are multiple interactions and higher-order perturbative QCD mechanisms. We subsequently investigate the competing effects of multiple b-pair production on measurements of CP violation: (i) the increase in event rate with multiple b-pair cross sections, which may reach values of the order of 1 b in the presence of multiple interactions, and (ii) the dilution of b versus b̄ tagging efficiency because of the presence of events with four B mesons. The impact of multiple B-meson production is small unless the cross section for producing a single pair exceeds 1 mb. We show that even for larger values of the cross section the competing effects (i) and (ii) roughly compensate, so that there is no loss in the precision with which CP-violating CKM angles can be determined.
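For orientation, the usual estimate for producing two independent pairs through multiple (double) parton interactions takes the form below; this is the standard pocket formula, not necessarily the exact expression used in the calculation:

```latex
% Standard double-parton-scattering estimate for two b bbar pairs,
% with sigma_eff the effective cross section normalising the second interaction.
\begin{equation*}
  \sigma_{b\bar{b}\,b\bar{b}}^{\mathrm{DPS}}
    \simeq \frac{\left(\sigma_{b\bar{b}}\right)^{2}}{2\,\sigma_{\mathrm{eff}}}.
\end{equation*}
```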
Abstract:
Executive Summary The unifying theme of this thesis is the pursuit of a satisfactory way to quantify the risk-reward trade-off in financial economics: first in the context of a general asset pricing model, then across models, and finally across country borders. The guiding principle in that pursuit was to seek innovative solutions by combining ideas from different fields in economics and broad scientific research. For example, in the first part of this thesis we sought a fruitful application of strong existence results in utility theory to topics in asset pricing. In the second part we implement an idea from the field of fuzzy set theory to the optimal portfolio selection problem, while the third part of this thesis is, to the best of our knowledge, the first empirical application of some general results in asset pricing in incomplete markets to the important topic of measurement of financial integration. While the first two parts of this thesis effectively combine well-known ways to quantify the risk-reward trade-off, the third one can be viewed as an empirical verification of the usefulness of the so-called "good deal bounds" theory in designing risk-sensitive pricing bounds. Chapter 1 develops a discrete-time asset pricing model, based on a novel ordinally equivalent representation of recursive utility. To the best of our knowledge, we are the first to use a member of a novel class of recursive utility generators to construct a representative agent model to address some long-standing issues in asset pricing. Applying strong representation results allows us to show that the model features countercyclical risk premia, for both consumption and financial risk, together with a low and procyclical risk-free rate. As the recursive utility used nests the well-known time-state separable utility as a special case, all results nest the corresponding ones from the standard model and thus shed light on its well-known shortcomings. The empirical investigation to support these theoretical results, however, showed that as long as one resorts to econometric methods based on approximating conditional moments with unconditional ones, it is not possible to distinguish the model we propose from the standard one. Chapter 2 is joint work with Sergei Sontchik. There we provide theoretical and empirical motivation for the aggregation of performance measures. The main idea is that, just as it makes sense to apply several performance measures ex-post, it also makes sense to base optimal portfolio selection on ex-ante maximization of as many performance measures as desired. We thus offer a concrete algorithm for optimal portfolio selection via ex-ante optimization, over different horizons, of several risk-return trade-offs simultaneously. An empirical application of that algorithm, using seven popular performance measures, suggests that realized returns feature better distributional characteristics relative to those of realized returns from portfolio strategies optimal with respect to single performance measures. When comparing the distributions of realized returns we used two partial risk-reward orderings: first- and second-order stochastic dominance.
We first used the Kolmogorov-Smirnov test to determine whether the two distributions are indeed different, which, combined with a visual inspection, allowed us to demonstrate that the way we propose to aggregate performance measures leads to portfolio realized returns that first-order stochastically dominate the ones that result from optimization only with respect to, for example, the Treynor ratio and Jensen's alpha. We checked for second-order stochastic dominance via pointwise comparison of the so-called absolute Lorenz curve, or the sequence of expected shortfalls for a range of quantiles. Since the plot of the absolute Lorenz curve for the aggregated performance measures was above the one corresponding to each individual measure, we were tempted to conclude that the algorithm we propose leads to a portfolio returns distribution that second-order stochastically dominates those of virtually all performance measures considered. Chapter 3 proposes a measure of financial integration, based on recent advances in asset pricing in incomplete markets. Given a base market (a set of traded assets) and an index of another market, we propose to measure financial integration through time by the size of the spread between the pricing bounds of the market index, relative to the base market. The bigger the spread around country index A, viewed from market B, the less integrated markets A and B are. We investigate the presence of structural breaks in the size of the spread for EMU member country indices before and after the introduction of the Euro. We find evidence that both the level and the volatility of our financial integration measure increased after the introduction of the Euro. That counterintuitive result suggests the presence of an inherent weakness in the attempt to measure financial integration independently of economic fundamentals. Nevertheless, the results about the bounds on the risk-free rate appear plausible from the viewpoint of existing economic theory about the impact of integration on interest rates.
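The distribution comparisons used in Chapter 2 (Kolmogorov-Smirnov test and absolute Lorenz curves) can be sketched as follows; the two return series, the quantile grid, and the use of SciPy are illustrative assumptions rather than the thesis' actual data or code:

```python
# Sketch: compare two realized-return distributions with a Kolmogorov-Smirnov test
# and with absolute Lorenz curves (expected-shortfall-type curves over quantiles).
import numpy as np
from scipy.stats import ks_2samp

def absolute_lorenz(returns, quantiles):
    """q times the mean of the worst q-fraction of returns, for each quantile q."""
    r = np.sort(np.asarray(returns))
    n = len(r)
    return np.array([q * r[: max(1, int(np.ceil(q * n)))].mean() for q in quantiles])

# Hypothetical realized returns from an aggregated strategy and a single-measure strategy.
rng = np.random.default_rng(1)
agg_returns = rng.normal(0.006, 0.04, 500)
single_returns = rng.normal(0.004, 0.05, 500)

res = ks_2samp(agg_returns, single_returns)          # are the distributions different?
qs = np.linspace(0.01, 1.0, 100)
dominates = np.all(absolute_lorenz(agg_returns, qs) >= absolute_lorenz(single_returns, qs))

print(f"KS statistic {res.statistic:.3f} (p = {res.pvalue:.3f}); "
      f"second-order dominance by pointwise Lorenz comparison: {dominates}")
```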
Abstract:
To translate the Kinder- und Hausmärchen into French is to confront the spectre of Charles Perrault and his Histoires ou contes du temps passé. Avec des moralités, which have haunted the fairy-tale genre in France since the end of the 17th century. Celebrated for their alleged simplicity and naivety by literary critics and folklorists, Perrault's "contes" have become the paragon of a genre against which fairy tales translated into French are implicitly measured. On the one hand, Perrault has come to play an integrating role, linking foreign texts to the French literary heritage and thereby facilitating their reception. On the other hand, he is simultaneously used as a contrast, to emphasise the originality of foreign authors and to underline cultural differences. Drawing on contemporary and 19th-century examples illustrating the influence of the Histoires ou contes du temps passé on French translations of the KHM, I will show that the Grimms' fairy tales are translated less into the "tongue of Molière" than into the "tongue of Perrault".