945 results for RANK ligand
Abstract:
Position in the social hierarchy can influence brain dopamine function and cocaine reinforcement in nonhuman primates during early cocaine exposure. With prolonged exposure, however, initial differences in rates of cocaine self-administration between dominant and subordinate monkeys dissipate. The present studies used a choice procedure to assess the relative reinforcing strength of cocaine in group-housed male cynomolgus monkeys with extensive cocaine self-administration histories. Responding was maintained under a concurrent fixed-ratio 50 schedule of food and cocaine (0.003-0.1 mg/kg per injection) presentation. Responding on the cocaine-associated lever increased as a function of cocaine dose in all monkeys. Although response distribution was similar across social rank when saline or relatively low or high cocaine doses were the alternative to food, planned t tests indicated that cocaine choice was significantly greater in subordinate monkeys when choice was between an intermediate dose (0.01 mg/kg) and food. When a between-session progressive-ratio procedure was used to increase response requirements for the preferred reinforcer (either cocaine or food), choice of that reinforcer decreased in all monkeys. The average response requirement that produced a shift in response allocation from the cocaine-associated lever to the food-associated lever was higher in subordinates across cocaine doses, an effect that trended toward significance (p = 0.053). These data indicate that despite an extensive history of cocaine self-administration, most subordinate monkeys were more sensitive to the relative reinforcing strength of cocaine than dominant monkeys.
Abstract:
The performance of rank-dependent preference functionals under risk is comprehensively evaluated using Bayesian model averaging. Model comparisons are made at three levels of heterogeneity plus three ways of linking deterministic and stochastic models: the differences in utilities, the differences in certainty equivalents, and contextual utility. Overall, the "best model", which is conditional on the form of heterogeneity, is a form of Rank Dependent Utility or Prospect Theory that captures the majority of behaviour at both the representative-agent and individual level. However, the curvature of the probability weighting function for many individuals is S-shaped, or ostensibly concave or convex, rather than the inverse S-shape commonly employed. Also, contextual utility is broadly supported across all levels of heterogeneity. Finally, the Priority Heuristic model, previously examined within a deterministic setting, is estimated within a stochastic framework; allowing for endogenous thresholds does improve model performance, although it does not compete well with the other specifications considered.
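For orientation, the rank-dependent functionals evaluated above transform decumulative probabilities through a weighting function; a standard textbook statement (not taken from this paper) is:

```latex
% Rank Dependent Utility for a lottery with outcomes x_1 <= ... <= x_n
% occurring with probabilities p_1, ..., p_n (Quiggin's form):
\[
  \mathrm{RDU}(X) \;=\; \sum_{i=1}^{n}
  \Bigl[\, w\!\Bigl(\textstyle\sum_{j \ge i} p_j\Bigr)
        - w\!\Bigl(\textstyle\sum_{j > i} p_j\Bigr) \Bigr]\, u(x_i),
\]
% with a probability weighting function w, e.g. the inverse-S form of
% Tversky and Kahneman (1992), the shape questioned in this abstract:
\[
  w(p) \;=\; \frac{p^{\gamma}}{\bigl(p^{\gamma} + (1-p)^{\gamma}\bigr)^{1/\gamma}} .
\]
```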
Abstract:
In cooperative communication networks, owing to the nodes' arbitrary geographical locations and individual oscillators, the system is fundamentally asynchronous. Such a timing mismatch may cause rank deficiency of the conventional space-time codes and, thus, performance degradation. One efficient way to overcome this issue is to use delay-tolerant space-time codes (DT-STCs). The existing DT-STCs are designed assuming that the transmitter has no knowledge about the channels. In this paper, we show how the performance of DT-STCs can be improved by utilizing some feedback information. A general framework for designing DT-STCs with limited feedback is first proposed, allowing for flexible system parameters such as the number of transmit/receive antennas, modulated symbols, and the length of codewords. Then, a new design method is proposed by combining Lloyd's algorithm and the stochastic gradient-descent algorithm to obtain an optimal codebook of STCs, particularly for systems with a linear minimum-mean-square-error receiver. Finally, simulation results confirm the performance of the newly designed DT-STCs with limited feedback.
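As a rough illustration of the codebook-design half of the proposed method, the sketch below implements a generic Lloyd's algorithm (nearest-codeword assignment alternating with centroid updates) on plain real vectors. The deterministic initialisation, the Euclidean metric, and the omission of the stochastic gradient-descent refinement are all simplifying assumptions; this is not the paper's actual design procedure.

```python
import numpy as np

def lloyd_codebook(samples, k, iters=50):
    """Generic Lloyd's algorithm: alternate nearest-codeword assignment
    and centroid update. Initialisation with the first k samples is a
    deterministic simplification."""
    codebook = samples[:k].astype(float).copy()
    labels = np.zeros(len(samples), dtype=int)
    for _ in range(iters):
        # Assign each sample to its nearest codeword (Euclidean metric).
        d = np.linalg.norm(samples[:, None, :] - codebook[None, :, :], axis=2)
        labels = d.argmin(axis=1)
        # Move each codeword to the centroid of its assigned samples.
        for j in range(k):
            if np.any(labels == j):
                codebook[j] = samples[labels == j].mean(axis=0)
    return codebook, labels
```

In the paper's setting the "samples" would be channel realisations and the codewords space-time codes scored by an error-probability criterion rather than Euclidean distance.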
Abstract:
In general, particle filters need large numbers of model runs in order to avoid filter degeneracy in high-dimensional systems. The recently proposed, fully nonlinear equivalent-weights particle filter overcomes this requirement by replacing the standard model transition density with two different proposal transition densities. The first proposal density is used to relax all particles towards the high-probability regions of state space as defined by the observations. The crucial second proposal density is then used to ensure that the majority of particles have equivalent weights at observation time. Here, the performance of the scheme is explored in a high-dimensional (65,500-dimensional) simplified ocean model. The success of the equivalent-weights particle filter in matching the true model state is shown using the mean of just 32 particles in twin experiments. It is of particular significance that this remains true even as the number and spatial variability of the observations are changed. The results from rank histograms are less easy to interpret and can be influenced considerably by the parameter values used. This article also explores the sensitivity of the performance of the scheme to the chosen parameter values and the effect of using different model error parameters in the truth compared with the ensemble model runs.
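For context on the degeneracy problem the equivalent-weights filter addresses, here is a minimal sketch of a standard importance-weighting step together with the effective sample size that diagnoses degeneracy; the equivalent-weights proposal densities themselves are not reproduced.

```python
import numpy as np

def weight_update(particles, obs, obs_std):
    """One importance-weighting step of a bootstrap particle filter with
    a Gaussian likelihood for a scalar observation. The equivalent-weights
    filter modifies the proposal so that most particles end up with
    near-equal weights; this sketch only shows the standard step whose
    degeneracy (collapsing effective sample size) that scheme avoids."""
    loglik = -0.5 * ((particles - obs) / obs_std) ** 2
    w = np.exp(loglik - loglik.max())   # subtract max for numerical stability
    w /= w.sum()                        # normalise weights to sum to 1
    ess = 1.0 / np.sum(w ** 2)          # effective sample size in [1, N]
    return w, ess
```

When `ess` collapses towards 1, a single particle dominates and the filter is degenerate, which is why high-dimensional systems normally require very large ensembles.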
Abstract:
This paper investigates the use of a particle filter for data assimilation with a full scale coupled ocean–atmosphere general circulation model. Synthetic twin experiments are performed to assess the performance of the equivalent weights filter in such a high-dimensional system. Artificial 2-dimensional sea surface temperature fields are used as observational data every day. Results are presented for different values of the free parameters in the method. Measures of the performance of the filter are root mean square errors, trajectories of individual variables in the model and rank histograms. Filter degeneracy is not observed and the performance of the filter is shown to depend on the ability to keep maximum spread in the ensemble.
Abstract:
In common with many plants native to low-P soils, jarrah (Eucalyptus marginata) develops toxicity symptoms upon exposure to elevated phosphorus (P). Jarrah plants can establish arbuscular mycorrhizal (AM) and ectomycorrhizal (ECM) associations, along with a recently described non-colonizing symbiosis. AM colonization is known to influence the pattern of expression of genes required for P uptake of host plants, and our aim was to investigate this phenomenon in relation to P sensitivity. Therefore, we examined the effect on hosts of the presence of AM and ECM fungi in combination with toxic pulses of P and assessed possible correlations between the induced tolerance and the shoot P concentration. The P transport dynamics of AM (Rhizophagus irregularis and Scutellospora calospora), ECM (Scleroderma sp.), non-colonizing symbiosis (Austroboletus occidentalis), dual mycorrhizal (R. irregularis and Scleroderma sp.), and non-mycorrhizal (NM) seedlings were monitored following two pulses of P. The ECM and A. occidentalis associations significantly enhanced the shoot P content of jarrah plants growing under P-deficient conditions. In addition, S. calospora, A. occidentalis, and Scleroderma sp. all stimulated plant growth significantly. All inoculated plants had significantly lower phytotoxicity symptoms compared to NM controls 7 days after addition of an elevated P dose (30 mg P kg⁻¹ soil). Following exposure to toxicity-inducing levels of P, the shoot P concentration was significantly lower in R. irregularis-inoculated and dually inoculated plants compared to NM controls. Although all inoculated plants had reduced toxicity symptoms and there was a positive linear relationship between rank and shoot P concentration, the protective effect was not necessarily explained by the type of fungal association or the extent of mycorrhizal colonization.
Abstract:
BACKGROUND: Social networks are common in digital health. A new stream of research is beginning to investigate the mechanisms of digital health social networks (DHSNs): how they are structured, how they function, and how their growth can be nurtured and managed. DHSNs increase in value when additional content is added, and the structure of networks may resemble the characteristics of power laws. Power laws are contrary to traditional Gaussian averages in that they demonstrate correlated phenomena. OBJECTIVES: The objective of this study is to investigate whether the distribution frequency in four DHSNs can be characterized as following a power law. A second objective is to describe the method used to determine the comparison. METHODS: Data from four DHSNs, the Alcohol Help Center (AHC), Depression Center (DC), Panic Center (PC), and Stop Smoking Center (SSC), were compared to power law distributions. To assist future researchers and managers, the 5-step methodology used to analyze and compare the datasets is described. RESULTS: All four DHSNs were found to have right-skewed distributions, indicating the data were not normally distributed. When power trend lines were added to each frequency distribution, R² values indicated that, to a very high degree, the variance in post frequencies can be explained by actor rank (AHC .962, DC .975, PC .969, SSC .95). Spearman correlations provided further indication of the strength and statistical significance of the relationship (AHC .987, DC .967, PC .983, SSC .993; P<.001). CONCLUSIONS: This is the first study to investigate power distributions across multiple DHSNs, each addressing a unique condition. Results indicate that despite vast differences in theme, content, and length of existence, DHSNs follow the properties of power laws. The structure of DHSNs is important, as it gives researchers and managers insight into the nature and mechanisms of network functionality. The 5-step process undertaken to compare actor contribution patterns can be replicated in networks that are managed by other organizations, and we conjecture that the patterns observed in this study could be found in other DHSNs. Future research should analyze network growth over time and examine the characteristics and survival rates of superusers.
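The trend-line-plus-R² comparison described above can be sketched as a log-log least-squares fit of post frequency against actor rank; only the core fit is shown here, not the paper's full 5-step methodology.

```python
import numpy as np

def powerlaw_fit(post_counts):
    """Rank actors by post count (descending) and fit a power law
    freq ~ C * rank^(-alpha) by least squares on log-log axes.
    Returns the exponent alpha and the R^2 of the log-log fit."""
    freq = np.sort(np.asarray(post_counts, dtype=float))[::-1]
    rank = np.arange(1, len(freq) + 1)
    x, y = np.log(rank), np.log(freq)
    slope, intercept = np.polyfit(x, y, 1)
    resid = y - (slope * x + intercept)
    r2 = 1.0 - resid.var() / y.var()
    return -slope, r2
```

On data that truly follow a power law, R² of the log-log fit approaches 1, which is the pattern the four networks above exhibit.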
Abstract:
Virus capsids are primed for disassembly, yet capsid integrity is key to generating a protective immune response. Foot-and-mouth disease virus (FMDV) capsids comprise identical pentameric protein subunits held together by tenuous noncovalent interactions and are often unstable. Chemically inactivated or recombinant empty capsids, which could form the basis of future vaccines, are even less stable than live virus. Here we devised a computational method to assess the relative stability of protein-protein interfaces and used it to design improved candidate vaccines for two poorly stable, but globally important, serotypes of FMDV: O and SAT2. We used a restrained molecular dynamics strategy to rank mutations predicted to strengthen the pentamer interfaces and applied the results to produce stabilized capsids. Structural analyses and stability assays confirmed the predictions, and vaccinated animals generated improved neutralizing-antibody responses to stabilized particles compared to parental viruses and wild-type capsids.
Abstract:
Existing research on the legitimacy of the UN Security Council is conceptual or theoretical, for the most part, as scholars tend to make legitimacy assessments with reference to objective standards. Whether UN member states perceive the Security Council as legitimate or illegitimate has yet to be investigated systematically; nor do we know whether states care primarily about the Council's compliance with its legal mandate, its procedures, or its effectiveness. To address this gap, our article analyzes evaluative statements made by states in UN General Assembly debates on the Security Council, for the period 1991–2009. In making such statements, states confer legitimacy on the Council or withhold legitimacy from it. We conclude the following: First, the Security Council suffers from a legitimacy deficit because negative evaluations of the Council by UN member states far outweigh positive ones. Nevertheless, the Council does not find itself in an intractable legitimacy crisis because it still enjoys a rudimentary degree of legitimacy. Second, the Council's legitimacy deficit results primarily from states' concerns regarding the body's procedural shortcomings. Misgivings as regards shortcomings in performance rank second. Whether or not the Council complies with its legal mandate has failed to attract much attention at all.
Abstract:
Let H ∈ C²(ℝ^{N×n}), H ≥ 0. The PDE system (1) arises as the Euler-Lagrange PDE of vectorial variational problems for the functional E_∞(u, Ω) = ‖H(Du)‖_{L^∞(Ω)} defined on maps u : Ω ⊆ ℝⁿ → ℝᴺ, and first appeared in the author's recent work. The scalar case, though, has a long history initiated by Aronsson. Herein we study the solutions of (1) with emphasis on the case n = 2 ≤ N with H the Euclidean norm on ℝ^{N×n}, which we call the "∞-Laplacian". By establishing a rigidity theorem for rank-one maps of independent interest, we analyse a phenomenon of separation of the solutions into phases with qualitatively different behaviour. As a corollary, we extend to N ≥ 2 the Aronsson-Evans-Yu theorem regarding non-existence of zeros of |Du| and prove a maximum principle. We further characterise all H for which (1) is elliptic and also study the initial value problem for the ODE system arising for n = 1, but with H(·, u, u′) depending on all the arguments.
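For context, in the scalar case (N = 1) the equation of this family initiated by Aronsson is the ∞-Laplace equation; a standard statement (not taken verbatim from this paper) is:

```latex
% Scalar infinity-Laplace equation (Aronsson), arising formally as the
% Euler-Lagrange equation of the L-infinity functional; the vectorial
% system studied in the paper extends this operator with an additional
% term acting on the orthogonal complement of the range of Du, and is
% not reproduced here.
\[
  \Delta_\infty u \;:=\; \sum_{i,j=1}^{n} D_i u \, D_j u \, D^2_{ij} u \;=\; 0 .
\]
```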
Abstract:
Periocular recognition has recently become an active topic in biometrics. Typically it uses 2D image data of the periocular region. This paper is the first description of combining 3D shape structure with 2D texture. A simple and effective technique using iterative closest point (ICP) was applied for 3D periocular region matching. It proved its strength for relatively unconstrained eye region capture, and does not require any training. Local binary patterns (LBP) were applied for 2D image based periocular matching. The two modalities were combined at the score-level. This approach was evaluated using the Bosphorus 3D face database, which contains large variations in facial expressions, head poses and occlusions. The rank-1 accuracy achieved from the 3D data (80%) was better than that for 2D (58%), and the best accuracy (83%) was achieved by fusing the two types of data. This suggests that significant improvements to periocular recognition systems could be achieved using the 3D structure information that is now available from small and inexpensive sensors.
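Score-level fusion of the kind described can be sketched as a weighted sum of normalised similarity scores followed by a rank-1 decision; the whole-matrix min-max normalisation and the equal weight w = 0.5 below are illustrative assumptions, since the abstract does not state the exact fusion rule.

```python
import numpy as np

def minmax(s):
    """Min-max normalise an array of similarity scores to [0, 1]."""
    s = np.asarray(s, dtype=float)
    return (s - s.min()) / (s.max() - s.min())

def fused_rank1_accuracy(scores_a, scores_b, gallery_labels, probe_labels, w=0.5):
    """Weighted-sum score-level fusion of two modalities (e.g. 3D ICP and
    2D LBP scores), then rank-1 accuracy. scores_* are (n_probes,
    n_gallery) similarity matrices; higher means more similar."""
    fused = w * minmax(scores_a) + (1.0 - w) * minmax(scores_b)
    best = fused.argmax(axis=1)          # top gallery match per probe
    return float(np.mean(gallery_labels[best] == probe_labels))
```

In practice the fusion weight would be tuned on a validation set, and per-row normalisation is a common alternative to the whole-matrix rule used here.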
Abstract:
This paper investigates the potential of fusion at the normalisation/segmentation level, prior to feature extraction. While there are several biometric fusion methods at the data/feature level, score level, and rank/decision level, combining raw biometric signals, scores, or ranks/decisions, fusion at this very low level is still in its infancy. However, the increasing demand for more relaxed and less invasive recording conditions, especially for on-the-move iris recognition, suggests further investigation of such fusion. This paper focuses on multi-segmentation fusion for iris biometric systems, investigating the benefit of combining the segmentation results of multiple normalisation algorithms, using four methods from two different public iris toolkits (USIT, OSIRIS) on the public CASIA and IITD iris datasets. Evaluations based on recognition accuracy and ground-truth segmentation data indicate high sensitivity with regard to the type of errors made by segmentation algorithms.
Abstract:
Preparing for episodes with risks of anomalous weather a month to a year ahead is an important challenge for governments, non-governmental organisations, and private companies and is dependent on the availability of reliable forecasts. The majority of operational seasonal forecasts are made using process-based dynamical models, which are complex, computationally challenging, and prone to biases. Empirical forecast approaches built on statistical models to represent physical processes offer an alternative to dynamical systems and can provide either a benchmark for comparison or independent supplementary forecasts. Here, we present a simple empirical system based on multiple linear regression for producing probabilistic forecasts of seasonal surface air temperature and precipitation across the globe. The global CO2-equivalent concentration is taken as the primary predictor; subsequent predictors, including large-scale modes of variability in the climate system and local-scale information, are selected on the basis of their physical relationship with the predictand. The focus given to the climate change signal as a source of skill and the probabilistic nature of the forecasts produced constitute a novel approach to global empirical prediction. Hindcasts for the period 1961–2013 are validated against observations using deterministic (correlation of seasonal means) and probabilistic (continuous ranked probability skill scores) metrics. Good skill is found in many regions, particularly for surface air temperature and most notably in much of Europe during the spring and summer seasons. For precipitation, skill is generally limited to regions with known El Niño–Southern Oscillation (ENSO) teleconnections. The system is used in a quasi-operational framework to generate empirical seasonal forecasts on a monthly basis.
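The regression core of such an empirical system can be sketched as an ordinary least-squares fit of the seasonal predictand on an intercept plus the chosen predictors; predictor selection, probabilistic output, and cross-validation are omitted, and the predictor names are placeholders for the CO2 trend and a climate-mode index.

```python
import numpy as np

def fit_hindcast(predictors, predictand):
    """Ordinary least-squares fit of a seasonal predictand (e.g. surface
    air temperature) on an intercept plus a list of predictor series
    (e.g. CO2-equivalent concentration, an ENSO index). Returns the
    coefficients and the in-sample hindcast. A minimal sketch of the
    regression step only."""
    X = np.column_stack([np.ones(len(predictand))] + list(predictors))
    coef, *_ = np.linalg.lstsq(X, predictand, rcond=None)
    return coef, X @ coef
```

Deterministic skill can then be assessed, as in the paper, by correlating the hindcast with the observed seasonal means.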
Abstract:
The ring-shedding process in the Agulhas Current is studied using the ensemble Kalman filter to assimilate Geosat altimeter data into a two-layer quasigeostrophic ocean model. The properties of the ensemble Kalman filter are further explored, with focus on the analysis scheme and the use of gridded data. The Geosat data consist of 10 fields of gridded sea-surface height anomalies, separated 10 days apart, that are added to a climatic mean field. This corresponds to a huge number of data values, and a data reduction scheme must be applied to increase the efficiency of the analysis procedure. Further, it is illustrated how one can resolve the rank problem that occurs when too large a dataset or too small an ensemble is used.
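A textbook stochastic ensemble Kalman filter analysis step (perturbed observations, ensemble-estimated covariance) can be sketched as follows; this is a generic illustration for a single scalar observation, not the paper's Geosat configuration or its data-reduction scheme.

```python
import numpy as np

def enkf_analysis(ensemble, obs, obs_var, H):
    """Stochastic EnKF analysis step for one scalar observation.
    ensemble has shape (n_state, n_members); H is a 1-D observation
    operator of length n_state. Each member is updated with a perturbed
    observation using the ensemble-estimated covariance."""
    rng = np.random.default_rng(0)
    n, m = ensemble.shape
    A = ensemble - ensemble.mean(axis=1, keepdims=True)   # ensemble anomalies
    P = A @ A.T / (m - 1)                                 # sample covariance
    S = H @ P @ H + obs_var                               # innovation variance
    K = P @ H / S                                         # Kalman gain (n_state,)
    d = obs + rng.normal(0.0, np.sqrt(obs_var), m)        # perturbed observations
    return ensemble + np.outer(K, d - H @ ensemble)
```

The rank problem mentioned above appears here because P is estimated from only m members and so has rank at most m - 1, which is why large gridded datasets require data reduction or a careful analysis scheme.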
Abstract:
The expression of protein kinase C (PKC) isoforms (PKC-alpha, PKC-beta 1, PKC-delta, PKC-epsilon, and PKC-zeta) was studied by immunoblotting in whole ventricles of rat hearts during postnatal development (1-26 days) and in the adult. PKC-alpha, PKC-beta 1, PKC-delta, PKC-epsilon, and PKC-zeta were detected in ventricles of 1-day-old rats, although PKC-alpha and PKC-beta 1 were only barely detectable. All isoforms were rapidly downregulated during development, with abundances relative to total protein declining in the adult to < 25% of 1-day-old values. PKC-beta 1 was not detectable in adult ventricles. The specific activity of PKC was also downregulated. The rat ventricular myocyte becomes amitotic soon after birth but continues to grow, increasing its protein content 40- to 50-fold between the neonate and the 300-g adult. An important question is thus whether the amount of PKC per myocyte is downregulated. With the use of isolated cells, immunoblotting showed that the contents per myocyte of PKC-alpha and PKC-epsilon increased approximately 10-fold between the neonatal and adult stages. In rat ventricles, the rank of association with the particulate fraction was PKC-delta > PKC-epsilon > PKC-zeta. Association of these isoforms with the particulate fraction was less in the adult than in the neonate. In primary cultures of ventricular myocytes prepared from neonatal rat hearts, 1 microM 12-O-tetradecanoylphorbol-13-acetate (TPA) elicited translocation of PKC-alpha, PKC-delta, and PKC-epsilon from the soluble to the particulate fraction in < 1 min, after which time no further translocation was observed. Prolonged exposure (16 h) of myocytes to 1 microM TPA caused essentially complete downregulation of these isoforms, although downregulation of PKC-epsilon was slower than for PKC-delta. In contrast, PKC-zeta was neither translocated nor downregulated by 1 microM TPA. Immunoblotting of human ventricular samples also revealed downregulation of PKC relative to total protein during fetal/postnatal development.