924 results for Convergence insufficiency
Abstract:
Uteroplacental vascular insufficiency in humans is a common cause of intrauterine growth restriction (IUGR) and is associated with an increased incidence of perinatal asphyxia and neurodevelopmental disorders compared to normal-weight newborns. Experimental models that provide an opportunity to analyze the pathogenesis of these relationships are limited. Here, we used neonatal pigs from large litters in which there were piglets of normal birth weight (for controls) and of low birth weight (for uteroplacental vascular insufficiency). Hypoxia was induced in paired littermates by reducing the fraction of inspired oxygen to 4% for 25 min. Brain tissue was collected 4 h post-hypoxia. Cerebral levels of apoptosis were quantified morphologically and verified with caspase-3 activity and TUNEL. Expression of Bcl-2, Bcl-xL and Bax proteins was investigated using immunohistochemistry. Cellular positivity for Bcl-2 was consistently higher in the non-apoptotic white matter of the hypoxic IUGR animals compared with their littermates and reached significance at P < 0.05 in several pairs of littermates. Alterations in Bax showed a trend towards higher expression in the hypoxic IUGR littermates but rarely reached significance. The IUGR piglets showed a significantly greater amount of apoptosis in response to the hypoxia than the normal-weight piglets, suggesting an increased vulnerability to apoptosis in the IUGR piglets.
Abstract:
We revisit the one-unit gradient ICA algorithm derived from the kurtosis function. By carefully studying properties of the stationary points of the discrete-time one-unit gradient ICA algorithm, convergence can be proved under a suitable condition on the learning rate. This condition helps remove the guesswork involved in choosing a suitable learning rate in practical computation. These results may be useful for extracting independent source signals online.
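As an illustration only, since the abstract gives no code: a minimal NumPy sketch of a discrete-time one-unit kurtosis-gradient update might look like the following. The function name, step size eta and iteration count are illustrative assumptions; the paper's contribution is precisely the condition on the learning rate under which such an iteration provably converges.

    import numpy as np

    def one_unit_kurtosis_ica(X, eta=0.05, n_iter=500, seed=0):
        # X: (d, n) whitened mixtures (zero mean, identity covariance).
        # eta is a fixed, illustrative learning rate; the paper derives
        # the condition on eta that guarantees convergence.
        rng = np.random.default_rng(seed)
        w = rng.standard_normal(X.shape[0])
        w /= np.linalg.norm(w)
        for _ in range(n_iter):
            y = w @ X                                   # one-unit output
            kurt = np.mean(y ** 4) - 3.0                # kurtosis for unit w
            grad = 4.0 * (X @ y ** 3 / X.shape[1] - 3.0 * w)
            w += eta * np.sign(kurt) * grad             # ascend |kurtosis|
            w /= np.linalg.norm(w)                      # stay on unit sphere
        return w

An online variant would replace the batch expectations with per-sample estimates, which is where a principled learning-rate condition becomes practically useful.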
Abstract:
Markov chain Monte Carlo (MCMC) is a methodology that is gaining widespread use in the phylogenetics community and is central to phylogenetic software packages such as MrBayes. An important issue for users of MCMC methods is how to select appropriate values for adjustable parameters such as the length of the Markov chain or chains, the sampling density, the proposal mechanism, and, if Metropolis-coupled MCMC is being used, the number of heated chains and their temperatures. Although some parameter settings have been examined in detail in the literature, others are frequently chosen with more regard to computational time or personal experience with other data sets. Such choices may lead to inadequate sampling of tree space or an inefficient use of computational resources. We performed a detailed study of convergence and mixing for 70 randomly selected, putatively orthologous protein sets with different sizes and taxonomic compositions. Replicated runs from multiple random starting points permit a more rigorous assessment of convergence, and we developed two novel statistics, delta and epsilon, for this purpose. Although likelihood values invariably stabilized quickly, adequate sampling of the posterior distribution of tree topologies took considerably longer. Our results suggest that multimodality is common for data sets with 30 or more taxa and that this results in slow convergence and mixing. However, we also found that the pragmatic approach of combining data from several short, replicated runs into a metachain to estimate bipartition posterior probabilities provided good approximations, and that such estimates were no worse in approximating a reference posterior distribution than those obtained using a single long run of the same length as the metachain. Precision appears to be best when heated Markov chains have low temperatures, whereas chains with high temperatures appear to sample trees with high posterior probabilities only rarely. [Bayesian phylogenetic inference; heating parameter; Markov chain Monte Carlo; replicated chains.]
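For concreteness, here is a generic sketch of the pragmatic "metachain" approach described above: pool tree samples from several short replicated runs and estimate each bipartition's posterior probability as its pooled sample frequency. The tree encoding and toy data are assumptions for illustration; the paper's delta and epsilon convergence statistics are not reproduced here.

    from collections import Counter
    from itertools import chain

    def bipartition_posteriors(replicated_runs):
        # Each run is a list of sampled trees; each tree is encoded as the
        # set of its non-trivial bipartitions (frozensets of taxon labels
        # on one side of an internal edge).
        metachain = list(chain.from_iterable(replicated_runs))
        counts = Counter()
        for tree in metachain:
            counts.update(tree)        # each split counted once per tree
        n = len(metachain)
        return {split: c / n for split, c in counts.items()}

    # Toy 4-taxon example (hypothetical samples, not from the study):
    run1 = [{frozenset({"A", "B"})}, {frozenset({"A", "B"})}]
    run2 = [{frozenset({"A", "C"})}, {frozenset({"A", "B"})}]
    print(bipartition_posteriors([run1, run2]))   # AB: 0.75, AC: 0.25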
Abstract:
Objective: To compare the total plasma cortisol values obtained from three widely used immunoassays and a high-pressure liquid chromatography (HPLC) technique on samples obtained from patients with sepsis. Design and setting: Observational interventional study in the general intensive care unit of a metropolitan hospital. Patients and participants: Patients admitted to the intensive care unit with a diagnosis of sepsis and fulfilling criteria of systemic inflammatory response syndrome. Interventions: Standard short synacthen test performed with 250 μg cosyntropin. Measurements and results: Two of the three immunoassays returned results significantly higher than those obtained by HPLC: Immulite by 95% (95% CI 31-188%) and TDx by 79% (21-165%). The limits of agreement for all three immunoassays with HPLC ranged from -62% to 770%. In addition, when the patients were classified into responders and non-responders to ACTH by standard criteria, the assays were concordant in only 44% of patients. Conclusions: Immunoassay estimation of total plasma cortisol in septic patients shows wide assay-related variation that may have a significant impact on the diagnosis of relative adrenal insufficiency.
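The percentage "limits of agreement" quoted above are a Bland-Altman-style comparison. As a hedged sketch of how such figures can be computed (the authors' exact procedure is not given in the abstract), one can take paired differences on the log scale and back-transform to percent; the paired values below are hypothetical.

    import numpy as np

    def limits_of_agreement_pct(assay, hplc):
        # Paired cortisol measurements in the same units; differences on
        # the log scale back-transform to percent deviation from HPLC.
        d = np.log(np.asarray(assay, float)) - np.log(np.asarray(hplc, float))
        mean, sd = d.mean(), d.std(ddof=1)
        pct = lambda x: 100.0 * (np.exp(x) - 1.0)
        return pct(mean), (pct(mean - 1.96 * sd), pct(mean + 1.96 * sd))

    # Hypothetical paired values (nmol/L), for illustration only:
    bias, (lo, hi) = limits_of_agreement_pct([420, 610, 980, 300],
                                             [250, 310, 520, 260])
    print(f"bias {bias:+.0f}%, limits of agreement {lo:+.0f}% to {hi:+.0f}%")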
Abstract:
This introduction considers reasons why public policies might be expected to converge between Britain and Germany, arguing that the inter-related forces of globalisation, Europeanisation, policy transfer (in various guises) and the election of centre-left governments in 1997 and 1998 could be expected to lead to such convergence. It then outlines important reasons why such convergence may not occur: the radically different institutional settings, 'path dependence' and the resilience of established institutions all play a role in continuing divergence in a number of important areas of public policy.
Abstract:
This paper shows that the Italian economy has two long-run equilibria, which are due to the different levels of industrialization between the centre-north and the south of the country. These equilibria converge until 1971 but diverge afterwards; the end of the convergence process coincides with the slowing down of Italy's industrialization policy in the South. In this paper we argue that to address this problem effectively, an economic policy completely different from the one currently in place is needed. However, such a policy is unlikely to be implemented given the scarcity of resources and the short-run nature of the political cycle.
Abstract:
We present an implementation of the domain-theoretic Picard method for solving initial value problems (IVPs) introduced by Edalat and Pattinson [1]. Compared to Edalat and Pattinson's implementation, our algorithm uses a more efficient arithmetic based on an arbitrary precision floating-point library. Despite the additional overestimations due to floating-point rounding, we obtain a similar bound on the convergence rate of the produced approximations. Moreover, our convergence analysis is detailed enough to allow a static optimisation in the growth of the precision used in successive Picard iterations. Such optimisation greatly improves the efficiency of the solving process. Although a similar optimisation could be performed dynamically without our analysis, a static one gives us a significant advantage: we are able to predict the time it will take the solver to obtain an approximation of a certain (arbitrarily high) quality.
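To make the underlying iteration concrete, here is a minimal Picard sketch in Python using plain floating point and trapezoidal quadrature. It is not the paper's validated domain-theoretic, arbitrary-precision implementation, and the grid size and iteration count are illustrative; it only shows the fixed-point scheme whose convergence rate the paper analyses.

    import numpy as np

    def picard_solve(f, y0, t_end, n_grid=200, n_iter=8):
        # Picard iteration for y' = f(t, y), y(0) = y0 on [0, t_end]:
        # repeatedly replace y with y0 + integral_0^t f(s, y(s)) ds.
        t = np.linspace(0.0, t_end, n_grid)
        y = np.full_like(t, y0)                     # constant initial guess
        for _ in range(n_iter):
            g = f(t, y)                             # integrand along iterate
            steps = np.diff(t) * (g[:-1] + g[1:]) / 2.0   # trapezoid areas
            y = y0 + np.concatenate(([0.0], np.cumsum(steps)))
        return t, y

    # y' = y, y(0) = 1: iterates approach exp(t) on [0, 1].
    t, y = picard_solve(lambda t, y: y, y0=1.0, t_end=1.0)
    print(y[-1])    # close to e ≈ 2.71828

In the validated setting of the paper, each iterate would instead be a pair of floating-point bounds enclosing the true solution, with the working precision grown across iterations according to the static analysis.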
Abstract:
We assessed summation of contrast across eyes and area at detection threshold (C_t). Stimuli were sine-wave gratings (2.5 c/deg) spatially modulated by cosine- and anticosine-phase raised plaids (0.5 c/deg components oriented at ±45°). When presented dichoptically the signal regions were interdigitated across eyes but produced a smooth continuous grating following their linear binocular sum. The average summation ratio (C_t1/C_t1+2) for this stimulus pair was 1.64 (4.3 dB). This was only slightly less than the binocular summation found for the same patch type presented to both eyes, and the area summation found for the two different patch types presented to the same eye. We considered 192 model architectures containing each of the following four elements in all possible orders: (i) linear summation or a MAX operator across eyes, (ii) linear summation or a MAX operator across area, (iii) linear or accelerating contrast transduction, and (iv) additive Gaussian, stochastic noise. Formal equivalences reduced this to 62 different models. The most successful four-element model was: linear summation across eyes followed by nonlinear contrast transduction, linear summation across area, and late noise. Model performance was enhanced when additional nonlinearities were placed before binocular summation and after area summation. The implications for models of probability summation and uncertainty are discussed.
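A minimal sketch of the most successful four-element architecture named above (linear summation across eyes, an accelerating transducer, linear summation across area, late unit-variance Gaussian noise), in Python. The exponent P and the toy contrast profiles are illustrative assumptions, and the sketch omits the additional nonlinearities before binocular summation and after area summation that improved the full model, so its predicted summation ratio of about 2^(1/P) should not be read as the paper's fitted value.

    import numpy as np

    P = 2.4   # illustrative accelerating-transducer exponent

    def model_response(c_left, c_right, p=P):
        # Linear sum across eyes -> accelerating transducer -> linear sum
        # across area. With unit-variance late noise, the response is d'.
        binocular = np.asarray(c_left) + np.asarray(c_right)
        return np.sum(np.abs(binocular) ** p)

    def threshold(c_left, c_right, criterion=1.0, p=P):
        # Contrast scaling k of the fixed pattern at which d' reaches the
        # criterion; the response scales as k**p in this architecture.
        return (criterion / model_response(c_left, c_right, p)) ** (1.0 / p)

    # Dichoptic interdigitated patches: complementary halves per eye sum
    # to a continuous grating (toy 4-sample contrast profiles).
    half_L = np.array([1.0, 0.0, 1.0, 0.0])
    half_R = np.array([0.0, 1.0, 0.0, 1.0])
    mono = threshold(half_L, np.zeros(4))   # one patch type, one eye
    dich = threshold(half_L, half_R)        # interdigitated across eyes
    print(mono / dich)                      # summation ratio 2**(1/P) ≈ 1.33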