963 results for Picard iteration
Abstract:
Background Interferon-γ receptor 1 (IFNγR1) deficiency is a primary immunodeficiency with allelic dominant and recessive mutations characterised clinically by severe infections with mycobacteria. We aimed to compare the clinical features of recessive and dominant IFNγR1 deficiencies. Methods We obtained data from a large cohort of patients worldwide. We assessed these people by medical histories, records, and genetic and immunological studies. Data were abstracted onto a standard form. Findings We identified 22 patients with recessive complete IFNγR1 deficiency and 38 with dominant partial deficiency. BCG and environmental mycobacteria were the most frequent pathogens. In recessive patients, 17 (77%) had environmental mycobacterial disease and all nine BCG-vaccinated patients had BCG disease. In dominant patients, 30 (79%) had environmental mycobacterial disease and 11 (73%) of 15 BCG-vaccinated patients had BCG disease. Compared with dominant patients, those with recessive deficiency were younger at onset of first environmental mycobacterial disease (mean 3·1 years [SD 2·5] vs 13·4 years [14·3], p=0·001), had more mycobacterial disease episodes (19 vs 8 per 100 person-years of observation, p=0·0001), had more severe mycobacterial disease (mean number of organs infected by Mycobacterium avium complex 4·1 [SD 0·8] vs 2·0 [1·1], p=0·004), had shorter mean disease-free intervals (1·6 years [SD 1·4] vs 7·2 years [7·6], p
Abstract:
At the beginning of the 20th century, Detroit was a dynamic city in full development. It soon became the fourth-largest city in the United States and the capital of the nascent automobile industry. Growth continued until the late 1950s, when, despite the economic boom of the United States and of Detroit's own metropolitan area, the city began to show the first signs of stagnation. The crisis has persisted to this day, with Detroit now the paradigm of the industrial city in decline. These two contrasting images, boom and crisis, do not by themselves seem to explain the intensity and persistence of Detroit's decline. Analyzing the interactions among economic growth, local public policy, and urban development over time makes it possible to highlight the continuities and to understand the extent to which Detroit's decline is rooted in the model laid down during the boom years.
Abstract:
Computationally efficient sequential learning algorithms are developed for direct-link resource-allocating networks (DRANs). These are achieved by decomposing existing recursive training algorithms on a layer-by-layer and neuron-by-neuron basis. This allows network weights to be updated in an efficient parallel manner and facilitates the implementation of minimal update extensions that yield a significant reduction in computation load per iteration compared to existing sequential learning methods employed in resource-allocating network (RAN) and minimal RAN (MRAN) approaches. The new algorithms, which also incorporate a pruning strategy to control network growth, are evaluated on three different system identification benchmark problems and shown to outperform existing methods both in terms of training error convergence and computational efficiency. (c) 2005 Elsevier B.V. All rights reserved.
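The grow-versus-adapt decision at the heart of RAN-style sequential learning can be sketched as follows. This is a minimal, generic Platt-style resource-allocating network, not the paper's DRAN decomposition: the thresholds, the Gaussian width, and the plain LMS output update are illustrative assumptions standing in for the recursive per-layer, per-neuron algorithms described in the abstract.

```python
import numpy as np

class MinimalRAN:
    """Skeleton of a Platt-style resource-allocating network.

    A new Gaussian unit is allocated when the input is both novel (far from
    every existing centre) and poorly predicted; otherwise only the output
    layer adapts. Thresholds, width, and the LMS output update are
    illustrative stand-ins, not the paper's algorithms.
    """
    def __init__(self, eps=0.2, delta=0.3, lr=0.05, width=0.7):
        self.centres, self.weights, self.bias = [], [], 0.0
        self.eps, self.delta, self.lr, self.width = eps, delta, lr, width

    def _phi(self, x):
        # Gaussian hidden-unit activations for input x
        return np.array([np.exp(-np.sum((x - c) ** 2) / (2 * self.width ** 2))
                         for c in self.centres])

    def predict(self, x):
        return self.bias + (np.dot(self.weights, self._phi(x)) if self.centres else 0.0)

    def learn(self, x, y):
        e = y - self.predict(x)
        dist = min((np.linalg.norm(x - c) for c in self.centres), default=np.inf)
        if abs(e) > self.eps and dist > self.delta:    # novelty criterion: grow
            self.centres.append(np.array(x, dtype=float))
            self.weights.append(e)                     # new unit absorbs the residual
        else:                                          # otherwise: adapt output layer
            self.weights = list(np.asarray(self.weights) + self.lr * e * self._phi(x))
            self.bias += self.lr * e

# one sequential pass over a toy 1-D regression stream
rng = np.random.default_rng(3)
net = MinimalRAN()
for xv in rng.uniform(-3, 3, 400):
    net.learn(np.array([xv]), np.sin(xv))
```

The novelty test guarantees that allocated centres stay at least `delta` apart, which is the growth-control lever that a pruning strategy (omitted here) would complement.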
Abstract:
In this paper, a reduced-complexity soft-interference-cancellation minimum mean-square-error (SIC-MMSE) iterative equalization method for severely time-dispersive multiple-input-multiple-output (MIMO) channels is proposed. To mitigate the severe time dispersion of the channel, a single carrier with cyclic prefix is employed, and the equalization is performed in the frequency domain. This simplifies the challenging problem of equalization in MIMO channels due to both the intersymbol interference (ISI) and the coantenna interference (CAI). The proposed iterative algorithm works in two stages. The first stage estimates the transmitted frequency-domain symbols using a low-complexity SIC-MMSE equalizer. The second stage converts the estimated frequency-domain symbols into the time domain and finds their means and variances to incorporate in the SIC-MMSE equalizer in the next iteration. Simulation results show the bit-/symbol-error-rate performance of the SIC-MMSE equalizer, with and without coding, for various modulation schemes.
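The first-stage idea, that a cyclic prefix turns the dispersive channel into per-bin multiplications which an MMSE filter can invert cheaply in the frequency domain, can be sketched in a stripped-down single-antenna, non-iterative form. The MIMO dimension, coding, and the SIC feedback of symbol means and variances are omitted, and the channel taps and noise level are made-up toy values:

```python
import numpy as np

rng = np.random.default_rng(2)
N, sigma2 = 64, 0.01
h = np.array([0.8, 0.5, 0.3])               # time-dispersive channel taps (toy values)
bits = rng.integers(0, 2, N)
x = 2.0 * bits - 1.0                        # one block of BPSK symbols

# a cyclic prefix makes the linear channel act as a *circular* convolution,
# i.e. a per-bin multiplication in the frequency domain
y = np.fft.ifft(np.fft.fft(x) * np.fft.fft(h, N))
y += np.sqrt(sigma2 / 2) * (rng.standard_normal(N) + 1j * rng.standard_normal(N))

# per-bin MMSE equalizer (the interference-suppression first stage)
H = np.fft.fft(h, N)
X_hat = np.conj(H) * np.fft.fft(y) / (np.abs(H) ** 2 + sigma2)
x_hat = np.fft.ifft(X_hat).real             # back to the time domain for detection
bits_hat = (x_hat > 0).astype(int)
```

In the full algorithm, `x_hat` would be converted to soft symbol means and variances and fed back into the equalizer for the next iteration.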
Abstract:
A family of stochastic gradient algorithms and their behaviour in a data echo cancellation framework are presented. The cost function adaptation algorithms use an error-exponent update strategy based on an absolute error mapping, which is updated at every iteration. The quadratic and nonquadratic cost functions are special cases of the new family. Several possible realisations are introduced using these approaches. The noisy error problem is discussed and the digital recursive filter estimator is proposed. The simulation outcomes confirm the effectiveness of the proposed family of algorithms.
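The family idea, a cost |e|^p whose exponent is re-mapped from the absolute error at every iteration, can be sketched as follows. The specific mapping p = 1 + exp(-|e|) is an illustrative assumption, not the paper's update rule; it recovers the quadratic cost (p → 2) for small errors and a robust sign-type update (p → 1) for large ones:

```python
import numpy as np

def adaptive_exponent_lms(x, d, n_taps=4, mu=0.01, p_min=1.0, p_max=2.0):
    """Stochastic-gradient filter minimising |e|**p, with the exponent p
    re-mapped from the absolute error at every iteration (illustrative rule:
    large errors push p toward p_min, small errors toward p_max)."""
    w = np.zeros(n_taps)
    for n in range(n_taps - 1, len(x)):
        u = x[n - n_taps + 1:n + 1][::-1]               # regressor x[n], x[n-1], ...
        e = d[n] - w @ u                                # a-priori error
        p = p_min + (p_max - p_min) * np.exp(-abs(e))   # exponent update
        # exact negative gradient of |e|**p w.r.t. w, scaled by the step size
        w += mu * p * abs(e) ** (p - 1.0) * np.sign(e) * u
    return w

# toy echo-cancellation run: identify a known 4-tap echo path
rng = np.random.default_rng(1)
x = rng.standard_normal(5000)               # far-end excitation
h_echo = np.array([0.5, -0.3, 0.2, 0.1])    # unknown echo path (assumed)
d = np.convolve(x, h_echo)[:len(x)] + 0.01 * rng.standard_normal(len(x))
w = adaptive_exponent_lms(x, d)
```

With p fixed at 2 the update collapses to standard LMS, and with p fixed at 1 to the sign algorithm, which is the sense in which quadratic and nonquadratic costs are special cases of the family.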
Abstract:
The goal of this study is to identify cues for the cognitive process of attention in ancient Greek art, aiming to find confirmation of its possible use by ancient Greek audiences and artists. Evidence of cues that trigger attention's psychological dispositions was searched through content analysis of image reproductions of ancient Greek sculpture and fine vase painting from the archaic to the Hellenistic period (ca. 7th to 1st cent. BC). Through this analysis, it was possible to observe the presence of cues that trigger orientation to the work of art (i.e. amplification, contrast, emotional salience, simplification, symmetry), of a cue that triggers disseminated attention to the parts of the work (i.e. distribution of elements), and of cues that activate selective attention to specific elements in the work of art (i.e. contrast of elements, salient color, central positioning of elements, composition regarding the flow of elements, and significant objects). Results support the universality of those dispositions, probably connected with basic competencies that are hard-wired in the nervous system and in cognitive processes.
Abstract:
A 2D isothermal finite element simulation of the injection stretch-blow molding (ISBM) process for polyethylene terephthalate (PET) containers has been developed through the commercial finite element package ABAQUS/Standard. In this work, the blowing air to inflate the PET preform was modeled through two different approaches: a direct pressure input (as measured in the blowing machine) and a constant mass flow rate input (based on a pressure-volume-time relationship). The results from these two approaches were validated against free blow and free stretch-blow experiments, which were instrumented and monitored through high-speed video. Results show that simulation using a constant mass flow rate approach gave a better prediction of the volume vs. time curve and preform shape evolution when compared with the direct pressure approach, and hence is more appropriate for modeling the pre-blowing stage in the injection stretch-blow molding process.
Abstract:
This article reports results of an experiment designed to analyze the link between risky decisions made by couples and risky decisions made separately by each spouse. We estimate both the spouses' and the couples' degrees of risk aversion, we assess how the risk preferences of the two spouses aggregate when they make risky decisions, and we shed light on the dynamics of the decision process that takes place when couples make risky decisions. We find that, far from being fixed, the balance of power within the household is malleable. In most couples, men initially have more decision-making power than women, but women who ultimately implement the joint decisions gain more and more power over the course of decision making.
Abstract:
The arc-length method has become a widely established solution technique for studying nonlinear structural behavior. By augmenting the set of nonlinear equilibrium equations with a constraint equation, which is a function of both the displacements and load increment, it is capable of traversing limit points. Numerous investigations have shown that highly nonlinear behavior such as sharp "snap-backs" can still lead to numerical difficulties. Two practical examples are presented to assess the effectiveness of this solution technique in capturing secondary instabilities in postbuckling structures, which present themselves as abrupt mode jumps. Although the first example poses no special difficulties, in the second case the nonlinear procedure fails to converge. An improvement to the method's formulation is suggested, which accounts for the residual forces that are usually neglected, when proceeding to the next increment once convergence is reached on the current increment. The choice of a correct load increment at the first iteration, within a predictor-corrector scheme, is central to the method's effectiveness. Current strategies for a choice of this load increment are discussed and are shown to be no longer consistent with the modified formulation; therefore, a new approach is proposed.
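For reference, the basic arc-length machinery, an equilibrium residual augmented with a spherical constraint on the increment and solved by Newton iterations within each increment, can be sketched on a one-degree-of-freedom snap-through problem. The toy internal force u - u³ and the previous-increment predictor are assumptions for illustration; this is the standard formulation, not the modified one proposed in the article:

```python
import numpy as np

def arc_length_trace(f, df, u0=0.0, lam0=0.0, dl=0.1, steps=60, tol=1e-10):
    """Trace the equilibrium path f(u) = lam under a spherical arc-length constraint.

    Each increment solves, by Newton iteration on (du, dlam):
        r1 = f(u0 + du) - (lam0 + dlam) = 0     (equilibrium)
        r2 = du**2 + dlam**2 - dl**2    = 0     (arc-length constraint)
    """
    path = [(u0, lam0)]
    du, dlam = dl / np.sqrt(2.0), dl / np.sqrt(2.0)  # first predictor direction
    for _ in range(steps):
        # predictor: reuse the previous converged increment (du, dlam)
        for _ in range(50):                          # corrector: Newton iterations
            u, lam = u0 + du, lam0 + dlam
            r = np.array([f(u) - lam, du**2 + dlam**2 - dl**2])
            if np.linalg.norm(r) < tol:
                break
            J = np.array([[df(u), -1.0],
                          [2.0 * du, 2.0 * dlam]])
            du, dlam = np.array([du, dlam]) - np.linalg.solve(J, r)
        u0, lam0 = u0 + du, lam0 + dlam
        path.append((u0, lam0))
    return np.array(path)

# 1-DOF snap-through: internal force u - u**3 has a limit point at u = 1/sqrt(3)
path = arc_length_trace(lambda u: u - u**3, lambda u: 1.0 - 3.0 * u**2)
```

Because the load increment dlam is an unknown alongside the displacement increment, the Jacobian of the augmented system stays nonsingular at the limit point (where df vanishes), which is how the method traverses it; the choice of the first predictor direction plays the role of the load-increment choice discussed in the article.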
Abstract:
In a Bayesian learning setting, the posterior distribution of a predictive model arises from a trade-off between its prior distribution and the conditional likelihood of observed data. Such distribution functions usually rely on additional hyperparameters which need to be tuned in order to achieve optimum predictive performance; this operation can be efficiently performed in an Empirical Bayes fashion by maximizing the posterior marginal likelihood of the observed data. Since the score function of this optimization problem is in general characterized by the presence of local optima, it is necessary to resort to global optimization strategies, which require a large number of function evaluations. Given that the evaluation is usually computationally intensive and scales badly with the dataset size, the maximum number of observations that can be treated simultaneously is quite limited. In this paper, we consider the case of hyperparameter tuning in Gaussian process regression. A straightforward implementation of the posterior log-likelihood for this model requires O(N^3) operations for every iteration of the optimization procedure, where N is the number of examples in the input dataset. We derive a novel set of identities that allow, after an initial overhead of O(N^3), the evaluation of the score function, as well as the Jacobian and Hessian matrices, in O(N) operations. We prove how the proposed identities, which follow from the eigendecomposition of the kernel matrix, yield a reduction of several orders of magnitude in the computation time for the hyperparameter optimization problem. Notably, the proposed solution provides computational advantages even with respect to state-of-the-art approximations that rely on sparse kernel matrices.
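The core identity behind the O(N) evaluations can be illustrated for the simplest hyperparameter, the noise variance σ²: after one O(N³) eigendecomposition K = UΛUᵀ, both the quadratic form and the log-determinant of K + σ²I reduce to sums over eigenvalues. A minimal sketch covering only the score function, not the paper's Jacobian and Hessian identities, with a made-up squared-exponential kernel:

```python
import numpy as np

def make_cache(K, y):
    """One-time O(N^3) overhead: eigendecompose the kernel matrix."""
    lam, U = np.linalg.eigh(K)          # K = U diag(lam) U^T
    return lam, U.T @ y                 # eigenvalues and rotated targets

def neg_log_marginal_likelihood(sigma2, lam, Uty):
    """O(N) per evaluation: -log p(y) for y ~ N(0, K + sigma2*I), using
    log|K + s*I| = sum(log(lam + s)) and the rotated quadratic form."""
    d = lam + sigma2                    # eigenvalues of K + sigma2*I
    return 0.5 * (np.sum(np.log(d)) + np.sum(Uty**2 / d)
                  + len(lam) * np.log(2 * np.pi))

# toy usage: squared-exponential kernel on 50 random 1-D inputs
rng = np.random.default_rng(0)
X = rng.uniform(0, 5, size=50)
K = np.exp(-0.5 * (X[:, None] - X[None, :])**2)
y = rng.standard_normal(50)
lam, Uty = make_cache(K, y)
# sweeping the noise hyperparameter now costs O(N) per candidate value
vals = [neg_log_marginal_likelihood(s2, lam, Uty) for s2 in (0.01, 0.1, 1.0)]
```

For hyperparameters that enter the kernel itself (e.g. a lengthscale), a plain implementation must refactor K at every step, which is where the paper's identities do the heavy lifting.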