965 results for 2nd-order perturbation-theory
Abstract:
Denote the set of 21 non-isomorphic cubic graphs of order 10 by 𝓛. We first determine precisely which L ∈ 𝓛 occur as the leave of a partial Steiner triple system, thus settling the existence problem for partial Steiner triple systems of order 10 with cubic leaves. Then we settle the embedding problem for partial Steiner triple systems with leaves L ∈ 𝓛. This second result is obtained as a corollary of a more general result which gives, for each integer v ≥ 10 and each L ∈ 𝓛, necessary and sufficient conditions for the existence of a partial Steiner triple system of order v with leave consisting of the complement of L and v - 10 isolated vertices. (C) 2004 Elsevier B.V. All rights reserved.
Abstract:
Mineral processing plants use two main processes: comminution and separation. The objective of the comminution process is to break complex particles consisting of numerous minerals into smaller, simpler particles in which individual particles consist primarily of only one mineral. The process in which the mineral composition distribution in particles changes due to breakage is called 'liberation'. The purpose of separation is to separate particles consisting of valuable mineral from those containing nonvaluable mineral. The energy required to break particles to fine sizes is expensive, and therefore the mineral processing engineer must design the circuit so that the breakage of liberated particles is reduced in favour of breaking composite particles. In order to effectively optimize a circuit through simulation it is necessary to predict how the mineral composition distributions change due to comminution. Such a model is called a 'liberation model for comminution'. It was generally considered that such a model should incorporate information about the ore, such as its texture. However, the relationship between the feed and product particles can be estimated using a probability method, the probability being defined as the probability that a feed particle of a given size and composition will form a product particle of a given size and composition. The model is based on maximizing the entropy of this probability subject to mass and composition constraints. This methodology allows a liberation model to be developed not only for binary particles but also for particles consisting of many minerals. Results from applying the model to a real plant ore are presented. A laboratory ball mill was used to break the particles, and the results from this experiment were used to estimate the kernel which represents the relationship between parent and progeny particles. A second feed, consisting primarily of heavy particles subsampled from the main ore, was then ground through the same mill. The results from the first experiment were used to predict the product of the second experiment. The agreement between the predicted results and the actual results is very good. It is nevertheless recommended that more extensive validation be carried out to fully evaluate the method. (C) 2003 Elsevier Ltd. All rights reserved.
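As an illustration of the entropy-maximization step described above, the sketch below estimates the grade distribution of progeny particles for a single feed particle by maximizing entropy subject to normalization and conservation of the mean grade. This is only a minimal illustration, not the authors' model: the grade bins, the feed grade, and the reduction to a single conservation constraint are assumptions, and the full model additionally conditions on particle size classes and multi-mineral compositions.

```python
# Minimal sketch (not the authors' code): maximum-entropy estimate of the grade
# distribution of progeny particles from a single feed particle, subject only
# to normalisation and conservation of the parent grade. Bins and feed grade
# are hypothetical.
import numpy as np
from scipy.optimize import minimize

grades = np.linspace(0.0, 1.0, 21)   # hypothetical product-composition bins
feed_grade = 0.35                    # assumed grade of the parent particle

def neg_entropy(p):
    p = np.clip(p, 1e-12, None)      # avoid log(0)
    return np.sum(p * np.log(p))     # negative Shannon entropy

constraints = [
    {"type": "eq", "fun": lambda p: p.sum() - 1.0},           # probabilities sum to 1
    {"type": "eq", "fun": lambda p: p @ grades - feed_grade},  # mean grade is conserved
]
p0 = np.full(grades.size, 1.0 / grades.size)                   # uniform starting guess
res = minimize(neg_entropy, p0, bounds=[(0.0, 1.0)] * grades.size,
               constraints=constraints)
print(res.x.round(3))                # maximum-entropy grade distribution
```

A full liberation kernel would repeat such a calculation for every feed size and composition class, constrained by the measured breakage data.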
Abstract:
Work on the numerical solution of stochastic differential equations (SDEs) has recently focused on the development of numerical methods with good stability and order properties. These numerical implementations have been made with fixed stepsize, but there are many situations when a fixed stepsize is not appropriate. In the numerical solution of ordinary differential equations, much work has been carried out on developing robust implementation techniques using variable stepsize. It has been necessary, in the deterministic case, to consider the best choice for an initial stepsize, as well as developing effective strategies for stepsize control; the same, of course, must be carried out in the stochastic case. In this paper, proportional-integral (PI) control is applied to a variable stepsize implementation of an embedded pair of stochastic Runge-Kutta methods used to obtain numerical solutions of nonstiff SDEs. For stiff SDEs, the embedded pair of the balanced Milstein and balanced implicit methods is implemented in variable stepsize mode using a predictive controller for the stepsize change. The extension of these stepsize controllers from a digital filter theory point of view via PI with derivative (PID) control is also implemented. The implementations show the improvement in efficiency that can be attained when using these control theory approaches compared with the regular stepsize change strategy. (C) 2004 Elsevier B.V. All rights reserved.
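For readers unfamiliar with PI stepsize control, the following sketch shows the standard proportional-integral update that such controllers are built on: the next step size is scaled by the current and previous local error estimates. The gains, safety factor and limits are illustrative defaults, not the values used in the paper.

```python
# Minimal sketch of a proportional-integral (PI) step-size controller of the
# kind described above. Gains, safety factor and limits are illustrative
# defaults, not the values used in the paper.
def pi_step_size(h, err, err_prev, tol, k_i=0.3, k_p=0.2,
                 safety=0.9, h_min=1e-8, h_max=1.0):
    """Next step size from the current and previous local error estimates."""
    err = max(err, 1e-14)                # guard against a zero error estimate
    err_prev = max(err_prev, 1e-14)
    factor = safety * (tol / err) ** k_i * (err_prev / err) ** k_p
    factor = min(5.0, max(0.2, factor))  # limit how fast the step may change
    return min(h_max, max(h_min, h * factor))

# Usage inside an integration loop, after the embedded pair has produced an
# error estimate `err` for the step just taken with size `h`:
#     h = pi_step_size(h, err, err_prev, tol)
#     err_prev = err
```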
Abstract:
We present a fully quantum mechanical treatment of the nondegenerate optical parametric oscillator both below and near threshold. This is a nonequilibrium quantum system with a critical-point phase transition, which is also known to exhibit strong yet easily observed squeezing and quantum entanglement. Our treatment makes use of the positive-P representation and goes beyond the usual linearized theory. We compare our analytical results with numerical simulations and find excellent agreement. We also carry out a detailed comparison of our results with those obtained from stochastic electrodynamics, a theory obtained by truncating the equation of motion for the Wigner function, with a view to locating regions of agreement and disagreement between the two. We calculate commonly used measures of quantum behavior, including entanglement, squeezing, and Einstein-Podolsky-Rosen (EPR) correlations, as well as higher-order tripartite correlations, and show how these are modified as the critical point is approached. These results are compared with those obtained using two degenerate parametric oscillators, and we find that in the near-critical region the nondegenerate oscillator has stronger EPR correlations. In general, the critical fluctuations represent an ultimate limit to the possible entanglement that can be achieved in a nondegenerate parametric oscillator.
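As an illustration of one commonly used EPR measure (the Reid criterion based on inferred variances), the sketch below estimates the EPR product from sampled quadrature data for the two modes. The sign convention for the anticorrelated P quadratures and the 1/4 threshold (vacuum quadrature variance of 1/4) are assumptions about normalization and need not match the conventions used in the paper.

```python
# Illustrative sketch (not the authors' code): EPR product from sampled
# quadrature data of the two modes, using inferred variances with the optimal
# linear gain. The threshold 0.25 assumes a vacuum quadrature variance of 1/4.
import numpy as np

def inferred_variance(a, b):
    """Variance of quadrature a after the best linear inference from b."""
    g = np.cov(a, b, ddof=0)[0, 1] / np.var(b)
    return np.var(a - g * b)

def epr_product(x1, x2, p1, p2):
    # For the nondegenerate oscillator below threshold the X quadratures are
    # correlated and the P quadratures anticorrelated, so p1 is inferred from -p2.
    return inferred_variance(x1, x2) * inferred_variance(p1, -p2)

# epr_product(...) < 0.25 signals an EPR paradox in this convention.
```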
Theory-of-mind development in oral deaf children with cochlear implants or conventional hearing aids
Abstract:
Background: In the context of the established finding that theory-of-mind (ToM) growth is seriously delayed in late-signing deaf children, and some evidence of equivalent delays in those learning speech with conventional hearing aids, this study's novel contribution was to explore ToM development in deaf children with cochlear implants. Implants can substantially boost auditory acuity and rates of language growth. Despite the implant, there are often problems socialising with hearing peers and some language difficulties, lending special theoretical interest to the present comparative design. Methods: A total of 52 children aged 4 to 12 years took a battery of false belief tests of ToM. There were 26 oral deaf children, half with implants and half with hearing aids, evenly divided between oral-only versus sign-plus-oral schools. Comparison groups of age-matched high-functioning children with autism and younger hearing children were also included. Results: No significant ToM differences emerged between deaf children with implants and those with hearing aids, nor between those in oral-only versus sign-plus-oral schools. Nor did the deaf children perform any better on the ToM tasks than their age peers with autism. Hearing preschoolers scored significantly higher than all other groups. For the deaf and the autistic children, as well as the preschoolers, rate of language development and verbal maturity significantly predicted variability in ToM, over and above chronological age. Conclusions: The finding that deaf children with cochlear implants are as delayed in ToM development as children with autism and their deaf peers with hearing aids or late sign language highlights the likely significance of peer interaction and early fluent communication with peers and family, whether in sign or in speech, in order to optimally facilitate the growth of social cognition and language.
Abstract:
Statistical tests of Load-Unload Response Ratio (LURR) signals are carried out in order to verify the statistical robustness of previous studies using the Lattice Solid Model (MORA et al., 2002b). In each case 24 groups of samples with the same macroscopic parameters (tidal perturbation amplitude A, period T and tectonic loading rate k) but different particle arrangements are employed. Results of uni-axial compression experiments show that before the normalized time of catastrophic failure, the ensemble-average LURR value rises significantly, in agreement with the observations of high LURR prior to large earthquakes. In shearing tests, two parameters are found to control the correlation between earthquake occurrence and tidal stress. The first, A/(kT), controls the phase shift between the peak seismicity rate and the peak amplitude of the perturbation stress; as this parameter increases, the phase shift decreases. The second, AT/k, controls the height of the probability density function (PDF) of the modeled seismicity; as this parameter increases, the PDF becomes sharper and narrower, indicating strong triggering. Statistical studies of LURR signals in the shearing tests also suggest that, except in strong-triggering cases where LURR cannot be calculated due to poor data in the unloading cycles, larger events are more likely than smaller ones to occur in high-LURR periods, supporting the LURR hypothesis.
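For concreteness, the sketch below shows one common way a LURR value is computed from a seismicity catalogue: the ratio of Benioff strain released while the perturbing (tidal) stress is increasing to that released while it is decreasing. The energy-magnitude relation and the loading/unloading criterion are illustrative assumptions, not necessarily those used in this study.

```python
# Minimal sketch, assuming LURR is taken as the ratio of Benioff strain released
# during loading to that released during unloading phases of the perturbing
# stress cycle. The energy-magnitude relation and loading criterion are
# illustrative choices.
import numpy as np

def lurr(event_times, event_magnitudes, stress_rate, window):
    """LURR over a time window from an earthquake catalogue."""
    in_window = (event_times >= window[0]) & (event_times < window[1])
    t, m = event_times[in_window], event_magnitudes[in_window]
    energy = 10.0 ** (1.5 * m + 4.8)       # Gutenberg-Richter energy (joules)
    benioff = np.sqrt(energy)              # Benioff strain ~ sqrt(energy)
    loading = stress_rate(t) > 0           # loading when the perturbing stress increases
    x_plus, x_minus = benioff[loading].sum(), benioff[~loading].sum()
    return np.inf if x_minus == 0 else x_plus / x_minus
```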
Abstract:
This paper explores the theoretical and policy implications of contemporary American hegemony. A key argument is that the development of US hegemony generally, and the distinctive turn in US foreign policy that has occurred in the wake of 11 September in particular, can best be understood by placing recent events in a comparative and historical framework. The immediate post-World War II order laid the foundations of a highly institutionalised multilateral system that provided key benefits for a number of countries while simultaneously constraining and enhancing US power. An historical reading of US hegemony suggests that its recent unilateralism is undermining the foundations of its power and influence.
Abstract:
To investigate the control mechanisms used in adapting to position-dependent forces, subjects performed 150 horizontal reaching movements over 25 cm in the presence of a position-dependent parabolic force field (PF). The PF acted only over the first 10 cm of the movement. On every fifth trial, a virtual mechanical guide (double wall) constrained subjects to move along a straight-line path between the start and target positions. Its purpose was to register lateral force in order to track the formation of an internal model of the force field, and to look for evidence of possible alternative adaptive strategies. The force field produced a force to the right, which initially caused subjects to deviate in that direction. They reacted by producing deviations to the left, into the force field, as early as the second trial. Further adaptation resulted in rapid exponential reduction of kinematic error in the latter portion of the movement, where the greatest perturbation to the handpath was initially observed, whereas there was little modification of the handpath in the region where the PF was active. Significant force directed to counteract the PF was measured on the first guided trial, and was modified during the first half of the learning set. The total force impulse in the region of the PF increased throughout the learning trials, but it always remained less than that produced by the PF. The force profile did not resemble a mirror image of the PF in that it tended to be more trapezoidal than parabolic in shape. As in previous studies of force-field adaptation, we found that changes in muscle activation involved a general increase in the activity of all muscles, which increased arm stiffness, and selectively greater increases in the activation of muscles that counteracted the PF. With training, activation was exponentially reduced, albeit more slowly than kinematic error. Progressive changes in kinematics and EMG occurred predominantly in the region of the workspace beyond the force field. We suggest that constraints on muscle mechanics limit the ability of the central nervous system to employ an inverse dynamics model to nullify impulse-like forces by generating mirror-image forces. Consequently, subjects adopted a strategy of slightly overcompensating for the first half of the force field, then allowing the force field to push them in the opposite direction. Muscle activity patterns in the region beyond the boundary of the force field were subsequently adjusted because of the relatively slow response of the second-order mechanics of muscle impedance to the force impulse.
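The abstract does not give the analytic form of the PF, so the following is only a hypothetical sketch of a parabolic, position-dependent force field of the kind described: rightward, active only over the first 10 cm of the 25 cm movement, zero at the field boundaries and peaking mid-field. The peak force value is a placeholder, not the experimental value.

```python
# Hypothetical sketch of a parabolic, position-dependent force field of the kind
# described above. The peak force is a placeholder, not the experimental value.
def parabolic_force(x, extent=0.10, f_max=10.0):
    """Lateral (rightward) force in newtons as a function of distance x in metres."""
    if x < 0.0 or x > extent:
        return 0.0                        # the field is inactive beyond the first 10 cm
    u = x / extent                        # normalised position within the field
    return f_max * 4.0 * u * (1.0 - u)    # parabola: 0 at the edges, f_max at u = 0.5
```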
Abstract:
This paper re-examines the stability of multi-input multi-output (MIMO) control systems designed using sequential MIMO quantitative feedback theory (QFT). In order to establish the results, recursive design equations for the SISO equivalent plants employed in a sequential MIMO QFT design are derived. The equations apply to sequential MIMO QFT designs in both the direct plant domain, which employs the elements of the plant in the design, and the inverse plant domain, which employs the elements of the plant inverse in the design. Stability theorems that employ necessary and sufficient conditions for robust closed-loop internal stability are developed for sequential MIMO QFT designs in both domains. The theorems and design equations facilitate less conservative designs and improved design transparency.
Abstract:
We apply the projected Gross-Pitaevskii equation (PGPE) formalism to the experimental problem of the shift in critical temperature Tc of a harmonically confined Bose gas as reported by Gerbier et al., Phys. Rev. Lett. 92, 030405 (2004). The PGPE method includes critical fluctuations, and we find that the results differ from those of various mean-field theories and are in best agreement with the experimental data. To unequivocally observe beyond-mean-field effects, however, the experimental precision must either improve by an order of magnitude or more strongly interacting systems must be considered. This is the first application of a classical field method to make a quantitative comparison with experiment.
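For context, the sketch below evaluates the standard harmonic-trap predictions that PGPE results of this kind are typically compared against: the ideal-gas critical temperature plus the leading mean-field and finite-size shifts. The coefficients are textbook values, not numbers taken from the paper, and the arguments passed in would be placeholders.

```python
# Illustrative sketch of the standard harmonic-trap predictions such PGPE results
# are compared against: ideal-gas Tc plus the leading mean-field and finite-size
# shifts. Coefficients are textbook values; the arguments are placeholders.
import numpy as np
from scipy.constants import hbar, k as kB
from scipy.special import zeta

def critical_temperature(N, trap_freqs_hz, scattering_length, mass):
    """Ideal-gas Tc and the corrected Tc for a harmonically trapped Bose gas."""
    omegas = 2.0 * np.pi * np.asarray(trap_freqs_hz)
    w_geo, w_arith = omegas.prod() ** (1.0 / 3.0), omegas.mean()
    tc0 = hbar * w_geo * (N / zeta(3)) ** (1.0 / 3.0) / kB      # ideal-gas result
    a_ho = np.sqrt(hbar / (mass * w_geo))                        # oscillator length
    shift_int = -1.326 * (scattering_length / a_ho) * N ** (1.0 / 6.0)  # mean field
    shift_fs = -0.728 * (w_arith / w_geo) * N ** (-1.0 / 3.0)           # finite size
    return tc0, tc0 * (1.0 + shift_int + shift_fs)
```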
Abstract:
Objective: To validate the unidimensionality of the Action Research Arm Test (ARAT) using Mokken analysis and to examine whether scores of the ARAT can be transformed into interval scores using Rasch analysis. Subjects and methods: A total of 351 patients with stroke were recruited from 5 rehabilitation departments located in 4 regions of Taiwan. The 19-item ARAT was administered to all subjects by a physical therapist. The data were analysed using item response theory, by non-parametric Mokken analysis followed by Rasch analysis. Results: The Mokken analysis supported a unidimensional scale for the 19-item ARAT, with scalability coefficient H = 0.95. Except for the item "pinch ball bearing 3rd finger and thumb", the remaining 18 items showed a consistently hierarchical order along the continuum of upper-extremity function. In contrast, the Rasch analysis, with stepwise deletion of misfitting items, showed that only 4 items ("grasp ball", "grasp block 5 cm³", "grasp block 2.5 cm³", and "grip tube 1 cm³") fit the Rasch rating scale model's expectations. Conclusion: Our findings indicate that the 19-item ARAT constitutes a unidimensional construct measuring upper-extremity function in stroke patients. However, the results did not support the premise that the raw sum scores of the ARAT can be transformed into interval Rasch scores. Thus, the raw sum scores of the ARAT provide information only about the ordering of patients by their upper-extremity functional abilities, and do not represent each patient's exact level of functioning.
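As background to the Rasch rating scale model mentioned above, the sketch below computes category probabilities for a polytomous item (such as an ARAT item scored 0-3) under the Andrich rating-scale parameterization. The ability, difficulty and threshold values are placeholders, not estimates from this study.

```python
# Illustrative sketch of the Andrich rating-scale model underlying the Rasch
# analysis: category probabilities for an item scored 0-3 given person ability
# theta, item difficulty b, and shared thresholds tau. Values are placeholders.
import numpy as np

def rating_scale_probs(theta, b, taus):
    """P(X = 0..K) under the rating-scale (polytomous Rasch) model."""
    cum = np.concatenate(([0.0], np.cumsum(theta - b - np.asarray(taus))))
    expcum = np.exp(cum - cum.max())       # subtract the max for numerical stability
    return expcum / expcum.sum()

# Example: an average-ability person on an average-difficulty item
print(rating_scale_probs(theta=0.0, b=0.0, taus=[-1.0, 0.0, 1.0]).round(3))
```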
Abstract:
This study assessed the theory of mind (ToM) and executive functioning (EF) abilities of 124 typically developing preschool children aged 3 to 5 years in relation to whether or not they had a child-aged sibling (i.e. a child aged 1 to 12 years) at home with whom to play and converse. On a ToM battery that included tests of false belief, appearance-reality (AR) and pretend representation, children who had at least 1 child-aged sibling scored significantly higher than both only children and those whose only siblings were infants or adults. The number of child-aged siblings in preschoolers' families positively predicted their scores on both a ToM battery (4 tasks) and an EF battery (2 tasks), and these associations remained significant with language ability partialled out. Results of a hierarchical multiple regression analysis revealed that independent contributions to individual differences in ToM were made by language ability, EF skill and having a child-aged sibling. However, even though some conditions for mediation were met, there was no statistically reliable evidence that EF skills mediated the advantage of having a child-aged sibling for ToM performance. While the findings are consistent with the theory that distinctively childish interaction among siblings accelerates the growth of both ToM and EF capacities, alternative evidence and alternative theoretical interpretations were also considered.
Abstract:
In this thesis work we develop a new generative model of social networks belonging to the family of Time-Varying Networks. Correctly modelling the mechanisms that shape the growth of a network and the dynamics of edge activation and inactivation is of central importance in network science. Indeed, by means of generative models that mimic the real-world dynamics of contacts in social networks it is possible to forecast the outcome of an epidemic process, optimize immunization campaigns, or optimally spread information among individuals. This task can now be tackled by taking advantage of the recent availability of large-scale, high-quality and time-resolved datasets. This wealth of digital data has allowed us to deepen our understanding of the structure and properties of many real-world networks. Moreover, the empirical evidence of a temporal dimension in networks prompted the switch of paradigm from a static representation of graphs to a time-varying one. In this work we exploit the Activity-Driven paradigm (a modeling tool belonging to the family of Time-Varying Networks) to develop a general dynamical model that encodes two fundamental mechanisms shaping the topology and temporal structure of social networks: social capital allocation and burstiness. The former accounts for the fact that individuals do not invest their time and social interactions at random, but rather allocate them toward already known nodes of the network. The latter accounts for the heavy-tailed distributions of inter-event times in social networks. We then empirically measure the properties of these two mechanisms in seven real-world datasets and develop a data-driven model, which we solve analytically. We check the results against numerical simulations and test our predictions on real-world datasets, finding good agreement between the two. Moreover, we find and characterize a non-trivial interplay between burstiness and social capital allocation in the parameter phase space. Finally, we present a novel approach to the development of a complete generative model of Time-Varying Networks. This model is inspired by Kauffman's adjacent-possible theory and is based on a generalized version of the Pólya urn. Remarkably, most of the complex and heterogeneous features of real-world social networks are naturally reproduced by this dynamical model, together with many higher-order topological properties (clustering coefficient, community structure, etc.).
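A minimal sketch of the social-capital-allocation (memory) mechanism within an activity-driven temporal network is given below; it is not the thesis code. Node activities are drawn from an illustrative heavy-tailed distribution, an active node reinforces an existing tie with probability n/(n + c) and otherwise explores a new one, and burstiness and the data-driven parameters of the thesis are omitted.

```python
# Minimal sketch (not the thesis code) of an activity-driven temporal network
# with a memory / social-capital-allocation rule: an active node reinforces an
# old tie with probability n/(n + c) and explores a new one otherwise. All
# parameter values are illustrative; burstiness is omitted.
import random

def activity_driven_with_memory(n_nodes=1000, steps=500, c=1.0,
                                eps=1e-3, gamma=2.1):
    # heavy-tailed node activities (approximate power law with exponent gamma)
    activities = [min(1.0, eps * (1.0 - random.random()) ** (-1.0 / (gamma - 1.0)))
                  for _ in range(n_nodes)]
    contacts = [set() for _ in range(n_nodes)]   # ties accumulated by each node
    events = []                                  # (time, i, j) temporal edges
    for t in range(steps):
        for i, a in enumerate(activities):
            if random.random() >= a:
                continue                         # node i is inactive at this step
            n_i = len(contacts[i])
            if n_i and random.random() < n_i / (n_i + c):
                j = random.choice(tuple(contacts[i]))   # reinforce an old tie
            else:
                j = random.randrange(n_nodes)           # explore a new tie
                if j == i:
                    continue
            contacts[i].add(j)
            contacts[j].add(i)
            events.append((t, i, j))
    return events
```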