986 results for Euler number, Irreducible symplectic manifold, Lagrangian fibration, Moduli space
Abstract:
Underground tunnels are vulnerable to terrorist attacks, which can collapse the tunnel structure or at least cause extensive damage requiring lengthy repairs. This paper treats the blast impact on a reinforced concrete segmental tunnel buried in soil under a number of parametric conditions: soil properties, soil cover, distance of the explosive from the tunnel centreline, and explosive weight, and analyses the possible failure patterns. A fully coupled Fluid Structure Interaction (FSI) technique incorporating the Arbitrary Lagrangian-Eulerian (ALE) method is used in this study. Results indicate that a tunnel in saturated soil is more vulnerable to severe damage than one buried in either partially saturated or dry soil. The tunnel is also more vulnerable to surface explosions that occur directly above its centre than to those occurring at an equivalent distance in the ground away from the tunnel centreline. The research findings provide useful information on the modelling, analysis, overall response and failure patterns of segmented tunnels subjected to blast loads, and will guide future development and application of research in this field.
Abstract:
The development of innovative methods of stock assessment is a priority for State and Commonwealth fisheries agencies. It is driven by the need to facilitate sustainable exploitation of naturally occurring fisheries resources for the current and future economic, social and environmental well-being of Australia. This project was initiated in this context and took advantage of considerable recent achievements in genomics that are shaping our comprehension of the DNA of humans and animals. The basic idea behind this project was that genetic estimates of effective population size, which can be made from empirical measurements of genetic drift, are equivalent to estimates of the number of successful spawners, an important parameter in the process of fisheries stock assessment. The broad objectives of this study were to: 1. critically evaluate a variety of mathematical methods of calculating effective spawner numbers (Ne), by (a) conducting comprehensive computer simulations and (b) analysing empirical data collected from the Moreton Bay population of tiger prawns (P. esculentus); 2. lay the groundwork for the application of the technology in the northern prawn fishery (NPF); and 3. produce software for the calculation of Ne and make it widely available. The project pulled together a range of mathematical models for estimating current effective population size from diverse sources. Some had recently been implemented with the latest statistical methods (e.g. the Bayesian framework of Berthier, Beaumont et al. 2002), while others had lower profiles (e.g. Pudovkin, Zaykin et al. 1996; Rousset and Raymond 1995). Computer code, and later software with a user-friendly interface (NeEstimator), was produced to implement the methods. This was used as a basis for simulation experiments that evaluated the performance of the methods with an individual-based model of a prawn population.
Following the guidelines suggested by the computer simulations, the tiger prawn population in Moreton Bay (south-east Queensland) was sampled for genetic analysis with eight microsatellite loci in three successive spring spawning seasons, in 2001, 2002 and 2003. As predicted by the simulations, the estimates had non-infinite upper confidence limits, which is a major achievement for the application of the method to a naturally occurring, short-generation, highly fecund invertebrate species. The genetic estimate of the number of successful spawners was around 1000 individuals in two consecutive years. This contrasts with the roughly 500,000 prawns participating in spawning. Because it is not possible to distinguish successful from non-successful spawners, we suggest a high level of protection for the entire spawning population. We interpret the difference in numbers between successful and non-successful spawners as a large variation in the number of offspring per family that survive: a large number of families have no surviving offspring, while a few have a large number. We explored various ways in which Ne can be useful in fisheries management. It can be a surrogate for spawning population size, assuming the ratio between Ne and spawning population size has been previously calculated for that species. Alternatively, it can be a surrogate for recruitment, again assuming that the ratio between Ne and recruitment has been previously determined. The number of species that can be analysed in this way, however, is likely to be small because of species-specific life history requirements that need to be satisfied for accuracy. The most universal approach would be to integrate Ne with spawning stock-recruitment models, so that these models are more accurate when applied to fisheries populations. A pathway to achieve this was established in this project, which we predict will significantly improve fisheries sustainability in the future.
Regardless of the success of integrating Ne into spawning stock-recruitment models, Ne could be used as a fisheries monitoring tool. Declines in spawning stock size, or increases in natural or harvest mortality, would be reflected by a decline in Ne. This would be valuable for data-poor fisheries and provides fishery-independent information; however, we suggest a species-by-species approach, as some species may be too numerous, or experiencing too much migration, for the method to work. During the project two important theoretical studies of the simultaneous estimation of effective population size and migration were published (Vitalis and Couvet 2001b; Wang and Whitlock 2003). These methods, combined with preliminary genetic data collected from the tiger prawn population in the southern Gulf of Carpentaria and a computer simulation study that evaluated the effect of differing reproductive strategies on genetic estimates, suggest that this technology could make an important contribution to the stock assessment process in the northern prawn fishery (NPF). Advances in genomics are rapid, and a cheaper, more reliable substitute for microsatellite loci in this technology is already available: digital data from single nucleotide polymorphisms (SNPs) are likely to supersede 'analogue' microsatellite data, making it cheaper and easier to apply the method to species with large population sizes.
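The drift-based estimation of Ne described above is, in outline, the classical temporal method. The sketch below is a generic moment-based version of that estimator, following the standard Waples-style correction for sampling noise; it is not the actual NeEstimator code, and all variable names are illustrative:

```python
import numpy as np

def temporal_ne(p0, pt, s0, st, generations):
    """Moment-based temporal estimator of effective population size (Ne).

    p0, pt : allele frequencies at one locus in two samples taken
             `generations` apart; s0, st : sample sizes (individuals).
    A generic sketch of the temporal method, not the NeEstimator code.
    """
    p0 = np.asarray(p0, dtype=float)
    pt = np.asarray(pt, dtype=float)
    # Standardised variance of allele-frequency change, averaged over alleles.
    fc = np.mean((p0 - pt) ** 2 / ((p0 + pt) / 2.0 - p0 * pt))
    # Subtract the sampling contribution from both samples, then invert
    # the drift expectation E[Fc] ~ t / (2 Ne).
    return generations / (2.0 * (fc - 1.0 / (2 * s0) - 1.0 / (2 * st)))
```

With small frequency changes and large samples the estimate is finite and positive; with too few sampled individuals the corrected Fc can go negative, which corresponds to the infinite upper confidence limits the abstract mentions.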
Abstract:
The need for reexamination of the standard model of strong, weak, and electromagnetic interactions is discussed, especially with regard to 't Hooft's criterion of naturalness. It has been argued that theories with fundamental scalar fields tend to be unnatural at relatively low energies. There are two solutions to this problem: (i) a global supersymmetry, which ensures the absence of all the naturalness-violating effects associated with scalar fields, and (ii) composite structure of the scalar fields, which starts showing up at energy scales where unnatural effects would otherwise have appeared. With reference to the second solution, this article reviews the case for dynamical breaking of the gauge symmetry and the technicolor scheme for the composite Higgs boson. This new interaction, of the scaled-up quantum chromodynamic type, keeps the new set of fermions, the technifermions, together in the Higgs particles. It also provides masses for the electroweak gauge bosons W± and Z0 through technifermion condensate formation. In order to give masses to the ordinary fermions, a new interaction, the extended technicolor interaction, which would connect the ordinary fermions to the technifermions, is required. The extended technicolor group breaks down spontaneously to the technicolor group, possibly as a result of the "tumbling" mechanism, which is discussed here. In addition, the author presents schemes for the isospin breaking of mass matrices of ordinary quarks in the technicolor models. In generalized technicolor models with more than one doublet of technifermions or with more than one technicolor sector, we have additional low-lying degrees of freedom, the pseudo-Goldstone bosons. The pseudo-Goldstone bosons in the technicolor model of Dimopoulos are reviewed and their masses computed. In this context the vacuum alignment problem is also discussed. 
An effective Lagrangian is derived describing colorless low-lying degrees of freedom for models with two technicolor sectors in the combined limits of chiral symmetry and large number of colors and technicolors. Finally, the author discusses suppression of flavor-changing neutral currents in the extended technicolor models.
Abstract:
In this paper, we tackle the problem of unsupervised domain adaptation for classification. In the unsupervised scenario where no labeled samples from the target domain are provided, a popular approach consists in transforming the data such that the source and target distributions become similar. To compare the two distributions, existing approaches make use of the Maximum Mean Discrepancy (MMD). However, this does not exploit the fact that probability distributions lie on a Riemannian manifold. Here, we propose to make better use of the structure of this manifold and rely on the distance on the manifold to compare the source and target distributions. In this framework, we introduce a sample selection method and a subspace-based method for unsupervised domain adaptation, and show that both these manifold-based techniques outperform the corresponding approaches based on the MMD. Furthermore, we show that our subspace-based approach yields state-of-the-art results on a standard object recognition benchmark.
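For context, the MMD baseline that the abstract refers to can be estimated empirically as follows. This is a generic biased estimator with an RBF kernel, not the authors' implementation; `gamma` is an illustrative bandwidth choice:

```python
import numpy as np

def rbf_kernel(a, b, gamma=1.0):
    # Pairwise RBF kernel matrix k(x, y) = exp(-gamma * ||x - y||^2).
    sq_dists = ((a[:, None, :] - b[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * sq_dists)

def mmd2(source, target, gamma=1.0):
    """Biased empirical estimate of the squared Maximum Mean Discrepancy
    between two samples (rows are observations)."""
    kss = rbf_kernel(source, source, gamma).mean()
    ktt = rbf_kernel(target, target, gamma).mean()
    kst = rbf_kernel(source, target, gamma).mean()
    return kss + ktt - 2.0 * kst
```

Identical samples give an MMD of zero, and the statistic grows as the two distributions separate; the paper's point is that a geodesic distance on the manifold of distributions can be a better discrepancy than this kernel-based one.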
Abstract:
This paper is concerned with the calculation of the flame structure of one-dimensional laminar premixed flames using the technique of operator splitting. The technique utilizes an explicit method of solution, with one-step Euler for chemistry and a novel probabilistic scheme for diffusion. The relationship between the diffusion phenomenon and the Gauss-Markoff process is exploited to obtain an unconditionally stable explicit difference scheme for diffusion. The method has been applied to (a) a model problem, (b) hydrazine decomposition, (c) a hydrogen-oxygen system with 28 reactions under the constant Dρ² approximation, and (d) a hydrogen-oxygen system (28 reactions) with the trace diffusion approximation. Certain interesting aspects of the behaviour of the solution with non-unity Lewis number are brought out in the case of the hydrazine flame. The results of computation in the most complex case compare very favourably with those of Warnatz, both in terms of accuracy and computational time, showing that explicit methods can be effective in flame computations. Computations using the Gear-Hindmarsh method for chemistry and the present approach for diffusion have also been carried out, and a comparison of the two methods is presented.
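The splitting idea can be illustrated with a toy step for u_t = R(u) + D u_xx: an explicit Euler sub-step for the chemistry, then diffusion advanced by convolution with a Gaussian of variance 2DΔt, the exact heat-kernel solution over Δt, which is stable for any Δt and is the Gauss-Markoff connection in spirit. The paper's actual probabilistic scheme is not reproduced here, and the linear-decay "chemistry" is a stand-in:

```python
import numpy as np

def split_step(u, dt, dx, diffusivity, reaction_rate):
    """One operator-split step for u_t = R(u) + D * u_xx on a uniform grid.

    Chemistry: one explicit Euler step (linear decay as a stand-in).
    Diffusion: convolution with a Gaussian of variance 2*D*dt, i.e. the
    exact heat-kernel propagation over dt, stable for any dt.
    Illustrative sketch only, not the paper's scheme.
    """
    # Sub-step 1: chemistry, one explicit Euler step.
    u = u + dt * (-reaction_rate * u)
    # Sub-step 2: diffusion via normalised Gaussian convolution.
    sigma = np.sqrt(2.0 * diffusivity * dt)
    half_width = max(1, int(np.ceil(4.0 * sigma / dx)))
    offsets = np.arange(-half_width, half_width + 1) * dx
    kernel = np.exp(-offsets ** 2 / (2.0 * sigma ** 2))
    kernel /= kernel.sum()
    return np.convolve(u, kernel, mode="same")
```

Because the diffusion sub-step is a weighted average with non-negative weights summing to one, it cannot amplify the solution, which is why the overall explicit scheme escapes the usual diffusive time-step restriction.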
Abstract:
The Queensland Great Barrier Reef line fishery in Australia is regulated via a range of input and output controls including minimum size limits, daily catch limits and commercial catch quotas. As a result of these measures a substantial proportion of the catch is released or discarded. The fate of these released fish is uncertain, but hook-related mortality can potentially be decreased by using hooks that reduce the rates of injury, bleeding and deep hooking. There is also the potential to reduce the capture of non-target species through gear selectivity. A total of 1053 individual fish representing five target species and three non-target species were caught using six hook types comprising three hook patterns (non-offset circle, J and offset circle), each in two sizes (small 4/0 or 5/0 and large 8/0). Catch rates for each of the hook patterns and sizes varied between species, with no consistent results for target or non-target species. When data for all of the fish species were aggregated, there was a trend for larger hooks, J hooks and offset circle hooks to cause a greater number of injuries. Using larger hooks was more likely to result in bleeding, although this trend was not statistically significant. Larger hooks were also more likely to foul-hook fish or hook fish in the eye. There was a reduction in the rates of injuries and bleeding for both target and non-target species when using the smaller hook sizes. For a number of species in our study the incidence of deep hooking decreased when using non-offset circle hooks; however, these results were not consistent across all species. Our results highlight the variability in hook performance across a range of tropical demersal finfish species. The most obvious conservation benefits for both target and non-target species arise from using smaller sized hooks and non-offset circle hooks.
Fishers should be encouraged to use these hook configurations to reduce the potential for post-release mortality of released fish.
Abstract:
It was proposed earlier [P. L. Sachdev, K. R. C. Nair, and V. G. Tikekar, J. Math. Phys. 27, 1506 (1986)] that the Euler-Painlevé equation yy″ + a(y′)² + f(x)yy′ + g(x)y² + by′ + c = 0 represents the generalized Burgers equations (GBE's) in the same manner as the Painlevé equations do the KdV type. The GBE with a damping term was treated there in some detail. In this paper another GBE, u_t + u^α u_x + Ju/2t = (δ/2)u_xx (the nonplanar Burgers equation), is considered. It is found that its self-similar form is again governed by the Euler-Painlevé equation. The ranges of the parameter α for which solutions of the connection problem for the self-similar equation exist are obtained numerically and confirmed via some integral relations derived from the ODE's. Special exact analytic solutions for the nonplanar Burgers equation are also obtained. These generalize the well-known single-hump solutions for the Burgers equation to other geometries, J = 1, 2; the nonlinear convection term, however, is not quadratic in these cases. This study fortifies the conjecture regarding the importance of the Euler-Painlevé equation with respect to GBE's. Journal of Mathematical Physics is copyrighted by The American Institute of Physics.
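For the quadratic-convection case (α = 1), a single explicit time step of the nonplanar Burgers equation u_t + u u_x + Ju/2t = (δ/2)u_xx can be sketched as below. This is an illustrative finite-difference step, not the authors' method of solution, and the discretization choices are assumptions:

```python
import numpy as np

def step_nonplanar_burgers(u, x, t, dt, J=1, delta=0.1):
    """One explicit time step for u_t + u u_x + J*u/(2t) = (delta/2) u_xx
    (the alpha = 1 case; the paper also treats general u^alpha convection).
    Central differences via np.gradient; illustrative only.
    """
    dx = x[1] - x[0]
    ux = np.gradient(u, dx)    # first spatial derivative
    uxx = np.gradient(ux, dx)  # crude second spatial derivative
    # Convection, geometric damping (J = 1, 2 for cylindrical/spherical
    # geometry), and viscous diffusion.
    return u + dt * (-u * ux - J * u / (2.0 * t) + 0.5 * delta * uxx)
```

The J u/2t term is what distinguishes the nonplanar equation from the plane Burgers equation: it damps the hump geometrically as t grows, which is what drives the solution toward the Euler-Painlevé self-similar form.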
Abstract:
Initial-value problems for the generalized Burgers equation (GBE) u_t + u^β u_x + λu^α = (δ/2)u_xx are discussed for the single-hump type of initial data, both continuous and discontinuous. The numerical solution is carried to the self-similar "intermediate asymptotic" regime, where the solution is given analytically by the self-similar form. The nonlinear (transformed) ordinary differential equations (ODE's) describing the self-similar form are generalizations of a class discussed by Euler and Painlevé and quoted by Kamke. These ODE's are new, and it is postulated that they characterize GBE's in the same manner as the Painlevé equations categorize the Korteweg-de Vries (KdV) type. A connection problem for some related ODE's satisfying proper asymptotic conditions at x = ±∞ is solved. The range of the amplitude parameter is found for which the solution of the connection problem exists. Other solutions of the above GBE, which display several interesting features such as peaking, breaking, and a long shelf on the left for negative values of the damping coefficient λ, are also discussed. The results are compared with those holding for the modified KdV equation with damping. Journal of Mathematical Physics is copyrighted by The American Institute of Physics.
Abstract:
This paper presents an inverse dynamic formulation by the Newton–Euler approach for the Stewart platform manipulator of the most general architecture and models all the dynamic and gravity effects as well as the viscous friction at the joints. It is shown that a proper elimination procedure results in a remarkably economical and fast algorithm for the solution of actuator forces, which makes the method quite suitable for on-line control purposes. In addition, the parallelism inherent in the manipulator and in the modelling makes the algorithm quite efficient in a parallel computing environment, where it can be made as fast as the corresponding formulation for the 6-dof serial manipulator. The formulation has been implemented in a program and has been used for a few trajectories planned for a test manipulator. Results of simulation presented in the paper reveal the nature of the variation of actuator forces in the Stewart platform and justify the dynamic modelling for control.
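The elimination the paper describes ends in a linear solve for the six actuator forces. That final step can be illustrated in the static, gravity-free limit, where the leg forces balance an applied platform wrench through the manipulator's structure matrix; this sketch omits all the inertial, gravity and friction terms the paper models, and the geometry inputs are illustrative:

```python
import numpy as np

def static_leg_forces(base_pts, plat_pts, wrench):
    """Six actuator forces of a Stewart platform balancing a platform
    wrench [Fx, Fy, Fz, Mx, My, Mz] (statics only -- a stand-in for the
    full Newton-Euler inverse dynamics in the paper).

    base_pts, plat_pts : (6, 3) leg attachment points, world frame.
    """
    base_pts = np.asarray(base_pts, dtype=float)
    plat_pts = np.asarray(plat_pts, dtype=float)
    legs = plat_pts - base_pts
    units = legs / np.linalg.norm(legs, axis=1, keepdims=True)
    # Each leg applies a force along its axis at the platform attachment
    # point; stack force directions and their moments about the origin
    # into the 6x6 structure matrix, then solve structure @ f = wrench.
    moments = np.cross(plat_pts, units)
    structure = np.vstack([units.T, moments.T])
    return np.linalg.solve(structure, np.asarray(wrench, dtype=float))
```

Away from singular configurations this is a single well-conditioned 6x6 solve, which is why a proper elimination in the full dynamic problem can be so economical for on-line control.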
Abstract:
A general derivation is given of the coupling constant relations which result on embedding a non-simple group like SU_L(2) ⊗ U(1) in a larger simple group (or graded Lie group). It is shown that such relations depend only on (i) the requirement that the multiplet of vector fields form an irreducible representation of the unifying algebra and (ii) the transformation properties of the fermions under SU_L(2). This point is illustrated in two ways: first, by constructing two different unification groups containing the same fermions, which therefore have the same Weinberg angle; second, by putting different SU_L(2) structures on the same fermions, which consequently have different Weinberg angles. In particular, the value sin²θ_W = 3/8 is characteristic of the sequential doublet models, or of models which invoke a large number of additional leptons like E6, while the addition of extra charged fermion singlets can reduce the value of sin²θ_W to 1/4. We point out that at the present time the models of grand unification are far from unique.
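For reference, the sequential-doublet value quoted above follows from the standard embedding trace ratio, evaluated over one ordinary fermion generation (a textbook computation sketched here, not taken from the article):

```latex
\sin^2\theta_W \;=\; \frac{\operatorname{Tr} T_3^{\,2}}{\operatorname{Tr} Q^2}
             \;=\; \frac{2}{16/3} \;=\; \frac{3}{8}
```

Here Tr T₃² = 2 comes from the lepton doublet plus the three colour copies of the quark doublet, and Tr Q² = 16/3 from summing the squared charges 0, ∓1, ±2/3, ∓1/3 over one generation of Weyl fermions; adding extra charged singlets enlarges Tr Q² without changing Tr T₃², which is how the value can be pushed down toward 1/4.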
Abstract:
The number of bidders, N, involved in a construction procurement auction is known to have an important effect on the value of the lowest bid and on the mark-up applied by bidders. In practice, for example, it is important for a bidder to have a good estimate of N when bidding for a current contract. One approach, instigated by Friedman in 1956, is to make such an estimate by statistical analysis and modelling. Since then, however, finding a suitable model for N has been an enduring problem for researchers and, despite intensive research activity in the subsequent thirty years, little progress has been made, due principally to the absence of new ideas and perspectives. This paper resumes the debate by checking old assumptions, providing new evidence relating to concomitant variables and proposing a new model. To assure universality, a novel approach is developed and tested using a unique set of twelve construction tender databases from four continents. This shows that the new model provides a significant advance on previous versions. Several new research questions are also posed, and other approaches are identified for future study.
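The abstract does not reproduce the proposed model, but the Friedman-tradition starting point it revisits is a simple parametric distribution fitted to observed bidder counts. Purely as an illustration (the Poisson choice and all names here are assumptions, not the paper's model):

```python
import math
import numpy as np

def fit_poisson(counts):
    """MLE of the Poisson rate for observed bidder counts: the sample mean."""
    return float(np.mean(counts))

def poisson_loglik(counts, lam):
    """Log-likelihood of bidder counts N_1..N_k under a Poisson(lam) model,
    useful for comparing candidate rates or candidate distributions."""
    counts = np.asarray(counts, dtype=float)
    return float(np.sum(counts * math.log(lam) - lam)
                 - sum(math.lgamma(c + 1) for c in counts))
```

Comparing such log-likelihoods across candidate distributions and across tender databases is one simple way to check whether a single model for N travels between markets, which is the universality question the paper tests on its twelve databases.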
Abstract:
For complex disease genetics research in human populations, remarkable progress has been made in recent times with the publication of a number of genome-wide association scans (GWAS) and subsequent statistical replications. These studies have identified new genes and pathways implicated in disease, many of which were not known before. Given these early successes, more GWAS are being conducted and planned, both for disease and for quantitative phenotypes. Many researchers and clinicians have DNA samples available on collections of families, including both cases and controls. Twin registries around the world have facilitated the collection of large numbers of families, with DNA and multiple quantitative phenotypes collected on twin pairs and their relatives. In the design of a new GWAS with a fixed budget for the number of chips, the question arises whether to include or exclude related individuals. It is commonly believed to be preferable to use unrelated individuals in the first stage of a GWAS because relatives are 'over-matched' for genotypes. In this study, we quantify that, for GWAS of a quantitative phenotype, surprisingly little power is lost when using relatives rather than a sample of unrelated individuals. The advantages of using relatives are manifold, including the ability to perform more quality control, the choice to perform within-family tests of association that are robust to population stratification, and the ability to perform joint linkage and association analysis. Therefore, the advantages of using relatives in GWAS for quantitative traits may well outweigh the small disadvantage in terms of statistical power.