948 results for cosmological parameters from CMBR
Abstract:
A comparative study combining experimental and numerical investigations was carried out to resolve a long-standing problem: understanding the mechanism of electron conductivity across the magnetic field in low-temperature plasmas. We calculated the plasma parameters from the experimentally obtained electric field distribution, and then made a 'back' comparison with the distributions of electron energy and plasma density obtained in the experiment. This approach significantly reduces the influence of assumptions about the particular phenomenology of electron conductivity in the plasma. The experiment and the calculations made with this technique showed that classical conductivity is not capable of reproducing realistic total current and electron energy, whereas the phenomenological anomalous Bohm mobility demonstrated very good agreement with the experiment. These results provide evidence in favor of Bohm conductivity, helping to clarify the long-standing question of the main driving mechanism responsible for electron transport in low-temperature plasmas.
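To make the distinction between the two transport pictures concrete, a minimal numerical sketch comparing the classical cross-field mobility with the phenomenological Bohm mobility is given below; the magnetic field strength and collision frequency are illustrative placeholders, not values taken from the experiment.

```python
import numpy as np

# Illustrative comparison of classical and Bohm cross-field electron mobilities.
# Parameter values are placeholders, not those of the experiment in the abstract.
e = 1.602e-19      # electron charge, C
m_e = 9.109e-31    # electron mass, kg

B = 0.02           # magnetic field, T (assumed)
nu = 1.0e7         # electron collision frequency, 1/s (assumed)

mu_0 = e / (m_e * nu)                              # field-free mobility
omega_c = e * B / m_e                              # cyclotron frequency
mu_classical = mu_0 / (1.0 + (omega_c / nu) ** 2)  # classical cross-field mobility
mu_bohm = 1.0 / (16.0 * B)                         # phenomenological Bohm mobility

print(f"Hall parameter omega_c/nu = {omega_c / nu:.1f}")
print(f"classical mu_perp = {mu_classical:.3e} m^2/(V s)")
print(f"Bohm mu_B         = {mu_bohm:.3e} m^2/(V s)")
```

For strongly magnetised, weakly collisional conditions like these, the Bohm value exceeds the classical cross-field mobility by orders of magnitude, which is the gap the 'back' comparison above is designed to discriminate.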
Abstract:
It is well understood that there is variation inherent in all testing techniques, and that all soil and rock materials contain some degree of natural variability. Less consideration is normally given to variation associated with natural material heterogeneity within a site, or to the relative condition of the material at the time of testing. This paper assesses the impact of spatial and temporal variability on repeated in situ testing of a residual soil and rock profile within a single residential site over a full calendar year, and thus over a range of seasonal conditions. This repeated testing demonstrated that, depending on the selected location and the moisture content of the subsurface at the time of testing, up to a 35% variation in the test results can be expected from spatial and seasonal effects. The results also demonstrated that the in situ test technique employed has a similarly large measurement and inherent variability error: for the investigated site, up to a 60% variation in normalised results was observed. From these results, it is recommended that the frequency and timing of in situ tests be considered when deriving geotechnical design parameters from a limited data set.
Abstract:
This paper addresses the development of trust in the use of Open Data through the incorporation of appropriate authentication and integrity parameters, for use by end-user Open Data application developers, in an architecture for trustworthy Open Data services. The advantage of this architecture is that it is far more scalable and is not another certificate-based hierarchy with its attendant certificate revocation management problems. With the use of a Public File, if a key is compromised it is a simple matter for the single responsible entity to replace the key pair with a new one and re-perform the data file signing process. Under the proposed architecture, the Open Data environment does not interfere with the internal security schemes that might be employed by the entity. However, the architecture incorporates, when needed, parameters from the entity, e.g. the person who authorized publication as Open Data, at the time that datasets are created or added.
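A minimal sketch of the signing and verification step underlying such an architecture is given below, using Ed25519 signatures from the Python cryptography package; the dataset contents and the way the public key would be distributed via the Public File are assumptions for illustration, not part of the paper's specification.

```python
# Minimal sketch of dataset signing and verification (assumed workflow, not the
# paper's exact scheme): the publishing entity signs each data file, and end-user
# applications verify it against the key listed in the Public File.
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey
from cryptography.exceptions import InvalidSignature

# Publishing entity: generate a key pair once; the public key goes into the Public File.
private_key = Ed25519PrivateKey.generate()
public_key = private_key.public_key()

dataset = b"open-data-dataset-contents"   # stand-in for a published data file
signature = private_key.sign(dataset)     # distributed alongside the dataset

# End-user application: fetch the public key from the Public File and check integrity.
try:
    public_key.verify(signature, dataset)
    print("dataset verified against the publisher's key")
except InvalidSignature:
    print("dataset or signature has been tampered with")
```

If the key pair is compromised, only the entry in the Public File and the signatures need to be regenerated, which is the revocation simplification claimed above.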
Abstract:
Structural identification (St-Id) can be considered as the process of updating a finite element (FE) model of a structural system to match the measured response of the structure. This paper presents the St-Id of a laboratory-based steel through-truss cantilevered bridge with a suspended span. There are a total of 600 degrees of freedom (DOFs) in the superstructure, plus additional DOFs in the substructure. The St-Id of the bridge model used the modal parameters from a preliminary modal test in the objective function of a global optimisation technique using a layered genetic algorithm with a patternsearch step (GAPS). Each layer of the St-Id process involved grouping the structural parameters into a number of updating parameters and running parallel optimisations. The number of updating parameters was increased at each layer of the process. In order to accelerate the optimisation and ensure improved diversity within the population, a patternsearch step was applied to the fittest individuals at the end of each generation of the GA. The GAPS process was able to replicate the mode shapes for the first two lateral sway modes and the first vertical bending mode to a high degree of accuracy and, to a lesser degree, the mode shape of the first lateral bending mode. The mode shape and frequency of the torsional mode did not match well. The frequencies of the first lateral bending mode, the first longitudinal mode and the first vertical mode matched very well. The frequency of the first sway mode was lower, and that of the second sway mode higher, than the true values, indicating a possible problem with the FE model. Improvements to the model and the St-Id process will be presented at the upcoming conference and compared with the results presented in this paper. These improvements will include the use of multiple FE models in a multi-layered, multi-solution GAPS St-Id approach.
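A schematic sketch of the GAPS idea, i.e. a genetic algorithm in which the fittest individuals are polished by a simple pattern (compass) search at the end of each generation, is shown below; the quadratic objective, parameter bounds and GA settings are placeholders standing in for the bridge model's modal-residual objective and its grouped updating parameters.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy objective standing in for the modal residual between FE-predicted and
# measured frequencies/mode shapes; the real objective would run the FE model.
def objective(theta):
    target = np.array([1.0, 0.5, 2.0, 0.8])
    return np.sum((theta - target) ** 2)

def pattern_search(theta, step=0.1, shrink=0.5, n_rounds=20):
    """Simple compass (pattern) search used to polish an individual."""
    f = objective(theta)
    for _ in range(n_rounds):
        improved = False
        for i in range(len(theta)):
            for d in (+step, -step):
                trial = theta.copy()
                trial[i] += d
                ft = objective(trial)
                if ft < f:
                    theta, f, improved = trial, ft, True
        if not improved:
            step *= shrink
    return theta, f

# Genetic algorithm with a pattern-search step applied to the fittest individuals.
pop_size, n_params, n_gen, n_elite = 40, 4, 50, 3
lo, hi = -5.0, 5.0
pop = rng.uniform(lo, hi, size=(pop_size, n_params))

for gen in range(n_gen):
    fitness = np.array([objective(ind) for ind in pop])
    pop = pop[np.argsort(fitness)]
    # Pattern-search polishing of the elite (the "PS" in GAPS).
    for k in range(n_elite):
        pop[k], _ = pattern_search(pop[k].copy())
    # Selection from the better half, arithmetic crossover and Gaussian mutation.
    children = []
    while len(children) < pop_size - n_elite:
        i, j = rng.integers(0, pop_size // 2, size=2)
        alpha = rng.random()
        child = alpha * pop[i] + (1 - alpha) * pop[j]
        child += 0.1 * rng.standard_normal(n_params)
        children.append(np.clip(child, lo, hi))
    pop = np.vstack([pop[:n_elite], children])

best = min(pop, key=objective)
print("best parameters:", best, "objective:", objective(best))
```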
Abstract:
There is an error in the JANAF (1985) data on the standard enthalpy, Gibbs energy and equilibrium constant for the formation of C2H2(g) from its elements. The error arose from an incorrect expression used for computing these parameters from the heat capacity, entropy and relative heat content. Presented in this paper are the corrected values of the enthalpy of formation, the Gibbs energy of formation and the corresponding equilibrium constant.
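For reference, the standard relations linking the tabulated heat capacity, entropy and relative heat content to the formation properties of C2H2(g) (formed as 2 C(graphite) + H2 -> C2H2), which any such tabulation must satisfy, are:

```latex
\Delta_f H^\circ(T) = \Delta_f H^\circ(298.15\,\mathrm{K})
  + \bigl[H^\circ(T) - H^\circ(298.15\,\mathrm{K})\bigr]_{\mathrm{C_2H_2}}
  - 2\bigl[H^\circ(T) - H^\circ(298.15\,\mathrm{K})\bigr]_{\mathrm{C(graphite)}}
  - \bigl[H^\circ(T) - H^\circ(298.15\,\mathrm{K})\bigr]_{\mathrm{H_2}},
\qquad
\Delta_f G^\circ(T) = \Delta_f H^\circ(T) - T\,\Delta_f S^\circ(T),
\qquad
\log_{10} K_f = -\,\frac{\Delta_f G^\circ(T)}{RT\ln 10}.
```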
Abstract:
Recently, it has been shown that the inclusion of higher signal harmonics in the inspiral signals of binary supermassive black holes (SMBH) leads to dramatic improvements in the parameter estimation with the Laser Interferometer Space Antenna (LISA). In particular, the angular resolution becomes good enough to identify the host galaxy or galaxy cluster, in which case the redshift can be determined by electromagnetic means. The gravitational wave signal also provides the luminosity distance with high accuracy, and the relationship between this and the redshift depends sensitively on the cosmological parameters, such as the equation-of-state parameter w = p_DE/ρ_DE of dark energy. Using binary SMBH events at z < 1 with appropriate masses and orientations, one would be able to constrain w to within a few per cent. We show that, if the measured sky location is folded into the error analysis, the uncertainty on w goes down by an additional factor of 2-3, leaving weak lensing as the only limiting factor in using LISA as a dark energy probe.
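The sensitivity to w enters through the standard luminosity distance-redshift relation, written here for a spatially flat universe with a constant dark-energy equation of state:

```latex
d_L(z) = (1+z)\,\frac{c}{H_0}\int_0^{z}\frac{dz'}{\sqrt{\Omega_m\,(1+z')^{3} + \Omega_{\rm DE}\,(1+z')^{3(1+w)}}}\, .
```

A gravitational-wave measurement of d_L combined with an electromagnetic redshift for the host therefore constrains w once the other cosmological parameters are fixed or marginalised over.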
Abstract:
The von Bertalanffy growth model is extended to incorporate explanatory variables. The generalized model includes the switched growth model and the seasonal growth model as special cases, and can also be used to assess the tagging effect on growth. Distribution-free and consistent estimating functions are constructed for estimation of growth parameters from tag-recapture data in which age at release is unknown. This generalizes the work of James (1991, Biometrics 47, 1519-1530), who considered the classical model and allowed for individual variability in growth. A real dataset from barramundi (Lates calcarifer) is analysed to estimate the growth parameters and the possible effect of tagging on growth.
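For context, the classical von Bertalanffy curve and its increment (Fabens-type) form, the natural starting point when age at release is unknown and only lengths at release and recapture are observed, are:

```latex
L(t) = L_\infty\bigl(1 - e^{-K\,(t - t_0)}\bigr),
\qquad
L_2 = L_\infty - (L_\infty - L_1)\,e^{-K\,\Delta t},
```

where L_1 and L_2 are the lengths at release and recapture and Δt is the time at liberty; the generalization described above allows the growth parameters to depend on explanatory variables, e.g. season or a tagging indicator.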
Abstract:
Models that implement the bio-physical components of agro-ecosystems are ideally suited for exploring sustainability issues in cropping systems. Sustainability may be represented as a number of objectives to be maximised or minimised. However, the full decision space of these objectives is usually very large, and simplifications are necessary to safeguard computational feasibility. Different optimisation approaches have been proposed in the literature, usually based on mathematical programming techniques. Here, we present a search approach based on a multiobjective evaluation technique within an evolutionary algorithm (EA), linked to the APSIM cropping systems model. A simple case study addressing crop choice and sowing rules in North-East Australian cropping systems is used to illustrate the methodology. Sustainability of these systems is evaluated in terms of economic performance and resource use. Due to the limited size of this sample problem, the quality of the EA optimisation can be assessed by comparison with the full problem domain. Results demonstrate that the EA procedure, parameterised with generic parameters from the literature, converges to a usable solution set within a reasonable amount of time. Frontier "peels" or Pareto-optimal solutions, as described by the multiobjective evaluation procedure, provide useful information for discussion of trade-offs between conflicting objectives.
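As an illustration of the multiobjective evaluation step, the sketch below extracts the set of non-dominated (Pareto-optimal) strategies for two objectives, a gross margin to be maximised and a resource-use indicator to be minimised; the objective values are random stand-ins rather than APSIM outputs.

```python
import numpy as np

rng = np.random.default_rng(2)

# Placeholder objective values for candidate cropping strategies:
# margin = gross margin (to maximise), drainage = resource-loss indicator (to minimise).
# In the study these would come from APSIM simulations; here they are random stand-ins.
margin = rng.uniform(100, 600, size=200)
drainage = rng.uniform(10, 120, size=200) + 0.05 * margin

def pareto_front(maximise, minimise):
    """Indices of non-dominated solutions (maximise first objective, minimise second)."""
    idx = []
    for i in range(len(maximise)):
        dominated = np.any((maximise >= maximise[i]) & (minimise <= minimise[i]) &
                           ((maximise > maximise[i]) | (minimise < minimise[i])))
        if not dominated:
            idx.append(i)
    return np.array(idx)

front = pareto_front(margin, drainage)
print(f"{len(front)} non-dominated strategies out of {len(margin)}")
```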
Abstract:
We obtain stringent bounds in the ⟨r²⟩_S^{Kπ}-c plane, where these are the scalar radius and the curvature parameter of the scalar Kπ form factor, respectively, using analyticity and dispersion relation constraints, together with the knowledge of the form factor at the well-known Callan-Treiman point m_K² − m_π², as well as at m_π² − m_K², which we call the second Callan-Treiman point. The central values of these parameters from a recent determination are accommodated in the allowed region provided the higher-loop corrections to the value of the form factor at the second Callan-Treiman point reduce the one-loop result by about 3% with F_K/F_π = 1.21. Such a variation in magnitude at the second Callan-Treiman point yields 0.12 fm² ≲ ⟨r²⟩_S^{Kπ} ≲ 0.21 fm² and 0.56 GeV⁻⁴ ≲ c ≲ 1.47 GeV⁻⁴, and a strong correlation between them. A smaller value of F_K/F_π shifts both bounds to lower values.
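The scalar radius and curvature quoted above are defined through the low-energy expansion of the scalar Kπ form factor about t = 0:

```latex
F(t) = F(0)\left[\,1 + \frac{1}{6}\,\langle r^2\rangle^{K\pi}_{S}\, t + c\, t^{2} + \cdots\right].
```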
Abstract:
The "Central Atoms" model presented by the authors in an earlier paper is extended to ternary silicate and alumino-silicate melts. The model is applied to the CaO-FeO-SiO2 and CaO-Al2O3-SiO2 systems. Use is made of the parameters from the relevant binaries only. The agreement between experimental and calculated isoactivity curves is good in all cases.
Abstract:
What can the statistical structure of natural images teach us about the human brain? Even though the visual cortex is one of the most studied parts of the brain, surprisingly little is known about how exactly images are processed to leave us with a coherent percept of the world around us, so we can recognize a friend or drive on a crowded street without any effort. By constructing probabilistic models of natural images, the goal of this thesis is to understand the structure of the stimulus that is the raison d'être for the visual system. Following the hypothesis that the optimal processing has to be matched to the structure of that stimulus, we attempt to derive computational principles, features that the visual system should compute, and properties that cells in the visual system should have. Starting from machine learning techniques such as principal component analysis and independent component analysis, we construct a variety of statistical models to discover structure in natural images that can be linked to receptive field properties of neurons in primary visual cortex, such as simple and complex cells. We show that by representing images with phase-invariant, complex cell-like units, a better statistical description of the visual environment is obtained than with linear simple cell units, and that complex cell pooling can be learned by estimating both layers of a two-layer model of natural images. We investigate how a simplified model of the processing in the retina, where adaptation and contrast normalization take place, is connected to the natural stimulus statistics. Analyzing the effect that retinal gain control has on later cortical processing, we propose a novel method to perform gain control in a data-driven way. Finally, we show how models like those presented here can be extended to capture whole visual scenes rather than just small image patches. By using a Markov random field approach, we can model images of arbitrary size, while still being able to estimate the model parameters from the data.
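A minimal sketch of the first modelling step described above (whitening image patches and estimating simple-cell-like linear features by independent component analysis) is given below using scikit-learn; random data stand in for natural-image patches, so the learned components will not be Gabor-like here, whereas with real patches they typically are.

```python
import numpy as np
from sklearn.decomposition import PCA, FastICA

rng = np.random.default_rng(3)

# Placeholder data: random vectors stand in for 16x16 natural-image patches.
# With real patches the learned components resemble localized, oriented,
# Gabor-like (simple-cell) receptive fields.
patches = rng.standard_normal((5000, 16 * 16))

# Whitening / dimensionality reduction with PCA, then ICA for sparse components.
pca = PCA(n_components=64, whiten=True)
z = pca.fit_transform(patches)
ica = FastICA(n_components=64, max_iter=500, random_state=0)
s = ica.fit_transform(z)

# Basis functions mapped back to pixel space (one per learned "simple cell").
filters = pca.inverse_transform(ica.mixing_.T)
print("learned filters:", filters.shape)
```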
Abstract:
Differential scanning calorimetric studies on ammonium perchlorate have been carried out. The enthalpy values for the phase-transition endotherm and the two exotherms are reported in the present communication. A new method has been developed for the estimation of kinetic parameters from DSC thermograms. The values of activation energy calculated by this method for the low-temperature and high-temperature exotherms are in close agreement with literature values. The present studies also confirm the presence of small exothermic peaks at the initial stages of the high-temperature exotherm, and an explanation for these is given.
Abstract:
Sensory nerve action potentials (SNAPs) and compound nerve action potentials (CNAPs) were recorded from 25 normal subjects and 21 hanseniasis (leprosy) patients following electrical stimulation of the median nerve at the wrist. The various nerve conduction parameters from the affected nerves of the patients were compared with those from the clinically normal nerves of patients as well as data from healthy individuals. Analysis of the data and clinical correlation studies indicate that the amplitudes of the SNAPs and CNAPs, rather than the nerve conduction velocities, better characterize the neuropathy of the patients. Significantly reduced amplitudes of responses from clinically unaffected nerves of patients indicate an early stage of neuropathy, and are thus of predictive value. Further, a discriminant classifier, trained on data from clinically affected nerves of patients, classified most of the data from clinically unaffected nerves of patients as abnormal. This indicates that clinical neurophysiological studies can reveal leprous neuropathy well before it becomes clinically evident through sensory or motor loss. A discriminant score involving only the parameters of motor threshold, amplitude of the digit potential and palm nerve conduction velocity is able to classify almost all of the normal and abnormal responses. The authors hope that further confirmatory studies might ultimately lead to the use of distal sensory conduction studies of the upper limbs in the screening of populations exposed to Mycobacterium leprae. On the other hand, misclassification of a normal person occurred, which suggests that further refinement of the methods is necessary to facilitate their wider use under field conditions.
Abstract:
A considerable amount of work has been dedicated to the development of analytical solutions for the flow of chemical contaminants through soils. Most of the analytical solutions for complex transport problems are closed-form series solutions. The convergence of these solutions depends on the eigenvalues obtained from a corresponding transcendental equation. The difficulty in obtaining exact solutions from analytical models thus encourages the use of numerical solutions for parameter estimation, even though the latter models are computationally expensive. In this paper, a combination of two swarm intelligence based algorithms is used for accurate estimation of design transport parameters from the closed-form analytical solutions. Estimation of the eigenvalues from the transcendental equation is treated as a multimodal, discontinuous function optimization problem. The eigenvalues are estimated using an algorithm based on the glowworm swarm strategy. Parameter estimation in the inverse problem is handled using the standard PSO algorithm. Integration of these two algorithms enables accurate estimation of design parameters using closed-form analytical solutions. The present solver is applied to a real-world inverse problem in environmental engineering. The inverse model based on swarm intelligence techniques is validated and its accuracy in parameter estimation is demonstrated. The proposed solver quickly estimates the design parameters with great precision.
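Of the two swarm components, the standard PSO used for the inverse-problem step is the simpler to illustrate. The sketch below applies a textbook PSO to a toy inverse problem whose forward model is a placeholder, not the paper's closed-form series solution, and the glowworm swarm step for eigenvalue extraction is omitted.

```python
import numpy as np

rng = np.random.default_rng(0)

def pso(objective, bounds, n_particles=30, n_iter=200, w=0.7, c1=1.5, c2=1.5):
    """Standard particle swarm optimisation (minimisation)."""
    lo, hi = bounds[:, 0], bounds[:, 1]
    dim = len(lo)
    x = rng.uniform(lo, hi, size=(n_particles, dim))
    v = np.zeros_like(x)
    pbest = x.copy()
    pbest_f = np.array([objective(p) for p in x])
    gbest = pbest[np.argmin(pbest_f)].copy()
    gbest_f = pbest_f.min()
    for _ in range(n_iter):
        r1, r2 = rng.random((2, n_particles, dim))
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)
        x = np.clip(x + v, lo, hi)
        f = np.array([objective(p) for p in x])
        improved = f < pbest_f
        pbest[improved], pbest_f[improved] = x[improved], f[improved]
        if f.min() < gbest_f:
            gbest, gbest_f = x[np.argmin(f)].copy(), f.min()
    return gbest, gbest_f

# Toy inverse problem: recover two transport-like parameters (D, R) from "observed"
# concentrations generated by a simple placeholder forward model.
def model(params, t):
    D, R = params
    return np.exp(-D * t) / R

t_obs = np.linspace(0.1, 5.0, 20)
c_obs = model((0.8, 2.0), t_obs)

def misfit(params):
    return np.sum((model(params, t_obs) - c_obs) ** 2)

bounds = np.array([[0.01, 5.0], [1.0, 10.0]])
best, best_f = pso(misfit, bounds)
print("estimated (D, R):", best, "misfit:", best_f)
```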
Abstract:
The determination of the overconsolidation ratio (OCR) of clay deposits is an important task in geotechnical engineering practice. This paper examines the potential of a support vector machine (SVM) for predicting the OCR of clays from piezocone penetration test data. SVM is a statistical learning approach based on the structural risk minimization principle, which minimizes both error and weight terms. The five input variables used in the SVM model for prediction of OCR are the corrected cone resistance (qt), vertical total stress (sigmav), hydrostatic pore pressure (u0), pore pressure at the cone tip (u1), and pore pressure just above the cone base (u2). A sensitivity analysis has been performed to investigate the relative importance of each of the input parameters. From the sensitivity analysis, it is clear that qt is the in situ measurement most strongly related to OCR, followed by sigmav, u0, u2, and u1. A comparison between the SVM and some of the traditional interpretation methods is also presented. The results of this study show that the SVM approach has the potential to be a practical tool for the determination of OCR.
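A minimal sketch of how such an SVM regression model could be set up with the five piezocone inputs is shown below, using scikit-learn's SVR; the records and the relation generating the synthetic OCR target are placeholders, not the paper's data or calibrated model.

```python
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVR

rng = np.random.default_rng(1)

# Placeholder piezocone records: columns are qt, sigma_v, u0, u1, u2 (synthetic values
# standing in for real CPTu data); the OCR target is likewise a synthetic relation.
X = rng.uniform([0.5, 20, 10, 20, 15], [10.0, 400, 300, 600, 500], size=(200, 5))
ocr = 1.0 + 3.0 * X[:, 0] / (X[:, 1] + 1e-6) + 0.1 * rng.standard_normal(200)

# Scale the inputs, then fit an RBF-kernel support vector regressor.
model = make_pipeline(StandardScaler(), SVR(kernel="rbf", C=10.0, epsilon=0.05))
model.fit(X, ocr)
print("predicted OCR for first record:", model.predict(X[:1])[0])
```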