953 results for Computations
Abstract:
The research in software science has so far concentrated on three measures of program complexity: (a) software effort; (b) cyclomatic complexity; and (c) program knots. In this paper we propose a measure of the logical complexity of programs in terms of the variable dependency of sequences of computations, the inductive effort in writing loops, and the complexity of data structures. The proposed complexity measure is described with the aid of a graph which exhibits diagrammatically the dependence of a computation at a node upon the computations at other (earlier) nodes. Complexity measures of several example programs have been computed and the related issues discussed. The paper also describes the role played by data structures in deciding program complexity.
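As a point of reference for one of the established measures the paper contrasts with, McCabe's cyclomatic complexity V(G) = E − N + 2P can be computed directly from a control-flow graph. This is a minimal illustration of that standard measure, not the dependency-graph measure proposed in the paper; the example graph and names are our own.

```python
# McCabe's cyclomatic complexity: V(G) = E - N + 2P, where E is the number
# of edges, N the number of nodes, and P the number of connected components.
def cyclomatic_complexity(edges, nodes, components=1):
    """edges: iterable of (src, dst) pairs; nodes: iterable of node ids."""
    return len(list(edges)) - len(set(nodes)) + 2 * components

# Control-flow graph of a single if/else: entry -> cond -> {then, else} -> exit
nodes = ["entry", "cond", "then", "else", "exit"]
edges = [("entry", "cond"), ("cond", "then"), ("cond", "else"),
         ("then", "exit"), ("else", "exit")]
print(cyclomatic_complexity(edges, nodes))  # 5 - 5 + 2 = 2
```

A single if/else yields V(G) = 2, matching the intuition that the function has two linearly independent paths.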
Abstract:
A fuzzy logic based centralized control algorithm for irrigation canals is presented. The purpose of the algorithm is to control the downstream discharge and the water level of pools in the canal by adjusting the discharge release from the upstream end and the gate settings. The algorithm is based on inversion in space of the dynamic wave model (Saint-Venant equations), wherein the momentum equation is replaced by a fuzzy rule based model while the continuity equation is retained in its complete form. The fuzzy rule based model is developed by fuzzification of a new mathematical model for wave velocity, the derivational details of which are given. The advantages of the fuzzy control algorithm over other conventional control algorithms are described: it is transparent and intuitive, and no linearization of the governing equations is involved. The timing of the algorithm and the method of computation are explained. It is shown that the tuning is easy and the computations are straightforward. The algorithm provides stable, realistic and robust outputs. Its disadvantage is reduced precision in the outputs due to the approximation inherent in fuzzy logic. Feedback control logic is adopted to eliminate both the error caused by system disturbances and the error caused by the reduced precision of the outputs. The algorithm is tested by applying it to the water-level control problem in a fictitious canal with a single pool and in a real canal with a series of pools. The results obtained from the algorithm are found to be comparable to those obtained from conventional control algorithms.
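For context, the dynamic wave model referred to above is the one-dimensional Saint-Venant pair; a standard textbook form (not transcribed from the paper) is:

```latex
% 1-D Saint-Venant equations for open-channel flow: continuity (retained
% in full by the algorithm) and momentum (replaced by the fuzzy model).
\begin{aligned}
\frac{\partial A}{\partial t} + \frac{\partial Q}{\partial x} &= q \\
\frac{\partial Q}{\partial t}
  + \frac{\partial}{\partial x}\!\left(\frac{Q^2}{A}\right)
  + gA\,\frac{\partial h}{\partial x} &= gA\,(S_0 - S_f)
\end{aligned}
```

Here A is the flow area, Q the discharge, q the lateral inflow per unit length, h the depth, and S_0 and S_f the bed and friction slopes; the algorithm keeps the first equation intact and replaces the second with fuzzy rules.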
Abstract:
The unsteady laminar compressible boundary-layer flow over two-dimensional and axisymmetric bodies at the stagnation point with mass transfer has been studied for all second-order boundary-layer effects when the basic potential flow admits self-similarity. The governing equations are solved using an implicit finite-difference scheme. Computations have been carried out for different values of the parameters characterizing the unsteadiness in the free-stream velocity, the wall temperature, the mass transfer rate and the variable gas properties. The results are found to be strongly affected by the unsteadiness in the free-stream velocity. For large injection rates the second-order boundary-layer effects may prevail over the first-order boundary layer, but the reverse is true for suction. The wall temperature and the variation of the density-viscosity product across the boundary layer appreciably change the skin-friction and heat-transfer rates due to second-order boundary-layer effects.
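To illustrate the class of implicit finite-difference schemes referred to, here is a minimal backward-Euler step for the 1-D diffusion equation u_t = ν u_xx, solved with the tridiagonal (Thomas) algorithm. This is a generic sketch under our own assumptions, not the paper's boundary-layer solver.

```python
def thomas(a, b, c, d):
    """Solve a tridiagonal system; a: sub-, b: main, c: super-diagonal."""
    n = len(d)
    cp, dp = [0.0] * n, [0.0] * n
    cp[0], dp[0] = c[0] / b[0], d[0] / b[0]
    for i in range(1, n):                 # forward elimination
        m = b[i] - a[i] * cp[i - 1]
        cp[i] = c[i] / m
        dp[i] = (d[i] - a[i] * dp[i - 1]) / m
    x = [0.0] * n
    x[-1] = dp[-1]
    for i in range(n - 2, -1, -1):        # back substitution
        x[i] = dp[i] - cp[i] * x[i + 1]
    return x

def implicit_step(u, nu, dt, dx):
    """Advance u_t = nu * u_xx one backward-Euler step; u = 0 at both ends."""
    r = nu * dt / dx ** 2
    n = len(u)
    a, b, c = [-r] * n, [1 + 2 * r] * n, [-r] * n
    d = list(u)
    a[0] = c[0] = a[-1] = c[-1] = 0.0     # Dirichlet boundary rows
    b[0] = b[-1] = 1.0
    d[0] = d[-1] = 0.0
    return thomas(a, b, c, d)
```

Unlike explicit schemes, this step is unconditionally stable, which is why implicit schemes are the usual choice for stiff boundary-layer equations.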
Abstract:
Layered graphitic materials exhibit intriguing new electronic structures, and the search for new types of two-dimensional (2D) monolayers is important for the fabrication of next-generation miniature electronic and optoelectronic devices. By means of density functional theory (DFT) computations, we investigated in detail the structural, electronic, mechanical and optical properties of the single-layer bismuth iodide (BiI3) nanosheet. Monolayer BiI3 is dynamically stable, as confirmed by the computed phonon spectrum. The cleavage energy (Ecl) and interlayer coupling strength of bulk BiI3 are comparable to the experimental values for graphite, which indicates that exfoliation of BiI3 is highly feasible. The obtained stress-strain curve shows that the BiI3 nanosheet is a brittle material with a breaking strain of 13%. The BiI3 monolayer has an indirect band gap of 1.57 eV with spin-orbit coupling (SOC), indicating its potential application in solar cells. Furthermore, the band gap of the BiI3 monolayer can be modulated by biaxial strain. Most interestingly, interfacing electrically active graphene with the monolayer BiI3 nanosheet leads to enhanced light absorption compared with the pure monolayer BiI3 nanosheet, highlighting its great potential for applications in photonics and photovoltaic solar cells.
Abstract:
The unsteady laminar compressible three-dimensional stagnation-point boundary-layer flow with variable properties has been studied when the velocity of the incident stream, the mass transfer and the wall temperature vary arbitrarily with time. The second-order unsteady boundary-layer equations for all the effects have been derived using the method of matched asymptotic expansions. Both nodal- and saddle-point flows, as well as cold- and hot-wall cases, have been considered. The partial differential equations governing the flow have been solved numerically using an implicit finite-difference scheme. Computations have been carried out for an accelerating stream, a decelerating stream and a fluctuating stream. The results indicate that the unsteady free-stream velocity distribution, the nature of the stagnation point, the mass transfer, the wall temperature and the variation of the density-viscosity product across the boundary layer significantly affect the skin friction and heat transfer. The variation of the wall temperature with time strongly affects the heat transfer, whereas its effect on the skin friction is comparatively small. Suction increases the skin friction and heat transfer, but injection does the opposite. The skin friction in the x direction due to the combined effects of the first- and second-order boundary layers is less than that due to the first-order boundary layer alone for all the parameters. The overall skin friction in the z direction and the heat transfer are greater or smaller than the first-order values depending upon the values of the various parameters.
Abstract:
A numerical scheme is presented for accurate simulation of fluid flow using the lattice Boltzmann equation (LBE) on an unstructured mesh. A finite volume approach is adopted to discretize the LBE on a cell-centered, arbitrarily shaped, triangular tessellation. The formulation includes a formal second-order discretization using a Total Variation Diminishing (TVD) scheme for the terms representing advection of the distribution function in physical space due to microscopic particle motion. The advantage of the LBE approach is exploited by implementing the scheme in a new computer code that runs on a parallel computing system. Performance of the new formulation is systematically investigated by simulating four benchmark flows of increasing complexity, namely (1) flow in a plane channel, (2) unsteady Couette flow, (3) flow caused by a moving lid over a 2D square cavity, and (4) flow over a circular cylinder. For each of these flows, the present scheme is validated against results from Navier-Stokes computations as well as lattice Boltzmann simulations on a regular mesh. It is shown that the scheme is robust and accurate for the different test problems studied.
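For contrast with the unstructured finite-volume formulation, the regular-mesh lattice Boltzmann baseline it is validated against can be sketched as a minimal single-relaxation-time (BGK) D2Q9 scheme driven through the first benchmark, body-force-driven plane channel flow. Grid size, relaxation time, and forcing below are illustrative assumptions, not the paper's settings.

```python
import numpy as np

# D2Q9 lattice: rest, four axis, and four diagonal velocities with weights.
e = np.array([(0, 0), (1, 0), (-1, 0), (0, 1), (0, -1),
              (1, 1), (-1, -1), (1, -1), (-1, 1)])
w = np.array([4/9] + [1/9] * 4 + [1/36] * 4)
opp = [0, 2, 1, 4, 3, 6, 5, 8, 7]            # opposite-direction indices

def channel_flow(nx=8, ny=21, tau=0.8, g=1e-5, steps=2000):
    """BGK collide-stream loop; walls at y = 0, ny-1 via full bounce-back."""
    f = np.ones((9, ny, nx)) * w[:, None, None]   # rho = 1, u = 0 initially
    solid = np.zeros((ny, nx), bool)
    solid[0, :] = solid[-1, :] = True             # channel walls
    for _ in range(steps):
        rho = f.sum(0)
        ux = (f * e[:, 0, None, None]).sum(0) / rho
        uy = (f * e[:, 1, None, None]).sum(0) / rho
        usq = ux**2 + uy**2
        for i in range(9):                        # BGK collision + body force
            eu = e[i, 0] * ux + e[i, 1] * uy
            feq = w[i] * rho * (1 + 3*eu + 4.5*eu**2 - 1.5*usq)
            f[i] += -(f[i] - feq) / tau + 3 * w[i] * e[i, 0] * g
        for i in range(9):                        # streaming (periodic in x)
            f[i] = np.roll(np.roll(f[i], e[i, 0], axis=1), e[i, 1], axis=0)
        f[:, solid] = f[opp][:, solid]            # full-way bounce-back
    rho = f.sum(0)
    ux = (f * e[:, 0, None, None]).sum(0) / rho
    return ux[:, 0]                               # velocity profile u_x(y)
```

After a few thousand steps the profile approaches the parabolic Poiseuille solution, peaking at mid-channel; this is the regular-mesh reference behaviour the unstructured scheme is compared against.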
Abstract:
It is shown that a leaky aquifer model can be used for well field analysis in hard rock areas, treating the upper weathered and clayey layers as a composite unconfined aquitard overlying a deeper fractured aquifer. Two long-duration pump test studies are reported in granitic and schist regions in the Vedavati river basin. The validity of simplifications in the analytical solution is verified by finite difference computations.
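The analytical leaky-aquifer solutions involved reduce, when aquitard leakage is negligible, to the classical Theis well function W(u); a minimal series evaluation of that standard function (not code from the paper) is:

```python
import math

EULER_GAMMA = 0.5772156649015329

def theis_w(u, terms=60):
    """Theis well function:
    W(u) = -gamma - ln(u) + sum_{n>=1} (-1)^(n+1) u^n / (n * n!),
    accurate for small-to-moderate u (the usual pumping-test range)."""
    s = -EULER_GAMMA - math.log(u)
    power = 1.0                       # accumulates u^n / n!
    for n in range(1, terms + 1):
        power *= u / n
        s += (-1) ** (n + 1) * power / n
    return s

# Drawdown at radius r and time t: s = (Q / (4*pi*T)) * W(u),
# with u = r^2 * S / (4 * T * t)  (T: transmissivity, S: storativity).
```

Tabulated values such as W(0.01) ≈ 4.038 provide a quick check of the series.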
Abstract:
In this paper the kinematics of a curved shock of arbitrary strength is discussed using the theory of generalised functions. This is an extension of Maslov's work, in which the flow was taken to be isentropic even across the shock. The condition for a nontrivial jump in the flow variables gives the shock manifold equation (sme). An equation for the rate of change of shock strength along the shock rays (defined as the characteristics of the sme) is obtained. This exact result is then compared with the approximate result of shock dynamics derived by Whitham. The comparison shows that the approximate equations of shock dynamics deviate considerably from the exact equations derived here. In the last section we derive the conservation form of our shock dynamics equations. These conservation forms would be very useful in numerical computations, as they would allow us to derive difference schemes for which it would not be necessary to fit the shock-shock explicitly.
Abstract:
Monopoles which are sources of non-Abelian magnetic flux are predicted by many models of grand unification. It has been argued elsewhere that a generic transformation of the "unbroken" symmetry group H cannot be globally implemented on such monopoles for reasons of topology. In this paper, we show that similar topological obstructions are encountered in the mechanics of a test particle in the field of these monopoles and that the transformations of H cannot all be globally implemented as canonical transformations. For the SU(5) model, if H is SU(3)C×U(1)em, a consequence is that color multiplets are not globally defined, while if H is SU(3)C×SU(2)WS×U(1)Y, the same is the case for both color and electroweak multiplets. There are, however, several subgroups KT, KT′,… of H which can be globally implemented, with the transformation laws of the observables differing from group to group in a novel way. For H=SU(3)C×U(1)em, a choice for KT is SU(2)C×U(1)em, while for H=SU(3)C×SU(2)WS×U(1)Y, a choice is SU(2)C×U(1)×U(1)×U(1). The paper also develops the differential geometry of monopoles in a form convenient for computations.
Abstract:
The Standard Model of particle physics consists of quantum electrodynamics (QED) and the weak and strong nuclear interactions. QED is the basis of molecular properties, and thus it defines much of the world we see. The weak nuclear interaction is responsible for decays of nuclei, among other things, and in principle it should also have effects at the molecular scale. The strong nuclear interaction is hidden in interactions inside nuclei. From high-energy and atomic experiments it is known that the weak interaction does not conserve parity. Consequently, the weak interaction, and specifically the exchange of the Z^0 boson between a nucleon and an electron, induces small energy shifts of different sign for mirror-image molecules. This in turn makes one enantiomer of a molecule energetically more favorable than the other, and also shifts the spectral lines of the mirror-image pair of molecules in different directions, creating a splitting. Parity violation (PV) in molecules, however, has not been observed. The topic of this thesis is how the weak interaction affects certain molecular magnetic properties, namely certain parameters of nuclear magnetic resonance (NMR) and electron spin resonance (ESR) spectroscopies. The thesis consists of numerical estimates of NMR and ESR spectral parameters and investigations of how different aspects of quantum chemical computations affect them. PV contributions to the NMR shielding and spin-spin coupling constants are investigated from the computational point of view. All aspects of the quantum chemical electronic structure computations are found to be very important, which makes accurate computations challenging. Effects of molecular geometry are also investigated using a model system of polysilyene chains. The PV contribution to the NMR shielding constant is found to saturate once the chain reaches a certain length, but the effects of local geometry can be large.
Rigorous vibrational averaging is also performed for a relatively small and rigid molecule. Vibrational corrections to the PV contribution are found to be only a couple of per cent. PV contributions to the ESR g-tensor are also evaluated for a series of molecules. Unfortunately, all the estimates are below the experimental limits, but PV in some of the heavier molecules comes close to present-day experimental resolution.
Abstract:
In the present work the methods of relativistic quantum chemistry have been applied to a number of small systems containing heavy elements, for which relativistic effects are important. First, a thorough introduction to the methods used is presented. This includes some of the general methods of computational chemistry and a special section on how to include the effects of relativity in quantum chemical calculations. Second, after this introduction the results obtained are presented. Investigations of high-valent mercury compounds are presented and new ways to synthesise such compounds are proposed. The methods described were applied to certain systems containing short Pt-Tl contacts, and it was possible to explain the interesting bonding situation in these compounds. One of the most common actinide compounds, uranium hexafluoride, was investigated and a new picture of its bonding was presented. Furthermore, the rareness of uranium-cyanide compounds was discussed. In a foray into the chemistry of gold, well known for its strong relativistic effects, investigations of different gold systems were performed. Analogies between Au$^+$ and platinum on the one hand and oxygen on the other were found. New systems with multiple bonds to gold were proposed to experimentalists; one of the proposed systems was spectroscopically observed shortly afterwards. A very interesting molecule, theoretically predicted a few years ago, is WAu$_{12}$. Some of its properties were calculated and its bonding situation was discussed. In a further study on gold compounds it was possible to explain the substitution pattern in bis[phosphane-gold(I)] thiocyanate complexes. This is of some help to experimentalists, as the systems could not be crystallised and the structure was therefore unknown. Finally, computations on one of the heaviest elements in the periodic table were performed. Calculations on compounds containing element 110, darmstadtium, showed that it behaves similarly to its lighter homologue platinum. The extreme importance of relativistic effects for these systems was also shown.
Abstract:
A wide range of models used in agriculture, ecology, carbon cycling, climate and other related studies require information on the amount of leaf material present in a given environment to correctly represent radiation, heat, momentum, water, and various gas exchanges with the overlying atmosphere or the underlying soil. Leaf area index (LAI) thus often features as a critical land surface variable in parameterisations of global and regional climate models, e.g., radiation uptake, precipitation interception, energy conversion, gas exchange and momentum, as all of these are substantially determined by the vegetation surface. Optical wavelengths are the electromagnetic regions most commonly used in remote sensing for LAI estimation and for vegetation studies in general. The main purpose of this dissertation was to improve the determination of LAI using optical observations from close-range remote sensing (hemispherical photography), airborne remote sensing (high-resolution colour and colour-infrared imagery), and satellite remote sensing (high-resolution SPOT 5 HRG imagery). The commonly used light extinction models are applied at all levels of optical observation. For comparative analysis, LAI was also determined using statistical relationships between spectral vegetation indices (SVI) and ground-based LAI. The study areas focus on two regions: one in the Taita Hills, South-East Kenya, characterised by tropical cloud forest and exotic plantations, and the other in Gatineau Park, southern Quebec, Canada, dominated by temperate hardwood forest. The sampling procedure for the sky map of gap fraction and size from hemispherical photographs proved to be one of the most crucial steps in the accurate determination of LAI. LAI and clumping index estimates were significantly affected by the variation of the size of sky segments for given zenith angle ranges.
On sloping ground, gap fraction and size distributions present strong upslope/downslope asymmetry of foliage elements, and thus the correction and the sensitivity analysis for both LAI and clumping index computations were demonstrated. Several SVIs can be used for LAI mapping through empirical regression analysis, provided that the sensitivities of the SVIs over the relevant ranges of LAI are large enough. Large-scale LAI inversion algorithms were demonstrated and proved to be an efficient alternative approach for LAI mapping. LAI can be estimated nonparametrically from the information contained solely in the remotely sensed dataset, given that the upper-end (saturated SVI) value is accurately determined. However, further study is still required to devise a methodology, as well as instrumentation, to retrieve the on-ground green leaf area index, so that the large-scale LAI inversion algorithms presented in this work can be precisely validated. Finally, based on the literature review and this dissertation, potential future research prospects and directions are recommended.
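The light extinction models referred to above invert gap fraction through the Beer-Lambert relation P = exp(−k · LAI); a minimal sketch of that inversion (coefficient values are illustrative assumptions, and the clumping factor Ω is the kind of correction discussed for sloping ground) is:

```python
import math

def lai_from_gap_fraction(gap_fraction, k=0.5, clumping=1.0):
    """Invert Beer-Lambert light extinction: P = exp(-k * LAI).
    Effective LAI = -ln(P) / k; dividing by a clumping factor (Omega <= 1)
    moves the effective value toward the true LAI. k = 0.5 is a common
    illustrative extinction coefficient, not a value from the dissertation."""
    return -math.log(gap_fraction) / (k * clumping)

# A gap fraction of exp(-2) with k = 0.5 corresponds to an effective LAI of 4.
print(lai_from_gap_fraction(math.exp(-2.0)))  # 4.0
```

In practice the gap fraction is sampled per zenith-angle segment of the hemispherical photograph, which is why the segment-size choices discussed above matter so much.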
Abstract:
In the case of pipe trifurcation, previous observations report negative energy losses in the centre branch. This is an anomaly, because by the principle of conservation of energy there should not be any negative energy loss. Earlier investigators suggested, without experimental evidence, that this may be due to the non-inclusion of the kinetic energy coefficient (α) in the computation of energy losses. In the present work, energy loss coefficients have been evaluated from experimentally determined velocity profiles. It is found that with the inclusion of α in the computation of energy loss, there is no negative energy loss in the centre branch.
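The kinetic energy (Coriolis) coefficient in question is α = (1/(A V³)) ∫ u³ dA, which weights the velocity head by the actual profile shape. A small numerical sketch (our own illustration, using the laminar parabolic pipe profile, for which α = 2 exactly):

```python
import math

def alpha_pipe(u_of_r, R=1.0, n=20000):
    """Kinetic energy coefficient alpha = (1/(A V^3)) * integral(u^3 dA)
    for an axisymmetric profile u(r), via midpoint rule over annuli."""
    num = flow = 0.0
    for i in range(n):
        r = (i + 0.5) * R / n
        dA = 2 * math.pi * r * (R / n)
        num += u_of_r(r) ** 3 * dA
        flow += u_of_r(r) * dA
    A = math.pi * R ** 2
    V = flow / A                      # mean velocity
    return num / (A * V ** 3)

# Laminar (parabolic) profile u(r) = 2V(1 - (r/R)^2) with V = 1, R = 1:
parabolic = lambda r: 2.0 * (1.0 - r ** 2)
```

For a uniform profile α = 1, while the parabolic profile gives α = 2; a velocity head written as α V²/2g rather than V²/2g is exactly the correction whose omission produced the apparent negative losses.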
Abstract:
In visual object detection and recognition, classifiers have two interesting characteristics: accuracy and speed. Accuracy depends on the complexity of the image features and classifier decision surfaces. Speed depends on the hardware and on the computational effort required to use the features and decision surfaces. When attempts to increase accuracy lead to increases in complexity and effort, it is necessary to ask how much we are willing to pay for increased accuracy. For example, if increased computational effort implies quickly diminishing returns in accuracy, then those designing inexpensive surveillance applications cannot aim for maximum accuracy at any cost; it becomes necessary to find trade-offs between accuracy and effort. We study efficient classification of images depicting real-world objects and scenes. Classification is efficient when a classifier can be controlled so that the desired trade-off between accuracy and effort (speed) is achieved and unnecessary computations are avoided on a per-input basis. A framework is proposed for understanding and modeling efficient classification of images, in which classification is modeled as a tree-like process. In designing the framework, it is important to recognize what is essential and to avoid structures that are narrow in applicability; earlier frameworks are lacking in this regard. The overall contribution is two-fold. First, the framework is presented, subjected to experiments, and shown to be satisfactory. Second, certain unconventional approaches are experimented with, which allows the separation of the essential from the conventional. To determine whether the framework is satisfactory, three categories of questions are identified: trade-off optimization, classifier tree organization, and rules for delegation and confidence modeling. Questions and problems related to each category are addressed and empirical results are presented.
For example, related to trade-off optimization, we address the problem of computational bottlenecks that limit the range of trade-offs. We also ask if accuracy versus effort trade-offs can be controlled after training. For another example, regarding classifier tree organization, we first consider the task of organizing a tree in a problem-specific manner. We then ask if problem-specific organization is necessary.
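The delegation idea can be sketched with a toy two-stage cascade in which a confidence threshold, set after training, controls the accuracy/effort trade-off on a per-input basis. The classifiers and data below are hypothetical stand-ins, not the thesis's models.

```python
def cascade(x, cheap, expensive, threshold):
    """Run the cheap classifier; delegate to the expensive one only when
    the cheap stage's confidence falls below the threshold. Returns the
    label and the number of stages evaluated (a proxy for effort)."""
    label, confidence = cheap(x)
    if confidence >= threshold:
        return label, 1          # cost: cheap stage only
    return expensive(x), 2       # cost: both stages

# Hypothetical 1-D task: classify the sign of x; the cheap stage is
# unsure near the decision boundary, where |x| is small.
cheap = lambda x: (1 if x >= 0 else -1, min(abs(x), 1.0))
expensive = lambda x: 1 if x >= 0 else -1

inputs = [-0.9, -0.05, 0.02, 0.7]
costs = [cascade(x, cheap, expensive, threshold=0.1)[1] for x in inputs]
print(costs)  # [1, 2, 2, 1] -- only near-boundary inputs pay for stage two
```

Raising the threshold after training shifts the operating point toward accuracy (more delegation, more effort); lowering it shifts toward speed, without retraining either stage.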
Abstract:
The paradigm of computational vision hypothesizes that any visual function -- such as the recognition of your grandparent -- can be replicated by computational processing of the visual input. What are the computations that the brain performs? What should or could they be? Working on the latter question, this dissertation takes the statistical approach, in which one attempts to learn the suitable computations from natural visual data itself. In particular, we empirically study the computational processing that emerges from the statistical properties of the visual world and from the constraints and objectives specified for the learning process. This thesis consists of an introduction and 7 peer-reviewed publications, where the purpose of the introduction is to present the area of study to a reader who is not familiar with computational vision research. In the introduction, we briefly overview the primary challenges to visual processing and recall some current views on visual processing in the early visual systems of animals. Next, we describe the methodology used in our research and discuss the presented results, including some additional remarks, speculations and conclusions that were not featured in the original publications. We present the following results in the publications of this thesis. First, we empirically demonstrate that luminance and contrast are strongly dependent in natural images, contradicting previous theories which suggested that luminance and contrast are processed separately in natural systems because of their independence in the visual data. Second, we show that simple-cell-like receptive fields of the primary visual cortex can be learned in the nonlinear contrast domain by maximization of independence.
Further, we provide first reports of the emergence of conjunctive (corner-detecting) and subtractive (opponent-orientation) processing due to nonlinear projection pursuit with simple objective functions related to sparseness and response-energy optimization. Then, we show that attempting to extract independent components of the nonlinear histogram statistics of a biologically plausible representation leads to projection directions that appear to differentiate between visual contexts. Such processing might be applicable to priming, i.e., the selection and tuning of later visual processing. We continue by showing that a different kind of thresholded low-frequency priming can be learned and used to make object detection faster with little loss in accuracy. Finally, we show that in a computational object detection setting, nonlinearly gain-controlled visual features of medium complexity can be acquired sequentially as images are encountered and discarded. We present two online algorithms to perform this feature selection, and propose the idea that for artificial systems, some processing mechanisms could be selected from the environment without optimizing the mechanisms themselves. In summary, this thesis explores learning visual processing on several levels. The learning can be understood as an interplay of input data, model structures, learning objectives, and estimation algorithms. The presented work adds to the growing body of evidence that statistical methods can be used to acquire intuitively meaningful visual processing mechanisms, and it also presents some predictions and ideas regarding biological visual processing.
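The projection-pursuit idea with a sparseness-style objective can be illustrated in miniature: among candidate directions in toy 2-D data, pick the one maximizing excess kurtosis, which recovers the axis carrying the heavy-tailed (sparse) source. This is a generic illustration under our own assumptions, not the method of the publications.

```python
import math, random

random.seed(0)
# Toy data: a heavy-tailed (sparse) source along the x-axis, a Gaussian
# source along the y-axis.
data = [(random.uniform(-1, 1) * random.expovariate(1.0),
         random.gauss(0, 1)) for _ in range(4000)]

def excess_kurtosis(vals):
    """Sample excess kurtosis; > 0 for sparse (super-Gaussian) data."""
    m = sum(vals) / len(vals)
    var = sum((v - m) ** 2 for v in vals) / len(vals)
    return sum((v - m) ** 4 for v in vals) / len(vals) / var ** 2 - 3

def best_direction(data, n_angles=36):
    """Grid-search projection pursuit: maximize kurtosis over directions."""
    def proj_kurt(k):
        t = math.pi * k / n_angles
        return excess_kurtosis([x * math.cos(t) + y * math.sin(t)
                                for x, y in data])
    best = max(range(n_angles), key=proj_kurt)
    return math.pi * best / n_angles

angle = best_direction(data)   # should lie near 0 (or pi), the sparse axis
```

Gradient-based pursuit with smoother sparseness objectives behaves analogously; grid search is used here only to keep the sketch dependency-free.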