971 results for semi-implicit projection method
Abstract:
Material synthesis and characterization have been among the major areas of scientific research for the past few decades. Various techniques have been suggested for the preparation and characterization of thin films and bulk samples according to the industrial and scientific applications. Material characterization implies the determination of the electrical, magnetic, optical or thermal properties of the material under study. Though it is possible to study all these properties of a material, we concentrate on the thermal and optical properties of certain polymers. The thermal properties are determined using the photothermal beam deflection technique and the optical properties are obtained from various spectroscopic analyses. In addition, the thermal properties of a class of semiconducting compounds, the copper delafossites, are determined by the photoacoustic technique.

Photothermal techniques are among the most powerful tools for the non-destructive characterization of materials. They form a broad class of techniques, which includes laser calorimetry, the pyroelectric technique, photoacoustics, photothermal radiometry, the photothermal beam deflection technique, etc. However, the choice of a suitable technique depends upon the nature of the sample and its environment, the purpose of the measurement, the nature of the light source used, etc. The polymer samples under the present investigation are thermally thin and optically transparent at the excitation (pump beam) wavelength. The photothermal beam deflection technique is advantageous in that it can be used to determine the thermal diffusivity of samples irrespective of whether they are thermally thick or thermally thin and optically opaque or optically transparent. Hence, of all the above-mentioned techniques, the photothermal beam deflection technique is employed for the determination of the thermal diffusivity of these polymer samples. However, the semiconducting samples studied are thermally thick and optically opaque and, therefore, the much simpler photoacoustic technique is used for their thermal characterization.

The production of polymer thin film samples has gained considerable attention over the past few years. Different techniques, such as plasma polymerization, electron bombardment, ultraviolet irradiation and thermal evaporation, can be used for the preparation of polymer thin films from their respective monomers. Among these, plasma polymerization, or glow discharge polymerization, has been widely used for polymer thin film preparation. At the early stages of its discovery, plasma polymerization was not treated as a standard method for the preparation of polymers. The method gained importance only when it was used to make special coatings on metals and began to be recognized as a technique for synthesizing polymers. The well-recognized concept of conventional polymerization is based on molecular processes by which the size of the molecule increases and rearrangement of atoms within a molecule seldom occurs. Polymer formation in plasma, by contrast, is recognized as an atomic process. These films are pinhole free, highly branched and cross-linked, heat resistant and excellent dielectrics. The optical properties, such as the direct and indirect bandgaps and refractive indices, of certain plasma polymerized thin films are determined from their UV-VIS-NIR absorption and transmission spectra.
The possible linkage in the formation of the polymers is suggested by comparing the FTIR spectra of the monomer and the polymer. The thermal diffusivity has been measured using the photothermal beam deflection technique as stated earlier. This technique measures the refractive index gradient established at the sample surface and in the adjacent coupling medium by passing another optical beam (probe beam) through this region, hence the name probe beam deflection. The deflection is detected using a position sensitive detector and its output is fed to a lock-in amplifier, from which the amplitude and phase of the deflection can be directly obtained. The amplitude and phase of the deflection signal are suitably analyzed to determine the thermal diffusivity.

Another class of compounds under the present investigation is the copper delafossites. These samples, in the form of pellets, are thermally thick and optically opaque. The thermal diffusivity of such semiconductors is investigated using the photoacoustic technique, which measures the pressure change using an electret microphone. The output of the microphone is fed to a lock-in amplifier to obtain the amplitude and phase, from which the thermal properties are obtained. The variation in thermal diffusivity with composition is studied.
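The abstract describes extracting thermal diffusivity from the amplitude and phase of the deflection signal. As a minimal sketch, assuming the widely used phase method in which the signal phase falls linearly with the square root of the modulation frequency (the exact analysis in the thesis may differ), the diffusivity can be recovered from the slope of that line:

```python
import numpy as np

# Hedged sketch: in the common phase method for photothermal beam
# deflection, the signal phase falls off linearly with sqrt(f):
#   phi(f) = phi0 - x * sqrt(pi * f / alpha)
# where x is the pump-probe offset and alpha the thermal diffusivity.
# Fitting phase against sqrt(f) then yields alpha from the slope.

def thermal_diffusivity(freqs_hz, phases_rad, offset_m):
    """Estimate thermal diffusivity (m^2/s) from phase-vs-frequency data."""
    slope, _ = np.polyfit(np.sqrt(freqs_hz), phases_rad, 1)
    return np.pi * (offset_m / slope) ** 2

# Synthetic example: alpha = 1e-7 m^2/s, pump-probe offset 100 um
alpha_true = 1e-7
x = 100e-6
f = np.linspace(5.0, 50.0, 10)
phi = 0.3 - x * np.sqrt(np.pi * f / alpha_true)
print(thermal_diffusivity(f, phi, x))  # recovers ~1e-7
```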
Abstract:
A new geometry (semiannular) for the Josephson junction has been proposed, and theoretical studies have shown that the new geometry is useful for electronic applications [1, 2]. In this work we study the voltage-current response of the junction under a periodic modulation. The fluxon experiences an oscillating potential in the presence of the ac bias, which increases the depinning current value. We show that, in a system with periodic boundary conditions, average progressive motion of the fluxon commences only after the amplitude of the ac drive exceeds a certain threshold value. The analytic studies are verified by simulating the equation using a finite-difference method. We observe creation and annihilation of fluxons in the semiannular Josephson junction with an ac bias in the presence of an external magnetic field.
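As a rough illustration of such a finite-difference simulation, the sketch below integrates a perturbed sine-Gordon equation of the kind commonly used to model long Josephson junctions, with a dc plus ac bias and periodic boundary conditions carrying one trapped fluxon. The damping and drive parameters are illustrative assumptions, not values from the paper.

```python
import numpy as np

# Hedged sketch of a finite-difference (leapfrog) integration of the
# perturbed sine-Gordon equation for a long Josephson junction:
#   phi_tt = phi_xx - sin(phi) - a*phi_t + g_dc + g_ac*sin(w*t)
# Parameter values are illustrative only.

L, N = 20.0, 400                     # junction length, grid points
dx = L / N
dt = 0.5 * dx                        # CFL-safe time step
a, g_dc, g_ac, w = 0.05, 0.1, 0.3, 0.6

x = np.arange(N) * dx
phi = 4.0 * np.arctan(np.exp(x - L / 2))   # single-fluxon (kink) profile
phi_old = phi.copy()                       # zero initial velocity

for step in range(20000):
    t = step * dt
    lap = np.roll(phi, -1) - 2 * phi + np.roll(phi, 1)
    lap[-1] += 2 * np.pi             # winding: phi(x + L) = phi(x) + 2*pi
    lap[0] -= 2 * np.pi              # (one fluxon trapped on the ring)
    force = lap / dx**2 - np.sin(phi) + g_dc + g_ac * np.sin(w * t)
    # leapfrog update with the damping term treated semi-implicitly
    phi_new = ((2 * phi - (1 - a * dt / 2) * phi_old + dt**2 * force)
               / (1 + a * dt / 2))
    phi_old, phi = phi, phi_new

# the dc voltage is proportional to the time-averaged phi_t
print((phi - phi_old).mean() / dt)
```

Sweeping g_ac and recording the average voltage would trace out the threshold behaviour the abstract describes: below the threshold the fluxon stays pinned and the mean voltage is near zero.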
Abstract:
The present work deals with investigations on some technologically important polymer nanocomposite films and semicrystalline polypyrrole films. The work presented in the thesis deals with the realization of novel polymer nanocomposites with enhanced functionalities and prospects of applications in fields related to nanophotonics. The development of inorganic/polymer nanocomposites is a rapidly expanding multidisciplinary research area with profound industrial applications. The incorporation of suitable inorganic nanoparticles can endow the resulting nanocomposites with excellent electrical, optical and mechanical properties. The first chapter gives a general introduction to nanotechnology, nanocomposites and conducting polymers. It also emphasizes the significance of ZnO among other semiconductor materials, which forms the inorganic filler in the polymer nanocomposites of the present study. This chapter also gives general ideas on the properties and applications of conducting polymers, with special reference to polypyrrole. The objectives of the present investigations are also clearly addressed in this chapter. The second chapter deals with the theoretical aspects and details of all the experimental techniques used in the present work for the synthesis of polymer nanocomposites and polypyrrole samples and their various characterizations. Chapter 3 is based on the preparation and properties of ZnO/polystyrene nanocomposite film samples. The optical properties of these nanocomposite films are discussed in detail. Chapter 4 deals with detailed investigations on the dependence of the optical properties of ZnO/PS nanocomposite films on the size of the nanostructured ZnO filler material. The excellent UV shielding properties of these nanocomposite films form the highlight of this chapter. Chapter 5 gives a detailed analysis of the nonlinear optical properties of ZnO/PS nanocomposite films using the Z-scan technique. The effect of the ZnO particle size in the composite films on the nonlinear properties is discussed. The present study involves two phases of research activity. In the first phase, the linear and nonlinear optical properties of ZnO/polymer nanocomposites are investigated in detail. The second phase of work is centered on the synthesis of, and related studies on, highly crystalline polypyrrole films. In the present study, nanosized ZnO is synthesized using a wet chemical method at two different temperatures.
Abstract:
As the technologies for the fabrication of high quality microarrays advance rapidly, quantification of microarray data becomes a major task. Gridding is the first step in the analysis of microarray images, locating the subarrays and the individual spots within each subarray. For accurate gridding of high-density microarray images in the presence of contamination and background noise, precise calculation of parameters is essential. This paper presents an accurate, fully automatic gridding method for locating subarrays and individual spots using the intensity projection profile of the most suitable subimage. The method is capable of processing the image without any user intervention and does not demand any input parameters, as many other commercial and academic packages do. According to the results obtained, the accuracy of our algorithm is between 95% and 100% for microarray images with a coefficient of variation less than two. Experimental results show that the method is capable of gridding microarray images with irregular spots, varying surface intensity distribution and more than 50% contamination.
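As a hedged sketch of the core idea (not the paper's full algorithm, which also selects the most suitable subimage and estimates its parameters automatically), gridding from an intensity projection profile can look like this: sum the image along each axis and take the local minima between spot peaks as grid separators. The smoothing window below is an illustrative choice.

```python
import numpy as np

# Hedged sketch of projection-profile gridding: sum image intensity
# along rows and columns, then take the minima between spot peaks as
# the lines separating rows/columns of spots.

def grid_lines(profile, smooth=5):
    """Return indices of local minima of a 1-D intensity profile."""
    k = np.ones(smooth) / smooth
    p = np.convolve(profile, k, mode="same")        # light smoothing
    interior = (p[1:-1] < p[:-2]) & (p[1:-1] <= p[2:])
    return np.where(interior)[0] + 1

def grid_image(img):
    rows = grid_lines(img.sum(axis=1))   # horizontal separators
    cols = grid_lines(img.sum(axis=0))   # vertical separators
    return rows, cols

# Toy image: a 4x4 array of bright spots on a dark background
img = np.zeros((64, 64))
for cy in range(8, 64, 16):
    for cx in range(8, 64, 16):
        img[cy-3:cy+3, cx-3:cx+3] = 1.0
print(grid_image(img))
```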
Abstract:
The method of Least Squares is due to Carl Friedrich Gauss. The Gram-Schmidt orthogonalization method is of much more recent date. A method for solving Least Squares Problems is developed which automatically results in the appearance of the Gram-Schmidt orthogonalizers. Given these orthogonalizers, an induction proof is available for solving Least Squares Problems.
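A minimal sketch of the connection (illustrative, not the paper's derivation): orthogonalizing the columns of A by modified Gram-Schmidt yields A = QR with orthonormal Q, after which the least squares solution of min ||Ax - b|| follows from R x = Q^T b by back-substitution.

```python
import numpy as np

# Modified Gram-Schmidt QR, then least squares via back-substitution.

def mgs_qr(A):
    m, n = A.shape
    Q = A.astype(float).copy()
    R = np.zeros((n, n))
    for j in range(n):
        R[j, j] = np.linalg.norm(Q[:, j])       # normalize column j
        Q[:, j] /= R[j, j]
        for k in range(j + 1, n):               # orthogonalize the rest
            R[j, k] = Q[:, j] @ Q[:, k]
            Q[:, k] -= R[j, k] * Q[:, j]
    return Q, R

def lstsq_gs(A, b):
    Q, R = mgs_qr(A)
    y = Q.T @ b
    x = np.zeros(R.shape[1])
    for i in range(R.shape[1] - 1, -1, -1):     # back-substitution
        x[i] = (y[i] - R[i, i+1:] @ x[i+1:]) / R[i, i]
    return x

A = np.array([[1., 1.], [1., 2.], [1., 3.]])
b = np.array([1., 2., 2.])
print(lstsq_gs(A, b))   # matches np.linalg.lstsq(A, b, rcond=None)[0]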
Abstract:
The identification of chemical mechanisms that can exhibit oscillatory phenomena in reaction networks is currently of intense interest. In particular, the parametric question of the existence of Hopf bifurcations has gained increasing popularity due to its relation to the oscillatory behavior around fixed points. However, the detection of oscillations in high-dimensional systems and systems with constraints by the available symbolic methods has proven to be difficult. New efficient methods are therefore required to tackle the complexity caused by the high dimensionality and non-linearity of these systems. In this thesis, we mainly present efficient algorithmic methods to detect Hopf bifurcation fixed points in (bio)chemical reaction networks with symbolic rate constants, thereby yielding information about the oscillatory behavior of the networks. The methods use representations of the systems in convex coordinates that arise from stoichiometric network analysis. The first method, called HoCoQ, reduces the problem of determining the existence of Hopf bifurcation fixed points to a first-order formula over the ordered field of the reals that can then be solved using computational-logic packages. The second method, called HoCaT, uses ideas from tropical geometry to formulate a more efficient method that is incomplete in theory but worked very well for the attempted high-dimensional models involving more than 20 chemical species. Since the instability of reaction networks may lead to oscillatory behavior, we also investigate criteria for their stability using convex coordinates and quantifier elimination techniques. We further study Muldowney's extension of the classical Bendixson-Dulac criterion for excluding periodic orbits to higher dimensions for polynomial vector fields, and we discuss the use of simple conservation constraints and of parametric constraints for describing simple convex polytopes on which periodic orbits can be excluded by Muldowney's criteria. All developed algorithms have been integrated into a common software framework called PoCaB (platform to explore biochemical reaction networks by algebraic methods), allowing for automated computation workflows from the problem descriptions. PoCaB also contains a database for the algebraic entities computed from the models of chemical reaction networks.
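As a small illustration of the kind of symbolic condition involved (not the HoCoQ/HoCaT implementations themselves): for a three-dimensional system, a Hopf bifurcation candidate satisfies the classical Hurwitz condition a1*a2 - a3 = 0 (with a1, a3 > 0) on the characteristic polynomial of the Jacobian; conjoining this with the fixed-point equations gives a first-order formula over the reals. The toy system and rate constant below are hypothetical.

```python
import sympy as sp

# Hedged illustration of forming a Hopf candidate condition symbolically.
# For lam^3 + a1*lam^2 + a2*lam + a3, a Hopf candidate needs a1*a2 = a3
# (plus a1 > 0, a3 > 0, and the fixed-point equations f = 0).

x, y, z, k = sp.symbols("x y z k", positive=True)
f = sp.Matrix([k*x - x*y - x*z,        # toy system with symbolic rate k
               x*y - y,
               x*z - 2*z])
J = f.jacobian([x, y, z])

lam = sp.symbols("lam")
p = J.charpoly(lam).as_expr()          # monic characteristic polynomial
a1 = p.coeff(lam, 2)
a2 = p.coeff(lam, 1)
a3 = p.coeff(lam, 0)
hopf_condition = sp.simplify(a1*a2 - a3)   # = 0 at a Hopf candidate
print(hopf_condition)
```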
Abstract:
We study four measures of problem instance behavior that might account for the observed differences in interior-point method (IPM) iterations when these methods are used to solve semidefinite programming (SDP) problem instances: (i) an aggregate geometry measure related to the primal and dual feasible regions (aspect ratios) and norms of the optimal solutions, (ii) the (Renegar-) condition measure C(d) of the data instance, (iii) a measure of the near-absence of strict complementarity of the optimal solution, and (iv) the level of degeneracy of the optimal solution. We compute these measures for the SDPLIB suite problem instances and measure the correlation between these measures and IPM iteration counts (solved using the software SDPT3) when the measures have finite values. Our conclusions are roughly as follows: the aggregate geometry measure is highly correlated with IPM iterations (CORR = 0.896), and is a very good predictor of IPM iterations, particularly for problem instances with solutions of small norm and aspect ratio. The condition measure C(d) is also correlated with IPM iterations, but less so than the aggregate geometry measure (CORR = 0.630). The near-absence of strict complementarity is weakly correlated with IPM iterations (CORR = 0.423). The level of degeneracy of the optimal solution is essentially uncorrelated with IPM iterations.
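The correlation analysis itself is straightforward; as a sketch (with placeholder numbers, not the SDPLIB/SDPT3 data), one computes the Pearson correlation over the instances where the measure has a finite value:

```python
import numpy as np

# Hedged sketch of the reported analysis: Pearson correlation between a
# behavioral measure and IPM iteration counts over a problem suite,
# restricted to instances where the measure is finite. Placeholder data.

measure = np.array([1.2, 3.5, np.inf, 2.2, 5.1, 4.0])
iters = np.array([14, 25, 60, 19, 33, 28])

finite = np.isfinite(measure)
corr = np.corrcoef(measure[finite], iters[finite])[0, 1]
print(f"CORR = {corr:.3f}")
```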
Abstract:
Obtaining an automatic 3D profile of objects is one of the most important issues in computer vision. With this information, a large number of applications become feasible: from visual inspection of industrial parts to 3D reconstruction of the environment for mobile robots. In order to obtain 3D data, range finders can be used. The coded structured light approach is one of the most widely used techniques for retrieving the 3D information of an unknown surface. An overview of the existing techniques, as well as a new classification of patterns for structured light sensors, is presented. These systems belong to the group of active triangulation methods, which are based on projecting a light pattern and imaging the illuminated scene from one or more points of view. Since the patterns are coded, correspondences between points of the image(s) and points of the projected pattern can be easily found. Once correspondences are found, a classical triangulation strategy between the camera(s) and the projector device leads to the reconstruction of the surface. Advantages and constraints of the different patterns are discussed.
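Once the coding has resolved correspondences, the triangulation step reduces to intersecting a camera viewing ray with the plane of light associated with a decoded pattern stripe. The sketch below uses illustrative geometry (camera at the origin, hypothetical intrinsics and plane parameters), not any specific calibrated system.

```python
import numpy as np

# Hedged sketch of the triangulation step in coded structured light:
# each decoded projector stripe defines a plane of light; intersecting
# the camera ray through a pixel with that plane gives the 3D point.

def ray_plane_intersect(ray_origin, ray_dir, plane_n, plane_d):
    """Point where origin + t*dir meets the plane {p : n.p + d = 0}."""
    t = -(plane_n @ ray_origin + plane_d) / (plane_n @ ray_dir)
    return ray_origin + t * ray_dir

# camera at the origin looking along +z; pixel (u, v), focal length f
f = 800.0
u, v = 120.0, -40.0
ray_o = np.zeros(3)
ray_d = np.array([u / f, v / f, 1.0])

# light plane of one decoded projector stripe (normal, offset): assumed
n = np.array([0.8, 0.0, -0.6])
d = 0.25
print(ray_plane_intersect(ray_o, ray_d, n, d))
```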
Abstract:
The absolute necessity of obtaining 3D information of structured and unknown environments in autonomous navigation considerably reduces the set of sensors that can be used. The position of the mobile robot with respect to the scene must be known at each instant, and this information must be obtained in the least possible computing time. Stereo vision is an attractive and widely used method, but it is rather limited for making fast 3D surface maps due to the correspondence problem. The spatial and temporal correspondence among images can be alleviated using a method based on structured light. This relationship can be found directly by codifying the projected light; each imaged region of the projected pattern then carries the information needed to solve the correspondence problem. We present the most significant techniques of recent years concerning the coded structured light method.
Abstract:
Quantitatively assessing the importance or criticality of each link in a network is of practical value to operators, as it can help them to increase the network's resilience, provide more efficient services, or improve some other aspect of the service. Betweenness is a graph-theoretical measure of centrality that can be applied to communication networks to evaluate link importance. However, as we illustrate in this paper, the basic definition of betweenness centrality produces inaccurate estimations, as it does not take into account aspects relevant to networking such as the heterogeneity in link capacity or the difference between node pairs in their contribution to the total traffic. A new algorithm for discovering link centrality in transport networks is proposed in this paper. It requires only static or semi-static network and topology attributes, and yet produces estimations of good accuracy, as verified through extensive simulations. Its potential value is demonstrated by an example application, in which the simple shortest-path routing algorithm is improved in such a way that it outperforms other more advanced algorithms in terms of blocking ratio.
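As a hedged sketch of the kind of correction the paper motivates (illustrative only, not the proposed algorithm), edge betweenness can be made traffic- and capacity-aware by weighting each node pair's shortest paths with a demand matrix and normalizing each edge's tally by its capacity. The demand and capacity values below are arbitrary.

```python
import itertools
import networkx as nx

# Hedged sketch: demand-weighted, capacity-normalized edge betweenness,
# addressing the two inaccuracies the paper points out in the basic
# definition. Unlisted pairs/edges default to demand 1 and capacity 1.

def weighted_edge_centrality(G, demand, capacity):
    score = {tuple(sorted(e)): 0.0 for e in G.edges}
    for s, t in itertools.combinations(G.nodes, 2):
        paths = list(nx.all_shortest_paths(G, s, t))
        share = demand.get((s, t), 1.0) / len(paths)   # split over ties
        for path in paths:
            for e in zip(path, path[1:]):
                score[tuple(sorted(e))] += share
    return {e: v / capacity.get(e, 1.0) for e, v in score.items()}

G = nx.cycle_graph(5)
demand = {(0, 2): 5.0}        # one heavy traffic pair
capacity = {(0, 1): 10.0}     # one high-capacity link
print(weighted_edge_centrality(G, demand, capacity))
```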
Abstract:
In all biological processes, proteins and other small molecules interact to function and to form transient macromolecular complexes. Such an interaction of two or more molecules can be described as a docking event. Docking is an important phase for structure-based drug design strategies, as it can be used as a method to simulate protein-ligand interactions. Various docking programs exist that allow automated docking, but most of them offer limited visualization and user interaction. It would be advantageous if scientists could visualize the molecules participating in the docking process, manipulate their structures and manually dock them in an immersive environment before submitting the new conformations to an automated docking process, which could help stimulate the design/docking process and greatly reduce docking time and resources. To achieve this, we propose a new virtual modelling/docking program in which the advantages of virtual modelling programs and the efficiency of the algorithms in existing docking programs are merged.
Abstract:
The sustainability of cereal/legume intercropping was assessed by monitoring trends in grain yield, soil organic C (SOC) and soil extractable P (Olsen method) measured over 13 years at a long-term field trial on a P-deficient soil in semi-arid Kenya. Goat manure was applied annually for 13 years at 0, 5 and 10 t ha(-1); trends in grain yield were not identifiable because of season-to-season variation. SOC and Olsen P increased for the first seven years of manure application and then remained constant. The residual effect of manure applied for four years only lasted another seven to eight years when assessed by yield, SOC and Olsen P. Mineral fertilizers provided the same annual rates of N and P as 5 t ha(-1) of manure and initially gave the same yield as manure, declining after nine years to about 80%. Therefore, manure applications could be made intermittently and nutrient requirements topped up with fertilizers. Grain yields for sorghum with continuous manure were described well by correlations with rainfall and manure input alone, provided data were excluded for seasons with over 500 mm rainfall. A comprehensive simulation model should correctly describe crop losses caused by excess water.
Abstract:
Two experiments investigated the influence of implicit memory on consumer choice for brands with varying levels of familiarity. Priming was measured using a consideration-choice task, developed by Coates, Butler and Berry (2004). Experiment 1 employed a coupon-rating task at encoding that required participants to meaningfully process individual brand names, to assess whether priming could affect participants' final (preferred) choices for familiar brands. Experiment 2 used this same method to assess the impact of implicit memory on consideration and choice for unknown and leader brands, presented in conjunction with familiar competitors. Significant priming was obtained in both experiments, and was shown to directly influence final choice in the case of familiar and highly familiar leader brands. Moreover, it was shown that a single prior exposure could lead participants to consider buying an unknown, and indeed fictitious, brand.
Abstract:
This paper describes a new method for the assessment of palaeohydrology through the Holocene. A palaeoclimate model was linked with a hydrological model, using a weather generator to correct bias in the rainfall estimates, to simulate the changes in the flood frequency and the groundwater response through the late Pleistocene and Holocene for the Wadi Faynan in southern Jordan, a site considered internationally important due to its rich archaeological heritage spanning the Pleistocene and Holocene. This is the first study to describe the hydrological functioning of the Wadi Faynan, a meso-scale (241 km²) semi-arid catchment, setting this description within the framework of contemporary archaeological investigations. Historic meteorological records were collated and supplemented with new hydrological and water quality data. The modelled outcomes indicate that environmental changes, such as deforestation, had a major impact on the local water cycle and this amplified the effect of the prevailing climate on the flow regime. The results also show that increased rainfall alone does not necessarily imply better conditions for farming and highlight the importance of groundwater. The discussion focuses on the utility of the method and the importance of the local hydrology to the sustained settlement of the Wadi Faynan through pre-history and history.
Abstract:
We present an approach for dealing with coarse-resolution Earth observations (EO) in terrestrial ecosystem data assimilation schemes. The use of coarse-scale observations in ecological data assimilation schemes is complicated by spatial heterogeneity and nonlinear processes in natural ecosystems. If these complications are not appropriately dealt with, then the data assimilation will produce biased results. The “disaggregation” approach that we describe in this paper combines frequent coarse-resolution observations with temporally sparse fine-resolution measurements. We demonstrate the approach using a demonstration data set based on measurements of an Arctic ecosystem. In this example, normalized difference vegetation index observations are assimilated into a “zero-order” model of leaf area index and carbon uptake. The disaggregation approach conserves key ecosystem characteristics regardless of the observation resolution and estimates the carbon uptake to within 1% of the demonstration data set “truth.” Assimilating the same data in the normal manner, but without the disaggregation approach, results in carbon uptake being underestimated by 58% at an observation resolution of 250 m. The disaggregation method allows the combination of multiresolution EO and improves in spatial resolution if observations are located on a grid that shifts from one observation time to the next. Additionally, the approach is not tied to a particular data assimilation scheme, model, or EO product and can cope with complex observation distributions, as it makes no implicit assumptions of normality.