887 results for Beam Search Method
Abstract:
Adhesive bonding is nowadays a serious candidate to replace methods such as fastening or riveting because of its attractive mechanical properties. As a result, adhesives are being used increasingly in industries such as automotive, aerospace and construction. It is therefore highly important to predict the strength of bonded joints, whether to assess the feasibility of joining during the fabrication of components (e.g. due to complex geometries) or for repair purposes. This work studies the tensile behaviour of adhesive joints between aluminium adherends, considering different values of adherend thickness (h), using the double-cantilever beam (DCB) test. The experimental work consists of the determination of the tensile fracture toughness (GIC) for the different joint configurations. A conventional fracture characterization method was used, together with a J-integral approach that takes into account the plasticity effects occurring in the adhesive layer. An optical measurement method is used to evaluate the crack tip opening and the adherend rotation at the crack tip during the test, supported by a Matlab® sub-routine for the automated extraction of these quantities. As the output of this work, a comparative evaluation of bonded systems with different values of adherend thickness is carried out, and complete tensile fracture data are provided for the subsequent strength prediction of joints under identical conditions.
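For reference, a widely used closed-form J-integral evaluation for the DCB specimen (a sketch of the general approach; the exact expressions adopted in this work may differ) combines a beam-theory term with the measured adherend rotation at the crack tip:

$$ J = \frac{12\,(Pa)^2}{E\,b^2 h^3} + \frac{P\,\theta_o}{b}, \qquad t(\delta) = \frac{\partial J}{\partial \delta}, $$

where $P$ is the applied load, $a$ the crack length, $b$ the joint width, $h$ the adherend thickness, $E$ the adherend Young's modulus, $\theta_o$ the relative adherend rotation at the crack tip and $\delta$ the optically measured crack tip opening; $G_{IC}$ is taken as the steady-state value of $J$, and differentiating $J$ with respect to $\delta$ yields the cohesive law $t(\delta)$ of the adhesive layer.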
Abstract:
Self-dual doubly even linear binary error-correcting codes, often referred to as Type II codes, are closely related to many combinatorial structures such as 5-designs. Extremal codes are codes that have the largest possible minimum distance for a given length and dimension. The existence of an extremal (72,36,16) Type II code is still open. Previous results show that the automorphism group of a putative code C with the aforementioned properties has order 5 or an order dividing 24. In this work, we present a method, and the results of an exhaustive search, showing that such a code C cannot admit an automorphism group isomorphic to Z6. In addition, we present a so far unpublished construction of the extended Golay code by P. Becker. We generalize the notion and provide an example of another Type II code that can be obtained in this fashion. Consequently, we relate Becker's construction to the construction of binary Type II codes from codes over GF(2^r) via the Gray map.
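As a small illustration of the defining Type II property (not the search method of the paper), the sketch below brute-force checks self-duality and double evenness for the [8,4,4] extended Hamming code e8, the smallest Type II code; the generator matrix and helper names are chosen here for illustration and assume the rows are linearly independent.

```python
import itertools

# Generator matrix of the [8,4,4] extended Hamming code e8 in standard form.
G = [
    [1, 0, 0, 0, 0, 1, 1, 1],
    [0, 1, 0, 0, 1, 0, 1, 1],
    [0, 0, 1, 0, 1, 1, 0, 1],
    [0, 0, 0, 1, 1, 1, 1, 0],
]

def codewords(G):
    """All 2^k codewords spanned by the rows of G over GF(2)."""
    k, n = len(G), len(G[0])
    for coeffs in itertools.product([0, 1], repeat=k):
        word = [0] * n
        for c, row in zip(coeffs, G):
            if c:
                word = [(w + r) % 2 for w, r in zip(word, row)]
        yield tuple(word)

def is_type_ii(G):
    n = len(G[0])
    # Self-dual: dimension n/2 (rows assumed independent) and all row pairs
    # orthogonal over GF(2), which makes the whole span self-orthogonal.
    self_orthogonal = all(
        sum(a * b for a, b in zip(u, v)) % 2 == 0 for u in G for v in G
    )
    self_dual = self_orthogonal and len(G) == n // 2
    # Doubly even: every codeword weight is divisible by 4.
    doubly_even = all(sum(w) % 4 == 0 for w in codewords(G))
    return self_dual and doubly_even

print(is_type_ii(G))  # True
```

Exhaustive enumeration is of course hopeless at length 72; searches such as the one in the paper instead exploit the structure imposed by the assumed automorphism group.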
Abstract:
Queueing systems in which arriving customers who find all servers and waiting positions (if any) occupied may retry for service after a period of time are called retrial queues, or queues with repeated attempts. This study has two objectives. The first is to introduce orbital search in retrial queueing models, which makes it possible to minimize the idle time of the server; if holding costs and the cost of using the search for customers are introduced, the results obtained can be used for the optimal tuning of the parameters of the search mechanism. The second is to provide insight into the link between the corresponding retrial queue and the classical queue. We observe that when the search probability Pj = 1 for all j, the model reduces to the classical queue, and when Pj = 0 for all j, it becomes the retrial queue. The thesis discusses the performance evaluation of the single-server retrial queue, with arrivals modelled as a Poisson process. It then discusses the structure of the busy period and its analysis in terms of Laplace transforms, and provides a direct method of evaluation for the first and second moments of the busy period. It further discusses the M/PH/1 retrial queue with disasters to the unit in service and orbital search, and a multi-server retrial queueing model (MAP/M/c) with search of customers from the orbit; the Markovian arrival process (MAP) is a convenient tool for modelling both renewal and non-renewal arrivals. Finally, the present model deals with the back and forth movement between the classical queue and the retrial queue: as the orbit size increases, the retrial rate increases correspondingly, thereby reducing the idle time of the server between services.
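The sketch below is a minimal simulation of a single-server retrial queue with orbital search, assuming Poisson arrivals, exponential services and exponential retrial times, and a constant search probability p (i.e. Pj = p for all j); it is an illustration of the mechanism, not the thesis's M/PH/1 or MAP/M/c models. All parameter values are illustrative.

```python
import random

def mean_orbit_size(lam, mu, theta, p, horizon=500_000.0, seed=1):
    """Time-averaged orbit size for an M/M/1 retrial queue with search.

    lam   : Poisson arrival rate
    mu    : exponential service rate
    theta : retrial rate per orbiting customer
    p     : probability the server searches the orbit at a service completion
    """
    rng = random.Random(seed)
    t, busy, orbit, area = 0.0, False, 0, 0.0
    while t < horizon:
        rates = [
            lam,                              # primary arrival
            mu if busy else 0.0,              # service completion
            0.0 if busy else orbit * theta,   # retrial from the orbit
        ]
        total = sum(rates)
        dt = rng.expovariate(total)
        area += orbit * dt
        t += dt
        u = rng.random() * total
        if u < rates[0]:                      # arrival
            if busy:
                orbit += 1                    # server occupied -> join orbit
            else:
                busy = True
        elif u < rates[0] + rates[1]:         # service completion
            busy = False
            if orbit > 0 and rng.random() < p:
                orbit -= 1                    # server searches the orbit
                busy = True
        else:                                 # successful retrial
            orbit -= 1
            busy = True
    return area / t

# p = 1 recovers the classical M/M/1 queue; p = 0 is the pure retrial queue.
for p in (0.0, 0.5, 1.0):
    print(p, round(mean_orbit_size(0.7, 1.0, 0.5, p), 2))
```

Note that the long-run fraction of time the server is busy equals λ/μ for any stable variant; what the search probability improves is the orbit occupancy (and hence customer waiting), which the simulation reports.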
Abstract:
In recent years scientists have made rapid and significant advances in the field of semiconductor physics. One of the most important fields of current interest in materials science is the fundamental aspects and applications of transparent conducting oxide (TCO) thin films. The characteristic properties of such coatings are low electrical resistivity and high transparency in the visible region. The first semitransparent and electrically conducting CdO film was reported as early as 1907 [1]. Though early work on these films was performed out of purely scientific interest, substantial technological advances were made after 1940. The technological interest in transparent semiconducting films was generated mainly by the potential applications of these materials in both industry and research. Such films demonstrated their utility as transparent electrical heaters for windscreens in the aircraft industry. During the last decade, however, these transparent conducting films have been widely used in a variety of other applications such as gas sensors [2], solar cells [3], heat reflectors [4], light emitting devices [5] and laser damage resistant coatings in high power laser technology [6]. Just a few materials dominate the current TCO industry, and the two dominant markets for TCOs are architectural applications and flat panel displays. The architectural use of TCOs is in energy efficient windows. Fluorine-doped tin oxide (FTO), deposited by a pyrolysis process, is the TCO that finds the widest application in this sector. SnO2 also finds application as a coating for windows that efficiently prevents radiative heat loss, owing to its low emissivity (0.16). Pyrolytic tin oxide is used in PV modules, touch screens and plasma displays, whereas indium tin oxide (ITO) is used in the majority of flat panel display (FPD) applications. In FPDs, the basic function of ITO is as a transparent electrode. The volume of FPDs produced, and hence the volume of ITO coatings produced, continues to grow rapidly. However, the rising cost of indium and the scarcity of this material have made it difficult to obtain low-cost TCOs. Hence the search for alternative TCO materials has been a topic of active research for the last few decades. This has resulted in the development of binary materials like ZnO, SnO2 and CdO, and ternary materials like Zn2SnO4, CdSb2O6:Y, ZnSnO3, GaInO3 etc. The use of multicomponent oxide materials makes it possible to produce TCO films suitable for specialized applications, because by altering their chemical composition one can control the electrical, optical, chemical and physical properties. The advantage of binary materials, on the other hand, is the ease of controlling the chemical composition and deposition conditions. Recently, there have been reports of the deposition of CdO:In films with a resistivity of the order of 10^-5 ohm cm for flat panel displays and solar cells; however, such films find limited use because of Cd toxicity. In this regard, ZnO films, developed in the 1980s, are very useful, as they use Zn, an abundant, inexpensive and nontoxic material. The resistivity of this material is still not very low, but it can be reduced through doping with group-III elements like In, Al or Ga, or with F [6]. Hence there is great interest in ZnO as an alternative to ITO. In the present study, we prepared and characterized transparent conducting ZnO thin films using a cost-effective technique, viz. chemical spray pyrolysis (CSP). This technique is also suitable for large area film deposition. It involves spraying a solution (usually aqueous) containing soluble salts of the constituents of the desired compound onto a heated substrate.
Abstract:
This study deals with a preliminary investigation of the automatic beam steering property of conducting polyaniline. Polyaniline in its undoped and doped states was prepared from aniline by the chemical oxidative polymerization method. Dielectric properties of the samples were studied at S-band microwave frequencies using the cavity perturbation technique. It is found that undoped polyaniline has greater dielectric loss and conductivity compared with the doped samples. The beam steering property is studied using a perspex rod antenna and an HP 8510C vector network analyzer. The shift in the radiated beam is studied for different dc voltages. The results show that polyaniline is a good material for beam steering applications.
Abstract:
Non-destructive testing (NDT) is the use of non-invasive techniques to determine the integrity of a material, component, or structure. Engineers and scientists use NDT in a variety of applications, including medical imaging, materials analysis, and process control. The photothermal beam deflection technique is one of the most promising NDT technologies, and tremendous R&D effort has been made to improve its efficiency and simplicity. It is a popular technique because it can probe surfaces irrespective of the size of the sample and its surroundings. It has been used to characterize several semiconductor materials because of its non-destructive and non-contact evaluation strategy, and its application extends further to the analysis of a wide variety of materials. Instrumentation of an NDT technique is crucial for any material analysis. Chapter two explores the various excitation sources, source modulation techniques, and detection and signal processing schemes currently practised. The features of the experimental arrangement, including the steps for alignment, automation, data acquisition and data analysis, are explained giving due importance to details. Theoretical studies form the backbone of photothermal techniques: the outcome of a theoretical work is the foundation of an application, and the reliability of the theoretical model developed and used here is established from studies on crystalline samples. The technique is applied to the analysis of transport properties such as thermal diffusivity, mobility, surface recombination velocity and minority carrier lifetime, and to thermal imaging of solar cell absorber layer materials like CuInS2, CuInSe2 and SnS thin films, as well as to the analysis of In2S3 thin films, which are used as a buffer layer material in solar cells. The influences of film composition and of chlorine and silver incorporation in this material are brought out from the measurement of transport properties and the analysis of sub-band-gap levels. The application of the photothermal deflection technique to the characterization of solar cells is a relatively new area that requires considerable attention. Chapter six thus elucidates the theoretical aspects of applying photothermal techniques to solar cell analysis. The experimental design and method for the determination of solar cell efficiency, optimum load resistance and series resistance, with results from the analysis of a CuInS2/In2S3-based solar cell, form the skeleton of this chapter.
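As a sketch of how thermal diffusivity is typically extracted in photothermal beam deflection (mirage-effect) measurements (a standard relation, not necessarily the exact scheme of this thesis): for a pump beam modulated at frequency $f$, the thermal wave decays over a diffusion length

$$ \mu = \sqrt{\frac{\alpha}{\pi f}}, $$

and in the one-dimensional limit the phase of the probe-beam deflection varies linearly with the pump-probe offset $x$ as $\phi(x) \approx \phi_0 - x\sqrt{\pi f/\alpha}$, so the thermal diffusivity $\alpha$ follows from the slope of a phase-versus-offset scan.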
Abstract:
A sensitive method based on the principle of photothermal phenomena to study the energy transfer processes in organic dye mixtures is presented. A dual beam thermal lens method can be used very effectively as an alternative technique to determine the molecular distance between donor and acceptor in a fluorescein-rhodamine B mixture, using an optical parametric oscillator.
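If the transfer is assumed to be of the Förster (dipole-dipole) type, which is the standard route from a measured transfer efficiency to a donor-acceptor distance (the abstract does not state the exact model used), the distance follows from

$$ E = \frac{R_0^6}{R_0^6 + r^6} \quad\Longrightarrow\quad r = R_0\left(\frac{1}{E} - 1\right)^{1/6}, $$

where $E$ is the energy transfer efficiency (here obtainable from the thermal lens signal of the dye mixture) and $R_0$ is the Förster radius of the fluorescein-rhodamine B pair.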
Abstract:
A simple method based on laser beam deflection to study the variation of the diffusion coefficient with concentration in a solution is presented. When a properly fanned-out laser beam is passed through a rectangular cell filled with a solution having a concentration gradient, the emergent beam traces out a curved pattern on a screen. By taking measurements on the pattern at different concentrations, the variation of the diffusion coefficient with concentration can be determined.
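A sketch of the underlying relation (standard beam-deflection optics in the small-angle limit; not necessarily the paper's exact working equations): a ray crossing a cell of length $L$ with a refractive index gradient normal to the beam is deflected by

$$ \theta(x) \approx \frac{L}{n_0}\,\frac{\partial n}{\partial x} = \frac{L}{n_0}\,\frac{dn}{dC}\,\frac{\partial C}{\partial x}, $$

so the trace on the screen maps the concentration gradient $\partial C/\partial x$ along the cell; combining this with Fick's law, $\partial C/\partial t = \frac{\partial}{\partial x}\!\left(D(C)\,\frac{\partial C}{\partial x}\right)$, allows the concentration-dependent diffusion coefficient $D(C)$ to be extracted.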
Abstract:
In this paper we report the use of the dual beam thermal lens technique as a quantitative method to determine the absolute fluorescence quantum efficiency and the concentration quenching of fluorescence emission from rhodamine 6G-doped poly(methyl methacrylate) (PMMA), prepared with different concentrations of the dye. A comparison of the present data with that reported in the literature indicates that the observed variation of fluorescence quantum yield with dye concentration follows a profile similar to earlier observations on rhodamine 6G in solution. The photodegradation of the dye molecules under cw laser excitation is also studied using the present method.
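A sketch of the standard dual beam thermal lens route to the quantum yield (assuming the usual comparative analysis; the paper's exact calibration may differ): the thermal lens signal is proportional to the fraction $\varphi$ of absorbed energy degraded to heat,

$$ \varphi = 1 - \eta\,\frac{\lambda_{exc}}{\langle\lambda_f\rangle}, \qquad \eta = \left(1 - \frac{\theta}{\theta_{ref}}\right)\frac{\langle\lambda_f\rangle}{\lambda_{exc}}, $$

where $\eta$ is the fluorescence quantum efficiency, $\lambda_{exc}$ the excitation wavelength, $\langle\lambda_f\rangle$ the mean emission wavelength, and $\theta$, $\theta_{ref}$ the thermal lens signals of the sample and of a reference in which all absorbed energy is converted to heat.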
Abstract:
This thesis deals with the study of light beam propagation through different nonlinear media. Analytical and numerical methods are used to show the formation of solitons in these media. Basic experiments have also been performed to show the formation of a self-written waveguide in a photopolymer. The variational method is used for the analytical analysis throughout the thesis; the numerical analysis uses a method based on finite-difference forms of the original partial differential equation. In Chapter 2, we study two kinds of solitons, (2+1)D spatial solitons and (3+1)D spatio-temporal solitons, in a cubic-quintic medium in the presence of multiphoton ionization. In Chapter 3, we study the evolution of a light beam through a different kind of nonlinear medium, the photorefractive polymer. We study modulational instability and beam propagation through a photorefractive polymer in the presence of absorption losses. The one-dimensional beam propagation through the nonlinear medium is studied using variational and numerical methods, and stable soliton propagation is observed both analytically and numerically. Chapter 4 deals with modulational instability in a photorefractive crystal in the presence of wave mixing effects. Modulational instability in a photorefractive medium is studied in the presence of two wave mixing; we then propose and derive a model for forward four wave mixing in the photorefractive medium and investigate the modulational instability induced by four wave mixing effects. Using standard linear stability analysis, the instability gain is obtained. Chapter 5 deals with the study of self-written waveguides: besides the usual analytical analysis, basic experiments were done showing the formation of a self-written waveguide in a photopolymer system. The formation of a directional coupler in a photopolymer system is studied theoretically in Chapter 6. We propose and study, using the variational approximation as well as numerical simulation, the evolution of a probe beam through a directional coupler formed in a photopolymer system.
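For orientation, the cubic-quintic propagation model referred to in Chapter 2 is generically of the nonlinear Schrödinger form (shown here without the multiphoton-ionization terms the thesis adds):

$$ i\,\frac{\partial u}{\partial z} + \frac{1}{2}\nabla_{\perp}^{2} u + |u|^{2}u - \sigma\,|u|^{4}u = 0, $$

where $u$ is the slowly varying beam envelope, $z$ the propagation distance, $\nabla_{\perp}^{2}$ the transverse Laplacian (supplemented by a time-dispersion term in the (3+1)D spatio-temporal case) and $\sigma$ the relative strength of the quintic term; the variational method inserts a trial (e.g. Gaussian) ansatz into the corresponding Lagrangian and evolves the ansatz parameters along $z$.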
Abstract:
The need for improved feed systems for the large reflector antennas employed in radio astronomy and satellite tracking spurred interest in horn antenna research in the 1960s. The major requirements were to reduce spillover and cross-polarisation losses, and to enhance the aperture efficiency to the order of about 75-80%. The search for such a feed culminated in the corrugated horn. The corrugated horn triggered widespread interest and enthusiasm, and a large amount of work [32, 34, 49, 50, 52, 53, 58, 65, 75, 79] has already been done on this type of antenna. The properties of corrugated surfaces have been investigated in detail. It was strongly felt that the flange technique and the use of corrugated surfaces could be merged to obtain the advantages of both; this is the idea behind the present work. Corrugations are made on the surface of flange elements, and the effects of the various corrugation parameters are studied. By varying the flange parameters, a good amount of data is collected and analysed to ascertain the effects of corrugated flanges. The measurements are repeated at various frequencies in the X- and S-bands. The following parameters of the system were studied: (a) beam shaping, (b) gain, (c) variation of V.S.W.R., and (d) the possibility of obtaining circularly polarised radiation from the flanged horn. A theoretical explanation of the effects of corrugated flanges is attempted on the basis of the line-source theory. Even though this theory uses a simplified model for the calculation of radiation patterns, fairly good agreement between the computed patterns and experimental results is observed.
Abstract:
The presence of microcalcifications in mammograms can be considered an early indication of breast cancer. A fast fractal block coding method to model mammograms for detecting the presence of microcalcifications is presented in this paper. The conventional fractal image coding method takes an enormous amount of time during the fractal block encoding procedure. In the proposed method, the image is divided into shade and non-shade blocks based on the dynamic range, and only non-shade blocks are encoded using the fractal encoding technique. Since the number of image blocks in the matching domain search pool is considerably reduced, a saving of 97.996% of the encoding time is obtained compared to the conventional fractal coding method for modeling mammograms. The modeled mammograms are used for detecting microcalcifications, and a diagnostic efficiency of 85.7% is obtained for the 28 mammograms used.
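The sketch below illustrates the shade/non-shade split described above: a block whose dynamic range (maximum minus minimum intensity) falls below a threshold is treated as a shade block and stored by its mean alone, and only the remaining blocks enter the fractal domain search pool. The block size and threshold here are illustrative, not the paper's values.

```python
import numpy as np

def classify_blocks(image, block=8, dyn_range_thresh=16):
    """Split an image into shade blocks (low dynamic range, kept as a mean
    value) and non-shade blocks (passed on to the fractal encoder)."""
    h, w = image.shape
    shade, non_shade = [], []
    for y in range(0, h - h % block, block):
        for x in range(0, w - w % block, block):
            blk = image[y:y + block, x:x + block]
            if int(blk.max()) - int(blk.min()) < dyn_range_thresh:
                shade.append(((y, x), float(blk.mean())))     # mean suffices
            else:
                non_shade.append(((y, x), blk))               # fractal-encode
    return shade, non_shade

# Demo on a synthetic image; on mammograms, large smooth regions become
# shade blocks, which is where the encoding-time saving comes from.
rng = np.random.default_rng(0)
img = rng.integers(0, 256, size=(64, 64), dtype=np.uint8)
shade, non_shade = classify_blocks(img)
print(len(shade), len(non_shade))
```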
Abstract:
The study of variable stars is an important topic of modern astrophysics. After the invention of powerful telescopes and high-resolution CCDs, variable star data is accumulating on the order of petabytes. This huge amount of data requires many automated methods as well as human experts. This thesis is devoted to data analysis of variable stars' astronomical time series data and hence belongs to the interdisciplinary topic of Astrostatistics. For an observer on earth, stars that change in apparent brightness over time are called variable stars. The variation in brightness may be regular (periodic), quasi-periodic (semi-periodic) or irregular (aperiodic), and is caused by various reasons. In some cases the variation is due to internal thermonuclear processes; such stars are generally known as intrinsic variables. In other cases it is due to external processes, like eclipses or rotation; these are known as extrinsic variables. Intrinsic variables can be further grouped into pulsating variables, eruptive variables and flare stars. Extrinsic variables are grouped into eclipsing binary stars and chromospherical stars. Pulsating variables can again be classified into Cepheid, RR Lyrae, RV Tauri, Delta Scuti, Mira etc. The eruptive or cataclysmic variables are novae, supernovae, etc., which occur rarely and are not periodic phenomena; most of the other variations are periodic in nature. Variable stars can be observed in many ways, such as photometry, spectrophotometry and spectroscopy. A sequence of photometric observations of a variable star produces time series data, which contains time, magnitude and error. The plot of a variable star's apparent magnitude against time is known as its light curve. If the time series data is folded on a period, the plot of apparent magnitude against phase is known as the phased light curve. The unique shape of the phased light curve is a characteristic of each type of variable star, and one way to identify the type of a variable star and to classify it is by visual inspection of the phased light curve by an expert. For the last several years, automated algorithms have been used to classify groups of variable stars with the help of computers. Research on variable stars can be divided into different stages: observation, data reduction, data analysis, modeling and classification. The modeling of variable stars helps to determine their short-term and long-term behaviour, to construct theoretical models (e.g. the Wilson-Devinney model for eclipsing binaries) and to derive stellar properties like mass, radius, luminosity, temperature, internal and external structure, chemical composition and evolution. Classification requires the determination of basic parameters like period, amplitude and phase, as well as some other derived parameters. Of these, the period is the most important, since a wrong period can lead to sparse light curves and misleading information. Time series analysis is a method of applying mathematical and statistical tests to data to quantify the variation, understand the nature of time-varying phenomena, gain physical understanding of the system and predict its future behavior. Astronomical time series usually suffer from unevenly spaced time instants, varying error conditions and the possibility of big gaps; this is due to daily varying daylight and weather conditions for ground-based observations, while observations from space may suffer from the impact of cosmic ray particles.
Many large-scale astronomical surveys, such as MACHO, OGLE, EROS, ROTSE, PLANET, Hipparcos, MISAO, NSVS, ASAS, Pan-STARRS, Kepler, ESA Gaia, LSST and CRTS, provide variable star time series data, even though their primary intention is not variable star observation. The Center for Astrostatistics, Pennsylvania State University, was established to help the astronomical community with statistical tools for harvesting and analysing archival data. Most of these surveys release their data to the public for further analysis. There exist many period search algorithms for astronomical time series analysis, which can be classified into parametric methods (which assume some underlying distribution for the data) and non-parametric methods (which do not assume any statistical model, such as Gaussian). Many of the parametric methods are based on variations of the discrete Fourier transform, like the Generalised Lomb-Scargle periodogram (GLSP) by Zechmeister (2009) and Significant Spectrum (SigSpec) by Reegen (2007). Non-parametric methods include Phase Dispersion Minimisation (PDM) by Stellingwerf (1978) and the cubic spline method by Akerlof (1994). Even though most of these methods can be automated, none of them can fully recover the true periods. The wrong detection of a period can be due to several reasons, such as power leakage to other frequencies, caused by the finite total interval, finite sampling interval and finite amount of data. Another problem is aliasing, due to the influence of regular sampling. Spurious periods also appear due to long gaps, and power flow to harmonic frequencies is an inherent problem of Fourier methods. Hence, obtaining the exact period of a variable star from its time series data is still a difficult problem for huge databases subjected to automation. As Matthew Templeton, AAVSO, states, "Variable star data analysis is not always straightforward; large-scale, automated analysis design is non-trivial". Derekas et al. (2007) and Deb et al. (2010) state, "The processing of huge amounts of data in these databases is quite challenging, even when looking at seemingly small issues such as period determination and classification". It will be beneficial for the variable star astronomical community if basic parameters such as period, amplitude and phase are obtained more accurately when huge time series databases are subjected to automation. In the present thesis work, the theories of four popular period search methods are studied, the strengths and weaknesses of these methods are evaluated by applying them to two survey databases, and finally a modified form of the cubic spline method is introduced to confirm the exact period of a variable star. For the classification of newly discovered variable stars and their entry into the "General Catalogue of Variable Stars" or other databases like the "Variable Star Index", the characteristics of the variability have to be quantified in terms of variable star parameters.
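The sketch below is a minimal implementation of the idea behind Phase Dispersion Minimisation, one of the non-parametric methods named above: fold the light curve on a trial period, bin the phases, and pick the period that minimises the within-bin scatter relative to the total scatter. The binning, trial grid and synthetic data are illustrative; real pipelines must handle the uneven sampling, gaps and error conditions discussed in the thesis.

```python
import numpy as np

def pdm_statistic(time, mag, period, n_bins=10):
    """Ratio of mean within-bin variance to total variance at a trial period;
    values near 0 indicate a coherent phased light curve."""
    phase = (time / period) % 1.0
    total_var = mag.var(ddof=1)
    s, dof = 0.0, 0
    for b in range(n_bins):
        in_bin = mag[(phase >= b / n_bins) & (phase < (b + 1) / n_bins)]
        if len(in_bin) > 1:
            s += (len(in_bin) - 1) * in_bin.var(ddof=1)
            dof += len(in_bin) - 1
    return (s / dof) / total_var

def best_period(time, mag, trial_periods):
    stats = [pdm_statistic(time, mag, p) for p in trial_periods]
    return trial_periods[int(np.argmin(stats))]

# Synthetic, unevenly sampled sinusoid with a true period of 2.5 d.
rng = np.random.default_rng(1)
t = np.sort(rng.uniform(0, 100, 400))
m = 12.0 + 0.3 * np.sin(2 * np.pi * t / 2.5) + rng.normal(0, 0.02, t.size)
print(best_period(t, m, np.linspace(0.5, 5.0, 2000)))  # ~2.5
```

Because PDM makes no assumption about the light curve shape, it works for the non-sinusoidal curves of eclipsing binaries, where Fourier-based periodograms tend to put power into harmonics.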
Abstract:
To study the behaviour of beam-to-column composite connections, more sophisticated finite element models are required, since the component model has some severe limitations. In this research a generic finite element model for composite beam-to-column joints with welded connections is developed using current state-of-the-art local modelling. By applying a mechanically consistent scaling method, it can provide the constitutive relationship for a plane rectangular macro element with beam-type boundaries. This macro element, which preserves local behaviour and allows for the transfer of five independent states between local and global models, can then be implemented in high-accuracy frame analysis with the possibility of limit state checks. So that the macro element scaling method can be used in a practical manner, a generic geometry program, a new idea proposed in this study, is also developed for this finite element model. With generic programming, a set of global geometric variables can be input to generate a specific instance of the connection without much effort. The proposed finite element model generated by this generic programming is validated against test results from the University of Kaiserslautern. Finally, two illustrative examples of this macro element approach are presented. The first example demonstrates how to obtain the constitutive relationships of the macro element: under certain assumptions for a typical composite frame, the constitutive relationships can be represented by bilinear laws for the macro bending and shear states, which are then coupled by a two-dimensional surface law with yield and failure surfaces. The second example presents a scaling concept that combines sophisticated local models with a frame analysis using the macro element approach, as a practical application of this numerical model.
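For illustration only, a bilinear law of the kind mentioned for the macro bending state can be sketched as below; the stiffnesses and yield moment are hypothetical values, not the abstract's data, and the coupling surface law is not modelled here.

```python
def bilinear_moment(rotation, k_el=25e3, k_pl=2.5e3, m_y=120.0):
    """Moment (kNm) for a given joint rotation (rad): an elastic branch with
    stiffness k_el up to the yield moment m_y, then a hardening branch with
    the reduced stiffness k_pl. All parameter values are illustrative."""
    rot_y = m_y / k_el                       # rotation at yield
    if abs(rotation) <= rot_y:
        return k_el * rotation
    sign = 1.0 if rotation > 0 else -1.0
    return sign * (m_y + k_pl * (abs(rotation) - rot_y))

print(bilinear_moment(0.002), bilinear_moment(0.01))  # elastic vs hardening
```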
Abstract:
In this paper, we develop a novel index structure to support efficient approximate k-nearest neighbor (KNN) queries in high-dimensional databases. In high-dimensional spaces, the computational cost of the distance (e.g., Euclidean distance) between two points contributes a dominant portion of the overall query response time for in-memory processing. To reduce the distance computation, we first propose a structure (BID) using BIt-Difference to answer approximate KNN queries. BID employs one bit to represent each dimension of a point's feature vector, and the number of bit differences is used to prune distant points. To accommodate real datasets, which are typically skewed, we enhance the BID mechanism with clustering, a cluster-adapted bitcoder and dimensional weights, named BID⁺. Extensive experiments are conducted to show that our proposed method yields significant performance advantages over existing index structures on both real-life and synthetic high-dimensional datasets.
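The sketch below captures the bit-difference pruning idea in a simplified form (illustrative only; the paper's bitcoder, clustering and dimensional weights are not modelled): each dimension is coded as one bit via a per-dimension median threshold, and candidates whose Hamming distance to the query's bit string exceeds a cutoff are pruned before any Euclidean distance is computed.

```python
import numpy as np

def build_bitcodes(data):
    """One bit per dimension: 1 if the coordinate is above the median."""
    thresholds = np.median(data, axis=0)
    return (data > thresholds).astype(np.uint8), thresholds

def approx_knn(data, bitcodes, thresholds, query, k=5, hamming_cut=None):
    qbits = (query > thresholds).astype(np.uint8)
    hamming = np.count_nonzero(bitcodes != qbits, axis=1)
    if hamming_cut is None:
        hamming_cut = data.shape[1] // 4        # heuristic cutoff
    candidates = np.where(hamming <= hamming_cut)[0]
    if len(candidates) < k:                     # fall back if over-pruned
        candidates = np.argsort(hamming)[: max(4 * k, 64)]
    # Exact Euclidean distance only on the surviving candidates.
    dists = np.linalg.norm(data[candidates] - query, axis=1)
    return candidates[np.argsort(dists)[:k]]

rng = np.random.default_rng(0)
X = rng.normal(size=(10_000, 64))
codes, th = build_bitcodes(X)
q = rng.normal(size=64)
print(approx_knn(X, codes, th, q, k=5))
```

The cheap bitwise prefilter is what shifts the cost away from full-precision distance computations, which is the bottleneck the paper targets; the result is approximate because true neighbors can occasionally be pruned.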