20 results for Advent.

at Indian Institute of Science - Bangalore - India


Relevance:

10.00%

Publisher:

Abstract:

When a uniform flow of any nature is interrupted, the readjustment of the flow results in concentrations and rarefactions, so that the peak value of the flow parameter will be higher than that which an elementary computation would suggest. When stress flow in a structure is interrupted, there are stress concentrations. These are generally localized and often large in relation to the values indicated by simple equilibrium calculations. With the advent of the industrial revolution, dynamic and repeated loading of materials became commonplace in engine parts and fast-moving vehicles. This led to serious fatigue failures arising from stress concentrations. Many metal forming processes, fabrication techniques and weak-link type safety systems also benefit substantially from the intelligent use or avoidance, as appropriate, of stress concentrations. As a result, over the last 80 years the study and evaluation of stress concentrations has been a primary objective in solid mechanics. Exact mathematical analysis of stress concentrations in finite bodies presents considerable difficulty for all but a few problems of infinite fields, concentric annuli and the like, treated under the presumption of small-deformation, linear elasticity. A whole series of techniques has been developed to deal with different classes of shapes and domains, causes and sources of concentration, material behaviour, phenomenological formulation, etc. These include real and complex functions, conformal mapping, transform techniques, integral equations, finite differences and relaxation, and, more recently, finite element methods. With the advent of large, high-speed computers, the development of finite element concepts and a good understanding of functional analysis, it is now possible, in principle, to obtain economical and satisfactory solutions to a whole range of concentration problems by intelligently combining theory and computer application.
An example is the hybridization of continuum concepts with computer-based finite element formulations. This new situation also makes possible a more direct approach to the problem of design, which is the primary purpose of most engineering analyses. The trend would appear to be clear: the computer will shape the theory, analysis and design.
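The classical benchmark among the few exactly solvable concentration problems mentioned above is Kirsch's solution for a circular hole in an infinite plate under remote uniaxial tension, which gives a stress concentration factor of 3 at the hole edge. A minimal sketch (the function name and numerical values are ours, for illustration only):

```python
import math

def kirsch_hoop_stress(sigma, a, r, theta):
    """Tangential (hoop) stress around a circular hole of radius a in an
    infinite plate under remote uniaxial tension sigma (Kirsch, 1898).
    theta is measured from the loading axis; small-deformation linear
    elasticity is assumed."""
    return (sigma / 2) * (1 + a**2 / r**2) \
         - (sigma / 2) * (1 + 3 * a**4 / r**4) * math.cos(2 * theta)

# Peak stress at the hole edge, perpendicular to the load: Kt = 3
sigma = 100.0          # remote stress, MPa
peak = kirsch_hoop_stress(sigma, a=1.0, r=1.0, theta=math.pi / 2)
print(peak)            # 300.0 -> stress concentration factor of 3
```

Far from the hole the same expression recovers the remote field, which is exactly the kind of check an "elementary computation" would miss at the hole edge.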

Relevance:

10.00%

Publisher:

Abstract:

Kinetic studies of macromolecular ligand-ligate interaction have generated ample interest since the advent of plasmon resonance-based instruments like BIAcore. Most of the studies reported in the literature assume simple 1:1 Langmuir binding and complete reversibility of the system. However, we observed that in a high-affinity antigen-antibody system [human chorionic gonadotropin-monoclonal antibody (hCG-mAb)], dissociation is insignificant and the sensorgram data cannot be used to measure the equilibrium and kinetic parameters. At low concentrations of mAb, the complete sensorgram could be fitted to a single exponential. Interestingly, we found that at higher mAb concentrations the binding data did not conform to a simple bimolecular model. Instead, the data fitted a two-step model, which may be because of surface heterogeneity of affinity sites. In this paper, we report on the global fit of the sensorgrams. We have developed a method by which a single two-minute sensorgram can be used in high-affinity systems to measure the association rate constant of the reaction and the functional capacity of the ligand (hCG) immobilized on the chip. We provide a rational explanation for the discrepancies generally observed in most BIAcore sensorgrams.
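When dissociation is negligible, the 1:1 association transient reduces to a single exponential with observed rate k_obs = ka·C, which is why a short sensorgram suffices. A minimal sketch of this limiting model (all numerical values are hypothetical, not taken from the paper):

```python
import math

def association_response(t, r_max, k_a, conc):
    """Sensorgram response for a 1:1 interaction with negligible
    dissociation: R(t) = Rmax * (1 - exp(-ka * C * t))."""
    return r_max * (1 - math.exp(-k_a * conc * t))

# Hypothetical values for illustration only:
r_max, k_a, conc = 500.0, 1.0e5, 2.0e-8   # RU, 1/(M*s), M
k_obs = k_a * conc                         # observed rate, 1/s

# Recover k_obs from two simulated points via the linearized form
# ln(1 - R/Rmax) = -k_obs * t
t1, t2 = 30.0, 60.0
r1 = association_response(t1, r_max, k_a, conc)
r2 = association_response(t2, r_max, k_a, conc)
slope = (math.log(1 - r2 / r_max) - math.log(1 - r1 / r_max)) / (t2 - t1)
print(-slope)  # recovers k_obs = ka * C = 2e-3 1/s
```

Deviations from this single exponential at high mAb concentration are what motivate the two-step model described in the abstract.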

Relevance:

10.00%

Publisher:

Abstract:

In recent times, CFD tools have become increasingly useful in engineering design studies, especially in the area of aerospace vehicles. This is largely due to the advent of high-speed computing platforms in addition to the development of new efficient algorithms. Algorithms based on kinetic schemes have been shown to be very robust, and meshless methods offer certain advantages over other methods. Preliminary investigations of blood flow visualization through an artery using a CFD tool have shown encouraging results, which further need to be verified and validated.

Relevance:

10.00%

Publisher:

Abstract:

The advent of large and fast digital computers, and the development of numerical techniques suited to them, have made it possible to revisit the analysis of important fundamental and practical problems and phenomena of engineering which have long remained intractable. The understanding of load transfer between pin and plate is one such problem. In spite of continuous attack on these problems for over half a century, classical solutions have remained limited in their approach and value, both for understanding the phenomena and for generating design data. On the other hand, the finite element methods that have grown alongside the recent development of computers have been helpful in analysing specific problems and answering specific questions, but are yet to be harnessed to assist in obtaining, with economy, a clearer understanding of the phenomena of partial separation and contact, friction and slip, and fretting and fatigue in pin joints. Against this background, it is useful to explore the application of classical differential equation methods, with the aid of computer power, to open up this very important area. In this paper we describe some of the recent and current work at the Indian Institute of Science in this direction.

Relevance:

10.00%

Publisher:

Abstract:

With the advent of VLSI it has become possible to map parallel algorithms for compute-bound problems directly onto silicon. Systolic architecture is a very good candidate for VLSI implementation because of its regular and simple design and regular communication pattern. In this paper, a systolic algorithm and a corresponding systolic architecture, a linear systolic array, are proposed for the scanline-based hidden surface removal problem in three-dimensional computer graphics. The algorithm is based on the concept of sample spans or intervals. The worst-case time taken by the algorithm is O(n), n being the number of segments in a scanline. The time taken by the algorithm for a given scene depends on the scene itself, and on average considerable improvement over the worst-case behaviour is expected. A pipelined scheme for handling the I/O process, suitable for VLSI implementation of the algorithm, has also been proposed.
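The sample-span idea can be stated sequentially before being parallelized: split the scanline at segment endpoints, and within each resulting span report the nearest covering segment. A hedged reference sketch of this per-scanline step (our own simplification with constant depth per segment, not the paper's systolic formulation):

```python
def visible_spans(segments):
    """Sequential reference version of span-based hidden surface removal
    on one scanline. Each segment is (x_left, x_right, depth, id), with
    depth taken as constant per segment for simplicity. Returns
    (x_start, x_end, id) per sample span; id is None where nothing
    covers the span."""
    xs = sorted({x for s in segments for x in (s[0], s[1])})
    result = []
    for x0, x1 in zip(xs, xs[1:]):
        mid = (x0 + x1) / 2
        covering = [s for s in segments if s[0] <= mid <= s[1]]
        winner = min(covering, key=lambda s: s[2])[3] if covering else None
        result.append((x0, x1, winner))
    return result

# Two overlapping segments; 'B' is nearer where they overlap
spans = visible_spans([(0, 10, 5.0, 'A'), (4, 8, 2.0, 'B')])
print(spans)  # [(0, 4, 'A'), (4, 8, 'B'), (8, 10, 'A')]
```

A linear systolic array pipelines exactly this comparison, streaming segments past cells that each own a span.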

Relevance:

10.00%

Publisher:

Abstract:

The advent of high-intensity lasers, coupled with recent advances in crystal technology, has led to rapid progress in the field of nonlinear optics. This article traces the history of materials development over the past forty-odd years and dwells on the current status of this important area. The materials aspect is discussed under three classes, viz. inorganic, organic and semiorganic crystals. Finally, some of the crystal growth work carried out in the author's laboratory is presented.

Relevance:

10.00%

Publisher:

Abstract:

The study of reaction mechanisms involves systematic investigation of the correlation between structure, reactivity, and time. The challenge is to observe the chemical changes undergone by reactants as they change into products via one or several intermediates such as electronic excited states (singlet and triplet), radicals, radical ions, carbocations, carbanions, carbenes, nitrenes, nitrenium ions, etc. The vast array of intermediates and timescales means there is no single "do-it-all" technique. Simultaneous advances in contemporary time-resolved Raman spectroscopic techniques and computational methods have done much towards visualizing molecular fingerprint snapshots of reactive intermediates in the microsecond to femtosecond time domain. Raman spectroscopy and its sensitive counterpart, resonance Raman spectroscopy, are well proven as means for determining the molecular structure, chemical bonding, reactivity, and dynamics of short-lived intermediates in the solution phase, and are advantageous in comparison to the commonly used time-resolved absorption and emission spectroscopies. Today time-resolved Raman spectroscopy is a mature technique; its development owes much to the advent of pulsed tunable lasers, highly efficient spectrometers, and high-speed, highly sensitive multichannel detectors able to collect a complete spectrum. This review article provides a brief chronological development of the experimental setup and demonstrates how experimentalists have conquered numerous challenges to obtain background-free (removing fluorescence), intense, and highly spectrally resolved Raman spectra in the nanosecond to microsecond (ns–µs) and picosecond (ps) time domains and, perhaps surprisingly, laid the foundations for new techniques such as spatially offset Raman spectroscopy.

Relevance:

10.00%

Publisher:

Abstract:

With the advent of the Internet, video over IP is gaining popularity. In such an environment, scalability and fault tolerance are the key issues. Existing video on demand (VoD) service systems are usually neither scalable nor tolerant to server faults, and hence are ill-suited to multi-user, failure-prone networks such as the Internet. Current research on VoD often focuses on increasing the throughput and reliability of a single server, but rarely addresses the smooth provision of service during server and network failures. Reliable Server Pooling (RSerPool), being capable of providing high availability by using multiple redundant servers as a single source point, can be a solution to overcome the above failures. During a server failure, continuity of service is retained by another server. In order to achieve transparent failover, efficient state sharing is an important requirement. In this paper, we present an elegant, simple, efficient and scalable approach in which the state is transferred by the client itself, using an extended cookie mechanism, which ensures that there is no noticeable disruption or change in video quality.
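The core of client-held state transfer is that servers stay stateless between requests, so any pool member can resume a session from the cookie alone. A toy sketch of this idea (class and field names are ours; this is not the RSerPool protocol or the paper's implementation):

```python
class VideoServer:
    """One server in a redundant pool; stateless between requests --
    all session state travels in the client-held cookie."""
    def __init__(self, name, total_frames):
        self.name, self.total_frames = name, total_frames

    def serve(self, cookie, n_frames):
        pos = cookie.get("position", 0)
        new_pos = min(pos + n_frames, self.total_frames)
        # Return an updated cookie; the client stores it so that any
        # other server can pick up from here on failover.
        return {"position": new_pos, "served_by": self.name}

# Client-side failover: state moves with the cookie, not the server
pool = [VideoServer("primary", 1000), VideoServer("backup", 1000)]
cookie = {"position": 0}
cookie = pool[0].serve(cookie, 300)   # primary serves frames 0-299
# ... primary fails; client re-presents its cookie to the backup ...
cookie = pool[1].serve(cookie, 200)   # resumes at frame 300
print(cookie)  # {'position': 500, 'served_by': 'backup'}
```

Because no server-to-server synchronization is needed, the scheme scales with pool size, which is the property the paper emphasizes.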

Relevance:

10.00%

Publisher:

Abstract:

Today, 80% of the content on the Web is in English, which is spoken by only 8% of the world's population and 5% of the Indian population. There is a wealth of useful content in the various languages of the world other than English that could be made available on the Internet, but, to date, for various reasons most of it is not. India itself has 18 officially recognized languages and scores of dialects. Although the medium of instruction for most higher education and research in India is English, a substantial amount of literature, by way of novels, textbooks, and scholarly information, is being generated in the other languages of the country. Many of the e-governance initiatives are in the respective state languages. In the past, support for different languages by operating systems and software packages was not very encouraging. However, with the advent of Unicode technology, operating systems and software packages now support almost all the major languages of the world that have scripts. In the work reported in this paper, we explain the configuration changes needed for Eprints.org software to store multilingual content and to create a multilingual user interface.

Relevance:

10.00%

Publisher:

Abstract:

Chronic recording of neural signals is indispensable in designing efficient brain–machine interfaces and in elucidating human neurophysiology. The advent of multichannel micro-electrode arrays has driven the need for electronics to record neural signals from many neurons. The dynamic range of the system can vary over time due to changes in electrode–neuron distance and background noise. We propose a neural amplifier in UMC 130 nm, 1P8M complementary metal–oxide–semiconductor (CMOS) technology. It can be biased adaptively from 200 nA to 2 µA, modulating the input-referred noise from 9.92 µV to 3.9 µV. We also describe a low-noise design technique which minimizes the noise contribution of the load circuitry. Optimum sizing of the input transistors minimizes the accentuation of the input-referred noise of the amplifier and obviates the need for a large input capacitance. The amplifier achieves a noise efficiency factor of 2.58. The amplifier passes signals from 5 Hz to 7 kHz, and its bandwidth can be tuned for rejecting local field potentials (LFP) and power line interference. The amplifier achieves a mid-band voltage gain of 37 dB. In vitro experiments are performed to validate the applicability of the neural low-noise amplifier in neural recording systems.
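The noise efficiency factor quoted above compares an amplifier's noise-power trade-off against a single ideal bipolar transistor. As a sanity check, plugging the abstract's quoted figures into the standard NEF definition (the temperature and the bandwidth used for noise integration are our assumptions) lands near the reported 2.58:

```python
import math

K_B = 1.380649e-23      # Boltzmann constant, J/K
Q_E = 1.602176634e-19   # elementary charge, C

def noise_efficiency_factor(v_ni_rms, i_total, bandwidth, temp=300.0):
    """Noise efficiency factor (Steyaert & Sansen):
    NEF = Vni,rms * sqrt(2 * Itot / (pi * U_T * 4kT * BW))."""
    u_t = K_B * temp / Q_E   # thermal voltage kT/q, ~25.9 mV at 300 K
    return v_ni_rms * math.sqrt(
        2 * i_total / (math.pi * u_t * 4 * K_B * temp * bandwidth))

# Figures quoted in the abstract; 7 kHz noise bandwidth is assumed
nef = noise_efficiency_factor(v_ni_rms=3.9e-6, i_total=2e-6,
                              bandwidth=7e3)
print(round(nef, 2))   # ~2.5, in line with the reported 2.58
```

The small residual gap is expected, since the paper's exact noise bandwidth and integration limits are not given in the abstract.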

Relevance:

10.00%

Publisher:

Abstract:

The technological world has attained a new dimension with the advent of miniaturization, and a major breakthrough has evolved in the form of MOEMS, technically more advanced than MEMS. This breakthrough has paved the way for scientists to research and realize their innovations. This paper presents a mathematical analysis of wave propagation along a non-uniform waveguide, with refractive index varying along the z-axis, implemented on the cantilever beam of an MZI-based MOEMS accelerometer. Secondly, a study of waveguide bends with minimum power loss, focusing on the two main aspects of bend angle and curvature angle, is also presented.
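The sensing principle behind an MZI-based accelerometer is that a stress- or geometry-induced refractive-index change in one arm shifts the relative phase and hence the output intensity. A minimal sketch of the idealized, lossless transfer function (all numbers are hypothetical illustrations, not the paper's device parameters):

```python
import math

def mzi_output(i_in, delta_n, length, wavelength):
    """Idealized Mach-Zehnder interferometer: an index change delta_n
    over interaction length L gives a phase shift
    dphi = 2*pi*delta_n*L/lambda and output intensity
    I = I_in * cos^2(dphi / 2). Propagation losses are ignored."""
    dphi = 2 * math.pi * delta_n * length / wavelength
    return i_in * math.cos(dphi / 2) ** 2

# Hypothetical numbers: over a 1 mm arm at 1550 nm, an index
# perturbation of 7.75e-4 gives a pi phase shift -> dark output
out = mzi_output(i_in=1.0, delta_n=7.75e-4, length=1e-3,
                 wavelength=1.55e-6)
print(round(out, 6))  # 0.0 (complete destructive interference)
```

A non-uniform index profile along z, as analyzed in the paper, replaces the constant delta_n with an integral of delta_n(z) along the arm.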

Relevance:

10.00%

Publisher:

Abstract:

Bangalore is one of the fastest growing cities in India and is branded the ‘Silicon Valley of India’ for heralding and spearheading the growth of Information Technology (IT) based industries in the country. With the advent and growth of the IT industry, as well as numerous industries in other sectors, and the onset of economic liberalisation since the early 1990s, Bangalore has taken the lead in service-based industries, fuelling substantial growth of the city both economically and spatially. Bangalore has become a cosmopolitan city, attracting people and business alike, within and across nations. This profile notes the urban setting and provides an overview of the urban fabric, while discussing various prospects related to infrastructure and governance (Sudhira, et al. 2007).

Relevance:

10.00%

Publisher:

Abstract:

The advent and evolution of geohazard warning systems make for a very interesting study. The two broad fields that are immediately visible are geohazard evaluation and subsequent warning dissemination. Evidently, the latter field lacks any systematic study or standards. Arbitrarily organized and vague data and information on warning techniques create confusion and indecision. The purpose of this review is to systematize the available bulk of information on warning systems so that meaningful insights can be derived through decidable flowcharts and a developmental process can be undertaken. Hence, the methods and technologies of numerous geohazard warning systems have been assessed by putting them into suitable categories for a better understanding of possible ways to analyze their efficacy as well as their shortcomings. By establishing a classification scheme based on extent, control, time period, and advancements in technology, the geohazard warning systems available in the literature could be comprehensively analyzed and evaluated. Although major advancements have taken place in geohazard warning systems in recent times, they have lacked a complete purpose. Some systems just assess the hazard and wait for other means to communicate it, and some are designed only for communication and wait for the hazard information to be provided, which usually happens after the mishap. Primarily, systems are left at the mercy of administrators and service providers and are not real-time. An integrated hazard evaluation and warning dissemination system could solve this problem. Warning systems have also suffered from complexity of nature, the requirement of expert-level monitoring, extensive and dedicated infrastructural setups, and so on. The user community, which would greatly appreciate a convenient, fast, and generalized warning methodology, is also surveyed in this review.
The review concludes with the future scope of research in the field of hazard warning systems and some suggestions toward the development of an automated, integrated geohazard warning system. DOI: 10.1061/(ASCE)NH.1527-6996.0000078. (C) 2012 American Society of Civil Engineers.

Relevance:

10.00%

Publisher:

Abstract:

Chronic recording of neural signals is indispensable in designing efficient brain–machine interfaces and in elucidating human neurophysiology. The advent of multichannel microelectrode arrays has driven the need for electronics to record neural signals from many neurons. The dynamic range of the system is limited by background system noise, which varies over time. We propose a neural amplifier in UMC 130 nm, 2P8M CMOS technology. It can be biased adaptively from 200 nA to 2 µA, modulating the input-referred noise from 9.92 µV to 3.9 µV. We also describe a low-noise design technique which minimizes the noise contribution of the load circuitry. The amplifier passes signals from 5 Hz to 7 kHz while rejecting input DC offsets at the electrode–electrolyte interface. The bandwidth of the amplifier can be tuned via the pseudo-resistor for selectively recording local field potentials (LFP) or extracellular action potentials (EAP). The amplifier achieves a mid-band voltage gain of 37 dB and minimizes the attenuation of the signal from the neuron to the gate of the input transistor. It is used in a fully differential configuration to reject noise from the bias circuitry and to achieve a high PSRR.

Relevance:

10.00%

Publisher:

Abstract:

Structural dynamics of dendritic spines is one of the key correlative measures of synaptic plasticity for encoding short-term and long-term memory. Optical studies of structural changes in brain tissue using confocal microscopy face difficulties from scattering. This results in a low signal-to-noise ratio, limiting the imaging depth to a few tens of microns. Multiphoton microscopy (MpM) overcomes this limitation by using low-energy photons to cause localized excitation and achieve high resolution in all three dimensions. Multiple low-energy photons with longer wavelengths minimize scattering and allow access to deeper brain regions at several hundred microns. In this article, we provide a basic understanding of the physical phenomena that give MpM an edge over conventional microscopy. Further, we highlight a few of the key studies in the field of learning and memory which would not have been possible without the advent of MpM.
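The localization advantage of MpM follows from the quadratic intensity dependence of two-photon excitation: for a focused Gaussian beam, every transverse plane absorbs the same total power (so one-photon excitation per plane is constant in depth), but the plane-integrated two-photon signal falls off as 1/(1 + (z/z_R)^2), confining excitation to the focal volume. A minimal sketch of this standard Gaussian-beam result (not derived in the article itself):

```python
def two_photon_plane_signal(z, z_r):
    """Relative two-photon excitation integrated over a transverse
    plane at axial distance z from the focus of a Gaussian beam with
    Rayleigh range z_r. Because the signal scales as intensity
    squared, the per-plane total decays as 1 / (1 + (z/z_r)^2),
    whereas a one-photon per-plane total would be constant in z."""
    return 1.0 / (1.0 + (z / z_r) ** 2)

z_r = 1.0  # Rayleigh range, arbitrary units
print(two_photon_plane_signal(0.0, z_r))  # 1.0 at the focal plane
print(two_photon_plane_signal(3.0, z_r))  # 0.1 three Rayleigh ranges away
```

This depth selectivity is what removes the need for a confocal pinhole and lets MpM image several hundred microns into scattering tissue.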