11 results for Human-machine systems
at Cochin University of Science
Abstract:
The present work deals with the development of primary cell cultures and diploid cell lines from two fishes, Poecilia reticulata and Clarias gariepinus. The greatest difficulty experienced was the avoidance of bacterial and fungal contamination. Three types of cell cultures are commonly developed: primary cell cultures, diploid cell lines and heteroploid cell lines. A primary cell culture is obtained from animal tissue that has been cultivated in vitro for the first time. Such cultures are characterized by the same chromosome number as the parent tissue, have a wide range of virus susceptibility, are usually not malignant, retain sex chromatin and do not grow as suspension cultures. Diploid cell lines arise from a primary cell culture at the time of subculturing. Diploid cell lines commercially used in virology are WI-38 (human embryonic lung), WI-26 (human embryonic lung) and HEK (human embryonic kidney). Heteroploid cell lines have been subcultivated with less than 75% of the cells in the population having a diploid chromosome constitution. Tissue cultures have been extensively used in biomedical research. The main applications are in three areas: karyological studies, identification and study of hereditary metabolic disorders, and somatic cell genetics. Other applications are in virology and host-parasite relationships. In this study an attempt was made to preserve the ovarian tissue at low temperature in the presence of cryoprotectants so that the tissue can be retrieved at any time and a cell culture developed.
Abstract:
Neuroscience is the study of the nervous system, including the brain, spinal cord and peripheral nerves. Neurons are the basic cells of the brain and nervous system, which exert their functional role through various neurotransmitters and receptor systems. The activity of a neuron depends on the balance between the number of excitatory and inhibitory processes affecting it, both processes occurring individually and simultaneously. The functional balance of different neurotransmitters such as acetylcholine (ACh), dopamine (DA), serotonin (5-HT), norepinephrine (NE), epinephrine (EPI), glutamate and gamma amino butyric acid (GABA) regulates the growth, division and other vital functions of a normal cell/organism (Sudha, 1998). The micro-environment of the cell is controlled by the macro-environment that surrounds the individual. Any change in the cell environment causes imbalance in cell homeostasis and function. Pollution is a significant cause of imbalance in the macro-environment. Interaction with polluted environments can have an adverse impact on human health. The alarming rise in environmental contamination has been linked to rising levels of pesticides, industrial effluents, domestic waste, car exhausts and other anthropogenic activities. Persistent exposure to contaminants has a negative impact on brain health and development. Pollution also causes changes in neurotransmitters and their receptor function, leading to the early occurrence of neurodegenerative disorders such as hypoxia, Alzheimer's and Huntington's disease early in life.
Abstract:
The main objective of this investigation is to develop suitable transducer array systems so that underwater pipeline inspection can be carried out far more effectively; a focused beam and electronic steering also reduce inspection time. Better results are obtained by optimizing the array parameters. The spacing between the elements is taken as half the wavelength so that the interelement interaction is minimal. For NDT applications these arrays are operated in the MHz range, where the wavelengths become very small. The size of the array elements then becomes very small, requiring hybrid construction techniques for their fabrication. Transducer elements have been fabricated using PVDF as the active material, mild steel as the backing and a conducting silver preparation as the bonding material. The transducer is operated in the (3,3) mode. The construction of a high frequency array is comparatively complicated: the interelement spacing becomes so small that it is very difficult to construct the transducer manually, and the electrode connections to the elements can produce a significant loading effect. The array therefore has to be fabricated using hybrid construction techniques, in which the active material is deposited on a suitable substrate and etching techniques are used to pattern the array. Annular ring, annular cylindrical or other similar structural forms of arrays may also find applications in the near future in treatments where curved contours of the human body are involved.
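As an illustration of why MHz operation forces hybrid fabrication, a minimal sketch (assuming a sound speed of roughly 1500 m/s in water; the actual design values are not given in the abstract) computes the half-wavelength element pitch at a few frequencies:

```python
# Half-wavelength element pitch for an underwater transducer array.
# Assumption: sound speed in water ~1500 m/s; real designs depend on the medium.
SOUND_SPEED = 1500.0  # m/s

def element_pitch_mm(frequency_hz: float) -> float:
    """Return the half-wavelength inter-element spacing in millimetres."""
    wavelength = SOUND_SPEED / frequency_hz
    return (wavelength / 2.0) * 1000.0

for f_mhz in (0.5, 1.0, 2.0, 5.0):
    print(f"{f_mhz:>4.1f} MHz -> pitch = {element_pitch_mm(f_mhz * 1e6):.3f} mm")
```

At 5 MHz the pitch falls to 0.15 mm, which is why manual assembly becomes impractical and deposition-and-etching techniques are needed.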
Abstract:
Sharing of information with those in need of it has always been an idealistic goal of networked environments. With the proliferation of computer networks, information is so widely distributed among systems that it is imperative to have well-organized schemes for retrieval and discovery. This thesis investigates the problems associated with such schemes and suggests a software architecture aimed at achieving meaningful discovery. The usage of information elements as a modelling base for efficient information discovery in distributed systems is demonstrated with the aid of a novel conceptual entity called the infotron.

The investigations are focused on distributed systems and their associated problems. The study was directed towards identifying a suitable software architecture and incorporating it in an environment where information growth is phenomenal and a proper mechanism for carrying out information discovery becomes feasible. An empirical study undertaken with the aid of an election database of geographically distributed constituencies provided the insights required. This is manifested in the Election Counting and Reporting Software (ECRS) system. ECRS is an essentially distributed software system designed to prepare reports for district administrators about the election counting process and to generate other miscellaneous statutory reports.

Most distributed systems of the nature of ECRS possess a "fragile architecture" which makes them amenable to collapse with the occurrence of minor faults. This is resolved with the help of the proposed penta-tier architecture, which places five different technologies at the different tiers of the architecture. The results of the experiment conducted, and their analysis, show that such an architecture helps to keep the different components of the software insulated from internal or external faults.

The architecture thus evolved needed a mechanism to support information processing and discovery. This necessitated the introduction of the novel concept of infotrons. Further, when a computing machine has to perform any meaningful extraction of information, it is guided by what is termed an infotron dictionary. The other empirical study was to find out which of the two prominent markup languages, HTML and XML, is better suited for the incorporation of infotrons. A comparative study of 200 documents in HTML and XML was undertaken; the result was in favor of XML.

The concepts of the infotron and the infotron dictionary were applied to implement an Information Discovery System (IDS). IDS is essentially a system that starts with the infotron(s) supplied as clue(s) and brews the information required to satisfy the need of the information discoverer by utilizing the documents available at its disposal (the information space). The various components of the system and their interaction follow the penta-tier architectural model and can therefore be considered fault-tolerant. IDS is generic in nature, and its characteristics and specifications were drawn up accordingly. Many subsystems interact with multiple infotron dictionaries maintained in the system.

In order to demonstrate the working of the IDS, and to discover information without modification of a typical Library Information System (LIS), an Information Discovery in Library Information System (IDLIS) application was developed. IDLIS is essentially a wrapper for the LIS, which maintains all the databases of the library. The purpose was to demonstrate that the functionality of a legacy system could be enhanced by augmenting it with IDS, providing an information discovery service. IDLIS demonstrates IDS in action, and shows that any legacy system can be effectively augmented with IDS to provide the additional functionality of an information discovery service. Possible applications of IDS and the scope for further research in the field are covered.
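The abstract does not specify how an infotron dictionary is represented; purely as a hypothetical illustration of the clue-driven discovery it describes, one might model the dictionary as a mapping from infotron clues to characteristic terms and score documents in the information space against the supplied clues:

```python
# Hypothetical sketch only: the thesis does not define the infotron data
# structure, so this models an infotron dictionary as clue -> related terms.
from collections import Counter

INFOTRON_DICTIONARY = {
    "election": {"constituency", "counting", "ballot", "returning officer"},
    "library":  {"catalogue", "loan", "member", "accession"},
}

def discover(clues: list[str], documents: dict[str, str]) -> list[tuple[str, int]]:
    """Rank documents by how many dictionary terms of the clue infotrons they contain."""
    terms = set().union(*(INFOTRON_DICTIONARY.get(c, set()) for c in clues))
    scores = Counter()
    for name, text in documents.items():
        lowered = text.lower()
        scores[name] = sum(term in lowered for term in terms)
    return scores.most_common()

docs = {"doc1": "The counting of ballots per constituency...",
        "doc2": "Library loan records for each member..."}
print(discover(["election"], docs))  # doc1 should rank first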
Abstract:
Measurement is the act, or the result, of a quantitative comparison between a given quantity and a quantity of the same kind chosen as a unit. It is generally agreed that all measurements contain errors. In a measuring system where a measuring instrument and a human being take the measurement using a preset process, the measurement error could be due to the instrument, the process or the human being involved. The first part of the study is devoted to understanding human errors in measurement. For that, selected person-related and work-related factors that could affect measurement errors have been identified. Though these are well known, the exact extent of the error and the extent of the effect of different factors on human errors in measurement are less reported. Human errors in measurement are characterized by an experimental study using different subjects, in which the factors were changed one at a time and the measurements made by the subjects recorded. From the pre-experiment survey research studies, it is observed that the respondents could not give correct answers to questions related to the actual extent of human-related measurement errors. This confirmed the fears expressed regarding the lack of knowledge about the extent of human-related measurement errors among professionals associated with quality. In the post-experiment phase of the survey study, however, the answers regarding the extent of human-related measurement errors improved significantly, since the answer choices were provided based on the experimental study. It is hoped that this work will help users of measurement in practice to better understand and manage the phenomenon of human-related errors in measurement.
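The abstract does not give the analysis method; as a hedged illustration of the one-factor-at-a-time design it describes, repeated readings under each changed factor can be summarized by their bias and spread (the data below are invented for illustration):

```python
# Illustrative sketch (not the thesis's actual analysis): compare measurement
# bias and spread when one factor is changed at a time.
import statistics

# Hypothetical repeated measurements of the same 10.00 mm part.
readings = {
    "baseline":        [10.01, 10.00, 10.02, 9.99, 10.01],
    "poor lighting":   [10.04, 9.95, 10.06, 9.93, 10.05],   # person-related factor
    "worn instrument": [10.08, 10.07, 10.09, 10.08, 10.07], # instrument factor
}

for condition, values in readings.items():
    bias = statistics.mean(values) - 10.00
    spread = statistics.stdev(values)
    print(f"{condition:>15}: bias = {bias:+.3f} mm, stdev = {spread:.3f} mm")
```

In this toy data the person-related factor mainly inflates the spread, while the instrument factor mainly shifts the bias, mirroring the separation of error sources the study pursues.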
Abstract:
The interfacing of various subjects generates new fields of study and research that help in advancing human knowledge. One of the latest such fields is neurotechnology, which is an effective amalgamation of neuroscience, physics, biomedical engineering and computational methods. Neurotechnology provides a platform for physicists, neurologists and engineers to interact and to break methodology- and terminology-related barriers. Advancements in computational capability and the wider scope of applications of nonlinear dynamics and chaos in complex systems have enhanced the study of neurodynamics. However, there is a need for an effective dialogue among physicists, neurologists and engineers. Applications of computer-based technology in the field of medicine, through signal and image processing, the creation of clinical databases for helping clinicians, etc., are widely acknowledged. Such synergic effects between widely separated disciplines may help in enhancing the effectiveness of existing diagnostic methods. One of the recent methods in this direction is the analysis of the electroencephalogram with the help of methods from nonlinear dynamics. This thesis is an effort to understand the functional aspects of the human brain by studying the electroencephalogram. The algorithms and other related methods developed in the present work can be interfaced with a digital EEG machine to unfold the information hidden in the signal. Ultimately this can be used as a diagnostic tool.
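The abstract does not name the specific nonlinear measures used; as a minimal sketch of the general approach, assuming delay embedding followed by the Grassberger-Procaccia correlation sum (one common measure in nonlinear EEG analysis, not necessarily the thesis's), one could write:

```python
# Sketch of one common nonlinear-dynamics measure for a 1-D signal such as EEG:
# delay embedding followed by the Grassberger-Procaccia correlation sum.
import numpy as np

def delay_embed(x: np.ndarray, dim: int, tau: int) -> np.ndarray:
    """Embed a 1-D signal into `dim`-dimensional delay vectors with lag `tau`."""
    n = len(x) - (dim - 1) * tau
    return np.column_stack([x[i * tau : i * tau + n] for i in range(dim)])

def correlation_sum(x: np.ndarray, dim: int = 3, tau: int = 5, r: float = 0.5) -> float:
    """Fraction of embedded point pairs closer than radius r."""
    emb = delay_embed(x, dim, tau)
    dists = np.linalg.norm(emb[:, None, :] - emb[None, :, :], axis=-1)
    pairs = dists[np.triu_indices(len(emb), k=1)]
    return float(np.mean(pairs < r))

rng = np.random.default_rng(0)
signal = np.sin(np.linspace(0, 40 * np.pi, 1000)) + 0.1 * rng.standard_normal(1000)
print(correlation_sum(signal))
```

Estimating the slope of log C(r) against log r over a range of radii yields the correlation dimension, one of the quantities typically extracted from EEG in such studies.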
Abstract:
Aeromonas spp. are ubiquitous aquatic organisms associated with a multitude of diseases in several species of animals, including fishes and humans. In the present study, water samples from two ornamental fish culture systems were analyzed for the presence of Aeromonas. Nutrient agar was used for Aeromonas isolation, and 60 colonies were identified through biochemical characterization. Seven clusters could be generated based on phenotypic characters, analyzed with the programme NTSYSpc, Version 2.02i, and identified as Aeromonas caviae (33.3%), A. jandaei (38.3%) and A. veronii biovar sobria (28.3%). The strains isolated produced highly active hydrolytic enzymes, and showed haemolytic activity and slime formation in varying proportions. The isolates were also tested for the enterotoxin genes (act, alt and ast), haemolytic toxin genes (hlyA and aerA), genes involved in the type 3 secretion system (TTSS: ascV, aexT, aopP, aopO, ascF-ascG, and aopH), and glycerophospholipid-cholesterol acyltransferase (gcat). All isolates were found to carry at least one virulence gene. Moreover, they were resistant to antibiotics frequently used for human infections. The study demonstrates the pathogenic potential of Aeromonas associated with ornamental fish culture systems, suggesting an emerging threat to public health.
Abstract:
Due to the emergence of multiple language support on the Internet, machine translation (MT) technologies are indispensable to communication between speakers of different languages. Recent research has started to explore tree-based machine translation systems that use syntactical and morphological information. This work aims at the development of syntax-based machine translation from English to Malayalam by adding case information during translation. The system identifies general rules for various sentence patterns in English. These rules are generated using the Parts Of Speech (POS) tag information of the texts. Word reordering based on the syntax tree is used to improve the translation quality of the system. The system uses a bilingual English-Malayalam dictionary for translation.
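The abstract does not give the reordering rules themselves; as a hedged sketch of the general idea, English clauses (SVO) can be reordered toward Malayalam's SOV order once POS tagging has identified the constituents (the rule and role labels here are illustrative, not taken from the thesis):

```python
# Illustrative sketch: reorder an English SVO clause toward Malayalam SOV
# word order using tagged constituents. The thesis's actual rule set and
# dictionary are not given in the abstract.
POS_RULE_SVO_TO_SOV = ["SUBJ", "OBJ", "VERB"]  # hypothetical rule

def reorder(tagged: list[tuple[str, str]]) -> list[str]:
    """Rearrange (phrase, role) pairs according to the target-order rule."""
    by_role = {}
    for phrase, role in tagged:
        by_role.setdefault(role, []).append(phrase)
    return [p for role in POS_RULE_SVO_TO_SOV for p in by_role.get(role, [])]

# "The boy ate the mango" -> "The boy the mango ate" (SOV order)
clause = [("The boy", "SUBJ"), ("ate", "VERB"), ("the mango", "OBJ")]
print(" ".join(reorder(clause)))
```

After reordering, each constituent would be translated through the bilingual dictionary, so the target sentence comes out in Malayalam's native word order.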
Abstract:
Statistical Machine Translation (SMT) is one of the promising applications in the field of Natural Language Processing. The translation process in SMT is carried out by acquiring translation rules automatically from parallel corpora. However, for many language pairs (e.g. Malayalam-English), such corpora are available only in very limited quantities, so a large portion of the phrases encountered at run-time will be unknown. This paper focuses on methods for handling such out-of-vocabulary (OOV) words in Malayalam that cannot be translated to English using conventional phrase-based statistical machine translation systems. The OOV words in the source sentence are pre-processed to obtain the root word and its suffix. Different inflected forms of the OOV root are generated, and a match is looked up for the word variants in the phrase translation table of the translation model. A vocabulary filter is used to choose the best among the translations of these word variants by finding the unigram count. A match for the OOV suffix is also looked up in the phrase entries, and the target translations are filtered out. The filtered phrases are structured, and the SMT translation model is extended by adding the OOV word with its new phrase translations. The results of the manual evaluation show that the amount of OOV words in the input is reduced considerably.
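As a minimal sketch of the pipeline described above (the morphological splitter, inflection generator, phrase table and counts below are all stand-ins, not the paper's actual components):

```python
# Sketch of the OOV-handling pipeline: split root/suffix, generate variants,
# look them up in the phrase table, filter by unigram count. All data are
# hypothetical stand-ins.
PHRASE_TABLE = {"kutti": "child", "kuttikal": "children"}   # hypothetical entries
UNIGRAM_COUNT = {"child": 900, "children": 400}             # hypothetical counts

def split_root_suffix(word: str) -> tuple[str, str]:
    """Stand-in morphological analysis: strip a known suffix if present."""
    for suffix in ("kal", "il", "ude"):
        if word.endswith(suffix):
            return word[: -len(suffix)], suffix
    return word, ""

def generate_variants(root: str) -> list[str]:
    """Stand-in inflection generation for the OOV root."""
    return [root, root + "kal", root + "il"]

def translate_oov(word: str):
    root, _suffix = split_root_suffix(word)
    candidates = [PHRASE_TABLE[v] for v in generate_variants(root) if v in PHRASE_TABLE]
    if not candidates:
        return None
    # Vocabulary filter: pick the variant translation with the highest unigram count.
    return max(candidates, key=lambda t: UNIGRAM_COUNT.get(t, 0))

print(translate_oov("kuttikalude"))  # OOV form resolved via its root's variants
```

In a full system the chosen translation and the suffix's phrase-table match would then be added back into the SMT model as new phrase entries, as the paper describes.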
Abstract:
This paper presents the application of wavelet processing in the domain of handwritten character recognition. To attain a high recognition rate, robust feature extractors and powerful classifiers that are invariant to the degree of variability of human writing are needed. The proposed scheme consists of two stages: a feature extraction stage, based on the Haar wavelet transform, and a classification stage that uses a support vector machine classifier. Experimental results show that the proposed method is effective.
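As a hedged sketch of the two-stage scheme (the decomposition level, feature layout, dataset and SVM settings are assumptions, since the abstract does not specify them), using PyWavelets and scikit-learn:

```python
# Sketch of the two-stage scheme: Haar wavelet features + SVM classifier.
# Decomposition level and SVM parameters are assumptions, not the paper's.
import numpy as np
import pywt
from sklearn.svm import SVC

def haar_features(image: np.ndarray) -> np.ndarray:
    """Single-level 2-D Haar transform; use the approximation band as features."""
    approx, (horiz, vert, diag) = pywt.dwt2(image, "haar")
    return approx.ravel()

# Hypothetical tiny dataset: 16x16 character images with class labels.
rng = np.random.default_rng(0)
images = rng.random((20, 16, 16))
labels = rng.integers(0, 2, size=20)

features = np.array([haar_features(img) for img in images])
clf = SVC(kernel="rbf").fit(features, labels)
print(clf.predict(features[:3]))
```

Using the low-frequency approximation band as the feature vector discards fine stroke noise while keeping overall character shape, which is one plausible reason a Haar front end helps with writing variability.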
Abstract:
Software systems are progressively being deployed in many facets of human life, and the failure of such systems can have an assorted impact on their customers. The fundamental aspect that supports a software system is a focus on quality. Reliability describes the ability of the system to function in a specified environment for a specified period of time, and is used to measure quality objectively. Evaluation of the reliability of a computing system involves computation of both hardware and software reliability. Most of the earlier works focused on software reliability with no consideration for the hardware parts, or vice versa. However, a complete estimation of the reliability of a computing system requires these two elements to be considered together, and thus demands a combined approach. The present work focuses on this and presents a model for evaluating the reliability of a computing system. The method involves identifying the failure data for hardware components and software components and building a model based on it to predict reliability. To develop such a model, focus is given to systems based on Open Source Software, since there is an increasing trend towards its use and only a few studies have been reported on the modeling and measurement of the reliability of such products. The present work includes a thorough study of the role of Free and Open Source Software, an evaluation of reliability growth models, and presents an integrated model for the prediction of the reliability of a computational system. The developed model has been compared with existing models and its usefulness is discussed.
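The abstract does not specify the combined model's form; as an illustration of the combined approach, one might assume the hardware and software fail independently in series, with an exponential hardware model and a Goel-Okumoto software reliability growth model (a common choice, not necessarily the thesis's):

```python
# Illustrative combined reliability model (assumed, not the thesis's exact one):
# hardware ~ exponential, software ~ Goel-Okumoto NHPP, combined in series.
import math

LAMBDA_HW = 1e-4      # hypothetical hardware failure rate per hour
A, B = 120.0, 0.02    # hypothetical Goel-Okumoto parameters

def m(t: float) -> float:
    """Expected cumulative software failures by test time t (Goel-Okumoto)."""
    return A * (1.0 - math.exp(-B * t))

def system_reliability(mission: float, test_time: float) -> float:
    """P(no hardware or software failure during a mission after test_time of testing)."""
    r_sw = math.exp(-(m(test_time + mission) - m(test_time)))
    r_hw = math.exp(-LAMBDA_HW * mission)
    return r_sw * r_hw  # series combination: both must survive

for tested in (100, 500, 1000):
    print(f"tested {tested:>5} h -> R(24 h mission) = {system_reliability(24, tested):.4f}")
```

The series product captures the core point of the abstract: a computing system is only as reliable as the joint survival of its hardware and its software, so neither term can be estimated in isolation.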