6 results for Software Package Data Exchange (SPDX)

at Cochin University of Science


Relevance:

100.00%

Publisher:

Abstract:

The mathematical formulation of empirically developed formulas for the calculation of the resonant frequency of a thick-substrate (h ≥ 0.0815λ0) microstrip antenna has been analyzed. With the use of tunnel-based artificial neural networks (ANNs), the resonant frequencies of antennas with h satisfying the thick-substrate condition are calculated and compared with the existing experimental results and also with the simulation results obtained with the IE3D software package. The artificial neural network results are in very good agreement with the experimental results.
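The abstract gives no implementation details, so the following is only a rough sketch of the kind of geometry-to-frequency regression an ANN performs in this setting, using a standard scikit-learn MLP as a stand-in for the paper's tunnel-based network; the feature set (patch width W, length L, substrate height h, relative permittivity eps_r) and the synthetic training targets are assumptions, not the paper's data.

```python
# Minimal sketch (not the paper's tunnel-based ANN): a standard MLP
# regressor mapping antenna geometry to resonant frequency.
# Feature names and the synthetic data below are illustrative assumptions.
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)

# Hypothetical training set: [W_mm, L_mm, h_mm, eps_r] per antenna.
X = rng.uniform([5, 5, 1.5, 2.2], [40, 40, 10, 10.5], size=(200, 4))
# Placeholder target: crude cavity-model estimate f_r ~ c / (2 L sqrt(eps_r)).
c = 3e11  # speed of light in mm/s
y = c / (2 * X[:, 1] * np.sqrt(X[:, 3])) / 1e9  # resonant frequency in GHz

scaler = StandardScaler().fit(X)
model = MLPRegressor(hidden_layer_sizes=(20, 20), max_iter=5000,
                     random_state=0).fit(scaler.transform(X), y)

sample = np.array([[20.0, 25.0, 6.0, 2.33]])  # a thick-substrate example
print("predicted f_r (GHz):", model.predict(scaler.transform(sample))[0])
```

In the actual work the network would be trained on measured resonant frequencies rather than the crude cavity-model targets used here.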

Relevance:

100.00%

Publisher:

Abstract:

This work presents an efficient method for volume rendering of glioma tumors from segmented 2D MRI datasets with user-interactive control, replacing the manual segmentation required in state-of-the-art methods. The most common primary brain tumors are gliomas, which evolve from the cerebral supportive cells. For clinical follow-up, the evaluation of the pre-operative tumor volume is essential. Tumor portions were automatically segmented from 2D MR images using morphological filtering techniques. These segmented tumor slices were propagated and modeled with the software package. The 3D modeled tumor consists of the gray-level values of the original image with the exact tumor boundary. Axial slices of FLAIR and T2-weighted images were used for extracting tumors. Volumetric assessment of tumor volume by manual segmentation of its outlines is a time-consuming process and is prone to error; these defects are overcome in this method. The authors verified the performance of the method on several sets of MRI scans. For verification purposes, the 3D modeling was also done from the segmented 2D slices with the help of a medical software package called 3D-DOCTOR. The results were validated against ground-truth models by the radiologist.
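The pipeline is described only at a high level, so here is a minimal sketch, assuming a scikit-image based morphological-filtering step on a single axial slice; the Otsu threshold and structuring-element radii are illustrative choices, not the paper's parameters.

```python
# A minimal sketch of morphological filtering for tumor-region extraction
# from one 2D slice; thresholds and footprint sizes are assumptions.
import numpy as np
from skimage import filters, morphology, measure

def segment_slice(slice_2d: np.ndarray) -> np.ndarray:
    """Return a boolean mask of the largest bright connected region."""
    mask = slice_2d > filters.threshold_otsu(slice_2d)          # global threshold
    mask = morphology.binary_opening(mask, morphology.disk(2))  # remove specks
    mask = morphology.binary_closing(mask, morphology.disk(4))  # fill small gaps
    labels = measure.label(mask)
    if labels.max() == 0:
        return mask
    largest = max(measure.regionprops(labels), key=lambda r: r.area)
    return labels == largest.label

# Stacking per-slice masks approximates the propagated 3D tumor model:
# volume_mask = np.stack([segment_slice(s) for s in axial_slices])
```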

Relevance:

30.00%

Publisher:

Abstract:

Information and communication technology (ICT) has brought about fundamental changes in the way libraries gather, preserve and disseminate information. The study was carried out with the aim of estimating and comparing the information seeking behaviour (ISB) of the academics of two prominent universities of Kerala in the context of advancements achieved through ICT. The study was motivated by the fast-changing scenario of libraries with the proliferation of many high-tech products and services. The main purpose of the study was to identify the chief sources of information of the academics, and also to examine the academics' preferences regarding the form and format of information sources. The study also tries to estimate the adequacy of the resources and services currently provided by the libraries. The questionnaire was the central instrument for data collection. A near-census method was adopted, engaging various methods and tools for eliciting data. The total population of the study was 957, out of which the questionnaire was distributed to 859 academics; 646 academics responded to the survey, of which 564 were sound responses. Data were coded and analysed using the Statistical Package for the Social Sciences (SPSS) software and also with the help of the Microsoft Excel package. Various statistical techniques were employed to analyse the data. A paradigm shift is evident from the fact that academics push themselves towards information on the internet, i.e. they prefer electronic sources to traditional sources, and this shift is coupled with e-seeking of information. The study reveals that the ISB of the academics is influenced primarily by personal factors, and comparative analysis shows that the ISB of the academics is similar in both universities. The productivity of the academics was tested to uncover any relation with their ISB, and it was found that the productivity of the academics is extensively related to their ISB. The study also reveals that the users of the library are satisfied with the services provided but not with the sources, and in conjunction the study recommends ways and means to improve the existing library system.
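As a hedged illustration of the comparative analysis described (the study itself used SPSS), the following sketch runs a chi-square test on a hypothetical cross-tabulation of source preference by university; all counts are invented for illustration.

```python
# Illustrative sketch of the kind of comparative analysis the study
# describes; the counts below are hypothetical, not the survey's data.
import pandas as pd
from scipy.stats import chi2_contingency

# Hypothetical cross-tabulation: preferred source form by university.
table = pd.DataFrame(
    {"Electronic": [180, 160], "Print": [70, 80], "Both": [40, 34]},
    index=["University A", "University B"],
)

chi2, p, dof, expected = chi2_contingency(table.values)
print(table)
print(f"chi2 = {chi2:.2f}, dof = {dof}, p = {p:.3f}")
# A large p-value would be consistent with the study's finding that
# ISB is similar across the two universities.
```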

Relevance:

30.00%

Publisher:

Abstract:

Professor Irma Glicman Adelman, an American economist working at the University of California at Berkeley, in her research work on 'Development Over Two Centuries', published in the Journal of Evolutionary Economics in 1995, identified that India, along with China, would be one of the largest economies of this 21st century. She stated that the period 1700-1820 was the period of the Netherlands, 1820-1890 the period of England, 1890-2000 the period of America, and that this 21st century is the century of China and India. The World Bank has also identified India as one of the leading players of this century after China. India will be the third-largest economy after the USA and China, and will challenge the global economic order in the next 15 years. India will overtake the Italian economy in 2015, the English economy in 2020, the Japanese economy in 2025 and the US economy in 2050 (China will overtake the Japanese economy in 2016 and the US economy in 2027). India has the following advantages compared with other economies. India has the fourth-largest GDP in the world in terms of purchasing power. It is the third-fastest-growing economy in the world, after China and Vietnam. The service sector contributes around 57% of GDP, while the shares of agriculture and manufacturing were around 17% and 16% respectively in 2005-2006; this is a characteristic of a developed country. The expected GDP growth rate is 10% in the near term (it came down from 9.2% in 2006-2007 to 6.2% during 2008-2009 due to the recession, but this is only a temporary phenomenon). India has $284 billion in foreign exchange reserves as of today; it had just $1 billion in foreign exchange reserves when it opened up its economy in 1991. In this research paper an attempt has been made to study these two booming economies of the globe with respect to their foreign exchange reserves. The study is mainly based on secondary data published by the respective governments and on various studies done in this area.
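The overtaking years quoted above are compound-growth projections; as a purely arithmetic illustration (with invented figures, not the paper's data), such a year can be computed as follows.

```python
# Illustrative arithmetic only: how "overtaking year" projections of this
# kind follow from assumed GDP levels and growth rates.
def overtake_year(start_year, gdp_a, growth_a, gdp_b, growth_b, horizon=60):
    """Return the first year economy A's GDP exceeds economy B's."""
    for year in range(start_year, start_year + horizon):
        if gdp_a > gdp_b:
            return year
        gdp_a *= 1 + growth_a  # compound each economy forward one year
        gdp_b *= 1 + growth_b
    return None

# E.g. a smaller economy growing at 8% vs a larger one growing at 1.5%
# (hypothetical GDP levels in trillions of dollars):
print(overtake_year(2009, gdp_a=1.2, growth_a=0.08, gdp_b=2.1, growth_b=0.015))
```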

Relevance:

30.00%

Publisher:

Abstract:

Bank switching in embedded processors with a partitioned memory architecture results in code-size as well as run-time overhead. This work presents an algorithm, and its application, that assists the compiler in eliminating the redundant bank-switching codes introduced and in deciding the optimum data allocation to banked memory. A relation matrix, formed for the memory-bank state transition corresponding to each bank-selection instruction, is used for the detection of redundant codes. Data allocation to memory is done by considering all possible permutations of memory banks and combinations of data. The compiler output corresponding to each data-mapping scheme is subjected to a static machine-code analysis, which identifies the one with the minimum number of bank-switching codes. Even though the method is compiler independent, the algorithm utilizes certain architectural features of the target processor. A prototype based on PIC 16F87X microcontrollers is described. The method scales well to larger numbers of memory banks and to other architectures, so that high-performance compilers can integrate this technique for efficient code generation. The technique is illustrated with an example.
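The abstract's relation-matrix formulation is not spelled out, so the sketch below shows only the underlying idea on a simplified instruction encoding: track the currently selected bank through straight-line code and flag bank-select instructions that re-select it. The BANKSEL/CALL encoding is an assumption, not the PIC 16F87X instruction set or the paper's matrix method.

```python
# Minimal sketch of redundant bank-select detection on straight-line code.
from typing import Optional

def redundant_selects(instructions: list[tuple[str, Optional[int]]]) -> list[int]:
    """Return indices of bank-select instructions that are redundant."""
    current_bank: Optional[int] = None  # bank state unknown at entry
    redundant = []
    for i, (op, arg) in enumerate(instructions):
        if op == "BANKSEL":
            if arg == current_bank:
                redundant.append(i)  # this bank is already selected
            current_bank = arg
        elif op == "CALL":
            current_bank = None      # the callee may change the bank
    return redundant

program = [
    ("BANKSEL", 1), ("MOVWF", None), ("BANKSEL", 1),  # second select redundant
    ("CALL", None), ("BANKSEL", 1),                   # not redundant after call
]
print(redundant_selects(program))  # -> [2]
```

In the paper this kind of detection feeds a search over data-to-bank mappings, with static machine-code analysis picking the mapping that leaves the fewest bank-switching instructions.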

Relevance:

30.00%

Publisher:

Abstract:

Software systems are progressively being deployed in many facets of human life. The implications of the failure of such systems have a varied impact on their customers. The fundamental aspect that supports a software system is a focus on quality. Reliability describes the ability of a system to function under a specified environment for a specified period of time, and is used to objectively measure quality. Evaluation of the reliability of a computing system involves computation of both hardware and software reliability. Most of the earlier works focused on software reliability with no consideration for hardware parts, or vice versa. However, a complete estimation of the reliability of a computing system requires these two elements to be considered together, and thus demands a combined approach. The present work focuses on this and presents a model for evaluating the reliability of a computing system. The method involves identifying the failure data for hardware components and software components, and building a model based on it to predict reliability. To develop such a model, focus is given to systems based on Open Source Software, since there is an increasing trend towards its use and only a few studies have been reported on the modeling and measurement of the reliability of such products. The present work includes a thorough study of the role of Free and Open Source Software, an evaluation of reliability growth models, and presents an integrated model for the prediction of the reliability of a computational system. The developed model has been compared with existing models and its usefulness is discussed.
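The integrated model itself is not reproduced in the abstract; as a minimal sketch of combining hardware and software reliability, the following composes a constant-failure-rate hardware model with the Goel-Okumoto software reliability growth model in series, with hypothetical parameter values.

```python
# Illustrative combined system-reliability computation; parameters are
# hypothetical stand-ins, not the thesis's fitted values.
import math

def hw_reliability(t_hours: float, failure_rate: float) -> float:
    """Exponential hardware reliability R_hw(t) = exp(-lambda * t)."""
    return math.exp(-failure_rate * t_hours)

def sw_reliability(t_hours: float, x_hours: float, a: float, b: float) -> float:
    """Goel-Okumoto growth model: R_sw(x | t) = exp(-(m(t + x) - m(t))),
    with mean-value function m(t) = a * (1 - exp(-b * t))."""
    m = lambda t: a * (1 - math.exp(-b * t))
    return math.exp(-(m(t_hours + x_hours) - m(t_hours)))

# Series composition: the system works only if hardware AND software do.
t, x = 1000.0, 24.0  # testing time so far, mission window (hours)
r_hw = hw_reliability(x, failure_rate=1e-5)
r_sw = sw_reliability(t, x, a=120.0, b=0.002)  # hypothetical GO parameters
print(f"R_hw={r_hw:.4f}  R_sw={r_sw:.4f}  R_sys={r_hw * r_sw:.4f}")
```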