Abstract:
Computational biology is the research area that contributes to the analysis of biological data through the development of algorithms that address significant research problems. Data from molecular biology include DNA, RNA, protein and gene expression data. Gene expression data provide the expression levels of genes under different conditions. Gene expression is the process of transcribing the DNA sequence of a gene into mRNA sequences, which in turn are later translated into proteins; the number of copies of mRNA produced is called the expression level of a gene. Gene expression data are organized in the form of a matrix: rows represent genes, columns represent experimental conditions (different tissue types or time points), and the entries are real values. Through the analysis of gene expression data it is possible to determine behavioural patterns of genes, such as the similarity of their behaviour, the nature of their interaction, their respective contributions to the same pathways, and so on. Genes participating in the same biological process exhibit similar expression patterns. These patterns have immense relevance and application in bioinformatics and clinical research; in the medical domain they aid more accurate diagnosis, prognosis, treatment planning, drug discovery and protein network analysis. To identify such patterns from gene expression data, data mining techniques are essential. Clustering is an important data mining technique for the analysis of gene expression data, and biclustering was introduced to overcome the problems associated with clustering. Biclustering refers to the simultaneous clustering of both rows and columns of a data matrix: clustering is a global model, whereas biclustering is a local model. Discovering local expression patterns is essential for identifying many genetic pathways that are not apparent otherwise, so it is necessary to move beyond the clustering paradigm towards approaches capable of discovering local patterns in gene expression data. A bicluster is a submatrix of the gene expression data matrix; its rows and columns need not be contiguous as in the gene expression data matrix, and biclusters are not disjoint. Computing biclusters is costly because one has to consider all combinations of rows and columns in order to find all the biclusters: the search space for the biclustering problem is 2^(m+n), where m and n are the numbers of genes and conditions respectively, and usually m+n is more than 3000. The biclustering problem is NP-hard. Biclustering is a powerful analytical tool for the biologist. The research reported in this thesis addresses the problem of biclustering: ten algorithms are developed for the identification of coherent biclusters from gene expression data. All these algorithms make use of a measure called the mean squared residue to search for biclusters. The objective is to identify biclusters of maximum size with mean squared residue lower than a given threshold.
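For reference, the mean squared residue criterion (due to Cheng and Church) can be computed as follows. This is a minimal Python sketch, not the thesis's implementation; the example matrix and index sets are illustrative assumptions:

```python
import numpy as np

def mean_squared_residue(A, rows, cols):
    """Cheng-Church mean squared residue H(I, J) of the bicluster A[rows, cols]."""
    sub = A[np.ix_(rows, cols)]
    row_mean = sub.mean(axis=1, keepdims=True)   # a_iJ: row means
    col_mean = sub.mean(axis=0, keepdims=True)   # a_Ij: column means
    all_mean = sub.mean()                        # a_IJ: bicluster mean
    residue = sub - row_mean - col_mean + all_mean
    return float((residue ** 2).mean())

# A perfectly coherent (additive) bicluster has residue 0:
A = np.array([[1.0, 2.0], [3.0, 4.0]])
print(mean_squared_residue(A, [0, 1], [0, 1]))   # 0.0
```

A low residue means every entry is well explained by its row effect plus its column effect, which is exactly the coherence the search is after.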
All these algorithms begin the search from tightly co-regulated submatrices called seeds, which are generated by the K-Means clustering algorithm. The algorithms developed can be classified as constraint based, greedy and metaheuristic. Constraint-based algorithms use one or more of several constraints, namely the MSR threshold and the MSR difference threshold. The greedy approach makes a locally optimal choice at each stage with the objective of finding the global optimum. In the metaheuristic approaches, Particle Swarm Optimization (PSO) and variants of the Greedy Randomized Adaptive Search Procedure (GRASP) are used for the identification of biclusters. These algorithms are implemented on the Yeast and Lymphoma datasets. All of them identify biologically relevant and statistically significant biclusters, validated against the Gene Ontology database, and all are compared with other biclustering algorithms. The algorithms developed in this work overcome some of the problems associated with existing algorithms. With the help of some of them, biclusters with very high row variance, higher than the row variance achieved by any other algorithm using mean squared residue, are identified from both the Yeast and Lymphoma datasets. Such biclusters, which reflect significant changes in expression level, are highly relevant biologically.
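As an illustration of the greedy idea, the classical single-node-deletion step of Cheng and Church repeatedly removes the row or column contributing most to the residue until the MSR threshold delta is met. The thesis's greedy algorithms are variants of this scheme, so the following sketch is indicative only (it reuses mean_squared_residue() from the sketch above):

```python
import numpy as np

def greedy_node_deletion(A, rows, cols, delta):
    """Shrink a seed submatrix until its mean squared residue drops below delta."""
    rows, cols = np.asarray(rows), np.asarray(cols)
    while mean_squared_residue(A, rows, cols) > delta and len(rows) > 2 and len(cols) > 2:
        sub = A[np.ix_(rows, cols)]
        res = sub - sub.mean(axis=1, keepdims=True) - sub.mean(axis=0, keepdims=True) + sub.mean()
        row_scores = (res ** 2).mean(axis=1)   # each row's contribution to the residue
        col_scores = (res ** 2).mean(axis=0)   # each column's contribution
        if row_scores.max() >= col_scores.max():
            rows = np.delete(rows, row_scores.argmax())
        else:
            cols = np.delete(cols, col_scores.argmax())
    return rows, cols
```

Each iteration is locally optimal (it deletes the worst offender), which is what makes the procedure greedy.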
Abstract:
This thesis is entitled "Modelling and Analysis of Recurrent Event Data with Multiple Causes". Survival data is a term used to describe data that measure the time to occurrence of an event; in survival studies, this time is generally referred to as the lifetime. Recurrent event data are commonly encountered in longitudinal studies when individuals are followed to observe repeated occurrences of certain events. In many practical situations, individuals under study are exposed to failure from more than one cause, and the eventual failure can be attributed to exactly one of these causes. The proposed model is useful in real-life situations for studying the effect of covariates on recurrences of certain events due to different causes. In Chapter 3, an additive hazards model for the gap time distributions of recurrent event data with multiple causes is introduced, and its parameter estimation and asymptotic properties are discussed. In Chapter 4, a shared frailty model for the analysis of bivariate competing risks data is presented, along with estimation procedures for the shared gamma frailty model, without and with covariates, using the EM algorithm. In Chapter 6, two nonparametric estimators for the bivariate survivor function of paired recurrent event data are developed; their asymptotic properties are studied, the proposed estimators are applied to a real-life data set, and simulation studies are carried out to assess their efficiency.
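For orientation, the two model classes named above have standard textbook forms. A sketch in the usual notation (covariates Z, baseline hazard lambda_0, regression coefficients beta, shared frailty U); the thesis's exact specification may differ:

```latex
% Additive hazards model for gap times (Chapter 3):
\lambda(t \mid Z) \;=\; \lambda_0(t) + \beta^{\top} Z(t)

% Shared gamma frailty model (Chapter 4): subjects j in pair i share a frailty U_i,
% with U_i \sim \mathrm{Gamma}(\theta^{-1}, \theta^{-1}) so that E[U_i] = 1:
\lambda_{ij}(t \mid U_i, Z_{ij}) \;=\; U_i \, \lambda_0(t) \, \exp\!\bigl(\beta^{\top} Z_{ij}\bigr)
```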
Abstract:
This thesis analyses certain problems in inventories and queues. There are many real-life situations where we encounter models of the kind described in this thesis. It analyses in depth various models which can be applied to production, storage, telephone traffic, road traffic, economics, business administration, the serving of customers, the operation of particle counters and others. The models described here are not complete representations of the true situation in all its complexity, but simplified versions amenable to analysis. While discussing the models, we show how a dependence structure can be suitably introduced into some problems of inventories and queues. Continuous-review, single-commodity inventory systems with a Markov dependence structure introduced into the demand quantities, replenishment quantities and reordering levels are considered separately. Lead time is assumed to be zero in these models; an inventory model involving random lead time is also considered (Chapter 4). Further, finite-capacity single-server queueing systems with single/bulk arrivals and single/bulk services are also discussed. In some models the server is assumed to go on vacation (Chapters 7 and 8). In Chapters 5 and 6 a sort of dependence is introduced into the service pattern in some queueing models.
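To make the zero-lead-time setting concrete, here is a minimal Python simulation of a continuous-review (s, S) inventory in which successive demand sizes follow a Markov chain, in the spirit of the dependence structure described above. All demand sizes and transition probabilities are illustrative assumptions, not values from the thesis:

```python
import random

def simulate_sS(s, S, steps, demand_sizes, transition, seed=0):
    """Continuous-review (s, S) inventory with zero lead time and
    Markov-dependent demand: the next demand size depends on the previous one."""
    rng = random.Random(seed)
    level, state, reorders = S, 0, 0
    for _ in range(steps):
        state = rng.choices(range(len(demand_sizes)), weights=transition[state])[0]
        level -= demand_sizes[state]
        if level <= s:                # zero lead time: replenish up to S at once
            level, reorders = S, reorders + 1
    return reorders

# Illustrative demand sizes {1, 3} with persistence in demand behaviour:
print(simulate_sS(s=2, S=10, steps=1000,
                  demand_sizes=[1, 3],
                  transition=[[0.8, 0.2], [0.3, 0.7]]))
```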
Abstract:
This paper presents a performance analysis of reversible, fault-tolerant VLSI implementations of carry select and hybrid decimal adders suitable for multi-digit BCD addition. The designs enable partial parallel processing of all digits, which yields high-speed addition in the decimal domain. When the number of digits is more than 25, the hybrid decimal adder can operate 5 times faster than a conventional decimal adder using classical logic gates. The speed-up factor of the hybrid adder rises above 10 when the number of decimal digits is more than 25 for the reversible logic implementation. Such high-speed decimal adders find applications in real-time processors and internet-based applications. The implementations use only reversible conservative Fredkin gates, which makes them suitable for VLSI circuits.
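Since the designs rest entirely on the Fredkin gate, a small executable definition may help. The Fredkin gate is a controlled swap; because it merely permutes its inputs, it is both reversible and conservative (it preserves the number of 1-bits). A Python sketch:

```python
def fredkin(c, a, b):
    """Fredkin (controlled-swap) gate: swap targets a and b iff control c is 1."""
    return (c, b, a) if c else (c, a, b)

# Reversible: applying the gate twice restores the inputs.
assert fredkin(*fredkin(1, 1, 0)) == (1, 1, 0)
# Conservative: the multiset of bits is preserved, e.g. (1, 1, 0) -> (1, 0, 1).
assert fredkin(1, 1, 0) == (1, 0, 1)
```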
Abstract:
A spectral angle based feature extraction method, Spectral Clustering Independent Component Analysis (SC-ICA), is proposed in this work to improve brain tissue classification from Magnetic Resonance Images (MRI). SC-ICA gives equal priority to global and local features, thereby attempting to resolve the inefficiency of conventional approaches in abnormal tissue extraction. First, the input multispectral MRI is divided into different clusters by spectral distance based clustering. Then, Independent Component Analysis (ICA) is applied to the clustered data, in conjunction with Support Vector Machines (SVM), for brain tissue analysis. Normal and abnormal datasets, consisting of real and synthetic T1-weighted, T2-weighted and proton density/fluid-attenuated inversion recovery images, were used to evaluate the performance of the new method. Comparative analysis with ICA based SVM and other conventional classifiers established the stability and efficiency of SC-ICA based classification, especially in the reproduction of small abnormalities. Clinical abnormal case analysis demonstrated this through the highest Tanimoto index/accuracy values, 0.75/98.8%, observed against the ICA based SVM results, 0.17/96.1%, for reproduced lesions. Experimental results recommend the proposed method as a promising approach in clinical and pathological studies of brain diseases.
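A rough Python sketch of the cluster-then-unmix pipeline follows; it is not the authors' implementation. Plain k-means on the spectra stands in for the paper's spectral-angle clustering (a spectral-angle helper is included), and scikit-learn's FastICA and SVC stand in for the ICA and SVM stages:

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.decomposition import FastICA
from sklearn.svm import SVC

def spectral_angle(x, y):
    """Angle (radians) between two pixel spectra; a small angle means similar tissue."""
    cos = np.dot(x, y) / (np.linalg.norm(x) * np.linalg.norm(y) + 1e-12)
    return float(np.arccos(np.clip(cos, -1.0, 1.0)))

def sc_ica_features(X, n_clusters=4, n_components=3, seed=0):
    """X: (n_pixels, n_bands) multispectral MR data; ICA is run per cluster."""
    labels = KMeans(n_clusters=n_clusters, n_init=10, random_state=seed).fit_predict(X)
    feats = np.zeros((X.shape[0], n_components))
    for k in range(n_clusters):
        idx = np.where(labels == k)[0]
        if len(idx) > n_components:          # ICA needs enough pixels per cluster
            ica = FastICA(n_components=n_components, random_state=seed, max_iter=500)
            feats[idx] = ica.fit_transform(X[idx])
    return feats

# Downstream tissue classification on labelled pixels (X_train, y_train assumed):
# clf = SVC(kernel="rbf").fit(sc_ica_features(X_train), y_train)
```

Running ICA separately inside each spectral cluster is what lets local structure (small lesions) survive the unmixing, which is the core claim of the method.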
Abstract:
Multispectral analysis is a promising approach to tissue classification and abnormality detection from Magnetic Resonance (MR) images, but instability in the accuracy and reproducibility of the classification results from conventional techniques keeps it far from clinical application. Recent studies proposed Independent Component Analysis (ICA) as an effective method for separating source signals from multispectral MR data. However, it often fails to extract local features like small abnormalities, especially from dependent real data. A multisignal wavelet analysis prior to ICA is proposed in this work to resolve these issues: the best de-correlated detail coefficients are combined with the input images to give better classification results. The performance improvement of the proposed method over conventional ICA is demonstrated by segmentation and classification using k-means clustering. Experimental results from synthetic and real data strongly confirm the positive effect of the new method, with improved Tanimoto index/sensitivity values, 0.884/93.605, for reproduced small white matter lesions.
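The idea of augmenting the inputs with wavelet detail coefficients before ICA can be sketched as follows. PyWavelets and scikit-learn stand in for whatever toolchain the authors used, and the wavelet choice and decomposition level are assumptions:

```python
import numpy as np
import pywt
from sklearn.decomposition import FastICA

def wavelet_then_ica(X, wavelet="db4", level=1, n_components=3, seed=0):
    """X: (n_images, n_pixels) flattened multispectral MR images.
    Stack detail signals on top of the originals, then unmix with ICA."""
    details = []
    for row in X:
        coeffs = pywt.wavedec(row, wavelet, level=level)
        coeffs[0] = np.zeros_like(coeffs[0])          # drop the approximation part
        detail = pywt.waverec(coeffs, wavelet)[: X.shape[1]]
        details.append(detail)                        # keep only the detail signal
    augmented = np.vstack([X, np.asarray(details)])   # (2 * n_images, n_pixels)
    ica = FastICA(n_components=n_components, random_state=seed, max_iter=500)
    return ica.fit_transform(augmented.T)             # per-pixel independent components
```

The detail channels emphasise fine, local structure, so the ICA unmixing has a better chance of isolating small lesions that the raw bands alone would smear into a dominant source.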
Abstract:
Soils are multiphase materials composed of mineral grains, air voids and water. Soils are neither linearly elastic nor perfectly plastic under external loading. Various constitutive models are available to describe different aspects of soil behaviour, but no single soil model can completely describe the behaviour of real soil under all conditions. This paper attempts to compare various soil models and suggest a suitable model for soil-structure interaction analysis, especially for Kochi marine clay.
Abstract:
The pedicle screw insertion technique has revolutionized the surgical treatment of spinal fractures and spinal disorders. Although X-ray fluoroscopy based navigation is popular, it carries the risk of prolonged exposure to X-ray radiation, and systems with lower radiation risk are generally quite expensive. The position and orientation of the drill is clinically very important in pedicle screw fixation. In this paper, the position and orientation of the marker on the drill are determined using pattern recognition based methods and geometric features obtained from the input video sequence taken from a CCD camera. A search is then performed on the preprocessed video frames to obtain the exact position and orientation of the drill. Animated graphics showing the instantaneous position and orientation of the drill are then overlaid on the processed video for real-time drill control and navigation.
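A bare-bones version of the frame-by-frame marker search can be written with OpenCV. Thresholding plus contour geometry stands in for the paper's pattern-recognition features, and the largest-blob assumption is purely illustrative:

```python
import cv2

def drill_marker_pose(frame):
    """Estimate the 2-D position and in-plane orientation of a high-contrast
    marker in one video frame. A sketch under assumed marker properties."""
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    blur = cv2.GaussianBlur(gray, (5, 5), 0)
    _, binary = cv2.threshold(blur, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
    contours, _ = cv2.findContours(binary, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    if not contours:
        return None
    marker = max(contours, key=cv2.contourArea)        # assume marker = largest blob
    (cx, cy), (w, h), angle = cv2.minAreaRect(marker)  # centre and orientation
    return (cx, cy), angle
```

Run per frame from a cv2.VideoCapture loop; the returned centre and angle are what the overlaid graphics would visualize.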
Abstract:
The inferences obtained from the study are presented at coherent, area-specific levels so as to inform researchers and policy makers about the issues, importance and potential of ecotourism and its sub-sector areas. An analysis of the tourism sector in Kerala has shown tremendous growth both in terms of tourist arrivals and in terms of revenue generation from direct and indirect sources. The number of foreign tourist visitors in Kerala in 2014 was 9,23,336, a 7.60 percent increase over the previous year, while domestic tourist visitors numbered 1,16,95,411, a 7.71 percent increase, which is clear evidence of the sector's potential. In 2014 the industry contributed revenue of 24,885.44 crores from direct and indirect sources, an increase of 12.11 percent over the previous year. A dichotomy of tourists and ecotourists shows that tourists in the ecotourism destinations come to 42.6 percent of the total, which shows the scope, significance and potential of the sector. Correlation of zone-wise tourist arrivals based on the ecotourism destinations highlights the fact that the 19 of the 64 destinations that lie in the central zone are the most preferred centres (around 54 percent) for domestic as well as foreign tourists. The north zone, encompassing 6 districts with rich biodiversity, shows less promising tourist arrival patterns: though it has 31 of the state's ecotourism destinations, it receives only 6.19 percent of the foreign visitors. The ecotourism activities in the state are primarily managed by the Eco-Development Committees (EDCs) and the Vana Samrakshana Samithies (VSS) under the Forest Development Agency of Kerala. Social class-wise categorization of membership shows that 13,142 families have membership in 190 EDCs, comprising SC (28 percent), ST (33 percent) and other marginalised communities (39 percent). In the VSS, 59,085 members across 400 VSS are actively engaged in ecotourism activities, and the social category breakdown makes clear that the majority, 62 percent, are from other marginalized fringe households, whereas the participation of SC is 12 percent and of ST is 26 percent. An evaluation of the socio-economic and demographic matrix of the community members involved in ecotourism activities brings out region-specific differences. About 75.70 percent of the respondents are males and the rest are females. The majority of the respondents (about 60 percent) are in the age group of 20 to 40 years, followed by the age group of 40-50 (20 percent). The average age of respondents in the three zones is between 35 and 37 years. Most of the respondents are married; a few are unmarried. The average family size is 4-5 members, with differences among zones. The average number of adults per household is 3 and of children per household is 2. About 60 percent of the sample have an education of 10th class or below, i.e. only basic schooling at the primary, secondary or high school level (up to SSLC but not passed). About 18 percent have passed the SSLC, 10 percent are undergraduates, and 6 percent have a qualification of graduation and above. The majority of the 'graduates and above' are from the south and central zones. Inter-zone differences in educational profile are also identified, with fewer 'graduates and above' in the north zone compared to the other two zones.
Investigation into the income and livelihood options of the respondents gives insight into the prominence of ecotourism as an employment and livelihood option for the community members, as more than 90 percent of the respondents cited the tourism sector as their main employment option. Most respondents (49.30 percent) derive 100 percent of their income from tourism-related activities, 37.30 percent derive between 75 and 99 percent of their income from tourism, and the rest (13 percent) derive less than 75 percent of their income from tourism, with differences between zones in the percentage of income. Financial habits show that about 49.7 percent hold active bank accounts, 61 percent exhibit savings behaviour and 73.8 percent carry indebtedness. Analysis of house ownership brings to light that 37 percent of respondents live in their own house, followed by 25.7 percent in government funded/provided houses, 21 percent in their parents' house and 3.5 percent in rented houses; about 12 percent of the respondents have other kinds of accommodation, such as staff quarters. In the north zone, however, the majority (52 percent) depend primarily on government funded housing, indicating the effectiveness of the government housing programme. Standard of living measured in SLI frameworks shows that 42.3 percent of the respondents have medium SLI values, 47.7 percent have low SLI and 10 percent have high SLI. The community members have benefitted immensely from the forest and its resources; since the ecotourism destinations are located amidst wildlife settings, the majority of them depend on the forest for their livelihood. Information on the tourists' demographic characteristics, such as age, sex, educational qualification and annual income, shows that about 65 percent of domestic and foreign tourists are below 35 years of age, whereas only 16 percent are aged above 46 years. The age group below 25 years contains a higher proportion of international tourists (31.3 percent) than of domestic tourists (12.5 percent). The male-female ratio shows that males constitute 56 percent of the sample and females 44 percent. The factors determining the impact of ecotourism programmes on the community were evaluated with the aid of a factor analysis of 12 selected statements. The worries and concerns of the community members about the impact of ecotourism on the environment are well captured by this analysis. It can be concluded that environmental protection, together with the role of ecotourism in improving the income and livelihood options of the local communities, is the most important factor concerning the community members.
Abstract:
The Kerala model of development mostly bypassed the fishing community, as the fishers form one of the most miserable groups with respect to many socio-economic and quality-of-life indicators. The modernization drive in the fishing sector has paradoxically turned into a marginalization drive as far as the traditional fishers of Kerala are concerned, and subsequent management and resource recuperation drives too have seemed detrimental to the local fishing community. Though SHGs and cooperatives helped in overcoming many of the maladies in most sectors in Kerala in terms of livelihood and employment in the 1980s, the fishing sector by that time was moving ahead with mechanization and export euphoria, and so the movement bypassed it. Though it did not help the fishing sector in the initial stages, out of necessity it soon became a vibrant livelihood and employment force in the coastal economy of Kerala. Initial success led to linking this with the governmental cooperative set-up, and soon SHGs and cooperatives became reinforcing forces for the inclusive development of the real fishers. The fisheries sector in Kerala has undergone drastic changes with the advent of the globalised economy. The traditional fisher folk are one of the most marginalized communities in the state and are left out of the overall development process, mainly due to their marginalization both at sea and in the market caused by the modernization and mechanization of the sector. Mechanization opened up the sector a great deal, as it began to attract people from non-fishing communities as moneylenders, boat owners, employers and middlemen, which often resulted in conflicts between traditional and mechanized fishermen. These factors, together with resource depletion, resulted in the backwardness experienced by the traditional fishermen compared to other communities who were reaping the benefits of the overall development scenario. Studies detailing the activities and achievements of fisher folk via Self Help Groups (SHGs) and the cooperative movement in coastal Kerala are scant. The SHGs, through cooperatives, have been effective in livelihood security, poverty alleviation and inclusive development of the fisher folk (Rajasenan and Rajeev, 2012). The SHGs have a greater role to play given the estimated fall in demand for marine products in international markets, which may reduce employment opportunities in fish processing, peeling, etc. Also, technological advancement has left them unskilled for work in this sector, making them outliers in the overall development process and resulting in poor quality of physical and social infrastructure. Hence, it is all the more important to derive a strategy and best-practice methods for the effective functioning of these SHGs so that the…
Abstract:
In connection with the (revived) demand for considering applications in the teaching of mathematics, various schemata or lists of criteria have been developed since the end of the sixties, which set up requirements about closeness to the real world or about the type of mathematics being used, and which have made it possible to analyze the available applications in their light. After having stated the problem (in section 1), we present (in section 2) a sketch of some of the best known of these and of some earlier schemata, although we are not aiming for a complete picture. Then (in section 3) we distinguish among different dimensions in the analysis of applications. With this as a basis, we develop (in section 4) our own suggestion for categorizing types of applications and conceptions for an application-oriented mathematics instruction. Then (in section 5) we illustrate our schemata by some examples of performed evaluations. Finally (in section 6), we present some preliminary first results of the analysis of teaching conceptions.
Abstract:
The increasing interconnection of information and communication systems leads to a further rise in complexity and thus also to a further increase in security vulnerabilities. Classical protection mechanisms such as firewall systems and anti-malware solutions have long ceased to provide protection against intrusions into IT infrastructures. Intrusion Detection Systems (IDS) have established themselves as a very effective instrument for protection against cyber attacks. Such systems collect and analyse information from network components and hosts in order to detect unusual behaviour and security violations automatically. While signature-based approaches can only detect already known attack patterns, anomaly-based IDS are also able to recognize new, previously unknown attacks (zero-day attacks) at an early stage. The core problem of intrusion detection systems, however, lies in the optimal processing of the enormous volumes of network data and in the development of an adaptive detection model that works in real time. To address these challenges, this dissertation provides a framework consisting of two main parts. The first part, called OptiFilter, uses a dynamic queuing concept to process the continuously arriving network data, builds network connections on an ongoing basis, and exports structured input data for the IDS. The second part is an adaptive classifier comprising a classifier model based on the Enhanced Growing Hierarchical Self Organizing Map (EGHSOM), a normal network behaviour model (NNB) and an update model. Within OptiFilter, tcpdump and SNMP traps are used to continuously aggregate network packets and host events; these aggregated network packets and host events are further analysed and converted into connection vectors. To improve the detection rate of the adaptive classifier, the artificial neural network GHSOM is studied intensively and substantially extended. Different approaches are proposed and discussed in this dissertation: a classification-confidence margin threshold is defined to uncover unknown malicious connections, the stability of the growth topology is increased by novel approaches for initializing the weight vectors and by strengthening the winner neurons, and a self-adaptive procedure is introduced to keep the model continuously up to date. In addition, the main task of the NNB model is to further examine the unknown connections detected by the EGHSOM and to check whether they are normal. However, network traffic changes constantly owing to the concept drift phenomenon, which produces non-stationary network data in real time; this phenomenon is handled by the update model. The EGHSOM model detects new anomalies effectively, and the NNB model adapts optimally to changes in the network data. In the experimental evaluation the framework showed promising results. In the first experiment the framework was evaluated in offline mode: OptiFilter was assessed with offline, synthetic and realistic data, and the adaptive classifier was evaluated with 10-fold cross validation to estimate its accuracy.
In the second experiment the framework was installed on a 1 to 10 GB network link and evaluated online in real time. OptiFilter successfully converted the enormous volume of network data into structured connection vectors, and the adaptive classifier classified them precisely. A comparative study between the developed framework and other well-known IDS approaches shows that the proposed IDS framework outperforms all of them, which can be traced back to the following key points: the processing of the collected network data, the achievement of the best performance (such as overall accuracy), the detection of unknown connections, and the development of a real-time intrusion detection model.
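The classification-confidence margin can be illustrated with a minimal best-matching-unit test in Python. This is a simplified stand-in for the EGHSOM mechanism; the weight matrix, unit labels and margin value are all assumed:

```python
import numpy as np

def classify_connection(som_weights, unit_labels, x, margin):
    """som_weights: (n_units, n_features) trained map; x: one connection vector.
    Flag x as unknown when even its best-matching unit is farther than the margin."""
    d = np.linalg.norm(som_weights - x, axis=1)   # distance to every map unit
    bmu = int(d.argmin())                          # best-matching unit
    if d[bmu] > margin:
        return "unknown"                           # hand over to the NNB model
    return unit_labels[bmu]                        # e.g. "normal" or an attack class
```

Connections rejected by the margin are exactly the ones the NNB model then re-examines, which is the division of labour described above.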
Abstract:
Fueled by ever-growing genomic information and rapid developments in proteomics (the large-scale analysis of proteins), mapping the functional roles of proteins has become one of the most important disciplines for characterizing complex cell function. For building functional linkages between biomolecules, and for providing insight into the mechanisms of biological processes, the last decade witnessed the exploration of combinatorial and chip technology for the detection of biomolecules in a high-throughput and spatially addressable fashion. Among the various techniques developed, progress in protein chip technology has been rapid. Recently we demonstrated a new platform called the "Spatially Addressable Protein Array" (SAPA) to profile ligand-receptor interactions. To optimize the platform, the present study investigated various parameters, such as the surface chemistry and the role of additives, for achieving high-density, high-throughput detection with minimal nonspecific protein adsorption. In summary, the present poster will address some of the critical challenges in protein microarray technology and the process of fine-tuning it to achieve the optimum system for solving real biological problems.
Abstract:
One of the tantalising remaining problems in compositional data analysis lies in how to deal with data sets in which there are components which are essential zeros. By an essential zero we mean a component which is truly zero, not something recorded as zero simply because the experimental design or the measuring instrument has not been sufficiently sensitive to detect a trace of the part. Such essential zeros occur in many compositional situations, such as household budget patterns, time budgets, palaeontological zonation studies, and ecological abundance studies. Devices such as nonzero replacement and amalgamation are almost invariably ad hoc and unsuccessful in such situations. From consideration of such examples it seems sensible to build up a model in two stages, the first determining where the zeros will occur and the second how the unit available is distributed among the non-zero parts. In this paper we suggest two such models: an independent binomial conditional logistic normal model and a hierarchical dependent binomial conditional logistic normal model. The compositional data in such modelling consist of an incidence matrix and a conditional compositional matrix. Interesting statistical problems arise, such as the question of estimability of parameters, the nature of the computational process for the estimation of both the incidence and compositional parameters caused by the complexity of the subcompositional structure, the formation of meaningful hypotheses, and the devising of suitable testing methodology within a lattice of such essential zero-compositional hypotheses. The methodology is illustrated by application to both simulated and real compositional data.
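Under the notation suggested above (D parts, with an incidence vector z marking which parts are non-zero), the two-stage structure can be sketched as follows; the independence assumption shown corresponds to the first, simpler of the two models:

```latex
% Stage 1 -- incidence: which of the D parts are non-zero (independent binomial case):
\Pr(Z = z) \;=\; \prod_{d=1}^{D} p_d^{\,z_d} (1 - p_d)^{1 - z_d},
\qquad z_d \in \{0, 1\}

% Stage 2 -- composition: the unit is shared among the k non-zero parts
% x_1, \dots, x_k via a logistic-normal law on the additive logratio transform:
y \;=\; \operatorname{alr}(x)
  \;=\; \Bigl( \log\tfrac{x_1}{x_k}, \dots, \log\tfrac{x_{k-1}}{x_k} \Bigr)
\;\sim\; \mathcal{N}_{k-1}\bigl( \mu_z, \Sigma_z \bigr)
```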
Abstract:
One of the disadvantages of old age is that there is more past than future: this, however, may be turned into an advantage if the wealth of experience and, hopefully, wisdom gained in the past can be reflected upon and throw some light on possible future trends. To an extent, then, this talk is necessarily personal, certainly nostalgic, but also self-critical and inquisitive about our understanding of the discipline of statistics. A number of almost philosophical themes will run through the talk: the search for appropriate modelling in relation to the real problem envisaged, emphasis on a sensible balance between simplicity and complexity, the relative roles of theory and practice, the nature of communication of inferential ideas to the statistical layman, and the inter-related roles of teaching, consultation and research. A list of keywords might be: identification of the sample space and its mathematical structure, choices between transform and stay, the role of parametric modelling, the role of a sample space metric, the underused hypothesis lattice, and the nature of compositional change, particularly in relation to the modelling of processes. While the main theme will be relevance to compositional data analysis, we shall point to substantial implications for general multivariate analysis arising from experience of the development of compositional data analysis…