795 results for Inquiry based teaching of mathematics


Relevance:

100.00%

Publisher:

Abstract:

The wealth of information freely available on the web and in medical image databases poses a major problem for end users: how to find the information needed? Content-Based Image Retrieval (CBIR) is the obvious solution. The MPEG-7 standard evolved to address the interoperability issues of content-based search. The work presented in this thesis concentrates on developing new shape descriptors and a framework for content-based retrieval of scoliosis images. New region-based and contour-based shape descriptors are developed based on orthogonal Legendre polynomials, and a novel system for indexing and retrieval of digital spine radiographs with scoliosis is presented.
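As an illustration only (the abstract does not give the exact formulation), a minimal sketch of region-based Legendre moments, a common starting point for descriptors built on orthogonal Legendre polynomials, might look as follows; the normalization and the pixel-grid mapping are assumptions, not the thesis's method.

```python
import numpy as np
from numpy.polynomial.legendre import Legendre

def legendre_moments(image, order=4):
    """Region-based Legendre moments of a 2-D image whose pixel grid is
    mapped to [-1, 1] x [-1, 1], where the Legendre polynomials are orthogonal."""
    h, w = image.shape
    y = np.linspace(-1.0, 1.0, h)
    x = np.linspace(-1.0, 1.0, w)
    moments = np.zeros((order + 1, order + 1))
    for p in range(order + 1):
        Pp = Legendre.basis(p)(y)                      # P_p on the row coordinates
        for q in range(order + 1):
            Pq = Legendre.basis(q)(x)                  # P_q on the column coordinates
            norm = (2 * p + 1) * (2 * q + 1) / 4.0
            # discrete approximation of the double integral over the image
            moments[p, q] = norm * np.sum(np.outer(Pp, Pq) * image) * (2.0 / h) * (2.0 / w)
    return moments

# toy example: a filled square as the segmented region of interest
img = np.zeros((64, 64))
img[16:48, 16:48] = 1.0
descriptor = legendre_moments(img, order=4).ravel()   # shape-descriptor vector
print(descriptor[:5])
```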

Relevance:

100.00%

Publisher:

Abstract:

Timely detection of sudden changes in dynamics that adversely affect the performance of systems and the quality of products has great scientific relevance. This work focuses on the effective detection of dynamical changes in real-time signals from mechanical as well as biological systems using the fast and robust technique of permutation entropy (PE). The results are used in detecting chatter onset in machine turning and in identifying vocal disorders from speech signals. Permutation entropy is a nonlinear complexity measure which can efficiently distinguish the regular and complex nature of any signal and extract information about a change in the dynamics of the process by showing a sudden change in its value. Here we propose the use of PE to detect dynamical changes in two nonlinear processes: turning, a mechanical system, and speech, a biological system. The effectiveness of PE in detecting the change in dynamics in the turning process is studied from time series generated from samples of audio and current signals. Experiments are carried out on a lathe for a sudden increase in depth of cut and for a continuous increase in depth of cut on mild steel workpieces, keeping the speed and feed rate constant. The results are applied to detect chatter onset in machining and are verified using frequency spectra of the signals and the nonlinear measure of normalized coarse-grained information rate (NCIR). PE analysis is also carried out to investigate the variation in surface texture caused by chatter on the machined workpiece. A statistical parameter from the optical grey-level intensity histogram of the laser speckle pattern, recorded using a charge-coupled device (CCD) camera, is used to generate the time series required for this analysis, and a standard optical roughness parameter is used to confirm the results. The application of PE in identifying vocal disorders is studied from speech signals recorded using a microphone. Here the analysis is carried out on speech signals of subjects with different pathological conditions and of normal subjects, and the results are used to identify vocal disorders; the standard linear technique of the FFT is used to substantiate the results. The results of PE analysis in all three cases clearly indicate that this complexity measure is sensitive to changes in the regularity of a signal and can therefore be used for detecting dynamical changes in real-world systems. This work establishes the application of the simple, inexpensive and fast PE algorithm for the benefit of advanced manufacturing processes as well as clinical diagnosis of vocal disorders.
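Since the abstract names the algorithm but not an implementation, here is a minimal, generic Bandt-Pompe permutation entropy sketch (not the thesis code); in a chatter- or voice-monitoring setting one would compute it over sliding windows and watch for a sudden jump in the value.

```python
import numpy as np
from math import factorial

def permutation_entropy(signal, order=3, delay=1, normalize=True):
    """Bandt-Pompe permutation entropy of a 1-D signal."""
    x = np.asarray(signal, dtype=float)
    n = len(x) - (order - 1) * delay
    counts = {}
    for i in range(n):
        window = x[i:i + order * delay:delay]
        pattern = tuple(np.argsort(window))          # ordinal pattern of the window
        counts[pattern] = counts.get(pattern, 0) + 1
    p = np.array(list(counts.values()), dtype=float) / n
    pe = -np.sum(p * np.log2(p))
    if normalize:
        pe /= np.log2(factorial(order))              # scale to [0, 1]
    return pe

# a regular signal yields a low value, a noisy one a value near 1
t = np.linspace(0, 10 * np.pi, 2000)
print(permutation_entropy(np.sin(t)))                # low: few ordinal patterns occur
print(permutation_entropy(np.random.randn(2000)))    # high: patterns roughly equally likely
```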

Relevance:

100.00%

Publisher:

Abstract:

Reliability analysis is a well-established branch of statistics that deals with the statistical study of different aspects of the lifetimes of a system of components. As pointed out earlier, the major part of the theory and applications of reliability analysis has been developed in terms of measures based on the distribution function. In the opening chapters of the thesis, we describe some attractive features of quantile functions and the relevance of their use in reliability analysis. Motivated by the works of Parzen (1979), Freimer et al. (1988) and Gilchrist (2000), who indicated the scope of quantile functions in reliability analysis, and as a follow-up to the systematic study in this connection by Nair and Sankaran (2009), in the present work we try to extend their ideas and develop the necessary theoretical framework for lifetime data analysis. In Chapter 1 we give the relevance and scope of the study and a brief outline of the work carried out. Chapter 2 is devoted to the presentation of various concepts and brief reviews that are useful for the discussions in the subsequent chapters. In the introduction to Chapter 4 we point out the role of ageing concepts in reliability analysis and in identifying life distributions. In Chapter 6 we study the first two L-moments of residual life and their relevance in various applications of reliability analysis; we show that the first L-moment of residual life is equivalent to the vitality function, which has been widely discussed in the literature. In Chapter 7 we define the percentile residual life in reversed time (RPRL) and derive its relationship with the reversed hazard rate (RHR); we discuss the characterization problem of RPRL and demonstrate with an example that the RPRL at a given percentile does not determine the distribution uniquely.
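For readers unfamiliar with the quantile-based framework, the following are the standard definitions that the chapter summaries rely on (Hosking-style L-moments expressed through the quantile function); the notation here is generic and may differ from the thesis.

```latex
Q(u) \;=\; F^{-1}(u), \qquad
L_1 \;=\; \int_0^1 Q(u)\,du, \qquad
L_2 \;=\; \int_0^1 Q(u)\,(2u-1)\,du .

% Residual life beyond the age t_p = Q(p) has quantile function
Q_p(u) \;=\; Q\bigl(p + (1-p)\,u\bigr) - Q(p),

% so its first L-moment is the mean residual life; adding back t_p
% recovers the vitality function E[X | X > t_p].
\int_0^1 Q_p(u)\,du \;=\; E\bigl[\,X - Q(p) \mid X > Q(p)\,\bigr].
```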

Relevance:

100.00%

Publisher:

Abstract:

Image processing has been a challenging and multidisciplinary research area for decades, with continuing improvements in its various branches, especially medical imaging. The healthcare industry has benefited greatly from advances in image processing techniques for the efficient management of large volumes of clinical data. The popularity and growth of the field attract researchers from many disciplines, including computer science and medical science, owing to its applicability to the real world; at the same time, computer science is becoming an important driving force for the further development of the medical sciences. The objective of this study is to use the basic concepts of medical image processing to develop methods and tools for clinicians' assistance. The work is motivated by clinical applications of digital mammograms and placental sonograms, and uses real medical images to propose a method intended to assist radiologists in the diagnostic process. The study covers two domains of pattern recognition: classification and content-based retrieval. Mammogram images of breast cancer patients and placental images are used. Cancer is devastating to the human race, and the accuracy of characterizing images using simplified, user-friendly computer-aided diagnosis techniques helps radiologists detect cancers at an early stage; breast cancer, which accounts for the major share of cancer deaths in women, can be fully cured if detected early. Studies relating to placental characteristics and abnormalities are important in foetal monitoring, and the diagnostic variability in sonographic examination of the placenta can be reduced by detailed placental texture analysis focused on placental grading. The work therefore aims at early breast cancer detection and placental maturity analysis. This dissertation is a stepping stone in combining various application domains of healthcare and technology.

Relevance:

100.00%

Publisher:

Abstract:

Routine activity theory, introduced by Cohen and Felson in 1979, states that criminal acts occur when criminals and victims converge in time and place in the absence of guardians. As the number of such convergences increases, criminal acts will also increase, even if the number of criminals or civilians within the vicinity of a city remains the same. Street robbery is a typical example, and its occurrence can be predicted using routine activity theory. Agent-based models allow the simulation of diversity among individuals, so agent-based simulation of street robbery can be used to visualize how the chronological aspects of human activity influence its incidence. The conceptual model identifies three classes of people: criminals, civilians and police, each with certain activity areas. Police exist only as agents of formal guardianship. Criminals with a tendency for crime search for victims. Civilians without criminal tendency can be either victims or guardians. In addition to criminal tendency, each civilian in the model has a unique set of characteristics such as wealth, employment status and ability for guardianship. These agents are subjected to a random walk through a street environment guided by a Q-learning module, and the possible outcomes are analyzed.
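The abstract sketches the model but not its code. The toy simulation below (all parameters invented, and a plain random walk standing in for the Q-learning movement module) only illustrates the routine-activity condition that a robbery requires an offender and an unguarded victim to meet in the same place.

```python
import random
from collections import Counter

GRID = 20          # toy grid standing in for the street environment
STEPS = 500

def step(pos):
    """One random-walk move on the grid (placeholder for the Q-learning policy)."""
    x, y = pos
    dx, dy = random.choice([(1, 0), (-1, 0), (0, 1), (0, -1), (0, 0)])
    return (min(max(x + dx, 0), GRID - 1), min(max(y + dy, 0), GRID - 1))

def simulate(n_criminals=5, n_civilians=50, n_police=5, guardian_share=0.3):
    rand_pos = lambda: (random.randrange(GRID), random.randrange(GRID))
    criminals = [rand_pos() for _ in range(n_criminals)]
    civilians = [(rand_pos(), random.random() < guardian_share) for _ in range(n_civilians)]
    police = [rand_pos() for _ in range(n_police)]
    robberies = 0
    for _ in range(STEPS):
        criminals = [step(p) for p in criminals]
        civilians = [(step(p), g) for p, g in civilians]
        police = [step(p) for p in police]
        guarded = set(police) | {p for p, g in civilians if g}
        targets = Counter(p for p, g in civilians if not g)
        for c in criminals:
            # routine activity theory: offender + victim converge, no guardian present
            if targets[c] > 0 and c not in guarded:
                robberies += 1
    return robberies

print(simulate())
```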

Relevance:

100.00%

Publisher:

Abstract:

Over-sampling sigma-delta analogue-to-digital converters (ADCs) are one of the key building blocks of state-of-the-art wireless transceivers. In sigma-delta modulator design the scaling coefficients determine the overall signal-to-noise ratio, so selecting the optimum coefficient values is very important. To this end, this paper addresses the design of a fourth-order multi-bit sigma-delta modulator for a Wireless Local Area Network (WLAN) receiver with a feed-forward path, in which the optimum coefficients are selected using a genetic algorithm (GA)-based search method. In particular, the proposed converter makes use of a low-distortion swing-suppression SDM architecture, which is highly suitable for low oversampling ratios and attains high linearity over a wide bandwidth. The focus of this paper is the identification of the coefficients best suited to the proposed topology, together with the optimization of a set of system parameters, in order to achieve the desired signal-to-noise ratio. The GA-based search engine is a stochastic search method that can find the optimum solution within the given constraints.
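As a hedged illustration of the coefficient search, the sketch below runs a small genetic algorithm over a coefficient vector. The fitness function here is a placeholder; the actual design flow would evaluate the simulated signal-to-noise ratio of the fourth-order modulator for each candidate coefficient set.

```python
import numpy as np

rng = np.random.default_rng(0)
N_COEFF = 5            # number of loop-filter scaling coefficients (illustrative)
POP, GENS = 40, 60

def fitness(coeffs):
    """Placeholder objective.  The real design flow would run a behavioural
    simulation of the fourth-order modulator and return the simulated
    signal-to-noise ratio over the WLAN signal band."""
    target = np.array([0.25, 0.5, 0.4, 0.3, 0.8])   # hypothetical optimum
    return -np.sum((coeffs - target) ** 2)

def ga_search():
    pop = rng.uniform(0.0, 1.0, size=(POP, N_COEFF))
    for _ in range(GENS):
        scores = np.array([fitness(ind) for ind in pop])
        parents = pop[np.argsort(scores)[::-1][: POP // 2]]   # keep the fitter half
        children = []
        while len(children) < POP - len(parents):
            a, b = parents[rng.integers(len(parents), size=2)]
            cut = rng.integers(1, N_COEFF)                     # single-point crossover
            children.append(np.concatenate([a[:cut], b[cut:]]))
        pop = np.vstack([parents, np.array(children)])
        pop = np.clip(pop + rng.normal(0.0, 0.02, pop.shape), 0.0, 1.0)  # mutation
    return pop[np.argmax([fitness(ind) for ind in pop])]

print(ga_search())
```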

Relevance:

100.00%

Publisher:

Abstract:

Cancer treatment is most effective when the disease is detected early, and progress in treatment is closely related to the ability to reduce the proportion of misses in the cancer detection task. The effectiveness of algorithms for detecting cancers can be greatly increased if they work synergistically with algorithms for characterizing normal mammograms. This research work combines computerized image analysis techniques and neural networks to separate out some fraction of the normal mammograms with extremely high reliability, based on normal tissue identification and removal. The presence of clustered microcalcifications is one of the most important, and sometimes the only, sign of cancer on a mammogram; 60% to 70% of non-palpable breast carcinomas demonstrate microcalcifications on mammograms [44], [45], [46]. Wavelet transform (WT)-based techniques are applied to the remaining mammograms, those not classified as normal, to detect possible microcalcifications. The goal of this work is to improve the detection performance and throughput of screening mammography, thus providing a 'second opinion' to radiologists. The state-of-the-art DWT computation algorithms are not suitable for practical applications with memory and delay constraints, since the DWT is not a block transform. Hence, this work also takes up the development of a Block DWT (BDWT) computational structure with a low processing memory requirement.
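The thesis's own detection pipeline and BDWT structure are not reproduced here. Purely as an illustration of WT-based candidate detection, a sketch using PyWavelets (an assumed, generic choice of library and wavelet) could look like this: suppress the coarse approximation, keep only strong detail coefficients, and threshold the reconstruction.

```python
import numpy as np
import pywt   # PyWavelets

def microcalc_candidates(image, wavelet="db4", level=2, k=3.0):
    """Very simplified wavelet-based candidate detector: zero the coarse
    approximation, keep only strong detail coefficients, reconstruct, and
    treat bright residual spots as candidate microcalcifications."""
    coeffs = pywt.wavedec2(image, wavelet, level=level)
    approx, details = coeffs[0], coeffs[1:]
    new_coeffs = [np.zeros_like(approx)]
    for (cH, cV, cD) in details:
        kept = []
        for c in (cH, cV, cD):
            thr = k * np.std(c)
            kept.append(np.where(np.abs(c) > thr, c, 0.0))
        new_coeffs.append(tuple(kept))
    detail_img = pywt.waverec2(new_coeffs, wavelet)
    return detail_img > k * np.std(detail_img)

# toy "mammogram": smooth background with a few bright specks
img = np.random.normal(100, 2, (128, 128))
for (r, c) in [(30, 40), (31, 41), (90, 70)]:
    img[r, c] += 40
mask = microcalc_candidates(img)
print(mask.sum(), "candidate pixels")
```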

Relevance:

100.00%

Publisher:

Abstract:

Dept. of Statistics, CUSAT

Relevance:

100.00%

Publisher:

Abstract:

With the help of context prediction, services within a ubiquitous environment can, for example, be proactively adapted to the needs of their users; for this reason, context prediction has significant importance within ubiquitous computing. To the best of our knowledge, current approaches to context prediction rely exclusively on the context history of the user whose contexts are to be predicted. If a user unexpectedly changes his habitual behaviour, his context history contains no suitable information to guarantee a reliable prediction, so approaches that use only that user's own history may fail. To close this gap of missing context information in the user's history, we introduce the approach of collaborative context prediction (CCP). CCP exploits the direct and indirect relations that may exist between the context histories of different users. It is based on higher-order singular value decomposition (HOSVD), which has already been applied successfully in recommender systems. To assess the prediction accuracy of the CCP approach, it is evaluated in three different experiments, and the achieved accuracies are compared with those of three well-known context prediction approaches: the Alignment approach, the StatePredictor and the ActiveLeZi predictor. In all three experiments collaborative data sets serve as the evaluation basis. Subsequently, the CCP approach is applied to a real collaborative use case, the proactive protection of pedestrians: using collaborative context prediction, pedestrians who are potentially at risk of colliding with an approaching car are detected early. Real movement contexts of the pedestrians, collected with smartphones carried in their trouser pockets, provide the collaborative data basis. Because context prediction approaches primarily use personal contexts such as location data or behaviour patterns as their data basis, legal evaluation criteria are derived from the user's right to informational self-determination. Based on these criteria, the CCP approach and other well-known context prediction approaches are examined with respect to their legal compatibility, and the evaluation results show the legal compatibility of the examined approaches with respect to the user's right to informational self-determination. Finally, the dissertation presents an approach for the distributed and collaborative prediction of contexts, which shows a way to counteract the legal problems identified for context prediction in general and for collaborative context prediction in particular.
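The dissertation's CCP algorithm itself is not given in the abstract. The sketch below only illustrates the underlying idea of a truncated higher-order SVD applied to a hypothetical user x context x time tensor, so that a user's missing history can be inferred from the other users' histories; ranks, tensor layout and scoring are assumptions.

```python
import numpy as np

def unfold(T, mode):
    """Mode-n unfolding of a 3-way tensor into a matrix."""
    return np.moveaxis(T, mode, 0).reshape(T.shape[mode], -1)

def mode_mult(T, M, mode):
    """Multiply tensor T by matrix M along the given mode."""
    return np.moveaxis(np.tensordot(M, T, axes=(1, mode)), 0, mode)

def hosvd_approx(T, ranks):
    """Truncated higher-order SVD: project onto the leading singular vectors
    of each mode and reconstruct a low-rank approximation of the tensor."""
    Us = []
    for mode, r in enumerate(ranks):
        U, _, _ = np.linalg.svd(unfold(T, mode), full_matrices=False)
        Us.append(U[:, :r])
    core = T
    for mode, U in enumerate(Us):
        core = mode_mult(core, U.T, mode)
    approx = core
    for mode, U in enumerate(Us):
        approx = mode_mult(approx, U, mode)
    return approx

# toy tensor: users x contexts x time slots, with unobserved entries left at 0
rng = np.random.default_rng(1)
T = rng.random((6, 5, 8))
T[2, :, 5:] = 0.0                      # user 2 has no recent history
scores = hosvd_approx(T, ranks=(3, 3, 4))
print(scores[2, :, 5])                 # collaboratively inferred context scores
```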

Relevance:

100.00%

Publisher:

Abstract:

The measurement of feed intake, feeding time and rumination time, summarized by the term feeding behavior, provides helpful indicators for the early recognition of animals that show deviations in their behavior. The overall objective of this work was the development of an early warning system for inadequate feeding rations and for digestive and metabolic disorders, the prevention of which constitutes the basis for health, performance and reproduction. In a literature review, the current state of the art and the suitability of different measurement tools for determining the feeding behavior of ruminants were discussed. Five measurement methods based on different methodological approaches (visual observation, pressure transducers, electrical switches, electrical deformation sensors and acoustic biotelemetry) and three selected measurement systems (the IGER Behavior Recorder, the Hi-Tag rumination monitoring system and the RumiWatchSystem) were described, assessed and compared within this review. In the second study, a new system for measuring the feeding behavior of dairy cows was evaluated; it measures feeding behavior by electromyography (EMG). For validation, the feeding behavior of 14 cows was determined both by the EMG system and by visual observation, and the high correlation coefficients indicate that the system is a reliable and suitable tool for monitoring the feeding behavior of dairy cows. The aim of a further study was to compare the DairyCheck (DC) system and two additional systems for measuring rumination behavior, the Lely Qwes HR (HR) sensor and the RumiWatchSystem (RW), in terms of efficiency, reliability and reproducibility; the agreement between RW and DC was high. The last study examined whether rumination time (RT) is affected by the onset of calving and whether it might be a useful indicator for predicting imminent birth. The data analysis covered the final 72 hours before the onset of calving, divided into twelve 6-hour blocks, and the results showed that RT was significantly reduced in the final 6 hours before birth.
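As a small, purely illustrative example of the kind of validation described (the numbers below are invented, not the study's data), agreement between the EMG-based system and visual observation could be quantified like this.

```python
import numpy as np

# Hypothetical per-cow daily feeding times (minutes): visual observation
# versus the EMG-based system.  Values are invented for illustration only.
observed = np.array([231, 198, 256, 240, 210, 225, 199, 244, 263, 218, 237, 205, 251, 229])
emg      = np.array([228, 205, 249, 245, 207, 230, 195, 240, 259, 222, 233, 209, 247, 226])

r = np.corrcoef(observed, emg)[0, 1]    # Pearson correlation coefficient
bias = np.mean(emg - observed)          # mean difference (Bland-Altman style)
print(f"r = {r:.3f}, mean bias = {bias:.1f} min")
```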

Relevance:

100.00%

Publisher:

Abstract:

This thesis describes a methodology, a representation, and an implemented program for troubleshooting digital circuit boards at roughly the level of expertise one might expect in a human novice. Existing methods for model-based troubleshooting have not scaled up to deal with complex circuits, in part because traditional circuit models do not explicitly represent aspects of the device that troubleshooters would consider important. For complex devices the model of the target device should be constructed with the goal of troubleshooting explicitly in mind. Given that methodology, the principal contributions of the thesis are ways of representing complex circuits to help make troubleshooting feasible. Temporally coarse behavior descriptions are a particularly powerful simplification. Instantiating this idea for the circuit domain produces a vocabulary for describing digital signals. The vocabulary has a level of temporal detail sufficient to make useful predictions about the response of the circuit while it remains coarse enough to make those predictions computationally tractable. Other contributions are principles for using these representations. Although not embodied in a program, these principles are sufficiently concrete that models can be constructed manually from existing circuit descriptions such as schematics, part specifications, and state diagrams. One such principle is that if there are components with particularly likely failure modes or failure modes in which their behavior is drastically simplified, this knowledge should be incorporated into the model. Further contributions include the solution of technical problems resulting from the use of explicit temporal representations and design descriptions with tangled hierarchies.
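The thesis's representations are far richer than this, but a toy sketch at the gate level (a hypothetical three-gate circuit, not an example from the thesis) shows the generic predict-and-compare style of model-based troubleshooting that the abstract builds on: simulate the model, compare with observations, and keep as suspects the components whose misbehaviour alone could explain the discrepancy.

```python
# A toy combinational circuit: component -> (gate function, input wires, output wire).
CIRCUIT = {
    "A": (lambda a, b: a & b, ("in1", "in2"), "x"),
    "B": (lambda a, b: a | b, ("in2", "in3"), "y"),
    "C": (lambda a, b: a ^ b, ("x", "y"), "out"),
}

def simulate(inputs, broken=None, forced_value=0):
    """Evaluate the circuit; a 'broken' component has its output forced."""
    values = dict(inputs)
    for name, (gate, ins, out) in CIRCUIT.items():   # dict order is topological here
        values[out] = forced_value if name == broken else gate(*(values[w] for w in ins))
    return values

def single_fault_suspects(inputs, observed_out):
    """A component is a suspect if some misbehaviour of it alone explains the observation."""
    return [name for name in CIRCUIT
            if any(simulate(inputs, broken=name, forced_value=v)["out"] == observed_out
                   for v in (0, 1))]

inputs = {"in1": 1, "in2": 1, "in3": 0}
print(simulate(inputs)["out"])                          # predicted output: 0
print(single_fault_suspects(inputs, observed_out=1))    # components that could explain out = 1
```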

Relevance:

100.00%

Publisher:

Abstract:

We describe a technique for finding pixelwise correspondences between two images by using models of objects of the same class to guide the search. The object models are 'learned' from example images (also called prototypes) of an object class. The models consist of a linear combination of prototypes. The flow fields giving pixelwise correspondences between a base prototype and each of the other prototypes must be given. A novel image of an object of the same class is matched to a model by minimizing an error between the novel image and the current guess for the closest model image. Currently, the algorithm applies to line drawings of objects. An extension to real grey level images is discussed.
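Leaving out the flow-field warping between prototypes, the core matching step, choosing the coefficients of a linear combination of prototypes that minimizes the error to the novel image, can be sketched as a least-squares fit; this is an illustration of the idea, not the paper's algorithm.

```python
import numpy as np

def match_to_model(novel, prototypes):
    """Least-squares fit of the novel image as a linear combination of prototypes."""
    A = np.stack([p.ravel() for p in prototypes], axis=1)   # columns = prototype images
    b = novel.ravel()
    coeffs, *_ = np.linalg.lstsq(A, b, rcond=None)
    reconstruction = (A @ coeffs).reshape(novel.shape)
    error = np.linalg.norm(novel - reconstruction)
    return coeffs, reconstruction, error

# toy example with random "line drawings"
rng = np.random.default_rng(0)
prototypes = [rng.random((16, 16)) for _ in range(4)]
novel = 0.6 * prototypes[0] + 0.4 * prototypes[2]
coeffs, _, err = match_to_model(novel, prototypes)
print(np.round(coeffs, 2), round(err, 6))
```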

Relevance:

100.00%

Publisher:

Abstract:

Integration of inputs by cortical neurons provides the basis for the complex information processing performed in the cerebral cortex. Here, we propose a new analytic framework for understanding integration within cortical neuronal receptive fields. Based on the synaptic organization of cortex, we argue that neuronal integration is a systems-level process better studied in terms of local cortical circuitry than at the level of single neurons, and we present a method for constructing self-contained modules which capture (nonlinear) local circuit interactions. In this framework, receptive field elements naturally have dual (rather than the traditional unitary) influence since they drive both excitatory and inhibitory cortical neurons. This vector-based analysis, in contrast to scalar approaches, greatly simplifies integration by permitting linear summation of inputs from both "classical" and "extraclassical" receptive field regions. We illustrate this by explaining two complex visual cortical phenomena, which are incompatible with scalar notions of neuronal integration.

Relevance:

100.00%

Publisher:

Abstract:

The image comparison operation (assessing how well one image matches another) forms a critical component of many image analysis systems and models of human visual processing. Two norms used commonly for this purpose are L1 and L2, which are specific instances of the Minkowski metric. However, there is often not a principled reason for selecting one norm over the other. One way to address this problem is by examining whether one metric better captures the perceptual notion of image similarity than the other. With this goal, we examined perceptual preferences for images retrieved on the basis of the L1 versus the L2 norm. These images were either small fragments without recognizable content, or larger patterns with recognizable content created via vector quantization. In both conditions the subjects showed a consistent preference for images matched using the L1 metric. These results suggest that, in the domain of natural images of the kind we have used, the L1 metric may better capture human notions of image similarity.
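A minimal sketch of the two retrieval criteria compared in the study, the L1 and L2 norms (Minkowski metrics of order 1 and 2), over a toy patch database; the data and patch sizes here are invented for illustration.

```python
import numpy as np

def retrieve(query, database, norm=1):
    """Index of the database patch closest to the query under the given
    Minkowski norm (1 -> L1, 2 -> L2)."""
    diffs = database - query                              # database: (n, h, w)
    dists = np.sum(np.abs(diffs) ** norm, axis=(1, 2)) ** (1.0 / norm)
    return int(np.argmin(dists))

rng = np.random.default_rng(2)
db = rng.random((100, 8, 8))
query = db[17] + rng.normal(0, 0.05, (8, 8))              # noisy version of patch 17
print(retrieve(query, db, norm=1), retrieve(query, db, norm=2))
```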

Relevance:

100.00%

Publisher:

Abstract:

We present a component-based approach for recognizing objects under large pose changes. From a set of training images of a given object we extract a large number of components which are clustered based on the similarity of their image features and their locations within the object image. The cluster centers form an initial set of component templates from which we select a subset for the final recognizer. In experiments we evaluate different sizes and types of components and three standard techniques for component selection. The component classifiers are finally compared to global classifiers on a database of four objects.
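A hedged sketch of the clustering step described above: extracted patches are grouped by appearance and image location so that cluster centres can serve as candidate component templates. The joint feature construction, the location weighting and the use of scikit-learn's KMeans are assumptions, and the subsequent template-selection step is omitted.

```python
import numpy as np
from sklearn.cluster import KMeans

def component_templates(patches, locations, n_templates=10, location_weight=0.5):
    """Cluster component patches by appearance and image location; the cluster
    centres form an initial set of component templates."""
    appearance = np.stack([p.ravel() for p in patches]).astype(float)
    loc = np.asarray(locations, dtype=float) * location_weight
    features = np.hstack([appearance, loc])            # joint appearance + location feature
    km = KMeans(n_clusters=n_templates, n_init=10, random_state=0).fit(features)
    d = appearance.shape[1]
    return km.cluster_centers_[:, :d]                  # appearance part of each centre

# toy data: random 9x9 patches extracted at random positions in a 100x100 image
rng = np.random.default_rng(3)
patches = [rng.random((9, 9)) for _ in range(200)]
locations = rng.integers(0, 100, size=(200, 2))
templates = component_templates(patches, locations, n_templates=8)
print(templates.shape)        # (8, 81)
```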