918 results for Image of mathematics
Abstract:
In this thesis we have presented several inventory models of utility. Of these, inventory with retrial of unsatisfied demands and inventory with postponed work are quite recently introduced concepts, the latter being introduced here for the first time. Inventory with service time is relatively new, with only a handful of research works reported. The difficulty encountered in inventory with service, unlike the queueing process, is that even the simplest case needs a 2-dimensional process for its description. Only in certain specific cases can we introduce generating functions to solve for the system state distribution. However, numerical procedures can be developed for solving these problems.
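The need for a 2-dimensional state description is easy to illustrate numerically. The sketch below is a toy continuous-time Markov chain with assumed rates, not the thesis's model: the state pairs an inventory level with a queue length, and the stationary distribution is obtained by solving the balance equations directly.

```python
import numpy as np

# Hypothetical toy model: state = (inventory level s in {0,1}, queue length n in {0,1}).
# All rates are illustrative assumptions, not taken from the thesis.
states = [(0, 0), (0, 1), (1, 0), (1, 1)]
idx = {s: i for i, s in enumerate(states)}

lam, mu, beta = 1.0, 2.0, 0.5   # arrival, service, replenishment rates (assumed)

Q = np.zeros((4, 4))            # generator matrix of the chain
for (s, n) in states:
    i = idx[(s, n)]
    if n == 0:                  # a demand arrives and joins the queue
        Q[i, idx[(s, 1)]] += lam
    if n == 1 and s == 1:       # service completes, consuming one item
        Q[i, idx[(0, 0)]] += mu
    if s == 0:                  # replenishment brings one item
        Q[i, idx[(1, n)]] += beta
Q -= np.diag(Q.sum(axis=1))     # diagonal makes every row sum to zero

# Solve pi @ Q = 0 with sum(pi) = 1 by replacing one balance equation.
A = np.vstack([Q.T[:-1], np.ones(4)])
b = np.zeros(4); b[-1] = 1.0
pi = np.linalg.solve(A, b)
print(pi.round(4))              # stationary distribution over the 4 states
```

For larger inventory capacities and queue limits the same linear-algebra step scales up directly, which is the kind of numerical procedure the abstract alludes to.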
Abstract:
This thesis contains a study of the conservation laws of fluid mechanics. These conservation laws, though classical, have been the subject of extensive study over the past many decades.
Abstract:
The thesis is divided into nine chapters, including the introduction. Mainly, we determine ultra L-topologies in the lattice of L-topologies and study their properties. We find some sublattices in the lattice of L-topologies and study their properties. We also study the lattice structure of the set of all L-closure operators on a set X.
Abstract:
Low grade and high grade gliomas are tumors that originate in the glial cells. The main challenge in brain tumor diagnosis is determining whether a tumor is benign or malignant, primary or metastatic, and low or high grade. Based on the patient's MRI alone, a radiologist cannot differentiate a low grade glioma from a high grade glioma, because the two are almost visually similar; autopsy confirms the diagnosis of low grade tumors with high-grade and infiltrative features. In this paper, textural descriptions of grade I and grade III gliomas are extracted using first order statistics and the Gray Level Co-occurrence Matrix (GLCM) method. Textural features are extracted from 16x16 sub-images of the segmented region of interest (ROI). In the proposed method, first order statistical features such as contrast, intensity, entropy, kurtosis and spectral energy, together with the extracted GLCM features, showed promising results. The ranges of these first order statistics and GLCM based features are highly discriminant between grade I and grade III. This study provides statistical textural information on grade I and grade III gliomas, which is very useful for further classification and analysis, and thus assists the radiologist to a great extent.
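GLCM features of the kind the abstract lists are usually computed with library routines such as scikit-image's `graycomatrix`; the minimal hand-rolled sketch below, using an assumed horizontal one-pixel offset and an illustrative 4x4 patch rather than real MRI data, shows what the matrix and a few derived features look like.

```python
import numpy as np

def glcm(img, levels, dx=1, dy=0, symmetric=True, normed=True):
    """Gray-level co-occurrence matrix for a single pixel offset."""
    g = np.zeros((levels, levels))
    h, w = img.shape
    for y in range(h - dy):
        for x in range(w - dx):
            g[img[y, x], img[y + dy, x + dx]] += 1
    if symmetric:
        g = g + g.T          # count each pair in both directions
    if normed:
        g = g / g.sum()      # turn counts into joint probabilities
    return g

def glcm_features(g):
    i, j = np.indices(g.shape)
    return {
        "contrast":    float(((i - j) ** 2 * g).sum()),
        "energy":      float((g ** 2).sum()),   # a.k.a. angular second moment
        "homogeneity": float((g / (1 + np.abs(i - j))).sum()),
    }

# Toy 4x4 patch quantised to 4 gray levels (illustrative data, not MRI).
patch = np.array([[0, 0, 1, 1],
                  [0, 0, 1, 1],
                  [2, 2, 3, 3],
                  [2, 2, 3, 3]])
feats = glcm_features(glcm(patch, levels=4))
print(feats)
```

In the paper's setting the same computation would run over each 16x16 sub-image of the segmented ROI, with the feature ranges then compared across tumor grades.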
Abstract:
In this paper, the effectiveness of a novel method of computer assisted pedicle screw insertion was studied using a hypothesis testing procedure with a sample size of 48. Pattern recognition based on geometric features of markers on the drill was performed on real time optical video obtained from orthogonally placed CCD cameras. The study reveals the exactness of the calculated position of the drill using navigation based on a CT image of the vertebra and real time optical video of the drill. The significance value is 0.424 at the 95% confidence level, which indicates good precision with a standard mean error of only 0.00724. The virtual vision method is less hazardous to both the patient and the surgeon.
Abstract:
One can do research in pointfree topology in two ways. The first is the contravariant way, where research is done in the category Frm but the ultimate objective is to obtain results in Loc. The other is the covariant way, carrying out research in the category Loc itself directly. According to Johnstone [23], "frame theory is lattice theory applied to topology, whereas locale theory is topology itself". The greater part of this thesis is written according to the first view. In this thesis, we make an attempt to study 1. the frame counterparts of maximal compactness, minimal Hausdorffness and reversibility, and 2. the automorphism group of a finite frame and its relation with the subgroups of the permutation group on the generator set of the frame.
Abstract:
Department of Statistics, Cochin University of Science & Technology. Part of this work has been supported by grants from DST and CSIR, Government of India. Department of Mathematics and Statistics, IIT Kanpur.
Abstract:
Mathematicians who make significant contributions towards the development of mathematical science are not getting the recognition they deserve, according to Cusat Vice Chancellor Dr. J. Letha. She was delivering the inaugural address at the International Conference on Semigroups, Algebras and Applications (ICSA 2015), organized by the Dept. of Mathematics, Cochin University of Science and Technology, on Thursday. Mathematics plays an important role in the development of basic science, and the academic community should not delay in accepting and appreciating this, Dr. Letha added. Dr. Godfrey Louis, Dean, Faculty of Science, presided over the inaugural function. Prof. P. G. Romeo, Head, Dept. of Mathematics, Prof. John C. Meakin, University of Nebraska-Lincoln, USA, Prof. A. N. Balchand, Syndicate Member, Prof. K. A. Zakkariya, Syndicate Member, Prof. A. R. Rajan, Emeritus Professor, University of Kerala, and Prof. A. Vijayakumar, Dept. of Mathematics, Cusat, addressed the gathering. Around 50 research papers will be presented at the Conference. Prof. K. S. S. Nambooripad, the internationally famous mathematician with enormous contributions to the field of semigroup theory, who has attained eighty years of age, will be felicitated on the 18th at 5.00 pm during a function presided over by Dr. K. Poulose Jacob, Pro-Vice Chancellor. Dr. Suresh Das, Executive President, KSCSTE, Dr. A. M. Mathai, Director, CMSS and President, Indian Mathematical Society, Dr. P. G. Romeo, Head, Dept. of Mathematics, and Dr. B. Lakshmi, Dept. of Mathematics, will speak on the occasion.
Abstract:
We describe a technique for finding pixelwise correspondences between two images by using models of objects of the same class to guide the search. The object models are 'learned' from example images (also called prototypes) of an object class. The models consist of a linear combination of prototypes. The flow fields giving pixelwise correspondences between a base prototype and each of the other prototypes must be given. A novel image of an object of the same class is matched to a model by minimizing an error between the novel image and the current guess for the closest model image. Currently, the algorithm applies to line drawings of objects. An extension to real grey level images is discussed.
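The "linear combination of prototypes" step can be sketched as an ordinary least-squares fit of the combination coefficients. The example below uses synthetic flattened images as stand-ins for prototypes and deliberately omits the flow-field warping that the full technique requires, so it captures only the linear fitting part.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical setup: 3 prototype "images" of 8x8 pixels, flattened to vectors.
prototypes = rng.random((3, 64))

# A novel image that truly is a linear combination of the prototypes, plus noise.
true_coeffs = np.array([0.5, 0.3, 0.2])
novel = true_coeffs @ prototypes + rng.normal(0, 1e-3, 64)

# Fit coefficients by minimizing the squared error between the novel image
# and the model image (the current guess for the closest combination).
coeffs, *_ = np.linalg.lstsq(prototypes.T, novel, rcond=None)
model_image = coeffs @ prototypes
print(coeffs, np.abs(model_image - novel).max())
```

In the actual technique the error is minimized over warped prototypes, so the fit alternates between coefficient estimation and correspondence refinement rather than being a single closed-form solve.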
Abstract:
The capability of estimating the walking direction of people would be useful in many applications, such as those involving autonomous cars and robots. We introduce an approach for estimating the walking direction of people from images, based on learning the correct classification of a still image using SVMs. We find that the performance of the system can be improved by classifying each image of a walking sequence and combining the outputs of the classifier. Experiments were performed to evaluate our system and estimate the trade-off between the number of images in a walking sequence and performance.
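One simple way to combine the per-frame classifier outputs is a majority vote over the sequence; this is a hedged sketch only, since the abstract does not specify which combination rule the authors used.

```python
from collections import Counter

def combine_frame_predictions(frame_labels):
    """Combine per-frame direction labels from a walking sequence by majority vote.

    frame_labels: list of discrete direction classes (e.g. angles in degrees)
    predicted by a still-image classifier such as an SVM, one per frame.
    """
    counts = Counter(frame_labels)
    label, _ = max(counts.items(), key=lambda kv: kv[1])
    return label

# Hypothetical sequence: individual frames are noisy, the vote is not.
sequence = [45, 45, 90, 45, 45, 0, 45]
print(combine_frame_predictions(sequence))  # majority direction
```

The trade-off the abstract mentions appears here directly: longer sequences give the vote more evidence per decision but fewer decisions per unit time.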
Abstract:
This document is a detailed study of the problem known as Alhazen's Problem. The problem was formulated in the tenth century by the Arab philosopher and mathematician known in the West as Alhacén. The document gives a brief presentation of the philosopher and a brief review of his seminal treatise on optics, Kitab al-Manazir. It then studies carefully the lemmas required to tackle the problem and presents the solutions for the cases of spherical (convex and concave), cylindrical and conical mirrors. A conjecture is also offered that would explain the logic of discovery implicit in the solution Alhacén gave. Both the lemmas and the solutions have been modelled in the dynamic geometry software packages Cabri II-Plus and Cabri 3-D; readers interested in following these models must have the programs mentioned in order to open the files. In general, the presentations consist of three parts: (i) formulation of the problem (the problem is stated concisely); (ii) general scheme of the construction (the essential steps leading to the requested construction and the auxiliary constructions the problem demands are presented), a part that can be followed in the Cabri files; and (iii) proof (a detailed justification of the required construction is given). The Cabri II-Plus files include numbered buttons that can be activated by clicking on them; the numbering corresponds to the numbering in the document. The reader may freely drag the free points, which can be recognised by the mark (º). The remaining points cannot be modified, since they are the result of constructions already carried out according to the protocols recommended in the general scheme.
Abstract:
Drawing on period anxieties about the office worker's susceptibility to romance and sexual harassment, this article explores the representation of secretaries and stenographers in Grant Allan's The Type-Writer Girl (1897) and Bertrand Sinclair's North of Fifty-Three (1914). It examines the pressure to achieve economic independence and personal autonomy through office work, as well as the need to conform to prevailing social ideologies that prescribed a predetermined destiny of marriage and children for women. The article asks whether the working-girl fiction of the period championed the image of the independent, hard-working and emotionally fulfilled woman, or whether office work was read as a natural step in an evolution from girls to mothers. It also questions whether the fictional office was presented as a site of female autonomy and potential, or was seen as a hostile and dangerous space from which the woman should escape as soon as possible to the safety of the home.
Abstract:
This thesis proposes a solution to the problem of estimating the motion of an Unmanned Underwater Vehicle (UUV). Our approach is based on the integration of the incremental measurements provided by a vision system. When the vehicle is close to the underwater terrain, it constructs a visual map (the so-called "mosaic") of the area where the mission takes place while, at the same time, localizing itself on this map, following the Concurrent Mapping and Localization strategy. The proposed methodology to achieve this goal is based on a feature-based mosaicking algorithm. A down-looking camera is attached to the underwater vehicle. As the vehicle moves, a sequence of images of the sea floor is acquired by the camera. For every image of the sequence, a set of characteristic features is detected by means of a corner detector. Then, their correspondences are found in the next image of the sequence. Solving the correspondence problem in an accurate and reliable way is a difficult task in computer vision. We consider different alternatives for solving this problem by introducing a detailed analysis of the textural characteristics of the image. This is done in two phases: first comparing different texture operators individually, and next selecting those that best characterize the point/matching pair and using them together to obtain a more robust characterization. Various alternatives are also studied to merge the information provided by the individual texture operators. Finally, the best approach in terms of robustness and efficiency is proposed. After the correspondences have been solved, for every pair of consecutive images we obtain a list of image features in the first image and their matchings in the next frame. Our aim is now to recover the apparent motion of the camera from these features. Although an accurate texture analysis is devoted to the matching procedure, some false matches (known as outliers) could still appear among the right correspondences.
For this reason, a robust estimation technique is used to estimate the planar transformation (homography) which explains the dominant motion of the image. Next, this homography is used to warp the processed image to the common mosaic frame, constructing a composite image formed by every frame of the sequence. With the aim of estimating the position of the vehicle as the mosaic is being constructed, the 3D motion of the vehicle can be computed from the measurements obtained by a sonar altimeter and the incremental motion computed from the homography. Unfortunately, as the mosaic increases in size, image local alignment errors increase the inaccuracies associated to the position of the vehicle. Occasionally, the trajectory described by the vehicle may cross over itself. In this situation new information is available, and the system can readjust the position estimates. Our proposal consists not only in localizing the vehicle, but also in readjusting the trajectory described by the vehicle when crossover information is obtained. This is achieved by implementing an Augmented State Kalman Filter (ASKF). Kalman filtering appears as an adequate framework to deal with position estimates and their associated covariances. Finally, some experimental results are shown. A laboratory setup has been used to analyze and evaluate the accuracy of the mosaicking system. This setup enables a quantitative measurement of the accumulated errors of the mosaics created in the lab. Then, the results obtained from real sea trials using the URIS underwater vehicle are shown.
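The robust-estimation step can be sketched with a RANSAC loop. To keep the example short it estimates a pure 2-D translation rather than the full homography the thesis uses, on synthetic matches with deliberately injected outliers.

```python
import numpy as np

def ransac_translation(src, dst, iters=200, tol=2.0, seed=0):
    """Robustly estimate a dominant 2-D translation from noisy matches.

    The thesis estimates a full homography; a pure translation is used here
    only to keep the RANSAC loop short. src, dst: (N, 2) matched points.
    """
    rng = np.random.default_rng(seed)
    best_t, best_inliers = None, -1
    for _ in range(iters):
        i = rng.integers(len(src))           # minimal sample: one match
        t = dst[i] - src[i]
        residuals = np.linalg.norm(dst - (src + t), axis=1)
        inliers = residuals < tol
        if inliers.sum() > best_inliers:
            best_inliers = int(inliers.sum())
            best_t = (dst[inliers] - src[inliers]).mean(axis=0)  # refit on inliers
    return best_t, best_inliers

# Synthetic matches: true shift (5, -3), with 20% gross outliers (false matches).
rng = np.random.default_rng(1)
src = rng.random((50, 2)) * 100
dst = src + np.array([5.0, -3.0]) + rng.normal(0, 0.3, (50, 2))
dst[:10] += rng.random((10, 2)) * 40 + 20    # corrupt the first 10 matches
t, n_in = ransac_translation(src, dst)
print(t.round(2), n_in)
```

A homography version follows the same pattern with a 4-point minimal sample and a direct linear transform fit in place of the one-match translation; the outlier rejection logic is unchanged.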
Abstract:
Urban flood inundation models require considerable data for their parameterisation, calibration and validation. TerraSAR-X should be suitable for urban flood detection because of its high resolution in stripmap/spotlight modes. The paper describes ongoing work on a project to assess how well TerraSAR-X can detect flooded regions in urban areas, and how well these can constrain the parameters of an urban flood model. The study uses a TerraSAR-X image of a 1-in-150 year flood near Tewkesbury, UK, in 2007, for which contemporaneous aerial photography exists for validation. The DLR SETES SAR simulator was used in conjunction with LiDAR data to estimate regions of the image in which water would not be visible due to shadow or layover caused by buildings and vegetation. An algorithm for the delineation of flood water in urban areas is described, together with its validation using the aerial photographs.
Abstract:
A traditional method of validating the performance of a flood model when remotely sensed data of the flood extent are available is to compare the predicted flood extent to that observed. The performance measure employed often uses areal pattern-matching to assess the degree to which the two extents overlap. Recently, remote sensing of flood extents using synthetic aperture radar (SAR) and airborne scanning laser altimetry (LIDAR) has made more straightforward the synoptic measurement of water surface elevations along flood waterlines, and this has emphasised the possibility of using alternative performance measures based on height. This paper considers the advantages that can accrue from using a performance measure based on waterline elevations rather than one based on areal patterns of wet and dry pixels. The two measures were compared for their ability to estimate flood inundation uncertainty maps from a set of model runs carried out to span the acceptable model parameter range in a GLUE-based analysis. A 1 in 5-year flood on the Thames in 1992 was used as a test event. As is typical for UK floods, only a single SAR image of observed flood extent was available for model calibration and validation. A simple implementation of a two-dimensional flood model (LISFLOOD-FP) was used to generate model flood extents for comparison with that observed. The performance measure based on height differences of corresponding points along the observed and modelled waterlines was found to be significantly more sensitive to the channel friction parameter than the measure based on areal patterns of flood extent. The former was able to restrict the parameter range of acceptable model runs and hence reduce the number of runs necessary to generate an inundation uncertainty map. A result of this was that there was less uncertainty in the final flood risk map. The uncertainty analysis included the effects of uncertainties in the observed flood extent as well as in model parameters. 
The height-based measure was found to be more sensitive when increased heighting accuracy was achieved by requiring that observed waterline heights varied slowly along the reach. The technique allows for the decomposition of the reach into sections, with different effective channel friction parameters used in different sections, which in this case resulted in lower r.m.s. height differences between observed and modelled waterlines than those achieved by runs using a single friction parameter for the whole reach. However, a validation of the modelled inundation uncertainty using the calibration event showed a significant difference between the uncertainty map and the observed flood extent. While this was true for both measures, the difference was especially significant for the height-based one. This is likely to be due to the conceptually simple flood inundation model and the coarse application resolution employed in this case. The increased sensitivity of the height-based measure may lead to an increased onus being placed on the model developer in the production of a valid model.
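The two kinds of performance measure can be sketched directly. The functions below use a wet/dry overlap ratio for the areal measure (a common choice; the paper's exact formulation may differ) and the r.m.s. difference of corresponding waterline elevations for the height-based one, on made-up values rather than data from the Thames event.

```python
import numpy as np

def areal_fit(pred_wet, obs_wet):
    """Areal pattern measure: |A intersect B| / |A union B| over wet pixels."""
    inter = np.logical_and(pred_wet, obs_wet).sum()
    union = np.logical_or(pred_wet, obs_wet).sum()
    return inter / union

def waterline_rmse(pred_z, obs_z):
    """Height-based measure: r.m.s. difference between corresponding
    waterline elevations (metres)."""
    d = np.asarray(pred_z) - np.asarray(obs_z)
    return float(np.sqrt((d ** 2).mean()))

# Illustrative values only, not data from the 1992 Thames event.
pred = np.array([[1, 1, 0], [1, 1, 0], [0, 0, 0]], bool)   # modelled wet pixels
obs  = np.array([[1, 1, 1], [1, 1, 0], [0, 0, 0]], bool)   # observed wet pixels
print(areal_fit(pred, obs))                                # 4/5 = 0.8
print(waterline_rmse([10.2, 10.4, 10.1], [10.0, 10.5, 10.0]))
```

In a GLUE-style analysis either number would be computed for every model run and used to weight or reject parameter sets; the paper's finding is that the height-based number varies more sharply with channel friction, so it rejects more runs.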