49 results for 2d-page
Comparison of standard image segmentation methods for segmentation of brain tumors from 2D MR images
Abstract:
In the analysis of medical images for computer-aided diagnosis and therapy, segmentation is often required as a preliminary step. Medical image segmentation is a complex and challenging task due to the intricate nature of the images. The brain has a particularly complicated structure and its precise segmentation is very important for detecting tumors, edema, and necrotic tissues in order to prescribe appropriate therapy. Magnetic Resonance Imaging is an important diagnostic imaging technique utilized for early detection of abnormal changes in tissues and organs. It possesses good contrast resolution for different tissues and is therefore preferred over Computerized Tomography for brain study. Consequently, the majority of research in medical image segmentation concerns MR images. At the core of this research, a set of MR images has been segmented using standard image segmentation techniques to isolate a brain tumor from the other regions of the brain. Subsequently, the resulting images from the different segmentation techniques were compared with each other and analyzed by professional radiologists to identify the most accurate technique. Experimental results show that Otsu's thresholding method is the most suitable image segmentation method for segmenting a brain tumor from a Magnetic Resonance Image.
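As an illustration of the thresholding approach the abstract identifies as most suitable, the following is a minimal sketch of Otsu's method applied to a single 2D MR slice using scikit-image; the input file name and the absence of any preprocessing are assumptions for illustration, not details of the original study.

```python
# Minimal sketch of Otsu's thresholding on a 2D MR slice (illustrative only;
# the input file and lack of preprocessing are assumptions).
import numpy as np
from skimage import io
from skimage.filters import threshold_otsu

slice_2d = io.imread("mr_slice.png", as_gray=True)  # hypothetical input image

t = threshold_otsu(slice_2d)   # global threshold maximizing between-class variance
tumor_mask = slice_2d > t      # pixels brighter than the threshold

print(f"Otsu threshold: {t:.3f}, segmented pixels: {np.count_nonzero(tumor_mask)}")
```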
Abstract:
Traders in the financial world are assessed by the amount of money they make and, increasingly, by the amount of money they make per unit of risk taken, a measure known as the Sharpe Ratio. Little is known about the average Sharpe Ratio among traders, but the Efficient Market Hypothesis suggests that traders, like asset managers, should not outperform the broad market. Here we report the findings of a study conducted in the City of London which shows that a population of experienced traders attain Sharpe Ratios significantly higher than the broad market. To explain this anomaly we examine a surrogate marker of prenatal androgen exposure, the second-to-fourth finger length ratio (2D:4D), which has previously been identified as predicting a trader's long-term profitability. We find that it predicts the amount of risk taken by traders but not their Sharpe Ratios. We do, however, find that the traders' Sharpe Ratios increase markedly with the number of years they have traded, a result suggesting that learning plays a role in increasing the returns of traders. Our findings present anomalous data for the Efficient Market Hypothesis.
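For reference, the Sharpe Ratio discussed here is conventionally computed as mean excess return divided by the standard deviation of returns. Below is a minimal sketch with invented daily return figures and an assumed annualisation over 252 trading days; it is not data or code from the study.

```python
# Minimal sketch of a Sharpe Ratio calculation (illustrative figures only).
import numpy as np

daily_returns = np.array([0.002, -0.001, 0.003, 0.0005, -0.002, 0.004])  # hypothetical
risk_free_daily = 0.0001                                                 # assumed daily risk-free rate

excess = daily_returns - risk_free_daily
sharpe_daily = excess.mean() / excess.std(ddof=1)
sharpe_annual = sharpe_daily * np.sqrt(252)  # conventional annualisation over trading days

print(f"Annualised Sharpe Ratio: {sharpe_annual:.2f}")
```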
Abstract:
A system is described for calculating volume from a sequence of multiplanar 2D ultrasound images. Ultrasound images are captured using a video digitising card (Hauppauge Win/TV card) installed in a personal computer, and regions of interest are transformed into 3D space using position and orientation data obtained from an electromagnetic device (Polhemus Fastrak). The accuracy of the system was assessed by scanning 10 water-filled balloons (13-141 ml), 10 kidneys (147-200 ml) and 16 fetal livers (8-37 ml) in water using an Acuson 128XP/10 (5 MHz curvilinear probe). Volume was calculated using the ellipsoid, planimetry, tetrahedral and ray tracing methods and compared with the actual volume measured by weighing (balloons) and water displacement (kidneys and livers). The mean percentage error for the ray tracing method was 0.9 ± 2.4%, 2.7 ± 2.3% and 6.6 ± 5.4% for balloons, kidneys and livers, respectively. So far the system has been used clinically to scan fetal livers and lungs, neonate brain ventricles and adult prostate glands.
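Of the four volume estimators compared, the ellipsoid method is the simplest: it approximates the organ as an ellipsoid from three orthogonal diameters, V = (π/6)·d1·d2·d3. The sketch below is a generic illustration of that textbook formula with invented diameters, not the authors' implementation.

```python
# Minimal sketch of the ellipsoid volume formula V = (pi/6) * d1 * d2 * d3
# (illustrative diameters; not measurements from the study).
import math

def ellipsoid_volume_ml(d1_cm: float, d2_cm: float, d3_cm: float) -> float:
    """Volume in millilitres from three orthogonal diameters in centimetres."""
    return math.pi / 6.0 * d1_cm * d2_cm * d3_cm  # 1 cm^3 == 1 ml

print(f"{ellipsoid_volume_ml(6.2, 5.8, 7.1):.1f} ml")  # hypothetical kidney diameters
```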
Abstract:
This paper describes a novel method for determining the extrinsic calibration parameters between 2D and 3D LIDAR sensors with respect to a vehicle base frame. To recover the calibration parameters we attempt to optimize the quality of a 3D point cloud produced by the vehicle as it traverses an unknown, unmodified environment. The point cloud quality metric is derived from Rényi Quadratic Entropy and quantifies the compactness of the point distribution using only a single tuning parameter. We also present a fast approximate method to reduce the computational requirements of the entropy evaluation, allowing unsupervised calibration in vast environments with millions of points. The algorithm is analyzed using real world data gathered in many locations, showing robust calibration performance and substantial speed improvements from the approximations.
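The point cloud quality metric described is based on the Rényi Quadratic Entropy of a kernel density estimate over the points; with a Gaussian Parzen density this takes the closed form H = -log((1/N²) Σᵢ Σⱼ G(xᵢ - xⱼ; 2σ²I)). The sketch below evaluates that quantity naively in O(N²), with an isotropic Gaussian kernel and an assumed kernel width; it illustrates the compactness metric only, not the paper's fast approximation or calibration optimization.

```python
# Naive O(N^2) evaluation of the Renyi Quadratic Entropy of a point cloud
# under a Gaussian Parzen density (illustrative; kernel width sigma is assumed).
import numpy as np

def renyi_quadratic_entropy(points: np.ndarray, sigma: float = 0.05) -> float:
    """points: (N, d) array. Lower entropy indicates a more compact (crisper) cloud."""
    n, d = points.shape
    diff = points[:, None, :] - points[None, :, :]   # pairwise differences
    sq_dist = np.sum(diff * diff, axis=-1)           # squared distances
    var = 2.0 * sigma * sigma                        # kernel convolution variance
    gauss = np.exp(-sq_dist / (2.0 * var)) / ((2.0 * np.pi * var) ** (d / 2.0))
    return -np.log(gauss.sum() / (n * n))

cloud = np.random.rand(500, 3)  # hypothetical point cloud
print(f"RQE: {renyi_quadratic_entropy(cloud):.3f}")
```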
Abstract:
Papua New Guinea has reformed its colonially established education system and made huge investments with the help of donors to achieve equal access and quality education for all its citizens. Despite this national aspiration and these policy reforms and investments, secondary schools that enrol grade 9 students who are relatively equal in educational ability show huge disparities in their grade 10 academic performance. This study examined perceptions of students, teachers and principals regarding factors affecting the disparity in academic performance in the context of a developing country. The central question for the study is: What are the perceptions of students and teachers of the factors that affect disparities in secondary schools' academic performance? This qualitative case study involved two high-performing and three low-performing secondary schools in Western Highlands Province of Papua New Guinea. Primary data were collected through focus groups and semi-structured interviews involving 112 participants. Students and teachers are key participants in this study, as it intends to find out the realities of schools, yet they are an under-researched group. A postcolonial and sense-of-community conceptual framework was developed for the analysis of the participants' perceptions. In addition, scholarship on school effectiveness and equity in education informed the interpretation of the findings. Three themes were evident in participants' views. First, participants expressed the view that differences in academic performance were related to the adequacy and equitability of resources. The inequities in resource inputs led some of them to coin the metaphor of 'back page' and 'front page' schools. Second, many expressed the view that deficiencies in implementing bilingual education, given the difficulty of catering for 800 vernacular languages, contribute to poor English proficiency and subsequent poor academic performance. Finally, participants believed that, in order to have a positive school culture, it is necessary for educators to recognise and respect contemporary students' identities, communal/tribal membership and needs. This study has implications for national education policy on resource allocation to address equality and equity, bilingual education and teacher education. Moreover, as the study found that high academic performance in this context is also influenced by intra-school social relationships, these relationships need to be nurtured. When appropriately nurtured, they become an important factor in sustaining quality education for all secondary school students. This thesis has laid the foundations for further research and invites further investigations into policy and implementation of school reforms aimed at improving academic achievement.
Abstract:
Unsteady numerical simulation of Rayleigh-Bénard convection heat transfer from a 2D channel is performed. The oscillatory behavior is attributed to recirculation of ascending and descending flows towards the core of the channel, producing organized rolled motions. Variation of parameters such as the Reynolds number, the channel outlet flow area and the inclination of the channel is considered. Increasing the Reynolds number (for a fixed Rayleigh number) delays the generation of vortices. Reducing the outflow area further delays vortex generation and reduces the number of vortices generated. As time progresses, more vortices are generated, but the reinforced mean velocity does not let the eddies enter the core of the channel. Therefore, they attach to the wall and reduce the heat transfer area. Inclination of the channel (both positive and negative) induces the generated vortices to move closer to each other and merge into an enlarged vortex.
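For orientation, the two governing dimensionless groups varied in the study are conventionally defined as below (textbook forms with generic symbols; no values are reported in the abstract):

```latex
% Standard definitions of the governing dimensionless numbers
% (textbook forms; symbols are generic, not quantities from the study).
\[
  \mathrm{Ra} = \frac{g\,\beta\,(T_h - T_c)\,L^{3}}{\nu\,\alpha},
  \qquad
  \mathrm{Re} = \frac{U\,L}{\nu}
\]
% where g is gravity, beta the thermal expansion coefficient, T_h - T_c the wall
% temperature difference, L a characteristic length, nu the kinematic viscosity,
% alpha the thermal diffusivity, and U the mean inflow velocity.
```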
Abstract:
Facebook is approaching ubiquity in the social habits and practice of many students. However, its use in higher education has been criticised (Maranto & Barton, 2010) because it can remove or blur academic boundaries. Despite these concerns, there is strong potential to use Facebook to support new students to communicate and interact with each other (Cheung, Chiu, & Lee, 2010). This paper shows how Facebook can be used by teaching staff to communicate more effectively with students. Further, it shows how it can provide a way to represent and include beginning students’ thoughts, opinions and feedback as an element of the learning design and responsive feed-forward into lectures and tutorial activities. We demonstrate how an embedded social media strategy can be used to complement and enhance the first year curriculum experience by functioning as a transition device for student support and activating Kift’s (2009) organising principles for first year curriculum design.
Abstract:
The rapid increase in the deployment of CCTV systems has led to a greater demand for algorithms that are able to process incoming video feeds. These algorithms are designed to extract information of interest for human operators. During the past several years, there has been a large effort to detect abnormal activities through computer vision techniques. Typically, the problem is formulated as a novelty detection task where the system is trained on normal data and is required to detect events which do not fit the learned 'normal' model. Many researchers have tried various sets of features to train different learning models to detect abnormal behaviour in video footage. In this work we propose using a Semi-2D Hidden Markov Model (HMM) to model the normal activities of people. Outliers with insufficient likelihood under the model are identified as abnormal activities. Our Semi-2D HMM is designed to model both the temporal and spatial causalities of crowd behaviour by assuming that the current state of the Hidden Markov Model depends not only on the previous state in the temporal direction, but also on the previous states of the adjacent spatial locations. Two different HMMs are trained to model the vertical and horizontal spatial causal information respectively. Location features, flow features and optical flow textures are used as the features for the model. The proposed approach is evaluated using the publicly available UCSD datasets and we demonstrate improved performance compared to other state-of-the-art methods.
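To illustrate the novelty-detection formulation described (train on normal data, flag low-likelihood sequences), the following is a minimal sketch using a standard Gaussian HMM from the hmmlearn package. It is a plain temporal HMM on invented feature sequences, not the paper's Semi-2D HMM with spatial causalities, and the threshold is an assumption.

```python
# Minimal sketch of HMM-based novelty detection (illustrative only; this is a
# standard temporal HMM, not the Semi-2D HMM described in the abstract).
import numpy as np
from hmmlearn import hmm

rng = np.random.default_rng(0)
normal_train = rng.normal(0.0, 1.0, size=(500, 3))  # hypothetical "normal" feature vectors
test_sequence = rng.normal(4.0, 1.0, size=(50, 3))   # hypothetical unusual sequence

model = hmm.GaussianHMM(n_components=4, covariance_type="diag", n_iter=50)
model.fit(normal_train)                               # learn the "normal" model

log_likelihood = model.score(test_sequence)           # fit of the new sequence under the model
threshold = -10.0 * len(test_sequence)                # assumed per-frame likelihood threshold
print("abnormal" if log_likelihood < threshold else "normal")
```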
Abstract:
This paper is concerned with the optimal path planning and initialization interval of one or two UAVs in the presence of a constant wind. The method builds on previous literature results on synchronization of UAVs along convex curves, path planning and sampling in 2D, and extends them to 3D. This method can be applied to observe gas/particle emissions inside a control volume during sampling loops. The flight pattern is composed of two phases: a start-up interval and a sampling interval, the latter represented by a semi-circular path. The methods were tested on four complex model test cases in 2D and 3D, as well as one simulated real-world scenario in 2D and one in 3D.
Abstract:
This paper addresses the problem of automatically estimating the relative pose between a push-broom LIDAR and a camera without the need for artificial calibration targets or other human intervention. Further, we do not require the sensors to have an overlapping field of view; it is enough that they observe the same scene, but at different times, from a moving platform. Matching between sensor modalities is achieved without feature extraction. We present results from field trials which suggest that this new approach achieves an extrinsic calibration accuracy of millimeters in translation and deci-degrees in rotation.
Abstract:
This chapter considers the ways in which contemporary children’s literature depicts reading in changing times, with a particular eye on the cultural definitions of ‘reading’ being offered to young people in the age of the tablet computer. A number of picture books, in codex and app form, speak to changing times for reading by their emphasis on the value of books and reading as technologies of literature and of the self. Attending to valuations of literacy and literature within children’s texts provides insight into anxieties about books in the electronic age.
Abstract:
This paper argues that governments around the world need to take immediate coordinated action to reverse the 'book famine.' There are over 129 million book titles in the world, but persons with print disabilities can obtain less than 7% of these titles in formats that they can read. The situation is most acute in developing countries, where less than 1% of books are accessible. Two recent international developments – the United Nations Convention on the Rights of Persons with Disabilities (‘CRPD’) and the new Marrakesh Treaty to Facilitate Access to Published Works for Persons who are Blind, Visually Impaired, or otherwise Print Disabled (somewhat ironically nicknamed the ‘VIP Treaty’) – suggest that nation states are increasingly willing to take action to reverse the book famine. The Marrakesh Treaty promises to level out some of the disparity of access between people in developed and developing nations and remove the need for each jurisdiction to digitise a separate copy of each book. This is a remarkable advance, and suggests the beginnings of a possible paradigm shift in global copyright politics, made all the more remarkable in the face of heated opposition by global copyright industry representatives. Now that the Marrakesh Treaty has been concluded, however, we argue that a substantial exercise of global political will is required to (a) invest the funds required to digitise existing books; and (b) avert any further harm by ensuring that books published in the future are made accessible upon their release.