970 results for Image space
Abstract:
This article proposes a methodology to address the urban evolutionary process, demonstrating how it is reflected in literature. It focuses on “literary space,” presented as a territory defined by the period setting or evoked by the characters, which can be georeferenced and drawn on a map. It identifies the different locations of literary space in relation to urban development and the economic, political, and social context of the city. We suggest a new approach for mapping a relatively comprehensive body of literature by combining literary criticism, urban history, and geographic information systems (GIS). The home-range concept, used in animal ecology, has been adapted to reveal the size and location of literary space. This interdisciplinary methodology is applied in a case study of nineteenth- and twentieth-century novels involving the city of Lisbon. The newly developed concepts of cumulative literary space and common literary space add size calculations to the analyses of location and structure previously developed by other researchers. Sequential and overlapping analyses of literary space through time have the advantage of producing results that are comparable and repeatable by other researchers using a different body of literary works or studying another city. Results show how city changes shaped perceptions of the urban space as it was lived and experienced. A small core area, corresponding to part of the city center, persists as literary space in all the novels analyzed. Furthermore, literary space does not match urban evolution: there is a time lag before newly urbanized areas are embedded in the imagined literary scenario.
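As an illustration of how the home-range adaptation can be computed, the following sketch (our own, assuming the shapely Python library; novel titles and coordinates are hypothetical) derives each novel's literary space as a minimum convex polygon, then obtains the cumulative and common literary spaces as their union and intersection.

```python
# A minimal sketch (not the article's code) of the home-range idea applied to
# literary space: each novel contributes a set of georeferenced places, and
# the minimum convex polygon, a classic home-range estimator, gives its size.
from shapely.geometry import MultiPoint

# Hypothetical georeferenced places (lon, lat) evoked in each novel.
novels = {
    "novel_1880s": [(-9.142, 38.710), (-9.136, 38.714), (-9.139, 38.708)],
    "novel_1940s": [(-9.150, 38.720), (-9.136, 38.714), (-9.145, 38.712)],
}

hulls = {title: MultiPoint(pts).convex_hull for title, pts in novels.items()}

# "Cumulative literary space": union of all per-novel home ranges.
cumulative = hulls["novel_1880s"].union(hulls["novel_1940s"])
# "Common literary space": the persistent core shared by every novel.
common = hulls["novel_1880s"].intersection(hulls["novel_1940s"])

for title, hull in hulls.items():
    print(title, "area:", hull.area)  # squared degrees; project to meters in practice
print("cumulative:", cumulative.area, "| common:", common.area)
```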
Abstract:
Dissertation submitted to obtain the degree of Master in Biomedical Engineering
Abstract:
A Work Project, presented as part of the requirements for the Award of a Master's Degree in Management from the NOVA – School of Business and Economics
Abstract:
This paper incorporates egocentric comparisons into a human capital accumulation model and studies the evolution of positive self-image over time. The paper shows that the process of human capital accumulation, together with egocentric comparisons, implies that the positive self-image of a cohort first increases and then decreases over time. Additionally, the paper finds that positive self-image: (1) peaks earlier in activities where skill depreciation is higher, (2) is smaller in activities where the distribution of income is more dispersed, (3) is not a stable characteristic of an individual, and (4) is higher for more patient individuals.
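The hump-shaped dynamic can be sketched in reduced form (our notation and mechanism, not necessarily the paper's model):

```latex
% One possible reduced form (ours, not the paper's model): human capital
% h_t accumulates through investment i_t and depreciates at rate \delta;
% positive self-image S_t is the gap between the self-assessed and the
% actual standing within the cohort.
\begin{align*}
h_{t+1} &= (1 - \delta)\, h_t + i_t \\
S_t &= \hat{F}(h_t) - F(h_t)
\end{align*}
% Early in the career, fast accumulation feeds favorable egocentric
% comparisons and S_t rises; once depreciation dominates, S_t falls, and a
% larger \delta moves the peak earlier, consistent with finding (1).
```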
Abstract:
A Work Project, presented as part of the requirements for the Award of a Master's Degree in Management from the NOVA – School of Business and Economics
Abstract:
This paper investigates the implications of individuals' mistaken beliefs about their abilities for incentives in organizations, using the principal-agent model of moral hazard. The paper shows that if effort is observable, then an agent's mistaken beliefs about his own ability are always favorable to the principal. However, if effort is unobservable, then an agent's mistaken beliefs about his own ability can be either favorable or unfavorable to the principal. The paper provides conditions under which an agent's overestimation of his own ability is favorable to the principal when effort is unobservable. Finally, the paper shows that workers' mistaken beliefs about their coworkers' abilities make interdependent incentive schemes more attractive to firms than individualistic incentive schemes.
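For reference, a minimal statement of the moral-hazard program this abstract builds on (standard textbook notation, ours rather than the paper's): the key twist is that the constraints are evaluated under the agent's mistaken belief while the principal's objective uses the true ability.

```latex
% Standard principal-agent program under moral hazard (textbook notation,
% ours rather than the paper's). The agent's true ability is \theta; he
% believes it is \hat{\theta}. The principal knows both.
\begin{align*}
\max_{w(\cdot),\, e}\quad & \mathbb{E}\left[\, x - w(x) \mid e,\, \theta \,\right] \\
\text{s.t.}\quad & \mathbb{E}\left[\, u(w(x)) \mid e,\, \hat{\theta} \,\right] - c(e) \ \ge\ \bar{u}
&& \text{(participation, under the agent's beliefs)} \\
& e \in \arg\max_{e'}\ \mathbb{E}\left[\, u(w(x)) \mid e',\, \hat{\theta} \,\right] - c(e')
&& \text{(incentive compatibility, only if effort is unobservable)}
\end{align*}
```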
Abstract:
This paper analyzes the implications of worker overestimation of productivity for firms in which incentives take the form of tournaments. Each worker overestimates his productivity but is aware of the bias in his opponent's self-assessment. The manager of the firm, on the other hand, correctly assesses workers' productivities and self-beliefs when setting tournament prizes. The paper shows that, under a variety of circumstances, firms make higher profits when workers have positive self-image than when they do not. By contrast, workers' welfare declines due to their own misguided choices.
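A minimal two-worker tournament sketch in the spirit of Lazear and Rosen, with the belief structure the abstract describes (notation is ours, not necessarily the paper's):

```latex
% A two-worker rank-order tournament sketch; notation is ours. Worker i
% perceives his productivity with an optimistic bias b_i, yet correctly
% discounts the opponent's bias, so j enters with true productivity.
\begin{align*}
q_i &= \theta_i + e_i + \varepsilon_i,
\qquad \hat{\theta}_i = \theta_i + b_i \quad (b_i > 0) \\
\text{worker } i:\quad
& \max_{e_i}\; \hat{P}_i\, W + (1 - \hat{P}_i)\, L - c(e_i),
\qquad \hat{P}_i = \Pr\!\left[\hat{\theta}_i + e_i + \varepsilon_i >
\theta_j + e_j + \varepsilon_j\right]
\end{align*}
% The firm, which knows both \theta_i and b_i, chooses the prizes (W, L)
% to maximize expected output net of prize payments.
```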
Abstract:
This thesis describes a semi-automated cell analysis system based on image processing. An image processing algorithm was studied in order to segment cells in a semi-automatic way. The main goal is to speed up the cell image segmentation process without significantly affecting the quality of the results. Although a fully manual system can produce the best results, it has the disadvantage of being slow and repetitive when a large number of images needs to be processed. An active contour algorithm was tested on a sequence of images taken with a microscope. This algorithm, commonly known as snakes, lets the user define an initial region enclosing the cell. The algorithm then iterates, making the contour of the initial region converge to the cell boundaries. From the final contour, it was possible to extract region properties and produce statistical data. These data show that the algorithm produces results similar to those of a purely manual system, but at a faster rate. It is slower than a fully automatic approach, but it allows the user to adjust the contour, making it more versatile and tolerant to image variations.
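A minimal sketch of this workflow, assuming scikit-image's active_contour (the thesis does not name its implementation, and the file name and parameters here are hypothetical):

```python
# Snakes workflow sketch: seed a circle around a cell, let the active contour
# converge onto the boundary, then extract simple region statistics.
import numpy as np
from skimage import io
from skimage.draw import polygon
from skimage.filters import gaussian
from skimage.segmentation import active_contour

image = io.imread("cell_frame.png", as_gray=True)  # hypothetical microscope frame

# User-defined initial region: a circle enclosing the cell (row, col coords).
s = np.linspace(0, 2 * np.pi, 200)
init = np.array([120 + 50 * np.sin(s), 150 + 50 * np.cos(s)]).T

snake = active_contour(gaussian(image, sigma=3), init,
                       alpha=0.015, beta=10, gamma=0.001)

# Area of the final contour via a filled polygon mask, for basic statistics.
mask = np.zeros(image.shape, dtype=bool)
rr, cc = polygon(snake[:, 0], snake[:, 1], image.shape)
mask[rr, cc] = True
print("cell area (pixels):", mask.sum())
```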
Abstract:
Breast cancer is the most common cancer among women and a major public health problem. Worldwide, X-ray mammography is the current gold standard for medical imaging of breast cancer. However, it has well-known limitations: false-negative rates of up to 66% in symptomatic women and false-positive rates of up to 60% are a continued source of concern and debate. These drawbacks prompt the development of other imaging techniques for breast cancer detection, among them Digital Breast Tomosynthesis (DBT). DBT is a 3D radiographic technique that reduces the obscuring effect of tissue overlap and appears to address both the false-negative and the false-positive rates. The 3D images in DBT are only obtained through image reconstruction methods. These methods play an important role in a clinical setting, since the reconstruction process must be both accurate and fast. This dissertation deals with the optimization of iterative algorithms through parallel computing on Graphics Processing Units (GPUs), using the Compute Unified Device Architecture (CUDA) to make the 3D reconstruction faster. Iterative algorithms have been shown to produce the highest-quality DBT images, but because they are computationally intensive, they are currently rejected for clinical use. These algorithms also have the potential to reduce patient dose in DBT scans. A method of integrating CUDA into Interactive Data Language (IDL) is proposed in order to accelerate the DBT image reconstructions; this method has never been attempted before for DBT. In this work, the system matrix calculation, the most computationally expensive part of iterative algorithms, is accelerated. A speedup of 1.6 is achieved, demonstrating that GPUs can accelerate the IDL implementation.
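For illustration, one iteration of a SIRT-type update, a representative iterative reconstruction method (a NumPy sketch, not the dissertation's IDL/CUDA code). The products with the system matrix A are the forward and back projections that dominate the cost and that a GPU implementation targets.

```python
# Schematic SIRT iteration: x <- x + relax * C * A^T * R * (b - A x),
# where R and C hold the inverse row and column sums of A.
import numpy as np

def sirt_step(A, x, b, relax=1.0):
    row_sums = A.sum(axis=1)
    col_sums = A.sum(axis=0)
    residual = (b - A @ x) / np.maximum(row_sums, 1e-12)           # forward projection
    x = x + relax * (A.T @ residual) / np.maximum(col_sums, 1e-12)  # back projection
    return np.clip(x, 0, None)  # attenuation cannot be negative

# Toy geometry: 3 detector pixels, 4 voxels (a real DBT matrix is huge and sparse).
A = np.array([[1., 1., 0., 0.],
              [0., 1., 1., 0.],
              [0., 0., 1., 1.]])
b = A @ np.array([0.2, 0.5, 0.1, 0.4])  # simulated projections
x = np.zeros(4)
for _ in range(50):
    x = sirt_step(A, x, b)
print(x.round(3))  # converges to a volume consistent with the projections b
```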
Abstract:
Since the invention of photography, humans have been using images to capture, store, and analyse the events they are interested in. With developments in this field, assisted by better computers, it is possible to use image processing technology as an accurate method of analysis and measurement. The principal qualities of image processing are flexibility, adaptability, and the ability to process large amounts of information easily and quickly. Successful applications can be seen in several areas of human life, such as biomedicine, industry, surveillance, the military, and mapping; indeed, several Nobel Prizes are related to imaging. The accurate measurement of deformations, displacements, strain fields, and surface defects is challenging in many material tests in Civil Engineering, because these measurements traditionally require complex and expensive equipment, as well as time-consuming calibration. Image processing can be an inexpensive and effective tool for load-displacement measurements: using an adequate image acquisition system and taking advantage of the computation power of modern computers, it is possible to measure very small displacements with high precision. Several commercial software packages already exist on the market, but at a high cost. In this work, block-matching algorithms are used to compare the results from image processing with the data obtained from physical transducers during laboratory load tests. In order to test the proposed solutions, several load tests were carried out in partnership with researchers from the Civil Engineering Department at Universidade Nova de Lisboa (UNL).
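A minimal block-matching sketch (plain NumPy, not the thesis code or a commercial package): the displacement of a reference patch between two frames is the shift that minimizes the sum of squared differences over a search window.

```python
# Exhaustive-search block matching with an SSD criterion.
import numpy as np

def match_block(ref, cur, top, left, size=32, search=10):
    """Return the (dy, dx) displacement of the block
    ref[top:top+size, left:left+size] between frames ref and cur."""
    block = ref[top:top + size, left:left + size].astype(float)
    best, best_dydx = np.inf, (0, 0)
    for dy in range(-search, search + 1):
        for dx in range(-search, search + 1):
            cand = cur[top + dy: top + dy + size,
                       left + dx: left + dx + size].astype(float)
            if cand.shape != block.shape:
                continue  # candidate window fell outside the image
            ssd = np.sum((cand - block) ** 2)
            if ssd < best:
                best, best_dydx = ssd, (dy, dx)
    return best_dydx

# Usage with two hypothetical grayscale frames from the acquisition system:
# dy, dx = match_block(frame_before, frame_after, top=200, left=320)
# Pixel displacements are converted to millimeters via the camera calibration.
```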
Abstract:
The mobile IT era is here; it is still growing and expanding at a steady rate and, most of all, it is entertaining. Mobile devices are used for entertainment, whether social, through the so-called social networks, or private, through web browsing, video watching, or gaming. Youngsters make heavy use of these devices, and even small children show impressive adaptability and skill. However, not much attention is directed towards education, especially in the case of young children. Too much time is usually spent on games whose only purpose is to keep children entertained, time that could be put to better use, such as developing elementary geometric notions. Taking advantage of this pocket-computer scenario, an application is proposed that is geared towards children in the 6-9 age group and allows them to consolidate knowledge of geometric shapes, forming a stepping stone towards fundamental mathematical knowledge to be exercised later on. To achieve this goal, the application detects simple geometric shapes such as squares, circles, and triangles using the device's camera. The novelty of this application is a core real-time detection system designed and developed from the ground up for mobile devices, taking into account their characteristic limitations, such as reduced processing power, memory, and battery. User feedback was gathered, aggregated, and studied to assess the educational value of the application.
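One common way to build such a detector, sketched here with OpenCV in Python for illustration (the thesis targets a from-scratch mobile implementation, so this is not its actual code): approximate each contour by a polygon and classify by vertex count.

```python
# Contour-based shape classification (OpenCV 4 API).
import cv2

def classify_shapes(frame_bgr):
    gray = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2GRAY)
    blurred = cv2.GaussianBlur(gray, (5, 5), 0)
    _, thresh = cv2.threshold(blurred, 0, 255,
                              cv2.THRESH_BINARY_INV + cv2.THRESH_OTSU)
    contours, _ = cv2.findContours(thresh, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)
    labels = []
    for c in contours:
        if cv2.contourArea(c) < 200:  # drop noise; threshold chosen arbitrarily
            continue
        approx = cv2.approxPolyDP(c, 0.04 * cv2.arcLength(c, True), True)
        if len(approx) == 3:
            labels.append("triangle")
        elif len(approx) == 4:
            labels.append("square")   # a side-ratio check would split off rectangles
        else:
            labels.append("circle")   # many vertices approximate a smooth curve
    return labels
```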
Abstract:
NSBE - UNL
Abstract:
Product fundamentals are essential in explaining heterogeneity in the product space. The scope for adapting and transferring capabilities into the production of different goods determines the speed and intensity of the structural transformation process and entails dissimilar development opportunities for nations. Future specialization patterns are thus partly determined by the current network of product relatedness. Building on previous literature, this paper explicitly compares methodological concepts of product connectivity and concludes in favor of the density measure we propose, combined with the Revealed Relatedness Index (RRI) approach presented by Freitas and Salvado (2011). Overall, the RRI specifications displayed more consistent behavior when different time horizons are considered.
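For context, a density measure of the kind compared in this literature (following Hidalgo et al.'s product space; the paper's own density and RRI formulas may differ) can be sketched as follows.

```python
# Product-space density: for each product k, the proximity-weighted share of
# related products a country already exports with revealed comparative
# advantage. Toy numbers only.
import numpy as np

def density(x, phi):
    """x: binary vector (1 if RCA >= 1 in product j); phi: proximity matrix.
    Returns density_k = sum_j x_j * phi[k, j] / sum_j phi[k, j] for all k."""
    return (phi @ x) / phi.sum(axis=1)

phi = np.array([[1.0, 0.6, 0.1, 0.0],
                [0.6, 1.0, 0.3, 0.1],
                [0.1, 0.3, 1.0, 0.5],
                [0.0, 0.1, 0.5, 1.0]])
x = np.array([1, 1, 0, 0])  # country currently exports products 0 and 1
print(density(x, phi).round(2))  # higher density: likelier future specialization
```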
Abstract:
INTRODUCTION: The objective was to identify spatial and space-time risk clusters for the occurrence of deaths in a priority city for the control of tuberculosis (TB) in the Brazilian Northeast. METHODS: An ecological study was undertaken in the City of São Luís/Maranhão. The cases considered were deaths in the population living in the urban region of the city with pulmonary TB as the underlying cause, between 2008 and 2012. To detect spatial and space-time clusters of deaths due to pulmonary TB in the census sectors, the spatial scan statistics technique was used. RESULTS: In total, 221 deaths from TB occurred, 193 of which were due to pulmonary TB. Approximately 95% of the cases (n=183) were geocoded. Two significant spatial clusters were identified: the first showed a mortality rate of 5.8 deaths per 100,000 inhabitants per year and a high relative risk of 3.87; the second showed a mortality rate of 0.4 deaths per 100,000 inhabitants per year and a low relative risk of 0.10. A significant cluster was also observed in the space-time analysis, between 11/01/2008 and 04/30/2011, with a mortality rate of 8.10 deaths per 100,000 inhabitants per year and a high relative risk (3.0). CONCLUSIONS: Knowledge of priority sites for the occurrence of deaths can support public management efforts to reduce inequities in access to health services and permit the optimization of resources and teams for the control of pulmonary TB, supporting specific strategies focused on the most vulnerable populations.
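The scan technique referred to is typically Kulldorff's scan statistic (as implemented in tools such as SaTScan); a minimal statement of its Poisson likelihood ratio, in our notation, is:

```latex
% For a candidate spatial or space-time window z with c observed deaths,
% \mathbb{E}[c] expected deaths under the null, and C total deaths:
\[
\mathrm{LR}(z) =
\left(\frac{c}{\mathbb{E}[c]}\right)^{c}
\left(\frac{C - c}{C - \mathbb{E}[c]}\right)^{C - c}
\mathbf{1}\!\left[\frac{c}{\mathbb{E}[c]} > \frac{C - c}{C - \mathbb{E}[c]}\right].
\]
% The reported cluster is the window maximizing LR(z); its significance is
% assessed by Monte Carlo replication under the null hypothesis.
```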