28 results for the search term "image"
Abstract:
This paper incorporates egocentric comparisons into a human capital accumulation model and studies the evolution of positive self-image over time. It shows that the process of human capital accumulation, together with egocentric comparisons, implies that a cohort's positive self-image first increases and then decreases over time. Additionally, the paper finds that positive self-image: (1) peaks earlier in activities where skill depreciation is higher, (2) is smaller in activities where the distribution of income is more dispersed, (3) is not a stable characteristic of an individual, and (4) is higher for more patient individuals.
Abstract:
A Work Project, presented as part of the requirements for the Award of a Master's Degree in Management from the NOVA – School of Business and Economics
Abstract:
This paper investigates the implications of individuals' mistaken beliefs about their abilities for incentives in organizations, using the principal-agent model of moral hazard. The paper shows that if effort is observable, then an agent's mistaken beliefs about his own ability are always favorable to the principal. However, if effort is unobservable, then an agent's mistaken beliefs about his own ability can be either favorable or unfavorable to the principal. The paper provides conditions under which an agent's overestimation of his own ability is favorable to the principal when effort is unobservable. Finally, the paper shows that workers' mistaken beliefs about their coworkers' abilities make interdependent incentive schemes more attractive to firms than individualistic incentive schemes.
Abstract:
This paper analyzes the implications of worker overestimation of productivity for firms in which incentives take the form of tournaments. Each worker overestimates his productivity but is aware of the bias in his opponent's self-assessment. The manager of the firm, on the other hand, correctly assesses workers' productivities and self-beliefs when setting tournament prizes. The paper shows that, under a variety of circumstances, firms make higher profits when workers have positive self-image than when they do not. By contrast, workers' welfare declines due to their own misguided choices.
Abstract:
This thesis describes a semi-automated cell analysis system based on image processing. An image processing algorithm was studied in order to segment cells in a semi-automatic way. The main goal is to speed up the cell image segmentation process without significantly affecting the quality of the results. Although a fully manual system can produce the best results, it has the disadvantage of being slow and repetitive when a large number of images needs to be processed. An active contour algorithm, more commonly known as snakes, was tested on a sequence of microscope images. It allows the user to define an initial region enclosing the cell; the algorithm then runs iteratively, making the contour of the initial region converge to the cell boundary. From the final contour it was possible to extract region properties and produce statistical data. These data show that the algorithm produces results similar to those of a purely manual system, but faster. It is slower than a fully automatic approach, but it allows the user to adjust the contour, making it more versatile and tolerant of image variations.
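As an illustration of the approach described above, the following sketch applies a standard active contour (snake) to a single microscope frame and extracts region properties from the resulting mask. It uses scikit-image, which the thesis does not name, and the file name, initial circle and parameter values are placeholders rather than values from the thesis.

```python
# Illustrative sketch only; library choice, file name and parameters are assumptions.
import numpy as np
from skimage import io, filters, measure, draw
from skimage.segmentation import active_contour

image = io.imread("cell_frame.png", as_gray=True)   # one microscope frame (hypothetical file)

# User-defined initial region: a circle roughly enclosing the cell, as (row, col) pairs.
s = np.linspace(0, 2 * np.pi, 200)
init = np.column_stack([120 + 40 * np.sin(s), 150 + 40 * np.cos(s)])

# The snake iteratively deforms the initial contour toward the cell boundary.
snake = active_contour(filters.gaussian(image, sigma=2),
                       init, alpha=0.015, beta=10, gamma=0.001)

# Rasterise the final contour and extract region properties for the statistics step.
mask = np.zeros(image.shape, dtype=np.uint8)
rr, cc = draw.polygon(snake[:, 0], snake[:, 1], shape=image.shape)
mask[rr, cc] = 1
props = measure.regionprops(mask, intensity_image=image)[0]
print(props.area, props.perimeter, props.mean_intensity)
```

In a semi-automatic workflow, the user would redraw or nudge the initial circle whenever the converged contour misses the cell, which is the flexibility the thesis highlights over a fully automatic pipeline.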
Abstract:
Optimization is concerned with finding the best possible value of an objective function. Continuous optimization is optimization over real intervals. There are many global and local search techniques. Global search techniques try to find the global optimum of the optimization problem, while local search techniques, which look for a locally optimal solution within a region of the search space, are more widely used. In Continuous Constraint Satisfaction Problems (CCSPs), constraints are viewed as relations between variables, and the computations are supported by interval analysis. The continuous constraint programming framework provides branch-and-prune algorithms that cover the solution sets of the constraints with sets of interval boxes, i.e., Cartesian products of intervals. These algorithms begin with an initial crude cover of the feasible space (the Cartesian product of the initial variable domains), which is recursively refined by interleaving pruning and branching steps until a stopping criterion is satisfied. In this work, we try to find a convenient way to combine the advantages of CCSP branch-and-prune with local search for global optimization applied locally over each pruned branch of the CCSP. We apply local search techniques of continuous optimization over the pruned boxes output by the CCSP techniques. We mainly use the steepest descent technique with different characteristics, such as penalty calculation and step length, and implement two different local search algorithms. We use "Procure", a constraint reasoning and global optimization framework, to implement our techniques, and we report results on a set of benchmarks.
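To make the local search step concrete, here is a minimal Python/NumPy sketch of penalised steepest descent restricted to a single interval box. It is not the implementation built on "Procure"; the function names, the fixed step length, the quadratic penalty and the toy problem are all assumptions made for illustration.

```python
# Minimal sketch: steepest descent with a quadratic penalty, confined to one pruned box.
import numpy as np

def steepest_descent_in_box(grad_f, constraints, box, x0, step=0.02, penalty=10.0, iters=500):
    """Approximately minimise f(x) s.t. g_i(x) <= 0 inside box = [(lo_1, hi_1), ...]."""
    lo = np.array([b[0] for b in box], dtype=float)
    hi = np.array([b[1] for b in box], dtype=float)
    x = np.clip(np.asarray(x0, dtype=float), lo, hi)
    for _ in range(iters):
        g = np.asarray(grad_f(x), dtype=float)
        # Quadratic penalty: add 2 * penalty * max(0, g_i(x)) * grad g_i(x) for violated constraints.
        for g_i, grad_g_i in constraints:
            v = g_i(x)
            if v > 0:
                g = g + 2.0 * penalty * v * grad_g_i(x)
        x = np.clip(x - step * g, lo, hi)   # project back into the pruned box
    return x

# Toy problem: minimise (x-1)^2 + (y-2)^2 subject to x + y <= 2, inside the box [0,3] x [0,3].
grad_f = lambda x: np.array([2.0 * (x[0] - 1.0), 2.0 * (x[1] - 2.0)])
constraints = [(lambda x: x[0] + x[1] - 2.0, lambda x: np.array([1.0, 1.0]))]
print(steepest_descent_in_box(grad_f, constraints, [(0.0, 3.0), (0.0, 3.0)], x0=[2.5, 2.5]))
# Converges near (0.52, 1.52), close to the exact constrained optimum (0.5, 1.5).
```

In the combined scheme described in the abstract, a routine like this would be run once per box returned by the branch-and-prune cover, and the best value found across boxes would bound the global optimum.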
Abstract:
Breast cancer is the most common cancer among women and a major public health problem. Worldwide, X-ray mammography is the current gold standard for medical imaging of breast cancer. However, it has some well-known limitations: false-negative rates of up to 66% in symptomatic women and false-positive rates of up to 60% are a continued source of concern and debate. These drawbacks have prompted the development of other imaging techniques for breast cancer detection, among which is Digital Breast Tomosynthesis (DBT). DBT is a 3D radiographic technique that reduces the obscuring effect of tissue overlap and appears to address both the false-negative and the false-positive issue. The 3D images in DBT are only obtained through image reconstruction methods, which play an important role in a clinical setting since the reconstruction process must be both accurate and fast. This dissertation deals with the optimization of iterative reconstruction algorithms through parallel computing on Graphics Processing Units (GPUs), using the Compute Unified Device Architecture (CUDA) to make the 3D reconstruction faster. Iterative algorithms have been shown to produce the highest-quality DBT images and have the potential to reduce patient dose in DBT scans, but they are computationally intensive, which has so far prevented their clinical use. A method of integrating CUDA into Interactive Data Language (IDL) is proposed in order to accelerate the DBT image reconstructions; this method had never been attempted before for DBT. In this work the system matrix calculation, the most computationally expensive part of the iterative algorithms, is accelerated. A speedup of 1.6 is achieved, showing that GPUs can accelerate the IDL implementation.
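For readers unfamiliar with iterative reconstruction, the following Python/NumPy sketch shows a SIRT-style update driven by a toy system matrix. It is only a conceptual illustration: the dissertation's code is written in IDL with CUDA kernels, does not necessarily use SIRT, and what it accelerates on the GPU is the calculation of the system matrix itself, which is replaced here by a random stand-in.

```python
# Conceptual sketch of one family of iterative reconstruction updates (SIRT-style).
import numpy as np

rng = np.random.default_rng(0)
n_rays, n_voxels = 2000, 500
# In the dissertation, building the system matrix A is the expensive step moved to the GPU;
# here A is just a sparse random stand-in relating voxels to detector measurements.
A = rng.random((n_rays, n_voxels)) * (rng.random((n_rays, n_voxels)) < 0.01)
x_true = rng.random(n_voxels)
p = A @ x_true                                    # simulated projection data

x = np.zeros(n_voxels)
row_sums = A.sum(axis=1) + 1e-12
col_sums = A.sum(axis=0) + 1e-12
for _ in range(50):
    # Each iteration forward-projects the current volume and back-projects the residual.
    residual = (p - A @ x) / row_sums
    x = x + (A.T @ residual) / col_sums
print(np.linalg.norm(A @ x - p) / np.linalg.norm(p))  # relative projection error shrinks
```

The sketch makes the cost structure visible: every iteration touches the full system matrix, so computing (or applying) that matrix quickly is what determines whether iterative DBT reconstruction is clinically practical.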
Abstract:
Since the invention of photography, humans have used images to capture, store and analyse the events they are interested in. With the developments in this field, assisted by better computers, it is possible to use image processing technology as an accurate method of analysis and measurement. Image processing's principal qualities are flexibility, adaptability and the ability to process large amounts of information easily and quickly. Successful applications can be found in several areas of human life, such as biomedicine, industry, surveillance, the military and mapping; indeed, several Nobel Prizes are related to imaging. The accurate measurement of deformations, displacements, strain fields and surface defects is challenging in many material tests in Civil Engineering, because traditionally these measurements require complex and expensive equipment plus time-consuming calibration. Image processing can be an inexpensive and effective tool for load displacement measurements. Using an adequate image acquisition system and taking advantage of the computational power of modern computers, it is possible to measure very small displacements with high precision. Several commercial software packages are already on the market, but at high cost. In this work, block-matching algorithms are used to compare the results from image processing with the data obtained from physical transducers during laboratory load tests. In order to test the proposed solutions, several load tests were carried out in partnership with researchers from the Civil Engineering Department at Universidade Nova de Lisboa (UNL).
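A minimal block-matching sketch along these lines, using OpenCV's normalised cross-correlation template matching, is given below. The thesis does not specify a library, and the file names, block position and block size are placeholders.

```python
# Illustrative block-matching sketch; library, file names and coordinates are assumptions.
import cv2

ref = cv2.imread("frame_before_load.png", cv2.IMREAD_GRAYSCALE)
cur = cv2.imread("frame_after_load.png", cv2.IMREAD_GRAYSCALE)

# Reference block around a tracked target in the unloaded frame.
y0, x0, size = 200, 300, 32
block = ref[y0:y0 + size, x0:x0 + size]

# Search for the best-matching position in the loaded frame (normalised cross-correlation).
score = cv2.matchTemplate(cur, block, cv2.TM_CCOEFF_NORMED)
_, _, _, (x1, y1) = cv2.minMaxLoc(score)

# Pixel displacement of the block between the two frames; a calibration factor
# (mm per pixel) converts it for comparison with the physical transducers.
dy, dx = y1 - y0, x1 - x0
print(f"displacement: dx={dx} px, dy={dy} px")
```

Tracking several such blocks across all frames of a load test yields displacement curves that can be plotted directly against the transducer readings.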
Resumo:
The mobile IT era is here, it is still growing and expanding at a steady rate and, most of all, it is entertaining. Mobile devices are used for entertainment, whether social through the so-called social networks, or private through web browsing, video watching or gaming. Youngsters make heavy use of these devices, and even small children show impressive adaptability and skill. However not much attention is directed towards education, especially in the case of young children. Too much time is usually spent in games which only purpose is to keep children entertained, time that could be put to better use such as developing elementary geometric notions. Taking advantage of this pocket computer scenario, it is proposed an application geared towards small children in the 6 – 9 age group that allows them to consolidate knowledge regarding geometric shapes, forming a stepping stone that leads to some fundamental mathematical knowledge to be exercised later on. To achieve this goal, the application will detect simple geometric shapes like squares, circles and triangles using the device’s camera. The novelty of this application will be a core real-time detection system designed and developed from the ground up for mobile devices, taking into account their characteristic limitations such as reduced processing power, memory and battery. User feedback was be gathered, aggregated and studied to assess the educational factor of the application.
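The shape-classification idea can be sketched on a desktop with OpenCV (assuming version 4) as below. The actual application uses a custom real-time detector written for mobile devices, so this is only an illustration of the underlying geometric reasoning (contour approximation and vertex counting), with placeholder file names.

```python
# Desktop prototype sketch of square/circle/triangle classification; not the mobile detector.
import cv2

frame = cv2.imread("camera_frame.png")           # placeholder for one camera frame
gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
edges = cv2.Canny(cv2.GaussianBlur(gray, (5, 5), 0), 50, 150)

contours, _ = cv2.findContours(edges, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
for c in contours:
    if cv2.contourArea(c) < 500:                 # ignore small noise contours
        continue
    # Approximate the contour with a polygon and count its vertices.
    approx = cv2.approxPolyDP(c, 0.04 * cv2.arcLength(c, True), True)
    if len(approx) == 3:
        label = "triangle"
    elif len(approx) == 4:
        label = "square"
    else:
        label = "circle"                         # many vertices -> roughly circular
    x, y, w, h = cv2.boundingRect(approx)
    cv2.putText(frame, label, (x, y - 5), cv2.FONT_HERSHEY_SIMPLEX, 0.6, (0, 255, 0), 2)
cv2.imwrite("labeled_frame.png", frame)
```

On a phone, the same logic has to run per camera frame within tight CPU, memory and battery budgets, which is why the thesis builds a dedicated real-time detector rather than reusing a desktop pipeline.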
Abstract:
This paper studies the effects of monetary policy on mutual fund risk taking using a sample of Portuguese fixed-income mutual funds in the 2000-2012 period. First, I estimate time-varying measures of risk exposure (betas) for the individual funds, for the benchmark portfolio, and for a representative equally-weighted portfolio, through 24-month rolling regressions of a two-factor model with two systematic risk factors: interest rate risk (TERM) and default risk (DEF). In the second phase, using the estimated betas, I try to understand what portion of the risk exposure is in excess of the benchmark (active risk) and how it relates to monetary policy proxies (the one-month rate, the Taylor residual, the real rate, and the first principal component of a cross-section of government yields and rates). Using this methodology, I provide empirical evidence that Portuguese fixed-income mutual funds respond to accommodative monetary policy by significantly increasing their exposure, in excess of their benchmarks, to default risk and, to a lesser extent, to interest rate risk. I also find that the increase in funds' risk exposure to gain a boost in return (search-for-yield) is more pronounced following the 2007-2009 global financial crisis, indicating that the current historically low interest rates may incentivize excessive risk taking. My results suggest that monetary policy affects the risk appetite of non-bank financial intermediaries.
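A sketch of the 24-month rolling two-factor regressions is shown below, assuming monthly excess returns and factor series stored in a CSV file. The file, column names and exact estimation details are placeholders and may differ from the paper's.

```python
# Hedged sketch of rolling two-factor (TERM, DEF) regressions; data layout is assumed.
import pandas as pd
import statsmodels.api as sm

data = pd.read_csv("fund_and_factors.csv", parse_dates=["date"], index_col="date")
window = 24                                       # 24-month rolling window

betas = []
for end in range(window, len(data) + 1):
    sub = data.iloc[end - window:end]
    X = sm.add_constant(sub[["TERM", "DEF"]])     # the two systematic risk factors
    res = sm.OLS(sub["fund_excess_return"], X).fit()
    betas.append(res.params.rename(sub.index[-1]))

betas = pd.DataFrame(betas)                       # time-varying alpha, beta_TERM, beta_DEF
# Active risk: subtract the benchmark's betas, estimated the same way, and relate the
# difference to the monetary policy proxies in a second-stage regression.
```

Repeating the loop for each fund, for the benchmark and for the equally-weighted portfolio yields the panels of betas on which the second-stage analysis in the paper is based.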
Abstract:
This paper attempts to prove that in the years 1735 to 1755 Venice was the birthplace and cradle of Modern architectural theory, generating a major crisis in classical architecture, traditionally based on the Vitruvian assumption that it imitates early wooden structures in stone or in marble. According to its rationalist critics, such as the Venetian Observant Franciscan friar and architectural theorist Carlo Lodoli (1690-1761) and his nineteenth-century followers, classical architecture is singularly deceptive and not true to the nature of materials, in other words, dishonest and fallacious. This questioning did not emanate from practising architects, but from Lodoli himself – a philosopher and educator of the Venetian patriciate – who had not been trained as an architect. The roots of this crisis lay in a new approach to architecture stemming from the new rationalist philosophy of the Enlightenment age, with its emphasis on reason and universal criticism.
Abstract:
Eyes and masks are prevalent in the works of the contemporary Chinese painter Zeng Fanzhi (b. 1964), as a metaphor for the power game that pits individuals against the social and political apparatus. His work The Last Supper, after Leonardo da Vinci, is a striking example of this preoccupation. This essay examines the artist's use of this Western representation of a moral crisis (a betrayal that leads to the death of Christ) to express the dystopia that marks contemporary China. Zeng's interpretation of da Vinci's work shows a deep understanding of its Renaissance meaning as a conflict between earthly and spiritual power, onto which he superimposes the function of the banquet in Chinese culture as a site of political struggle. A detached nihilism pervades this work, much like Søren Kierkegaard's metaphorical interpretation of Plato's banquet, In Vino Veritas.
Abstract:
This research examines whether there were significant differences in brand engagement and in electronic word-of-mouth (e-WOM) referral intention through Facebook between Generation X and Generation Y (also called millennials). It also examines whether there are differences in the motivations that drive these generations to interact with brands through Facebook. Results indicated that Generation Y members consumed more content on brands' Facebook pages than Generation X, were more likely to have an e-WOM referral intention, and were more driven by brand affiliation and opportunity seeking. Finally, currently employed individuals were found to contribute more content than students. This study fills a gap in the literature by addressing how marketing professionals should market their brand and interact and engage with their customers based on customers' generational cohort.