14 results for hedonic property price analysis
in QSpace: Queen's University - Canada
Abstract:
This dissertation investigates the question: has financial speculation contributed to global food price volatility since the mid-2000s? I problematize the mainstream academic literature on the 2008-2011 food price spikes as being dominated by neoclassical economic perspectives and offer new conceptual and empirical insights into the relationship between financial speculation and food. The dissertation is presented as three journal-style manuscripts. Manuscript one uses circuits of capital to conceptualize the link between financial speculators in the global north and populations in the global south. Manuscript two argues that what makes commodity index speculation (also known as ‘index funds’ or index swaps) novel is that it provides institutional investors with what Clapp (2014) calls “financial distance” from the biopolitical implications of food speculation. Finally, manuscript three combines Gramsci’s concepts of hegemony and ‘the intellectual’ with the concept of performativity to investigate the ideological role that public intellectuals and the rhetorical actor ‘the market’ play in the proliferation and governance of commodity index speculation. The first two manuscripts take a mixed-methods empirical approach, combining regression analysis with discourse analysis, while the third relies on interview data and discourse analysis. The findings show that financial speculation by index swap dealers and hedge funds did indeed contribute significantly to the price volatility of food commodities between June 2006 and December 2014. The results from the interview data affirm these findings. The discourse analysis of the interview data shows that public intellectuals and rhetorical characters such as ‘the market’ play powerful roles in shaping how food speculation is promoted, regulated and normalized. The significance of the findings is threefold. First, the empirical findings show that a link does exist between financial speculation and food price volatility. Second, the findings indicate that the post-2008 CFTC and Dodd-Frank reforms are unlikely to reduce financial speculation or the price volatility that it causes. Third, the findings suggest that institutional investors (such as pension funds) should think critically about how they use commodity index speculation as a way of generating financial returns.
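The abstract does not specify the regression design; as a rough, hedged illustration of the kind of time-series regression such a test could use, the sketch below regresses a realized-volatility proxy on speculator positions. The file name, column names, and Newey-West lag choice are assumptions, not the dissertation's specification.

```python
# Illustrative sketch only: regress realized food-price volatility on
# index-speculator positions. Column names and data file are hypothetical.
import pandas as pd
import statsmodels.api as sm

df = pd.read_csv("food_prices_and_positions.csv", parse_dates=["date"])
df = df.set_index("date").loc["2006-06":"2014-12"]

# Realized volatility proxy: rolling standard deviation of weekly log returns.
df["volatility"] = df["log_return"].rolling(12).std()

X = sm.add_constant(df[["index_swap_net_long", "hedge_fund_net_long"]])
model = sm.OLS(df["volatility"], X, missing="drop").fit(
    cov_type="HAC", cov_kwds={"maxlags": 4})  # Newey-West errors for serial correlation
print(model.summary())
```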
Abstract:
This thesis uses models of firm heterogeneity to conduct empirical analyses in economic history and agricultural economics. In Chapter 2, a theoretical model of firm heterogeneity is used to derive a statistic that summarizes the welfare gains from the introduction of a new technology. The empirical application considers the use of mechanical steam power in the Canadian manufacturing sector during the late nineteenth century. I exploit exogenous variation in geography to estimate several parameters of the model. My results indicate that the use of steam power resulted in a 15.1 percent increase in firm-level productivity and a 3.0-5.2 percent increase in aggregate welfare. Chapter 3 considers various policy alternatives to price ceiling legislation in the market for production quotas in the dairy farming sector in Quebec. I develop a dynamic model of the demand for quotas with farmers who are heterogeneous in their marginal cost of milk production. The econometric analysis uses farm-level data and estimates a parameter of the theoretical model that is required for the counterfactual experiments. The results indicate that the price of quotas could be reduced to the ceiling price through a 4.16 percent expansion of the aggregate supply of quotas, or through moderate trade liberalization of Canadian dairy products. In Chapter 4, I study the relationship between farm-level productivity and participation in the Commercial Export Milk (CEM) program. I use a difference-in-differences research design with inverse propensity weights to test for causality between participation in the CEM program and total factor productivity (TFP). I find a positive correlation between participation in the CEM program and TFP; however, I find no statistically significant evidence that the CEM program affected TFP.
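As a hedged sketch of the Chapter 4 design (difference-in-differences with inverse propensity weights), the snippet below shows one common implementation; the farm-level variable names and data file are assumptions, not the thesis's actual code or data.

```python
# Hedged sketch: difference-in-differences with inverse propensity weights.
# Farm-level variable names and the data file are hypothetical.
import pandas as pd
import statsmodels.api as sm
from sklearn.linear_model import LogisticRegression

farms = pd.read_csv("farm_panel.csv")

# 1) Propensity of CEM participation from pre-treatment covariates.
covars = ["herd_size", "capital_stock", "pre_tfp"]
ps = LogisticRegression(max_iter=1000).fit(
    farms[covars], farms["cem_participant"]).predict_proba(farms[covars])[:, 1]
farms["ipw"] = farms["cem_participant"] / ps + (1 - farms["cem_participant"]) / (1 - ps)

# 2) Weighted difference-in-differences regression on log TFP.
farms["did"] = farms["cem_participant"] * farms["post_period"]
X = sm.add_constant(farms[["cem_participant", "post_period", "did"]])
did = sm.WLS(farms["log_tfp"], X, weights=farms["ipw"]).fit(cov_type="HC1")
print(did.params["did"])  # DiD estimate of the CEM effect on TFP
```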
Abstract:
Pipelines are one of the safest means of transporting crude oil, but they are not spill-free. This is of concern in North America, due to the large volumes of crude oil shipped by Canadian producers and the lengthy network of pipelines. Each pipeline crosses many rivers that support a wide variety of human activities and rich aquatic life. However, there is a knowledge gap on the risks of contamination of river beds due to oil spills. This thesis addresses this knowledge gap by focusing on the mechanisms that transport water (and contaminants) from the free surface flow to the bed sediments, and vice versa. The work focuses on gravel rivers, in which bed sediments are sufficiently permeable that pressure gradients, caused by the interactions of the flow with topographic elements (gravel bars) or by changes in direction, induce exchanges of water between the free surface flow and the bed, known as hyporheic flows. The objectives of the thesis are: to present a new method to visualize and quantify hyporheic flows in laboratory experiments; and to conduct a novel series of experiments on hyporheic flow induced by a gravel bar under different free surface flows. The new method to quantify hyporheic flows rests on injections of a solution of dye and water. The method yielded accurate flow lines and reasonable estimates of the hyporheic flow velocities. The present series of experiments was carried out in an 11 m long, 0.39 m wide, and 0.41 m deep tilting flume. The gravel had a mean particle size of 7.7 mm. Different free surface flows were imposed by changing the flume slope and flow depth. Measured hyporheic flows were turbulent. Smaller free surface flow depths resulted in stronger hyporheic flows (higher velocities and deeper dye penetration into the sediment). A significant finding is that different free surface flows (different velocities, Reynolds numbers, etc.) produce similar hyporheic flows as long as the downstream hydraulic gradients are similar. This suggests that, for a specified bar geometry, the characteristics of the hyporheic flows depend on the downstream hydraulic gradients, and not, or only minimally, on the internal dynamics of the free surface flow.
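For orientation, the downstream hydraulic gradient that the experiments isolate is the standard textbook quantity (this definition is not quoted from the thesis):

```latex
i = \frac{\Delta h}{\Delta x}, \qquad h = z + \frac{p}{\rho g}
```

where h is the piezometric head, z the elevation, p the pressure, \rho the water density, g gravitational acceleration, and \Delta x the downstream distance over which the head drop \Delta h occurs.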
Abstract:
This thesis examines two ongoing development projects that received financial support from international development organizations, and an alternative mining tax proposed in the academic literature. Chapter 2 explores the impact of the commoditization of coffee on its export price in Ethiopia. The first part of the chapter traces how Ethiopia’s current coffee trade system and commoditization came to be. Using regression analysis, the second part tests and confirms the hypothesis that commoditization has led to a reduction in the coffee export price. Chapter 3 conducts a cost-benefit analysis of a controversial liquefied natural gas export project in Peru that sought to export one-third of the country’s proven natural gas reserves. While the country can receive royalty and corporate income tax revenue in the short and medium term, these benefits are dwarfed by the future costs of paying for alternative energy after the gas is depleted. The conclusion is robust across a variety of future energy-price and energy-demand scenarios. Chapter 4 quantifies through simulation the economic distortions of two common mining taxes, the royalty and the ad valorem tax, vis-à-vis the resource rent tax. The latter is put forward as a better mining tax instrument on account of its non-distortionary nature. The rent tax, however, necessitates additional administrative burdens and induces tax-avoidance behavior, both leading to a net loss of tax revenue. By quantifying the distortions of the royalty and the ad valorem tax, one can establish the maximum loss that can be incurred under the rent tax. Simulation results indicate that the distortion of the ad valorem tax is quite modest. If implemented, the rent tax is likely to result in a greater loss. While the subject matters may appear diverse, they are united by one theme: these initiatives were endorsed and supported by authorities and development agencies with the aim of furthering economic development and efficiency, but they are unlikely to fulfill that goal. Lessons for international development can be learnt from successful stories as well as from unsuccessful ones.
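The Chapter 3 comparison of near-term royalty and tax receipts against the long-run cost of replacement energy is, at its core, a net-present-value calculation; a minimal sketch with placeholder figures (not the study's numbers) is:

```python
# Minimal NPV sketch for the gas-export cost-benefit logic; all figures are
# placeholders, not the values used in the thesis.
def npv(cash_flows, rate):
    """Discount a list of (year, amount) pairs to present value."""
    return sum(amount / (1 + rate) ** year for year, amount in cash_flows)

benefits = [(t, 450.0) for t in range(1, 21)]    # royalties + taxes, years 1-20
costs = [(t, -900.0) for t in range(21, 41)]     # pricier substitute energy after depletion
print(round(npv(benefits + costs, 0.05), 1))     # net effect at a 5% discount rate
```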
Abstract:
This work outlines the theoretical advantages of multivariate methods for biomechanical data, validates the proposed methods, and outlines new clinical findings relating to knee osteoarthritis that were made possible by this approach. The new techniques were based on existing multivariate approaches, Partial Least Squares (PLS) and Non-negative Matrix Factorization (NMF), and were validated using existing data sets. The new techniques developed, PCA-PLS-LDA (Principal Component Analysis – Partial Least Squares – Linear Discriminant Analysis), PCA-PLS-MLR (Principal Component Analysis – Partial Least Squares – Multiple Linear Regression) and Waveform Similarity (based on NMF), were designed to address the challenging characteristics of biomechanical data: variability and correlation. As a result, these new structure-seeking techniques revealed new clinical findings. The first clinical finding relates to the relationship between pain, radiographic severity and mechanics. Simultaneous analysis of pain and radiographic severity outcomes, a first in biomechanics, revealed that the knee adduction moment’s relationship to radiographic features is mediated by pain in subjects with moderate osteoarthritis. The second clinical finding quantified the importance of neuromuscular patterns in brace effectiveness for patients with knee osteoarthritis. I found that brace effectiveness was more related to the patient’s unbraced neuromuscular patterns than to mechanics, and that these neuromuscular patterns were more complicated than simply increased overall muscle activity, as previously thought.
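As an illustration of the PCA-PLS-LDA chain described above (not the authors' implementation), the steps might be assembled with scikit-learn as follows, using placeholder waveform data:

```python
# Hedged sketch of a PCA -> PLS -> LDA chain for classifying gait waveforms.
# Data shapes and class labels are placeholders, not the thesis data.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.cross_decomposition import PLSRegression
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

rng = np.random.default_rng(0)
X = rng.normal(size=(80, 101))          # 80 subjects x 101 time-normalized samples
y = rng.integers(0, 2, size=80)         # 0 = asymptomatic, 1 = moderate OA (placeholder)

scores_pca = PCA(n_components=10).fit_transform(X)      # reduce correlated waveform features
pls = PLSRegression(n_components=3).fit(scores_pca, y)  # supervised projection (PLS-DA style)
scores_pls = pls.transform(scores_pca)
lda = LinearDiscriminantAnalysis().fit(scores_pls, y)   # discriminate groups in PLS space
print(lda.score(scores_pls, y))
```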
Abstract:
For the SNO+ neutrinoless double beta decay search, various backgrounds, ranging from impurities present naturally to those produced cosmogenically, must be understood and reduced. Cosmogenic backgrounds are particularly difficult to reduce as they are continually regenerated while exposed to high-energy cosmic rays. To reduce these cosmogenics as much as possible, the tellurium used for the neutrinoless double beta decay search will be purified underground. An analysis of the purification factors achievable for insoluble cosmogenic impurities found a reduction factor of >20.4 at 50% C.L. During the purification process the tellurium will come into contact with ultra-pure water and nitric acid. These liquids both carry some cosmogenic impurities that could potentially be transferred to the tellurium. A conservative limit is set at <18 events in the SNO+ region of interest (ROI) per year as a result of contaminants from these liquids. In addition to cosmogenics brought underground, muons can produce radioactive isotopes while the tellurium is stored underground. A study of the rate at which muons produce these backgrounds finds an additional 1 event per year. In order to load the tellurium into the detector, it will be combined with 1,2-butanediol to form an organometallic complex. The complex was found to have minimal effect on the SNO+ acrylic vessel over 154 years.
Abstract:
We present an extensive photometric catalog for 548 CALIFA galaxies observed as of the summer of 2015. CALIFA currently lacks photometry matching the scale and diversity of its spectroscopy; this work is intended to meet all photometric needs for CALIFA galaxies while also identifying best photometric practices for upcoming integral field spectroscopy surveys such as SAMI and MaNGA. This catalog comprises gri surface brightness profiles derived from Sloan Digital Sky Survey (SDSS) imaging, a variety of non-parametric quantities extracted from these profiles, and parametric models fitted to the i-band profiles (1D) and original galaxy images (2D). To complement our photometric analysis, we contrast the relative performance of our 1D and 2D modelling approaches. The ability of each measurement to characterize the global properties of galaxies is quantitatively assessed in the context of constructing the tightest scaling relations. Where possible, we compare our photometry with existing photometrically or spectroscopically obtained measurements from the literature. Close agreement is found with Walcher et al. (2014), the current source of basic photometry and classifications of CALIFA galaxies, while comparisons with spectroscopically derived quantities reveal the effect of CALIFA's limited field of view compared with broadband imaging surveys such as the SDSS. The colour-magnitude diagram, star formation main sequence, and Tully-Fisher relation of CALIFA galaxies are studied as a small example of the investigations possible with this rich catalog. We conclude with a discussion of points of concern for ongoing integral field spectroscopy surveys and directions for future expansion and exploitation of this work.
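The abstract does not name the parametric form of the 1D fits; a common choice for i-band surface brightness profiles is a Sérsic model, so the hedged sketch below fits one to synthetic data purely for illustration (the approximation for b_n and all data values are assumptions):

```python
# Illustrative 1D Sersic fit to a surface-brightness profile; the catalog's
# actual parametric models are not specified here, so treat this as a sketch.
import numpy as np
from scipy.optimize import curve_fit

def sersic(r, I_e, r_e, n):
    b_n = 2.0 * n - 1.0 / 3.0              # common approximation for the Sersic b_n
    return I_e * np.exp(-b_n * ((r / r_e) ** (1.0 / n) - 1.0))

r = np.linspace(0.5, 30.0, 60)             # radius in arcsec (synthetic)
I_true = sersic(r, 100.0, 8.0, 2.5)
I_obs = I_true + np.random.default_rng(1).normal(0, 2.0, r.size)

params, _ = curve_fit(sersic, r, I_obs, p0=[80.0, 5.0, 1.5])
print(dict(zip(["I_e", "r_e", "n"], params.round(2))))
```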
Abstract:
The first objective of this research was to develop closed-form and numerical probabilistic methods of analysis that can be applied to otherwise conventional analyses of unreinforced and geosynthetic reinforced slopes and walls. These probabilistic methods explicitly include the influence of random variability of soil and reinforcement properties, spatial variability of the soil, and cross-correlation between soil input parameters on the probability of failure. The quantitative impact of simultaneously considering the influence of random and/or spatial variability in soil properties in combination with cross-correlation in soil properties is investigated for the first time in the research literature. Depending on the magnitude of these statistical descriptors, margins of safety based on conventional notions of safety may be very different from margins of safety expressed in terms of probability of failure (or reliability index). The thesis work also shows that intuitive notions of margin of safety using the conventional factor of safety and the probability of failure can be brought into alignment when cross-correlation between soil properties is considered in a rigorous manner. The second objective of this thesis work was to develop a general closed-form solution to compute the true probability of failure (or reliability index) of a simple linear limit state function with one load term and one resistance term, expressed first in general probabilistic terms and then migrated to a LRFD format for the purpose of LRFD calibration. The formulation considers contributions to the probability of failure due to model type, uncertainty in bias values, bias dependencies, uncertainty in estimates of nominal values for correlated and uncorrelated load and resistance terms, and the average margin of safety expressed as the operational factor of safety (OFS). Bias is defined as the ratio of measured to predicted value. Parametric analyses were carried out to show that ignoring possible correlations between random variables can lead to conservative (safe) values of the resistance factor in some cases and to non-conservative (unsafe) values in others. Example LRFD calibrations were carried out using different load and resistance models for the pullout internal stability limit state of steel strip and geosynthetic reinforced soil walls, together with matching bias data reported in the literature.
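For a linear limit state with one load term Q and one resistance term R, the closed-form reliability index underlying this kind of formulation takes the standard textbook form (shown for orientation; the thesis extends it with bias and nominal-value terms):

```latex
g = R - Q, \qquad
\beta = \frac{\mu_R - \mu_Q}{\sqrt{\sigma_R^2 + \sigma_Q^2 - 2\rho_{RQ}\,\sigma_R \sigma_Q}}, \qquad
P_f = \Phi(-\beta)
```

where \mu and \sigma denote means and standard deviations, \rho_{RQ} is the load-resistance correlation, and \Phi is the standard normal CDF; neglecting a nonzero \rho_{RQ} mis-states the variance of g, which is why ignoring correlations can be conservative in some cases and unsafe in others.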
Abstract:
This research is an examination of the ways online abuse functions in certain online spaces. By analyzing text-based online abuse against women who are content creators, this research maps how aspects of offline violence against women extend online. The research presents three explorations of how online abuse against women functions. Chapter two considers, as a case study, what online abuse against women looks like on Twitter. This chapter contends that online abuse can be understood as an unintentional use of Twitter’s design. Chapter three focuses specifically on the textual descriptions of sexual violence that women who are journalists receive online. Chapter four analyzes Gamergate, an online movement that specifically sought to organize online abuse towards women. Chapter five concludes by meditating on the need to look at a bigger picture, one that includes cultural shifts that dismantle the normalization of violence against women both on- and offline.
Abstract:
This project is about Fast and Female, a community-based girls’ sport organization that focuses on empowering girls through sport. In this thesis I produce a discourse analysis from interviews with six expert sportswomen and a textual analysis of the organization’s online content, including its social media pages. I ground my analysis in poststructural theory as explained by Chris Weedon (1997) and in literature that helps contextualize and better define empowerment (Collins, 2000; Cruikshank, 1999; Hains, 2012; Sharma, 2008; Simon, 1994) and neoliberalism (Silk & Andrews, 2012). My analysis suggests that Fast and Female develops a community through online and in-person interaction. This community is focused on girls’ sport and empowerment but, because the organization is situated in a neoliberal context, organizers must take extra care if the organization is to develop a girls’ sport culture that is truly representative of the desires and needs of the participants rather than of implicit neoliberal values. It is important to note that Fast and Female does not identify as a feminist organization. Through this thesis I argue that Fast and Female teaches girls that sport is empowering; but, while the organization draws on “empowerment,” a term often used by feminists, it promotes a notion of empowerment that teaches female athletes how to exist within current mainstream and sporting cultures, rather than encouraging them to be empowered female citizens who learn to question and challenge social inequity. I conclude my thesis with suggestions for how Fast and Female might encourage empowerment in spite of the current neoliberal situation. I also offer a goal-setting workbook that I developed to encourage girls to set goals while thinking about their communities rather than just themselves.
Abstract:
Aberrant behavior of biological signaling pathways has been implicated in diseases such as cancer. Therapies have been developed to target proteins in these networks in the hope of curing the illness or bringing about remission. However, identifying targets for drug inhibition that exhibit a good therapeutic index has proven challenging, since signaling pathways have a large number of components and many interconnections such as feedback, crosstalk, and divergence. Unfortunately, characteristics of these pathways such as redundancy, feedback, and drug resistance reduce the efficacy of single-target therapy and necessitate the use of more than one drug to target multiple nodes in the system. However, choosing multiple targets with a high therapeutic index poses further challenges, since the combinatorial search space can be enormous. To cope with the complexity of these systems, computational tools such as ordinary differential equations have been used to successfully model some of these pathways. Regrettably, building these models requires experimentally measured initial concentrations of the components and reaction rates, which are difficult to obtain and, for very large networks, may not yet be available. Fortunately, other modeling tools exist that, though not as powerful as ordinary differential equations, do not need rates and initial conditions to model signaling pathways. Petri nets and graph theory are among these tools. In this thesis, we introduce a methodology based on Petri net siphon analysis and graph network centrality measures for identifying prospective targets for single and multiple drug therapies. In this methodology, potential targets are first identified in the Petri net model of a signaling pathway using siphon analysis. Then, graph-theoretic centrality measures are employed to prioritize the candidate targets. An algorithm is also developed to check whether or not the candidate targets are able to disable the intended outputs in the graph model of the system. We implement structural and dynamical models of ErbB1-Ras-MAPK pathways and use them to assess and evaluate this methodology. The identified drug targets, single and multiple, correspond to clinically relevant drugs. Overall, the results suggest that this methodology, using siphons and centrality measures, shows promise in identifying and ranking drug targets. Since the methodology uses only the structural information of the signaling pathways and does not need initial conditions or dynamical rates, it can be applied to larger networks.
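To make the centrality-ranking step concrete, here is a toy, hedged sketch using networkx; the pathway edges, candidate list, and combined score are invented for illustration and are not the thesis's ErbB1-Ras-MAPK model:

```python
# Toy sketch of ranking candidate drug targets by graph centrality; the edges
# below are illustrative, not the thesis's ErbB1-Ras-MAPK model.
import networkx as nx

pathway = nx.DiGraph([
    ("EGF", "ErbB1"), ("ErbB1", "Ras"), ("Ras", "Raf"),
    ("Raf", "MEK"), ("MEK", "ERK"), ("ErbB1", "PI3K"), ("PI3K", "AKT"),
])

candidates = ["ErbB1", "Ras", "MEK", "PI3K"]   # e.g. nodes surviving siphon analysis
betweenness = nx.betweenness_centrality(pathway)
degree = nx.degree_centrality(pathway)

# Rank candidates by a simple combined centrality score.
ranked = sorted(candidates,
                key=lambda n: betweenness[n] + degree[n], reverse=True)
print(ranked)
```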
Abstract:
Artisanal mining is a global phenomenon that poses threats to environmental health and safety. Ambiguities in the manner in which ore-processing facilities operate hinder the mining capacity of artisanal miners in Ghana. These problems are reviewed with respect to current socio-economic, health and safety, and environmental conditions, and the use of rudimentary technologies that limits fair-trade deals for miners. This research sought to use an established, data-driven, geographic information system (GIS)-based spatial analysis approach to locate a centralized processing facility within the Wassa Amenfi-Prestea Mining Area (WAPMA) in the Western region of Ghana. A spatial analysis technique that utilizes ModelBuilder within the ArcGIS geoprocessing environment was used to systematically and simultaneously analyze a geographical dataset of selected criteria through suitability modeling. The spatial overlay analysis methodology and the multi-criteria decision analysis approach were selected to identify the most suitable locations for siting a processing facility. For optimal site selection, seven major criteria were considered: proximity to settlements, water resources, artisanal mining sites, roads, railways, tectonic zones, and slopes. Site characterization and environmental considerations incorporated identified constraints, such as proximity to large-scale mines, forest reserves, and state lands. The analysis was limited to criteria that were selected as relevant to the area under investigation. Saaty’s analytic hierarchy process was utilized to derive relative importance weights for the criteria, and a weighted linear combination technique was then applied to combine the factors and determine the degree of potential site suitability. The final map output indicates the potential sites identified for the establishment of a facility centre. The results obtained provide an intuitive picture of areas suitable for consideration.
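As a hedged illustration of the analytic hierarchy process weighting followed by a weighted linear combination (the pairwise judgments and suitability scores below are placeholders; the study itself used seven criteria and ArcGIS tooling):

```python
# Sketch of AHP criterion weighting followed by a weighted linear combination.
# The 3x3 pairwise comparison matrix is a placeholder (the study used 7 criteria).
import numpy as np

pairwise = np.array([      # criterion i vs j on Saaty's 1-9 scale (illustrative)
    [1.0, 3.0, 5.0],
    [1/3, 1.0, 2.0],
    [1/5, 1/2, 1.0],
])

eigvals, eigvecs = np.linalg.eig(pairwise)
principal = np.real(eigvecs[:, np.argmax(np.real(eigvals))])
weights = principal / principal.sum()           # AHP priority weights

# Weighted linear combination of normalized suitability scores for one grid cell.
scores = np.array([0.8, 0.6, 0.4])              # per-criterion suitability in [0, 1]
print(weights, float(weights @ scores))         # overall suitability of that cell
```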
Abstract:
Modern software applications are becoming more dependent on database management systems (DBMSs). DBMSs are usually used as black boxes by software developers. For example, Object-Relational Mapping (ORM) is one of the most popular database abstraction approaches that developers use nowadays. Using ORM, objects in Object-Oriented languages are mapped to records in the database, and object manipulations are automatically translated to SQL queries. As a result of such conceptual abstraction, developers do not need deep knowledge of databases; however, all too often this abstraction leads to inefficient and incorrect database access code. Thus, this thesis proposes a series of approaches to improve the performance of database-centric software applications that are implemented using ORM. Our approaches focus on troubleshooting and detecting inefficient database accesses (i.e., performance problems) in the source code, and we rank the detected problems based on their severity. We first conduct an empirical study on the maintenance of ORM code in both open source and industrial applications. We find that ORM performance-related configurations are rarely tuned in practice, and there is a need for tools that can help improve/tune the performance of ORM-based applications. Thus, we propose approaches along two dimensions to help developers improve the performance of ORM-based applications: 1) helping developers write more performant ORM code; and 2) helping developers configure ORM configurations. To provide tooling support to developers, we first propose static analysis approaches to detect performance anti-patterns in the source code. We automatically rank the detected anti-pattern instances according to their performance impact. Our study finds that by resolving the detected anti-patterns, application performance can be improved by 34% on average. We then discuss our experience and lessons learned when integrating our anti-pattern detection tool into industrial practice. We hope our experience can help improve the industrial adoption of future research tools. However, as static analysis approaches are prone to false positives and lack runtime information, we also propose dynamic analysis approaches to further help developers improve the performance of their database access code. We propose automated approaches to detect redundant data access anti-patterns in the database access code, and our study finds that resolving such redundant data access anti-patterns can improve application performance by an average of 17%. Finally, we propose an automated approach to tune performance-related ORM configurations using both static and dynamic analysis. Our study shows that our approach can help improve application throughput by 27% to 138%. Through case studies on real-world applications, we show that all of our proposed approaches can provide valuable support to developers and help improve application performance significantly.
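As a hedged, highly simplified illustration of the kind of static check involved (flagging query-like calls inside loops, a classic source of redundant database accesses), the sketch below uses Python's standard ast module; it is not the thesis's detector, which targets real ORM frameworks, and the "query-like" method names are assumptions:

```python
# Toy static check: flag query-like calls inside loops as potential
# inefficient-data-access anti-patterns. Illustration only.
import ast

QUERY_METHODS = {"get", "filter", "all", "execute"}   # assumed "query-like" names

def find_queries_in_loops(source: str):
    findings = []
    tree = ast.parse(source)
    for loop in (n for n in ast.walk(tree) if isinstance(n, (ast.For, ast.While))):
        for call in (n for n in ast.walk(loop) if isinstance(n, ast.Call)):
            if isinstance(call.func, ast.Attribute) and call.func.attr in QUERY_METHODS:
                findings.append((call.lineno, call.func.attr))
    return findings

sample = """
for user_id in ids:
    user = session.query(User).get(user_id)   # one query per loop iteration
    print(user.name)
"""
print(find_queries_in_loops(sample))   # e.g. [(3, 'get')]
```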
Abstract:
An investigation into karst hazard in southern Ontario has been undertaken with the aim of developing predictive karst models for this region. Such models are not currently feasible because of a lack of sufficient karst data, though this is not entirely due to a lack of karst features. Geophysical data were collected at Lake on the Mountain, Ontario, as part of this karst investigation, in order to validate the long-standing hypothesis that Lake on the Mountain was formed by a sinkhole collapse. Sub-bottom acoustic profiling data were collected in order to image the lake-bottom sediments and bedrock. Vertical bedrock features, interpreted as solutionally enlarged fractures, were taken as evidence for karst processes on the lake bottom. Additionally, the bedrock topography shows a narrower and more elongated basin than was previously identified, and this basin lies parallel to a mapped fault system in the area. This suggests that Lake on the Mountain formed over a fault zone, which also supports the sinkhole hypothesis, as a fault zone would provide groundwater pathways for karst dissolution to occur. Previous sediment cores suggest that Lake on the Mountain would have formed at some point during the Wisconsinan glaciation, with glacial meltwater and glacial loading as potential contributing factors to sinkhole development. A probabilistic karst model for the state of Kentucky, USA, has been generated using the Weights of Evidence method. This model is presented as an example of the predictive capabilities of this kind of data-driven modelling technique and to show how such models could be applied to karst in Ontario. The model was able to classify 70% of the validation dataset correctly while minimizing false positive identifications. This is moderately successful and could stand to be improved. Finally, suggestions for improving the current karst model of southern Ontario are offered, with the goals of increasing investigation into karst in Ontario and streamlining the reporting system for sinkholes, caves, and other karst features so as to improve the current Ontario karst database.
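For orientation, the Weights of Evidence method scores each evidence layer from simple cell counts; a hedged sketch with invented counts (not the Kentucky data) is:

```python
# Weights of Evidence sketch: W+, W-, and contrast for one evidence layer.
# The counts below are invented for illustration.
import math

def weights_of_evidence(n_bd, n_b, n_d, n_total):
    """n_bd: cells with evidence AND karst; n_b: cells with evidence;
    n_d: cells with karst; n_total: all cells."""
    p_b_given_d = n_bd / n_d
    p_b_given_not_d = (n_b - n_bd) / (n_total - n_d)
    w_plus = math.log(p_b_given_d / p_b_given_not_d)
    w_minus = math.log((1 - p_b_given_d) / (1 - p_b_given_not_d))
    return w_plus, w_minus, w_plus - w_minus   # contrast C = W+ - W-

print(weights_of_evidence(n_bd=60, n_b=300, n_d=100, n_total=10_000))
```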