435 results for Kingston


Relevance: 10.00%

Publisher:

Abstract:

Electron beam lithography (EBL) and focused ion beam (FIB) methods were developed in house to fabricate nanocrystalline nickel micro/nanopillars so as to compare the effect of fabrication method on plastic yielding. EBL was used to fabricate 3 μm and 5 μm thick poly(methyl methacrylate) patterned substrates in which nickel pillars were grown by electroplating, with height-to-diameter aspect ratios from 2:1 to 5:1. FIB milling was used to reduce larger grown pillars to sizes similar to the EBL-grown pillars. X-ray diffraction, electron back-scatter diffraction, scanning electron microscopy, and FIB imaging were used to characterize the nickel pillars. The measured grain size of the pillars was 91±23 nm, with a strong <110> and a weaker <111> crystallographic texture along the growth direction. Load-controlled compression tests were conducted using a MicroMaterials nano-indenter equipped with a 10 μm flat punch, at constant rates from 0.0015 to 0.03 mN/s on EBL-grown pillars and at 0.0015 and 0.015 mN/s on FIB-milled pillars. The measured Young's modulus ranged from 55 to 350 GPa across all pillars, agreeing with values in the literature. EBL-grown pillars exhibited stochastic strain bursts at slow loading rates, attributed to local micro-yield events, followed by work hardening. Sharp yield points were also observed and attributed to de-bonding of the gold seed layer between the nickel pillar and the substrate, caused by the shear stress associated with end effects that arise from the substrate constraint. The onset of yield ranged from 108 to 1800 MPa, greater than bulk nickel but within values given in the literature. FIB-milled pillars demonstrated stochastic yield behaviour at all loading rates tested, yielding between 320 and 625 MPa. Deformation was apparent at the FIB-milled pillar tops, where the smallest cross-sectional area was measured, yet these pillars still exhibited yield strengths superior to bulk nickel. The gallium damage at the outer surface of the pillars likely aids dislocation nucleation and plasticity, leading to lower yield strengths than for the EBL pillars. Thermal drift, substrate effects, and vibration noise within the indenter system contributed to variance and inconsistency in the data.
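
As a point of reference for how such load-displacement data are typically reduced, the sketch below computes engineering stress, strain, and an elastic-slope estimate of Young's modulus for a micropillar compression test. The arrays, dimensions, and the `youngs_modulus` helper are hypothetical illustrations, not the thesis's analysis code.

```python
import numpy as np

# Hypothetical sketch: engineering stress/strain and Young's modulus from a
# micropillar compression test. P (mN) and u (nm) are assumed indenter arrays;
# d (um) and h (um) are the pillar diameter and height measured by SEM.
def youngs_modulus(P_mN, u_nm, d_um, h_um, elastic_fraction=0.2):
    A = np.pi * (d_um * 1e-6 / 2) ** 2           # cross-sectional area, m^2
    stress = P_mN * 1e-3 / A                     # engineering stress, Pa
    strain = u_nm * 1e-9 / (h_um * 1e-6)         # engineering strain
    # Fit only the initial (nominally elastic) part of the curve.
    n = max(2, int(elastic_fraction * len(stress)))
    E, _ = np.polyfit(strain[:n], stress[:n], 1)
    return stress, strain, E

# Example: a 2 um diameter, 4 um tall pillar (2:1 aspect ratio),
# with an idealized linear load-displacement response.
P = np.linspace(0, 1.0, 50)                      # mN
u = np.linspace(0, 80, 50)                       # nm
_, _, E = youngs_modulus(P, u, d_um=2.0, h_um=4.0)
print(f"E = {E / 1e9:.0f} GPa")
```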

Relevance: 10.00%

Publisher:

Abstract:

An investigation into karst hazard in southern Ontario has been undertaken with the intention of leading to the development of predictive karst models for this region. Such models are not currently feasible owing to a lack of sufficient karst data, a shortfall that is not entirely due to a lack of karst features. Geophysical data were collected at Lake on the Mountain, Ontario, as part of this karst investigation, in order to test the long-standing hypothesis that Lake on the Mountain was formed by a sinkhole collapse. Sub-bottom acoustic profiling was used to image the lake-bottom sediments and bedrock. Vertical bedrock features interpreted as solutionally enlarged fractures were taken as evidence for karst processes on the lake bottom. Additionally, the bedrock topography shows a narrower and more elongated basin than was previously identified, lying parallel to a mapped fault system in the area. This suggests that Lake on the Mountain formed over a fault zone, which further supports the sinkhole hypothesis, as faulting would provide groundwater pathways for karst dissolution to occur. Previous sediment cores suggest that Lake on the Mountain would have formed at some point during the Wisconsinan glaciation, with glacial meltwater and glacial loading as potential contributing factors to sinkhole development. A probabilistic karst model for the state of Kentucky, USA, has been generated using the Weights of Evidence method. This model is presented as an example of the predictive capabilities of such data-driven modelling techniques and of how they could be applied to karst in Ontario. The model classified 70% of the validation dataset correctly while minimizing false positive identifications, a moderately successful result with room for improvement. Finally, improvements to the current karst model of southern Ontario are suggested, with the goal of increasing investigation into karst in Ontario and streamlining the reporting system for sinkholes, caves, and other karst features so as to improve the current Ontario karst database.
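
For readers unfamiliar with the Weights of Evidence method, the sketch below shows the core calculation for a single binary evidence layer. The counts, and the fault-proximity layer itself, are invented for illustration and are not the Kentucky model's inputs.

```python
import numpy as np

# Weights of Evidence for one binary evidence layer B (e.g. "within 500 m of
# a mapped fault") against known occurrences D (e.g. sinkholes). Counts are
# hypothetical:
#   n_bd : cells with evidence AND a known sinkhole
#   n_b  : cells with evidence
#   n_d  : cells with a known sinkhole
#   n    : total cells in the study area
def weights_of_evidence(n_bd, n_b, n_d, n):
    p_b_given_d    = n_bd / n_d                   # P(B | D)
    p_b_given_notd = (n_b - n_bd) / (n - n_d)     # P(B | ~D)
    w_plus  = np.log(p_b_given_d / p_b_given_notd)
    w_minus = np.log((1 - p_b_given_d) / (1 - p_b_given_notd))
    return w_plus, w_minus, w_plus - w_minus      # contrast C = W+ - W-

w_plus, w_minus, contrast = weights_of_evidence(n_bd=80, n_b=2000, n_d=120, n=50000)
print(f"W+ = {w_plus:.2f}, W- = {w_minus:.2f}, C = {contrast:.2f}")
```

A large positive contrast C indicates that the evidence layer is a useful predictor; summing the weights of several layers (under an assumption of conditional independence) gives the posterior log-odds used to map karst susceptibility.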

Relevance: 10.00%

Publisher:

Abstract:

Larger lineups could protect innocent suspects from being misidentified; however, they can also decrease correct identifications. Bertrand (2006) investigated whether the decrease in correct identifications could be prevented by adding more cues to the lineup, in the form of additional views of lineup members' faces. Adding these cues was successful to an extent. The current series of studies attempted to replicate Bertrand's (2006) findings while addressing some methodological issues, namely the inconsistency in image size as lineup size increased. First, I investigated whether image size could affect face recognition (Chapter 2) and found that it could, but that it affected previously seen ("old") and previously unseen ("new") faces differently. Specifically, smaller image sizes at exposure lowered accuracy for old faces, while the same image sizes at recognition lowered accuracy for new faces. Although these results indicate that target recognition would be unaffected by image size at recognition (i.e., during a lineup), lineups also comprise previously unseen faces, in the form of fillers and innocent suspects. Because image size could affect lineup decisions by making it harder to recognize fillers as previously unseen, I replicated Bertrand (2006) while keeping image size constant in Chapters 3 (simultaneous lineups) and 4 (simultaneous presentation, sequential decisions). In both chapters the central findings were the same: correct identification rates decreased as lineup size increased from 6- to 24-person lineups, and adding cues had no effect. The failure to replicate Bertrand (2006) could mean that the original finding was due to chance, but alternative explanations also exist, such as the overall size of the array, the degree to which additional cues overlap, and the length of the target exposure. These alternative explanations, along with directions for future research, are discussed in the following chapters.

Relevance: 10.00%

Publisher:

Abstract:

Sensors for real-time monitoring of environmental contaminants are essential for protecting ecosystems and human health. Refractive index sensing is a non-selective technique that can be used to measure almost any analyte. Miniaturized refractive index sensors, such as silicon-on-insulator (SOI) microring resonators, are one possible platform, but they require coatings selective to the analytes of interest. A homemade prism refractometer is reported and used to characterize the interactions between polymer films and liquid- or vapour-phase analytes. A camera was used to capture both Fresnel reflection and total internal reflection within the prism. For thin films (d = 10-100 μm), interference fringes were also observed. Fourier analysis of the interferogram allowed simultaneous extraction of the average refractive index and film thickness, with accuracies of Δn = 1-7 × 10^-4 and Δd of 3-5%. The refractive indices of 29 common organic solvents, as well as aqueous solutions of sodium chloride, sucrose, ethylene glycol, glycerol, and dimethyl sulfoxide, were measured at λ = 1550 nm. These measurements will be useful for future calibrations of near-infrared refractive index sensors. A mathematical model is presented in which the concentration of analyte adsorbed in a film is calculated from the refractive index and thickness changes during uptake. This model can be combined with Fickian diffusion models to measure diffusion coefficients through the bulk film and at the film-substrate interface. The diffusion of water and other organic solvents into SU-8 epoxy was explored using refractometry, and the diffusion coefficient of water into SU-8 is presented. Exposure of soft-baked SU-8 films to acetone, acetonitrile, and methanol resulted in rapid delamination. The diffusion of volatile organic compound (VOC) vapours into polydimethylsiloxane and polydimethyl-co-diphenylsiloxane polymers was also studied using refractometry, and diffusion and partition coefficients are reported for several analytes. As a model system, polydimethyl-co-diphenylsiloxane films were coated onto SOI microring resonators. After the development of data-acquisition software, coated devices were exposed to VOCs and the refractive index response was assessed. Further studies with other polymers are required to test the viability of this platform for environmental sensing applications.
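
The fringe analysis lends itself to a compact numerical illustration: on a wavenumber axis the fringe period is 1/(2nd), so a Fourier transform of the interference signal peaks at the optical path 2nd. The film parameters and the idealized cosine fringes below are assumptions for the sketch, not measured data.

```python
import numpy as np

# Sketch: recover a film's optical thickness n*d from thin-film interference
# fringes. Sampled against wavenumber, the fringes have period 1/(2*n*d), so
# the FFT of the signal peaks at the optical path 2*n*d. All values assumed.
n_film, d_film = 1.58, 50e-6                      # assumed index, 50 um thickness
nu = np.linspace(1 / 1.70e-6, 1 / 1.40e-6, 4096)  # wavenumber (1/m) around 1550 nm
signal = np.cos(2 * np.pi * (2 * n_film * d_film) * nu)   # idealized fringes

spectrum = np.abs(np.fft.rfft(signal - signal.mean()))
path = np.fft.rfftfreq(nu.size, d=nu[1] - nu[0])  # conjugate axis = optical path (m)
opd = path[np.argmax(spectrum)]                   # peak location -> 2*n*d
print(f"recovered n*d = {opd / 2 * 1e6:.1f} um (true {n_film * d_film * 1e6:.1f} um)")
```

In the real measurement the independent refractive index from the total-internal-reflection angle lets n and d be separated from the recovered product.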

Relevance: 10.00%

Publisher:

Abstract:

Background and Objectives: Mobility limitations are a prevalent issue in older adult populations and an important determinant of disability and mortality. Neighborhood conditions are key determinants of mobility, and perception of safety may be one such determinant. Women have more mobility limitations than men, a phenomenon known as the gender mobility gap. The objectives of this work were to validate a measure of perception of safety, examine the relationship between neighborhood perception of safety and mobility limitations in seniors, and explore whether these effects vary by gender. Methods: This study was cross-sectional, using questionnaire data collected from community-dwelling older adults at four sites in Canada, Colombia, and Brazil. The exposure variable was the neighborhood-aggregated Perception of Safety (PoS) scale, derived from the Physical and Social Disorder (PSD) scale by Sampson and Raudenbush. Its construct validity was verified using factor analyses and correlation with similar measures. The Mobility Assessment Tool - short form (MAT-sf), a video-based measure validated cross-culturally in the studied populations, was used to assess mobility limitations. Based on theoretical models, covariates were included in the analysis at both the neighborhood level (SES, social capital, and built environment) and the individual level (age, gender, education, income, chronic illnesses, depression, cognitive function, BMI, and social participation). Multilevel modeling was used to account for neighborhood clustering, and gender-specific analyses were carried out. SAS and Mplus were used in this study. Results: PoS was validated across all sites. After two items were excluded, it loaded on a single factor, with a Cronbach's α of approximately 0.86. Mobility limitations were present in 22.08% of the sample: 16.32% among men and 27.41% among women. Neighborhood perception of safety was significantly associated with mobility limitations when controlling for all covariates, with an OR of 0.84 (95% CI: 0.73-0.96), indicating lower odds of mobility limitations as neighborhood perception of safety improves. Gender did not affect this relationship, despite women being more likely to have mobility limitations and to live in neighborhoods with poor perception of safety. Conclusion: Neighborhood perception of safety affected the prevalence of mobility limitations in older adults in the studied population.
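
As a quick numerical aside on how the reported effect size reads, the sketch below back-calculates the log-odds coefficient and Wald confidence interval implied by an OR of 0.84 (0.73-0.96). The beta and SE are derived from those published numbers, not taken from the thesis output.

```python
import numpy as np

# A multilevel logistic model reports exp(beta) as the odds ratio, with a
# Wald 95% CI of exp(beta +/- 1.96*SE). Here beta and SE are back-calculated
# from the reported OR of 0.84 (0.73-0.96), purely for illustration.
beta = np.log(0.84)                               # log-odds per unit of PoS
se = (np.log(0.96) - np.log(0.73)) / (2 * 1.96)   # SE implied by the CI width
or_, lo, hi = np.exp([beta, beta - 1.96 * se, beta + 1.96 * se])
print(f"OR = {or_:.2f} (95% CI: {lo:.2f}-{hi:.2f})")
```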

Relevance: 10.00%

Publisher:

Abstract:

Two novel studies examining the capacity and characteristics of working memory for object weights, experienced through lifting, were completed. Both studies employed visually identical objects of varying weight and focused on memories linking object locations and weights. Whereas numerous studies have examined the capacity of visual working memory, the capacity of the sensorimotor memory involved in motor control and object manipulation has not yet been explored. In addition to assessing working memory for object weights using an explicit perceptual test, we also assessed memory for weight using an implicit measure based on motor performance. The vertical load force (LF) and horizontal grip force (GF) applied during lifts, measured by force sensors embedded in the object handles, were used to assess participants' ability to predict object weights. In Experiment 1, participants were presented with sets of 3, 4, 5, 7, or 9 objects. They lifted each object in the set and then repeated this procedure 10 times, with the objects lifted in either a fixed or a random order. Sensorimotor memory was examined by assessing, as a function of object set size, how lifting forces changed across successive lifts of a given object. The results indicated that force scaling for weight improved across repetitions and was better for smaller set sizes than for larger ones, with the latter effect clearest when objects were lifted in a random order. In general, however, the observed force scaling was weak. In Experiment 2, working memory was examined in two ways: by determining participants' ability to detect a change in the weight of one of 3 to 6 objects lifted twice, and by simultaneously measuring the fingertip forces applied when lifting the objects. The results showed that, even when presented with 6 objects, participants were extremely accurate in explicitly detecting which object changed weight. In addition, force scaling for object weight, which was again quite weak, was similar across set sizes. Thus, no capacity limit below six was found for either the explicit or the implicit measure.
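
One way to picture the implicit measure: if weight memory guides predictive control, the peak load-force rate should track object weight across lifts. The sketch below computes a simple force-scaling index from that idea; the weights, force rates, and the index definition are fabricated for illustration and are not the studies' actual analysis.

```python
import numpy as np

# Hypothetical force-scaling index: correlate peak load-force rate with the
# weight of the object about to be lifted. Values near zero indicate poorly
# scaled (unpredictive) forces; all numbers below are fabricated.
weights_g = np.array([250, 450, 650, 250, 450, 650, 250, 450, 650])      # presented order
peak_lf_rate = np.array([3.1, 4.9, 7.2, 3.4, 5.3, 6.8, 3.0, 5.1, 7.0])  # N/s

r = np.corrcoef(weights_g, peak_lf_rate)[0, 1]
slope, _ = np.polyfit(weights_g, peak_lf_rate, 1)
print(f"force-scaling index: r = {r:.2f}, slope = {slope * 1000:.1f} N/s per kg")
```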

Relevance: 10.00%

Publisher:

Abstract:

Spectral unmixing (SU) is a technique for characterizing the mixed pixels of hyperspectral images measured by remote sensors. Most existing spectral unmixing algorithms are developed using linear mixing models. Since the number of endmembers/materials present in each mixed pixel is normally small compared with the total number of endmembers (the dimension of the spectral library), the problem is sparse. This thesis introduces sparse hyperspectral unmixing methods for the linear mixing model in two scenarios. In the first scenario, the library of spectral signatures is assumed known, and the main problem is to find the minimum number of endmembers subject to a reasonably small approximation error. Mathematically, this is the $\ell_0$-norm problem, which is NP-hard. The first part of the thesis therefore seeks more accurate and reliable approximations of the $\ell_0$-norm term and proposes sparse unmixing methods based on them. The resulting methods show considerable improvement over state-of-the-art methods in reconstructing the fractional abundances of endmembers, including lower reconstruction errors. In the second part of the thesis, the first scenario (the dictionary-aided semiblind unmixing scheme) is generalized to a blind unmixing scenario in which the library of spectral signatures is also estimated. We apply the nonnegative matrix factorization (NMF) method to propose new unmixing methods, owing to its notable advantages such as enforcing nonnegativity constraints on the two decomposed matrices. Furthermore, we introduce new cost functions based on statistical and physical features of the spectral signatures of materials (SSoM) and of hyperspectral pixels, such as the collaborative property of hyperspectral pixels and the concentration of SSoM energy in the first few subbands. Finally, we introduce sparse unmixing methods for the blind scenario and evaluate the efficiency of the proposed methods via simulations over synthetic and real hyperspectral data sets. The results show considerable improvement in estimating the spectral library of materials and their fractional abundances, including smaller values of the spectral angle distance (SAD) and the abundance angle distance (AAD).
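
To make the semiblind problem concrete, the baseline sketch below solves the standard convex relaxation, min 0.5||y - Ax||^2 + λ||x||_1 with x ≥ 0, by projected ISTA. This is the common $\ell_1$ surrogate rather than the thesis's tighter $\ell_0$ approximations, and the library, abundances, and noise level are synthetic.

```python
import numpy as np

# Baseline semiblind sparse unmixing with the convex l1 surrogate for l0:
#   min_x 0.5*||y - A x||^2 + lam*||x||_1,  x >= 0
# solved by projected ISTA. A is the spectral library (bands x signatures),
# y one mixed pixel, x the fractional abundances. All data synthetic.
def ista_unmix(A, y, lam=0.01, iters=500):
    step = 1.0 / np.linalg.norm(A, 2) ** 2        # 1/L, L = Lipschitz constant
    x = np.zeros(A.shape[1])
    for _ in range(iters):
        grad = A.T @ (A @ x - y)
        x = np.maximum(x - step * (grad + lam), 0.0)  # nonneg soft-threshold
    return x

rng = np.random.default_rng(0)
A = rng.random((200, 40))                         # 200 bands, 40 library signatures
x_true = np.zeros(40); x_true[[3, 17]] = [0.6, 0.4]   # two active endmembers
y = A @ x_true + 0.001 * rng.standard_normal(200)

x_hat = ista_unmix(A, y)
print("estimated support:", np.flatnonzero(x_hat > 0.05))
print(f"relative residual: {np.linalg.norm(A @ x_hat - y) / np.linalg.norm(y):.4f}")
```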

Relevance: 10.00%

Publisher:

Abstract:

To study the dissipation of heat generated by the formation of pinholes, which cause local hotspots in the catalyst layer of a polymer electrolyte fuel cell, a two-phase non-isothermal model has been developed by coupling Darcy's law with heat transport. The domain under consideration is a section of the membrane electrode assembly with a half-channel and a half-rib. Five potential pinhole locations were analyzed: at the midplane of the channel; midway between the channel midplane and the channel wall; at the channel/rib wall; midway between the rib midplane and the channel wall; and at the midplane of the rib. In the first part of this work, a preliminary thermal model was developed; the model was then refined to account for two-phase effects. A sensitivity study evaluated the effect of the following properties on the maximum temperature in the domain: the catalyst layer thermal conductivity, the microporous layer thermal conductivity, the anisotropy factor of the catalyst layer thermal conductivity, the porous transport layer porosity, the liquid water distribution, and the thicknesses of the membrane and porous layers. Accounting for two-phase effects produced a slight cooling effect across all hotspot locations. The thermal properties of the catalyst layer were shown to have a limited impact on the maximum temperature in the catalyst layer of new fuel cells without pinholes. However, as hotspots start to appear, thermal properties play a more significant role in mitigating thermal runaway.
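
As a much-reduced illustration of the heat-dissipation question, and not the thesis's coupled two-phase model, the sketch below solves steady 1D conduction with a localized midplane heat source by finite differences; the conductivity, source strength, and geometry are assumed values.

```python
import numpy as np

# Illustrative 1D analogue of a pinhole hotspot: steady conduction
#   -k * d2T/dx2 = q(x)
# across a layer of thickness L, both faces held at operating temperature T0.
# k, q, and L are assumed numbers, not the thesis's material properties.
L, k, T0, N = 20e-6, 0.3, 353.0, 201            # 20 um layer, W/(m K), K, nodes
x = np.linspace(0, L, N); h = x[1] - x[0]
q = np.where(np.abs(x - L / 2) < 1e-6, 1e12, 0.0)   # W/m^3 hotspot near midplane

# Tridiagonal system: (T[i-1] - 2*T[i] + T[i+1]) = -q[i]*h^2/k at interior nodes.
A = np.diag(np.full(N, -2.0)) + np.diag(np.ones(N - 1), 1) + np.diag(np.ones(N - 1), -1)
b = -q * h**2 / k
A[0, :] = 0.0; A[0, 0] = 1.0; b[0] = T0          # Dirichlet boundary, cooled face
A[-1, :] = 0.0; A[-1, -1] = 1.0; b[-1] = T0      # Dirichlet boundary, cooled face
T = np.linalg.solve(A, b)
print(f"max temperature rise: {T.max() - T0:.1f} K at x = {x[np.argmax(T)] * 1e6:.1f} um")
```

The same structure (source localization vs. boundary cooling) is what makes the hotspot location relative to the channel and rib matter in the full model.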

Relevance: 10.00%

Publisher:

Abstract:

The thesis focuses on a central theme of the epidemiology and health economics of ankle sprains, to inform health policy and the provision of health services. It describes the burden, prognosis, resource utilization, and costs attributed to these injuries. The first manuscript systematically reviewed 34 studies on the direct and indirect costs of treating ankle and foot injuries. The overall costs per patient ranged from $2,075-$3,799 (2014 USD) for ankle sprains, $290-$20,132 for ankle fractures, and $6,345-$45,731 for foot fractures, reflecting differences in injury severity, treatment methods, and study characteristics. The second manuscript provided an epidemiological and economic profile of non-fracture ankle and foot injuries in Ontario using linked databases from the Institute for Clinical Evaluative Sciences. The incidence rate of ankle sprains was 16.9/1,000 person-years. Annually, ankle and foot injuries cost $21,685,876 (2015 CAD). The mean expense per case was $99.98 (95% CI: $99.70-$100.26) for any injury; costs ranged from $133.78-$210.75 for ankle sprains and $1,497.12-$1,755.69 for dislocations. The third manuscript explored the impact of body mass index on recovery from medically attended grade 1 and 2 ankle sprains using the Foot and Ankle Outcome Score, with data from a randomized controlled trial of a physiotherapy intervention in Kingston, Ontario. At six months, the odds ratio of recovery for participants with obesity, compared to non-overweight participants, was 0.60 (0.37-0.97) before adjustment and 0.74 (0.43-1.29) after adjustment. The fourth manuscript used trial data to examine health-related quality of life among ankle sprain patients using the Health Utilities Index version 3 (HUI-3). The greatest improvement in scores was seen at one month post-injury (HUI-3: 0.88, 95% CI: 0.86-0.90). Individuals with grade 2 sprains had significantly lower ambulation scores than those with grade 1 sprains (0.70 vs. 0.84; p<0.05). The final manuscript used trial data to describe the financial burden (direct and indirect costs) of ankle sprains. The overall mean costs were $1,508 (SD: $1,452) at one month and rose to $2,206 (SD: $3,419) at six months. Individuals with more severe injuries at baseline had significantly higher costs (p<0.001) than individuals with less severe injuries, after controlling for confounders.

Relevance: 10.00%

Publisher:

Abstract:

Understanding the reasons for long-term population change in a species requires an evaluation of the ecological variables that may account for the observed dynamics. In this study, long-term changes in indices of Smallmouth Bass condition and population levels were examined for eastern Lake Ontario and the Bay of Quinte. Smallmouth Bass are an extremely important recreational fish species native to Lake Ontario. They have experienced numerous changes in their environment through direct human impacts, climate change, predation, and habitat sharing with non-native species. Smallmouth Bass have increased in body length and weight, likely because a diet shift from crayfish to predominantly Round Goby has raised their growth rate. According to existing assessment data, however, this increase in body size has not been accompanied by an increase in abundance. Long-term gill net sampling data show that Smallmouth Bass populations have been declining since the late 1980s with no indication of recovery. This could be due to a variety of factors, but it is most likely due to a change in gill net selectivity arising from the change in body size, together with a habitat shift away from gill net sampling sites. Adjusting for gill net selectivity revealed that sub-adult bass abundance is currently greater than it was historically, and that very large bass are likely not retained by the gill nets currently used. The use of a long-term data set in this study has led to a much better understanding of Smallmouth Bass abundance and ecology.
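
To illustrate what a selectivity adjustment does, the sketch below divides catch-at-length by an assumed normal-shaped selectivity curve; the curve parameters and catch numbers are invented, not the assessment program's values.

```python
import numpy as np

# Hypothetical gill-net selectivity adjustment: if a mesh retains fish of
# length l with probability s(l), an estimate of true relative abundance
# divides the catch in each length bin by s(l). All numbers are illustrative.
def selectivity(length_mm, mode=300.0, spread=40.0):
    return np.exp(-0.5 * ((length_mm - mode) / spread) ** 2)

length_bins = np.array([200, 250, 300, 350, 400, 450])       # mm
catch = np.array([4, 22, 51, 38, 9, 1], dtype=float)         # fish per net-night

s = selectivity(length_bins)
adjusted = catch / np.maximum(s, 0.05)    # floor s so sparse bins don't explode
for l, c, a in zip(length_bins, catch, adjusted):
    print(f"{l} mm: catch {c:4.0f} -> adjusted {a:6.1f}")
```

Even in this toy version, the smallest and largest length bins are inflated most after adjustment, which is the qualitative pattern behind the revised sub-adult and large-bass abundance estimates.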

Relevance: 10.00%

Publisher:

Abstract:

Genetic and environmental factors interact to influence vulnerability to internalizing psychopathology, including Major Depressive Disorder (MDD). The mechanisms by which environmental stress can alter biological systems are not yet well understood, yet they are critical for developing more accurate models of vulnerability and targeted interventions. Epigenetic influences, and more specifically DNA methylation, may provide a mechanism by which stress programs gene expression, thereby altering key systems implicated in depression, such as frontal-limbic circuitry and its critical role in emotion regulation. This thesis investigated how environmental factors, from infancy and throughout the lifespan, act on the serotonergic (5-HT) system in the vulnerability to and treatment of depression and anxiety, and the DNA methylation processes potentially underlying these effects. First, we investigated the contributions of additive genetic versus environmental factors to an early trait phenotype for depression (negative emotionality) in infants, and their stability over the first 2 years of life. We provided evidence of substantial contributions of both genetic and shared environmental factors to this trait, as well as genetically and environmentally mediated stability and innovation. Second, we studied how childhood environmental stress is associated with peripheral DNA methylation of the serotonin transporter gene, SLC6A4, as well as long-term trajectories of internalizing behaviours. There was a relationship between childhood psychosocial adversity and SLC6A4 methylation in males, as well as between SLC6A4 methylation and internalizing trajectory in both sexes. Third, we investigated changes in emotion processing and epigenetic modification of the SLC6A4 gene in depressed adolescents before and after Mindfulness-Based Cognitive Therapy (MBCT). The pre- to post-treatment alterations in connectivity between the ACC and other network regions, and in SLC6A4 methylation, suggested that MBCT may optimize the connectivity of brain networks involved in the cognitive control of emotion and normalize the relationship between SLC6A4 methylation and activation patterns in frontal-limbic circuitry. Our results from these three studies strengthen the theory that environmental influences are critical in establishing early vulnerability factors for MDD, driving epigenetic processes, and altering brain processes as an individual undergoes treatment or experiences relapse.

Relevance: 10.00%

Publisher:

Abstract:

In geotechnical engineering, the stability of rock excavations and walls is estimated using tools that include a map of the orientations of exposed rock faces. However, measuring these orientations with conventional methods can be time consuming, sometimes dangerous, and is limited to regions of the exposed rock that a human can reach. This thesis introduces a 2D, simulated, quadcopter-based rock wall mapping algorithm for GPS-denied environments such as underground mines or areas near high walls on the surface. The proposed algorithm employs techniques from the field of robotics known as simultaneous localization and mapping (SLAM) and is a step towards 3D rock wall mapping. Quadcopters are not only agile but can hover, which is very useful in confined spaces such as underground workings or near rock walls. The quadcopter requires sensors to enable self-localization and mapping in dark, confined, GPS-denied environments, but these sensors are limited by the quadcopter's payload and power restrictions. For this reason, a lightweight 2D laser scanner is proposed. As a first step towards a 3D mapping algorithm, this thesis considers a simplified scenario in which a simulated 1D laser range finder and a 2D IMU are mounted on a quadcopter moving in a plane. Because the 1D laser does not provide enough information to map the 2D world from a single measurement, many measurements are combined over the trajectory of the quadcopter. Least squares optimization (LSO) is used to optimize the estimated trajectory and rock face using all data collected over the length of a flight. Simulation results show that the mapping algorithm developed is a good first step: by combining measurements over a trajectory, the scanned rock face can be estimated using a lower-dimensional range sensor. A swathing manoeuvre is introduced as a way to promote loop closures within a short time period, thus reducing accumulated error. Some suggestions on how to improve the algorithm are also provided.
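
The trajectory-plus-map least squares idea can be shown in an even simpler 1D toy than the thesis's 2D setting: unknown poses and a wall position are estimated jointly from noisy odometry and range measurements. Everything below (the measurement models, noise levels, and anchoring of the first pose) is an illustrative assumption.

```python
import numpy as np

# Toy 1D SLAM-style least squares: unknowns z = [x_0..x_{n-1}, w] are poses
# and a wall position; measurements are odometry u_i ~ x_{i+1} - x_i and
# laser ranges r_i ~ w - x_i. All values are synthetic.
rng = np.random.default_rng(1)
n = 6
x_true = np.cumsum([0.0, 1.0, 1.2, 0.8, 1.1, 0.9]); w_true = 10.0
u = np.diff(x_true) + 0.05 * rng.standard_normal(n - 1)      # noisy odometry
r = (w_true - x_true) + 0.05 * rng.standard_normal(n)        # noisy ranges

# Stack linear rows A z = b; anchor x_0 = 0 to fix the gauge freedom.
rows, b = [], []
e = np.eye(n + 1)
rows.append(e[0]); b.append(0.0)                             # prior: x_0 = 0
for i in range(n - 1):
    rows.append(e[i + 1] - e[i]); b.append(u[i])             # x_{i+1} - x_i = u_i
for i in range(n):
    rows.append(e[n] - e[i]); b.append(r[i])                 # w - x_i = r_i
z, *_ = np.linalg.lstsq(np.vstack(rows), np.array(b), rcond=None)
print("estimated poses:", np.round(z[:n], 2), " wall at", round(z[n], 2))
```

Because every pose shares the wall unknown, the range rows act like loop closures: they couple the whole trajectory and pull the accumulated odometry drift back toward consistency, which is the effect the swathing manoeuvre exploits in 2D.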

Relevance: 10.00%

Publisher:

Abstract:

Quantitative methods can help us understand how underlying attributes contribute to movement patterns. Applying principal components analysis (PCA) to whole-body motion data may provide an objective, data-driven method for identifying unique and statistically important movement patterns. The primary purpose of this study was therefore to determine whether athletes' movement patterns can be differentiated by skill level or sport using PCA. Motion capture data from 542 athletes performing three sport-screening movements (bird-dog, drop jump, and T-balance) were analyzed with a PCA-based pattern recognition technique. Before analyzing the effects of skill level or sport on movement patterns, methodological considerations related to the motion-analysis reference coordinate system were assessed. All analyses were treated as case studies. In the first case study, referencing motion data to a global (lab-based) coordinate system, compared to a local (segment-based) coordinate system, affected the ability to interpret important movement features. In the second case study, which assessed the interpretability of PCs when data were referenced to a stationary versus a moving segment-based coordinate system, PCs were more interpretable with a stationary coordinate system for both the bird-dog and T-balance tasks. Based on the findings from case studies 1 and 2, only stationary segment-based coordinate systems were used in case studies 3 and 4. During the bird-dog task, elite athletes had significantly lower scores than recreational athletes for principal component (PC) 1; for the T-balance movement, elite athletes had significantly lower scores for PC 2. In both analyses, the lower scores of elite athletes represented a greater range of motion. Finally, case study 4 examined differences in the movement patterns of athletes competing in different sports, and significant differences in technique were detected during the bird-dog task. Through these case studies, this thesis highlights the feasibility of applying PCA as a movement pattern recognition technique in athletes. Future research can build on this proof-of-principle work to develop robust quantitative methods to help us better understand how underlying attributes (e.g., height, sex, ability, injury history, training type) contribute to performance.
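
A minimal sketch of the PCA-based pattern recognition step is given below: each row is one athlete's time-normalized movement waveform, and group differences are compared on the PC scores. The data shapes and group labels are invented for illustration, not the study's dataset.

```python
import numpy as np

# PCA-based movement pattern recognition, minimal form: rows are athletes'
# trials (time-normalized joint angles or trajectories concatenated into one
# feature vector); PC scores are then compared between groups. Data invented.
rng = np.random.default_rng(2)
X = rng.standard_normal((542, 300))      # 542 athletes x 300 waveform samples
group = rng.integers(0, 2, 542)          # 0 = recreational, 1 = elite (fake labels)

Xc = X - X.mean(axis=0)                  # center each feature
U, S, Vt = np.linalg.svd(Xc, full_matrices=False)
scores = Xc @ Vt.T                       # PC scores (projections onto PCs)
explained = S**2 / np.sum(S**2)          # variance explained per PC

print(f"PC1 explains {100 * explained[0]:.1f}% of variance")
print(f"mean PC1 score, elite vs rec: {scores[group == 1, 0].mean():+.2f} "
      f"vs {scores[group == 0, 0].mean():+.2f}")
```

On real motion data the loading vector Vt[0] can be plotted back as a waveform to interpret what movement feature PC1 captures, which is where the coordinate-system choice studied in cases 1 and 2 matters.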

Relevance: 10.00%

Publisher:

Abstract:

The southeastern coast of South Australia contains a spectacular and world-renowned suite of Quaternary calcareous aeolianites. This study focuses on the provenance of components in the Holocene sector of this carbonate beach-dune succession. Research was carried out along seven transects extending from ~30 meters water depth offshore, across the beach, and into the dunes. Offshore sediments were acquired via grab sampling and SCUBA. Results indicate that dunes of the southern Lacepede and Otway coasts in particular are mostly composed of modern invertebrate and calcareous algal allochems. The most numerous grains are from molluscs, benthic foraminifera, coralline algae, echinoids, and bryozoans. These particles originate in carbonate factories such as macroalgal forests, rocky reefs, seagrass meadows, and low-relief seafloor rockgrounds. The incorporation of carbonate skeletons into coastal dunes, however, depends on a combination of: 1) the infauna within intertidal and nearshore environments; 2) the physical characteristics of different allochems and their ability to withstand fragmentation and abrasion; 3) the wave and swell climate; and 4) the nature of aeolian transport. Most aeolian dune sediment is derived from nearshore and intertidal carbonate factories. This is particularly well illustrated by the abundance of robust infaunal bivalves that inhabit the nearshore sands and the virtual absence of bryozoans, which are common as sediment particles at water depths greater than 10 m. Thus, the calcareous aeolianites in this cool-water carbonate region are not a reflection of the offshore marine shelf factories, but rather a product of shallow nearshore-intertidal biomes.

Relevance: 10.00%

Publisher:

Abstract:

This dissertation examines a process of indigenous accumulation among Tonga farmers in Zambia's Southern Province. In the 1970s, multiple authors concluded that capitalist farmers had emerged among Tonga agro-pastoralists, predominantly within private titled holdings. Relying on archival research, newspapers, secondary sources, and extensive oral testimony, this thesis fills a 35-year gap on the topic, providing insights into the social and environmental impacts of neoliberal policy among African peasants and capitalist farmers. In contrast to dominant narratives of the post-independence period, this study argues that Zambia did experience a developmental process after independence, with significant achievements in the agricultural sector, including the doubling of national cattle stocks. The data reveal a painful process of disarticulation beginning in the late 1980s. Following neoliberal adjustment, we observe significant heterogeneity in production systems, some regional specialization, and processes of migration. Most importantly, the thesis uncovers overwhelming ecosystemic change that contributed to livestock epidemics of severe scale and scope. Remarkably, this went largely undocumented because of the simultaneous crisis of the state, which left the national statistics office and other state bodies incapable of functioning from the late 1980s into the 2000s. Although the Zambian state has since introduced a number of neodevelopmental initiatives in the sector, the shortage of animal traction persisted up to 2008 and agricultural production declined, while more capitalized farmers (largely white, and/or backed by foreign direct investment) have become more significant players in the country. This thesis provides compelling evidence to challenge the dominant economic thinking of the Washington institutions as well as many common Marxian formulations.