Abstract:
The youth movement Nashi (‘Ours’) was founded in the spring of 2005 against the backdrop of Ukraine’s ‘Orange Revolution’. Its aim was to stabilise Russia’s political system and take back the streets from opposition demonstrators. Personally loyal to Putin and taking its ideological orientation from Surkov’s concept of ‘sovereign democracy’, Nashi has sought to turn the tide on ‘defeatism’ and develop Russian youth into a patriotic new elite that ‘believes in the future of Russia’ (p. 15). Combining a wealth of empirical detail with insights from discourse theory, Ivo Mijnssen analyses the organisation’s development between 2005 and 2012. His analysis focuses on three key moments—the organisation’s foundation, the apogee of its mobilisation around the Bronze Soldier dispute with Estonia, and the 2010 Seliger youth camp—to help understand Nashi’s organisation, purpose and ideational outlook as well as the limitations and challenges it faces. As such, the book is insightful both for those with an interest in post-Soviet Russian youth culture and for scholars seeking a rounded understanding of the Kremlin’s initiatives to return a sense of identity and purpose to Russian national life.

The first chapter, ‘Background and Context’, outlines the conceptual toolkit provided by Ernesto Laclau and Chantal Mouffe to help make sense of developments on the terrain of identity politics. In their terms, since the collapse of the Soviet Union, Russia has experienced acute dislocation of its identity. With the tangible loss of great power status, Russian realities have become unfixed from a discourse enabling national life to be constructed, albeit inherently contingently, as meaningful. The lack of a Gramscian hegemonic discourse to provide a unifying national idea was securitised as an existential threat demanding special measures. Accordingly, the identification of those who are ‘not Us’ has been a recurrent theme of Nashi’s discourse and activity. With the victory in World War II held up as a foundational moment, a constitutive other is found in the notion of ‘unusual fascists’. This notion includes not just neo-Nazis but reflects a chain of equivalence that expands to include a range of perceived enemies of Putin’s consolidation project, such as oligarchs and pro-Western liberals.

The empirical background is provided by the second chapter, ‘Russia’s Youth, the Orange Revolution, and Nashi’, which traces the emergence of Nashi amid the climate of political instability of 2004 and 2005. A particularly noteworthy aspect of Mijnssen’s work is the inclusion of citations from his interviews with Nashi commissars, the youth movement’s cadres. Although relatively few in number, such insider conversations provide insight into the ethos of Nashi’s organisation and the outlook of those who have pledged their involvement. Besides the discussion of Nashi’s manifesto, the reader thus gains insight into the motivations of some participants and behind-the-scenes details of Nashi’s activities in response to the perceived threat of anti-government protests.

The third chapter, ‘Nashi’s Bronze Soldier’, charts Nashi’s role in elevating the removal of a World War II monument from downtown Tallinn into an international dispute over the interpretation of history.
The events subsequent to this securitisation of memory are charted in detail, concluding that Nashi’s activities were ultimately unsuccessful as their demands received little official support.

The fourth chapter, ‘Seliger: The Foundry of Modernisation’, presents a distinctive feature of Mijnssen’s study, namely his ethnographic account as a participant observer in the Youth International Forum at Seliger. In the early years of the camp (2005–2007), Russian participants received extensive training, including master classes in ‘methods of forestalling mass unrest’ (p. 131), and the camp served to foster a sense of group identity and purpose among activists. After 2009 the event was no longer officially run as a Nashi camp, and its role became that of a forum for the exchange of ideas about innovation, although camp spirit remained a central feature. In 2010 the camp welcomed international attendees for the first time. As one of about 700 international participants that year, the author provides a fascinating account based on fieldwork diaries.

Despite the polemical nature of the topic, Mijnssen’s analysis remains even-handed, exemplified in his balanced assessment of the Seliger experience. While he details the frustrations and disappointments of the international participants with regard to the unaccustomedly strict camp discipline, organisational and communication failures, and the controlled format of many discussions, he does not neglect to note the camp’s successes in generating a gratifying collective dynamic among the participants, even the international attendees who spent only a week there.

In addition to the useful bibliography, the book closes with two appendices, which provide the reader with important Russian-language primary source materials. The first is Nashi’s ‘Unusual Fascism’ (Neobyknovennyi fashizm) brochure, and the second is the booklet entitled ‘Some Uncomfortable Questions to the Russian Authorities’ (Neskol’ko neudobnykh voprosov rossiiskoi vlasti), which was provided to the Seliger 2010 instructors to guide them in responding to probing questions from foreign participants. Given that these materials are still not readily available to the public, they constitute a useful resource from a historical perspective.
Abstract:
Human use of the oceans is increasingly in conflict with conservation of endangered species. Methods for managing the spatial and temporal placement of industries such as military, fishing, transportation and offshore energy have historically been post hoc; i.e., the time and place of human activity is often already determined before environmental impacts are assessed. In this dissertation, I build robust species distribution models in two case study areas, the US Atlantic (Best et al. 2012) and British Columbia (Best et al. 2015), predicting presence and abundance, respectively, from scientific surveys. These models are then applied to novel decision frameworks for preemptively suggesting optimal placement of human activities in space and time to minimize ecological impacts: siting offshore wind energy development, and routing ships to minimize the risk of striking whales. Both decision frameworks relate the tradeoff between conservation risk and industry profit with synchronized variable and map views as online spatial decision support systems.
For siting offshore wind energy development (OWED) in the U.S. Atlantic (chapter 4), bird density maps are combined across species with weights of OWED sensitivity to collision and displacement, and 10 km² sites are compared against OWED profitability based on average annual wind speed at 90 m hub height and distance to the transmission grid. A spatial decision support system enables toggling between the map and tradeoff plot views by site. A selected site can be inspected for sensitivity to cetaceans throughout the year, so as to identify months that minimize episodic impacts of pre-operational activities such as seismic airgun surveying and pile driving.
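To make the tradeoff concrete, here is a minimal sketch of the kind of site scoring described above; the densities, sensitivity weights, and profitability proxy are all invented for illustration and do not reproduce the dissertation's actual data or weighting scheme.

```python
# Illustrative sketch of the OWED siting tradeoff: species density maps are
# combined with sensitivity weights into a conservation-risk score per site,
# then compared against a simple profitability proxy. All values are invented.
import numpy as np

rng = np.random.default_rng(0)
n_sites, n_species = 200, 5

density = rng.lognormal(size=(n_sites, n_species))    # birds per site (assumed)
w_collision = rng.uniform(0, 1, n_species)            # sensitivity weights (assumed)
w_displacement = rng.uniform(0, 1, n_species)

# Conservation risk: density weighted by combined sensitivity, summed over species
risk = density @ (w_collision + w_displacement)

wind_speed = rng.uniform(6, 10, n_sites)              # m/s at hub height (assumed)
dist_to_grid = rng.uniform(5, 80, n_sites)            # km to grid (assumed)
profit = wind_speed**3 - 0.5 * dist_to_grid           # toy profitability proxy

# Rank sites by profit relative to risk, mimicking the tradeoff-plot view
score = profit / (1.0 + risk)
print("Top candidate sites:", np.argsort(score)[::-1][:10])
```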
Routing ships to avoid whale strikes (chapter 5) can similarly be viewed as a tradeoff, but is a different problem spatially. A cumulative cost surface is generated from density surface maps and the conservation status of cetaceans, before being applied as a resistance surface to calculate least-cost routes between start and end locations, i.e., ports and entrance locations to study areas. Varying a multiplier on the cost surface enables calculation of multiple routes with different costs to the conservation of cetaceans versus costs to the transportation industry, measured as distance. Similar to the siting chapter, a spatial decision support system enables toggling between the map and tradeoff plot views of proposed routes. The user can also input arbitrary start and end locations to calculate the tradeoff on the fly.
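The least-cost routing step can be sketched as follows; the grid, Dijkstra implementation, and multiplier values are illustrative stand-ins, not the chapter's actual system. Raising the multiplier makes whale-dense cells more expensive to cross, tracing out the conservation-versus-distance tradeoff described above.

```python
# Illustrative sketch: a cetacean-density grid becomes a resistance surface,
# and a conservation multiplier k trades route length against conservation
# cost. All values are invented.
import heapq
import numpy as np

def least_cost(resistance, start, end):
    """Dijkstra over a 4-connected grid; returns the total cost of the best path."""
    rows, cols = resistance.shape
    dist = np.full((rows, cols), np.inf)
    dist[start] = 0.0
    pq = [(0.0, start)]
    while pq:
        d, (r, c) = heapq.heappop(pq)
        if (r, c) == end:
            return d
        if d > dist[r, c]:
            continue  # stale queue entry
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nr, nc = r + dr, c + dc
            if 0 <= nr < rows and 0 <= nc < cols and d + resistance[nr, nc] < dist[nr, nc]:
                dist[nr, nc] = d + resistance[nr, nc]
                heapq.heappush(pq, (dist[nr, nc], (nr, nc)))
    return np.inf

rng = np.random.default_rng(1)
whale_density = rng.random((50, 50))
for k in (0.0, 1.0, 5.0):                    # conservation multiplier
    resistance = 1.0 + k * whale_density     # 1.0 = per-cell distance cost
    print(f"k={k}: route cost = {least_cost(resistance, (0, 0), (49, 49)):.1f}")
```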
Essential inputs to these decision frameworks are the distributions of the species. The two preceding chapters comprise species distribution models for the two case study areas, the U.S. Atlantic (chapter 2) and British Columbia (chapter 3), predicting presence and density, respectively. Although density is preferred for estimating potential biological removal, per U.S. Marine Mammal Protection Act requirements, the necessary parameters, especially the distance and angle of observation, are less readily available across publicly mined datasets.
In the case of predicting cetacean presence in the U.S. Atlantic (chapter 2), I extracted datasets from the online OBIS-SEAMAP geo-database and integrated scientific surveys conducted by ship (n=36) and aircraft (n=16), weighting a Generalized Additive Model by minutes surveyed within space-time grid cells to harmonize effort between the two survey platforms. For each of 16 cetacean species guilds, I predicted the probability of occurrence from static environmental variables (water depth, distance to shore, distance to the continental shelf break) and time-varying conditions (monthly sea-surface temperature). To generate maps of presence versus absence, Receiver Operating Characteristic (ROC) curves were used to define the optimal threshold that minimizes false positive and false negative error rates. I integrated model outputs, including tables (species in guilds, input surveys) and plots (fit of environmental variables, ROC curve), into an online spatial decision support system, allowing for easy navigation of models by taxon, region, season, and data provider.
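The threshold-selection step can be illustrated with a short sketch; a logistic regression stands in for the chapter's effort-weighted GAM, and the covariates, presences, and effort weights are simulated.

```python
# Illustrative sketch of effort weighting and ROC-based thresholding.
# A logistic regression substitutes for the GAM; all data are simulated.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_curve

rng = np.random.default_rng(2)
X = rng.normal(size=(500, 3))        # e.g. depth, distance to shore, SST (assumed)
y = (X @ np.array([1.5, -1.0, 0.5]) + rng.normal(size=500)) > 0

minutes_surveyed = rng.uniform(1, 60, size=500)   # effort per grid cell (assumed)
model = LogisticRegression().fit(X, y, sample_weight=minutes_surveyed)

prob = model.predict_proba(X)[:, 1]
fpr, tpr, thresholds = roc_curve(y, prob)
best = thresholds[np.argmax(tpr - fpr)]  # Youden's J: jointly low FPR and FNR
presence = prob >= best                  # presence/absence map values
print(f"optimal threshold = {best:.2f}; {presence.mean():.0%} of cells present")
```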
For predicting cetacean density within the inner waters of British Columbia (chapter 3), I calculated density from systematic, line-transect marine mammal surveys conducted by Raincoast Conservation Foundation over multiple years and seasons (summer 2004, 2005, 2008, and spring/autumn 2007). Abundance estimates were calculated using two different methods: Conventional Distance Sampling (CDS) and Density Surface Modelling (DSM). CDS generates a single density estimate for each stratum, whereas DSM explicitly models spatial variation and offers potential for greater precision by incorporating environmental predictors. Although DSM yields a more relevant product for the purposes of marine spatial planning, CDS has proven useful in cases with fewer observations available for seasonal and inter-annual comparison, particularly for the scarcely observed elephant seal. Abundance estimates are provided on a stratum-specific basis. Steller sea lions and harbour seals are further differentiated by ‘hauled out’ and ‘in water’. This analysis updates previous estimates (Williams & Thomas 2007) by including additional years of effort, providing greater spatial precision with the DSM method over CDS, reporting for spring and autumn seasons (rather than summer alone), and providing new abundance estimates for the Steller sea lion and northern elephant seal. In addition to providing a baseline of marine mammal abundance and distribution against which future changes can be compared, this information offers the opportunity to assess the risks posed to marine mammals by existing and emerging threats, such as fisheries bycatch, ship strikes, and the increased risk of oil spills and ocean noise associated with growing container ship and oil tanker traffic in British Columbia’s continental shelf waters.
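For reference, the Conventional Distance Sampling estimate mentioned above can be sketched as follows, assuming a half-normal detection function; the counts, effort, and detection scale are invented and are not Raincoast's data.

```python
# Hedged sketch of a CDS stratum density estimate:
#   D-hat = n / (2 * w * L * Pa), with Pa from a half-normal g(x).
import numpy as np

def cds_density(n_detections, effort_km, trunc_w_km, sigma_km):
    x = np.linspace(0.0, trunc_w_km, 1000)
    g = np.exp(-x**2 / (2.0 * sigma_km**2))    # detection probability g(x)
    pa = np.trapz(g, x) / trunc_w_km           # average detection probability
    return n_detections / (2.0 * trunc_w_km * effort_km * pa)

d_hat = cds_density(n_detections=48, effort_km=1200.0, trunc_w_km=2.0, sigma_km=0.8)
print(f"density ~ {d_hat:.4f} animals per square km")
```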
Starting with marine animal observations at specific coordinates and times, I combine these data with environmental data, often satellite-derived, to produce seascape predictions generalizable in space and time. These habitat-based models enable prediction of encounter rates and, in the case of density surface models, abundance, which can then be applied to management scenarios. Specific human activities, OWED and shipping, are then compared within a tradeoff decision support framework, enabling interchangeable map and tradeoff plot views. These products make complex processes transparent, allowing conservation interests, industry, and other stakeholders to game scenarios towards optimal marine spatial management, fundamental to the tenets of marine spatial planning, ecosystem-based management, and dynamic ocean management.
Abstract:
Germanium was of great interest in the 1950s when it was used for the first transistor device. However, due to its water-soluble and unstable oxide, it was surpassed by silicon. Today, as device dimensions shrink, silicon oxide is no longer suitable as a gate dielectric due to gate leakage, and high-κ dielectrics such as Al2O3 and HfO2 are being used instead. Germanium (Ge) is a promising material to replace or integrate with silicon (Si) to continue the trend of Moore’s law. Germanium has higher intrinsic carrier mobilities than silicon and is also silicon-fab compatible, so it would be an ideal material to integrate into silicon-based technologies. The progression towards nanoelectronics requires in-depth studies. Dynamic TEM studies allow reactions to be observed in real time, giving a better understanding of mechanisms and of how an external stimulus may affect a material or structure. This thesis details in situ TEM experiments investigating some processes essential for germanium nanowire (NW) integration into nanoelectronic devices, i.e. doping and Ohmic contact formation. Chapter 1 reviews recent advances in dynamic TEM studies of semiconductor (namely silicon and germanium) nanostructures. The areas covered are nanowire/crystal growth, germanide/silicide formation, irradiation, electrical biasing, batteries and strain. Chapter 2 details the study of ion irradiation and the damage incurred in germanium nanowires. An experimental set-up is described that allows concurrent observation in the TEM of a nanowire following sequential ion implantation steps. Grown nanowires were deposited on a FIB-labelled SiN membrane grid, which facilitated HRTEM imaging and facile navigation to a specific nanowire. Cross-sections of irradiated nanowires were also prepared to evaluate the damage across the nanowire diameter. Experiments were conducted at 30 kV and 5 kV ion energies to study the effect of beam energy on nanowires of varied diameters. The results on nanowires were also compared to the damage profiles in bulk germanium at both 30 kV and 5 kV ion beam energies. Chapter 3 extends the work of chapter 2, whereby nanowires are annealed after ion irradiation. In situ thermal annealing experiments were conducted to observe the recrystallization of the nanowires. A method to promote solid phase epitaxial growth is investigated by irradiating only small areas of a nanowire so as to maintain a seed from which the epitaxial growth can initiate. It was also found that strain in the nanowire greatly affects defect formation and random nucleation and growth. To obtain full recovery of the crystal structure of a nanowire, a stable support which reduces strain in the nanowire is essential, as well as a seed from which solid phase epitaxial growth can initiate. Chapter 4 details the study of nickel germanide formation in germanium nanostructures. Rows of Ni-capped germanium nanopillars, defined by electron beam lithography (EBL), were extracted in FIB cross-sections and annealed in situ to observe the germanide formation. Chapter 5 summarizes the key conclusions of each chapter and discusses an outlook on the future of germanium nanowire studies to facilitate their incorporation into nanodevices.
Abstract:
Northern Ireland (NI) is emerging from a violent period in its troubled history and remains a society characterized by segregation between its two main communities. Nowhere is this more apparent than in education, where for the most part Catholic and Protestant pupils are educated separately. During the last 30 years there has been a twofold pressure placed on the education system in NI - at one level to respond to intergroup tensions by promoting reconciliation, and at another, to deal with national policy demands derived from a global neoliberal economic agenda. With reference to current efforts to promote shared education between separate schools, we explore the uneasy dynamic between a school-based reconciliation programme in a transitioning society and system-wide values that are driven by neo-liberalism and its organizational manifestation - new managerialism. We argue that whilst the former seeks to promote social democratic ideals in education that can have a potentially transformative effect at societal level, neoliberal priorities have the potential both to subvert shared education and to embed it.
Abstract:
As part of the ultrafast charge dynamics initiated by high intensity laser irradiation of solid targets, high amplitude EM pulses propagate away from the interaction point and are transported along any stalks and wires attached to the target. The propagation of these high amplitude pulses along a thin wire connected to a laser irradiated target was diagnosed via the proton radiography technique, measuring a pulse duration of 20 ps and a pulse velocity close to the speed of light. The strong electric field associated with the EM pulse can be exploited for dynamically controlling the proton beams produced from a laser-driven source. Chromatic divergence control of broadband laser-driven protons (up to 75% reduction in divergence of >5 MeV protons) was obtained by winding the supporting wire around the proton beam axis to create a helical coil structure. In addition to providing focussing and energy selection, the technique has the potential to post-accelerate the transiting protons by the longitudinal component of the curved electric field lines produced by the helical coil lens.
Abstract:
In today's fast-paced and interconnected digital world, the data generated by an increasing number of applications is being modeled as dynamic graphs. The graph structure encodes relationships among data items, while the structural changes to the graphs as well as the continuous stream of information produced by the entities in these graphs make them dynamic in nature. Examples include social networks where users post status updates, images, videos, etc.; phone call networks where nodes may send text messages or place phone calls; road traffic networks where the traffic behavior of the road segments changes constantly, and so on. There is tremendous value in storing, managing, and analyzing such dynamic graphs and deriving meaningful insights in real time. However, the majority of work in graph analytics assumes a static setting, and there is a lack of systematic study of the various dynamic scenarios, the complexity they impose on analysis tasks, and the challenges in building efficient systems that can support such tasks at a large scale. In this dissertation, I design a unified streaming graph data management framework and develop prototype systems to support increasingly complex tasks on dynamic graphs. In the first part, I focus on the management and querying of distributed graph data. I develop a hybrid replication policy that monitors the read-write frequencies of the nodes to decide dynamically what data to replicate and whether to do eager or lazy replication, in order to minimize network communication and support low-latency querying. In the second part, I study parallel execution of continuous neighborhood-driven aggregates, where each node aggregates the information generated in its neighborhood. I build my system around the notion of an aggregation overlay graph, a pre-compiled data structure that enables sharing of partial aggregates across different queries and allows partial pre-computation of the aggregates to minimize query latencies and increase throughput. Finally, I extend the framework to support continuous detection and analysis of activity-based subgraphs, where subgraphs can be specified using both graph structure and activity conditions on the nodes. The query specification tasks in my system are expressed using a set of active structural primitives, which allows the query evaluator to use a set of novel optimization techniques, thereby achieving high throughput. Overall, in this dissertation, I define and investigate a set of novel tasks on dynamic graphs, design scalable optimization techniques, build prototype systems, and show the effectiveness of the proposed techniques through extensive evaluation using large-scale real and synthetic datasets.
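As a toy illustration of a continuous neighborhood-driven aggregate, the sketch below maintains per-node aggregates incrementally as events stream in, so queries read pre-computed state; the dissertation's overlay-based sharing of partial aggregates across queries is considerably more sophisticated, and all names here are invented.

```python
# Toy continuous neighborhood aggregate: each incoming node event updates the
# running aggregates of that node's neighbors, instead of recomputing per query.
from collections import defaultdict

adjacency = {1: [2, 3], 2: [1], 3: [1, 2]}    # made-up graph structure
neighborhood_sum = defaultdict(float)          # node -> aggregate over neighborhood

def on_event(node, value):
    """Propagate a node's new value into each neighbor's running aggregate."""
    for nbr in adjacency.get(node, []):
        neighborhood_sum[nbr] += value

for node, value in [(1, 2.0), (2, 5.0), (1, 1.0), (3, 4.0)]:   # event stream
    on_event(node, value)

print(dict(neighborhood_sum))   # queries answered from pre-maintained state
```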
Abstract:
PURPOSE We aimed to evaluate the added value of diffusion-weighted imaging (DWI) over standard magnetic resonance imaging (MRI) for detecting post-treatment cervical cancer recurrence. The detection accuracy of T2-weighted (T2W) images was compared with that of T2W MRI combined with either dynamic contrast-enhanced (DCE) MRI or DWI. METHODS Thirty-eight women with clinically suspected uterine cervical cancer recurrence more than six months after treatment completion were examined with 1.5 Tesla MRI including T2W, DCE, and DWI sequences. Disease was confirmed histologically and correlated with MRI findings. The diagnostic performance of T2W imaging and its combination with either DCE or DWI was analyzed. Sensitivity, positive predictive value, and accuracy were calculated. RESULTS Thirty-six women had histologically proven recurrence. The accuracy for recurrence detection was 80% with T2W/DCE MRI and 92.1% with T2W/DWI. The addition of DCE sequences did not significantly improve the diagnostic ability of T2W imaging, and this sequence combination misclassified two patients as false positives and seven as false negatives. The T2W/DWI combination yielded a positive predictive value of 100% and only three false negatives. CONCLUSION The addition of DWI to T2W sequences considerably improved the diagnostic ability of MRI. Our results support the inclusion of DWI in the initial MRI protocol for the detection of cervical cancer recurrence, leaving DCE sequences as an option for uncertain cases.
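For the T2W/DWI combination, the reported figures are mutually consistent under the counts the abstract implies (33 true positives, 3 false negatives, 0 false positives, and 2 true negatives among the 38 women); a quick check, with those counts stated as an assumption:

```python
# Consistency check of the reported T2W/DWI metrics, assuming the confusion
# counts implied by the abstract (not stated explicitly in it).
tp, fn, fp, tn = 33, 3, 0, 2
accuracy = (tp + tn) / (tp + tn + fp + fn)    # (33 + 2) / 38
sensitivity = tp / (tp + fn)                  # 33 / 36
ppv = tp / (tp + fp)                          # 33 / 33
print(f"accuracy = {accuracy:.1%}, sensitivity = {sensitivity:.1%}, PPV = {ppv:.0%}")
# accuracy = 92.1%, sensitivity = 91.7%, PPV = 100%
```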
Abstract:
Thecomas are rare ovarian tumours of the sex cord-stromal group, solid in nature and frequently unilateral. They have a higher incidence in the postmenopausal period and are usually silent. When symptomatic, they manifest as pelvic pain and metrorrhagia (conditioned by the tumour's usual oestrogen-producing nature). They may be concomitant with Meigs' and/or Gorlin-Goltz syndrome and may be associated with benign or malignant transformation of the endometrium. Although ultrasound may be nonspecific in this context, a comprehensive multiparametric magnetic resonance assessment, including dynamic and diffusion-weighted studies, frequently helps steer the diagnostic work-up favourably. We present a rare case of ovarian thecoma, with associated endometrial thickening, evaluated by suprapubic and transvaginal gynaecological ultrasound as well as computed tomography and magnetic resonance imaging, and confirmed surgically. The patient was a 61-year-old Caucasian woman presenting with postmenopausal metrorrhagia, without other symptoms or relevant family history. In this connection, we review the literature, focusing on the multimodal differential diagnosis, clinical presentation, treatment and prognosis of these tumours.
Abstract:
Four years after the completion of the Human Genome Project, the US National Institutes of Health launched the Human Microbiome Project on 19 December 2007. Using metaphor analysis, this article investigates reporting in English-language newspapers on advances in microbiomics from 2003 onwards, when the word “microbiome” was first used. This research was said to open up a “new frontier” and was conceived as a “second human genome project”, this time focusing on the genomes of the microbes that inhabit and populate humans rather than on the human genome itself. The language used by scientists and by the journalists who reported on their research employed a type of metaphorical framing that was very different from the hyperbole surrounding the decipherment of the “book of life”. Whereas during the HGP genomic successes had been mainly framed as being based on a unidirectional process of reading off information from a passive genetic or genomic entity, the language employed to discuss advances in microbiomics frames genes, genomes and life in much more active and dynamic ways.
Abstract:
Investigation of large, destructive earthquakes is challenged by their infrequent occurrence and the remote nature of geophysical observations. This thesis sheds light on the source processes of large earthquakes from two perspectives: robust and quantitative observational constraints through Bayesian inference for earthquake source models, and physical insights on the interconnections of seismic and aseismic fault behavior from elastodynamic modeling of earthquake ruptures and aseismic processes.
To constrain the shallow deformation during megathrust events, we develop semi-analytical and numerical Bayesian approaches to explore the maximum resolution of the tsunami data, with a focus on incorporating the uncertainty in the forward modeling. These methodologies are then applied to invert for the coseismic seafloor displacement field in the 2011 Mw 9.0 Tohoku-Oki earthquake using near-field tsunami waveforms and for the coseismic fault slip models in the 2010 Mw 8.8 Maule earthquake with complementary tsunami and geodetic observations. From posterior estimates of model parameters and their uncertainties, we are able to quantitatively constrain the near-trench profiles of seafloor displacement and fault slip. Similar characteristic patterns emerge during both events, featuring the peak of uplift near the edge of the accretionary wedge with a decay toward the trench axis, with implications for fault failure and tsunamigenic mechanisms of megathrust earthquakes.
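The flavor of such a Bayesian inversion can be sketched in the linear-Gaussian case, where the posterior mean and covariance have closed forms; the forward operator, noise level, and prior below are invented for illustration and do not represent the thesis' semi-analytical or numerical machinery.

```python
# Linear-Gaussian Bayesian inversion sketch: posterior mean and covariance
# for model parameters m given data d = G m + noise. All values are invented.
import numpy as np

rng = np.random.default_rng(3)
n_obs, n_params = 40, 10
G = rng.normal(size=(n_obs, n_params))     # forward operator (assumed known)
m_true = rng.normal(size=n_params)         # "true" slip model for the demo
sigma_d, sigma_m = 0.1, 1.0                # data noise std, prior std

d = G @ m_true + sigma_d * rng.normal(size=n_obs)

# Posterior for prior m ~ N(0, sigma_m^2 I) and noise ~ N(0, sigma_d^2 I)
precision = G.T @ G / sigma_d**2 + np.eye(n_params) / sigma_m**2
cov_post = np.linalg.inv(precision)
m_post = cov_post @ (G.T @ d) / sigma_d**2

print("posterior mean:", np.round(m_post, 2))
print("1-sigma uncertainty:", np.round(np.sqrt(np.diag(cov_post)), 2))
```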
To understand the behavior of earthquakes at the base of the seismogenic zone on continental strike-slip faults, we simulate the interactions of dynamic earthquake rupture, aseismic slip, and heterogeneity in rate-and-state fault models coupled with shear heating. Our study explains the long-standing enigma of seismic quiescence on major fault segments known to have hosted large earthquakes by deeper penetration of large earthquakes below the seismogenic zone, where mature faults have well-localized creeping extensions. This conclusion is supported by the simulated relationship between seismicity and large earthquakes as well as by observations from recent large events. We also use the modeling to connect the geodetic observables of fault locking with the behavior of seismicity in numerical models, investigating how a combination of interseismic geodetic and seismological estimates could constrain the locked-creeping transition of faults and potentially their co- and post-seismic behavior.
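For context, the rate-and-state friction framework referenced here is commonly written with the aging law as follows (the thesis' specific formulation and parameters may differ):

```latex
\mu = \mu_0 + a \ln\frac{V}{V_0} + b \ln\frac{V_0 \theta}{D_c},
\qquad
\frac{d\theta}{dt} = 1 - \frac{V\theta}{D_c}
```

where V is the slip rate, θ the state variable, and D_c the characteristic slip distance; a − b < 0 gives velocity-weakening (potentially seismogenic) behavior and a − b > 0 velocity-strengthening (creeping) behavior.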
Abstract:
Fixed bed CO2 adsorption tests were carried out with model flue-gas streams on two commercial activated carbons, namely Filtrasorb 400 and Nuchar RGC30, at 303 K, 323 K and 353 K. Thermodynamic adsorption results highlighted that the presence of a narrower micropore size distribution with a prevailing contribution of very small pore diameters, observed for Filtrasorb 400, is a key factor in determining a higher CO2 capture capacity, mostly at low temperature. This experimental evidence was also corroborated by the higher isosteric heat derived for Filtrasorb 400, testifying to stronger interactions with CO2 molecules than in the case of Nuchar RGC30. Dynamic adsorption results on the investigated sorbents highlighted the important role played by both a greater contribution of mesopores and the presence of wider micropores in Nuchar RGC30 in establishing faster capture kinetics than Filtrasorb 400, particularly at 303 K. Furthermore, the modeling analysis of 15% CO2 breakthrough curves identified intraparticle diffusion as the rate-determining step of the process.
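The isosteric heat invoked above is typically derived from isotherms at different temperatures via the Clausius-Clapeyron relation; a minimal sketch with invented equilibrium pressures (not the study's data):

```python
# Isosteric heat from two isotherms at constant loading:
#   q_st = -R * d(ln P) / d(1/T)
# Pressures P1, P2 for the same CO2 loading are invented for illustration.
import numpy as np

R = 8.314                      # J/(mol K)
T1, T2 = 303.0, 323.0          # K, two temperatures used in the study
P1, P2 = 12.0, 25.0            # kPa at equal loading (assumed)

q_st = -R * (np.log(P2) - np.log(P1)) / (1.0 / T2 - 1.0 / T1)
print(f"isosteric heat ~ {q_st / 1000:.1f} kJ/mol")
```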
Abstract:
Language is a unique aspect of human communication because it can be used to discuss itself in its own terms. For this reason, human societies potentially have greater capacities for co-ordination, reflexive self-correction, and innovation than other animal, physical or cybernetic systems. However, this analysis also reveals that language is interconnected with the economically and technologically mediated social sphere and hence is vulnerable to abstraction, objectification, reification, and therefore ideology – all of which are antithetical to its reflexive function, whilst paradoxically being a fundamental part of it. In particular, in capitalism, language is increasingly commodified within the social domains created and affected by ubiquitous communication technologies. The advent of the so-called ‘knowledge economy’ implicates exchangeable forms of thought (language) as the fundamental commodities of this emerging system. The historical point at which a ‘knowledge economy’ emerges, then, is the critical point at which thought itself becomes a commodified ‘thing’, and language becomes its “objective” means of exchange. However, the processes by which such commodification and objectification occur obscure the unique social relations within which these language commodities are produced. The latest economic phase of capitalism – the knowledge economy – and the obfuscating trajectory which accompanies it are, we argue, destroying the reflexive capacity of language, particularly through the process of commodification. This can be seen in the fact that the language practices that have emerged in conjunction with digital technologies are increasingly non-reflexive and therefore less capable of self-critical, conscious change.