Abstract:
The Großherzog Wilhelm Ernst edition of German classics was published by Insel Verlag in Leipzig from 1904 into the 1920s. The series brought a whole string of innovations, not only for the publisher and for the Poeschel printing house where it was printed; it also marked a milestone for the German book market. It anticipated several features of the paperback: it was oriented toward the quality of bibliophile book publications, yet remained affordably priced. Contemporary editions of the classics usually appeared with a commentary; not so the Großherzog Wilhelm Ernst edition. Although its text was edited by leading scholars, it was published without annotations. The text was set in a Jenson antiqua, even though the debate over individually designed artists' typefaces, and the dispute over Fraktur as the supposedly "German" script, had not yet reached its peak among the leading protagonists of the German book trade. A further goal of the edition was to polish up the somewhat dusty image that the city of Weimar had acquired by the turn of the century. Beyond the Grand Duke's patronage, the profits from the sale of the books were to be placed at the disposal of the Permanent Exhibition, which was directed by Harry Graf Kessler, for the acquisition of modern works of art. If one views the content of the works of the poets published in the series, Goethe, Schiller and Körner, in an aesthetic context with that of the philosophers Schopenhauer and Kant, then Graf Kessler's concept of education and culture becomes visible in the mirror of the edition's formal aesthetics, a concept that condensed in the years after the turn of the century into his ideal of the art of living. Against the shattered existence of his contemporaries, as Friedrich Nietzsche had described it, the edition's content in its modern form was to set a modern attitude toward values.
Reading the classics was meant to "relax" the German philistine and give him back a piece of the joy of living he believed lost, by teaching him to accept the facets of life's suffering as normal. For this reason the edition also represented the cultural and political will to reform, and the social ideas, that the count considered overdue for a modern Germany. The series was thus also a political statement against the forces of inertia in the German Empire. Although book-historical research has characterized the series as a significant milestone and singled it out as an "important" or even "revolutionary" work of its time, the findings of that research can, somewhat pointedly, be summed up in the statement that the Großherzog Wilhelm Ernst edition appears to have been a "lucky accident" of German book design. At the least, the statements made so far permit no clear classification, except perhaps that the classics were inspired by the English life-reform movement and that Henry van de Velde and William Morris influenced their outward form. Yet it was precisely the ideas of these two that Graf Kessler used for his own reflections, which finally led him to his own conception of ideal book design. Since for Kessler even objects of everyday use could be art, the concept of the edition, up to its realization in its "pioneering" form, is here placed within the count's intellectual and aesthetic thought. In individual book-historical studies the classics are indeed singled out for their components: the India paper, the binding, the typeface.
In survey works of book studies, by contrast, their influence receives less attention, for from their first appearance various critics refused to count them as German cultural goods: they rejected both the English collaborators Emery Walker, Edward Johnston, Eric Gill and Douglas Cockerell and the design itself as a "welsch" (foreign) edition. It is true that the Großherzog Wilhelm Ernst edition had the same function as the art exhibitions Graf Kessler conceived for Weimar and the art school there under the direction of his friend Henry van de Velde. The theatre planned for Weimar, which was to have been directed by Hugo von Hofmannsthal, and the Großherzog Wilhelm Ernst School were likewise intended to convey the same ideas of modernism, by other means, as the Großherzog Wilhelm Ernst edition of German classics.
Abstract:
Nitrogen is an essential nutrient: for humans, animals and plants it is a constituent element of proteins and nucleic acids. Although the majority of the Earth's atmosphere consists of elemental nitrogen (N2, 78 %), only a few microorganisms can use it directly. To be useful for higher plants and animals, elemental nitrogen must be converted into a reactive, oxidized form. Within the nitrogen cycle this conversion is carried out by free-living microorganisms, by symbiotic Rhizobium bacteria, or by lightning. Since the beginning of the 20th century humans have been able to synthesize reactive nitrogen through the Haber-Bosch process, which has noticeably improved the food security of the world population. On the other hand, the increased nitrogen input causes acidification and eutrophication of ecosystems and loss of biodiversity, and negative health effects for humans arise from fine particulate matter and summer smog. Furthermore, reactive nitrogen plays a decisive role in atmospheric chemistry and in the global cycles of pollutants and nutrients.
Nitrogen monoxide (NO) and nitrogen dioxide (NO2) are reactive trace gases grouped under the generic term NOx. They are important components of atmospheric oxidative processes and influence the lifetime of various less reactive greenhouse gases. NO and NO2 are generated, among other sources, in combustion processes by oxidation of atmospheric nitrogen, as well as by biological processes in soil. In the atmosphere NO is converted very quickly into NO2; NO2 is then oxidized to nitrate (NO3-) and nitric acid (HNO3), which binds to aerosol particles. The bound nitrate is finally removed from the atmosphere by dry and wet deposition. Catalytic reactions of NOx are an important part of atmospheric chemistry, forming or destroying tropospheric ozone (O3). In the atmosphere NO, NO2 and O3 are in photostationary equilibrium, which is why they are referred to as the NO-NO2-O3 triad.
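The photostationary (Leighton) relationship behind the NO-NO2-O3 triad can be sketched in a few lines. The rate parameters below are illustrative, typical-order values, not numbers taken from the thesis:

```python
# Leighton relation for the NO-NO2-O3 triad at photostationary state:
#   NO2 + hv -> NO + O(3P); O + O2 -> O3   (photolysis frequency j_NO2)
#   NO + O3  -> NO2 + O2                   (rate coefficient k)
# At steady state: j_NO2 [NO2] = k [NO] [O3].
def leighton_o3(no_ppb, no2_ppb, j_no2=8.0e-3, k=4.4e-4):
    """Photostationary O3 mixing ratio (ppb).

    j_no2 : NO2 photolysis frequency (1/s), typical midday value (assumed)
    k     : NO + O3 rate coefficient (1/(ppb s)) near room temperature (assumed)
    """
    return j_no2 * no2_ppb / (k * no_ppb)

o3 = leighton_o3(no_ppb=1.0, no2_ppb=2.0)   # tens of ppb for these inputs
```

The ratio j_NO2/k sets the O3 level that a given NO2/NO ratio can sustain, which is why perturbing either NO or NO2 shifts the ozone balance.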
In regions with elevated NO concentrations, reactions with air pollutants can form additional NO2, shifting the equilibrium of ozone formation.
Plants take up the essential nutrient nitrogen mainly as dissolved NO3- entering through the roots; atmospheric nitrogen is oxidized to NO3- in the soil by bacterial nitrogen fixation, ammonium formation and nitrification. In addition, atmospheric NO2 is taken up directly through the stomata. Inside the apoplast, NO2 disproportionates to nitrate and nitrite (NO2-), which can enter the plant's metabolic processes; the enzymes nitrate and nitrite reductase convert nitrate and nitrite to ammonium (NH4+). NO2 gas exchange is controlled by pressure gradients inside the leaves, the stomatal aperture and leaf resistances, and stomatal regulation is in turn affected by climate factors such as light intensity, temperature and water vapor pressure deficit.
This thesis aims to contribute to the understanding of the role of vegetation in the atmospheric NO2 cycle and to discuss the NO2 compensation point concentration (mcomp,NO2). To this end, NO2 exchange between the atmosphere and spruce (Picea abies) was measured at the leaf level with a dynamic plant chamber system under laboratory and field conditions. Measurements took place during the EGER project (June-July 2008); additionally, NO2 data collected on oak (Quercus robur) during the ECHO project (July 2003) were analyzed. The measuring system allowed simultaneous determination of NO, NO2, O3, CO2 and H2O exchange rates. Calculations of NO, NO2 and O3 fluxes were based on the generally small differences (∆mi) measured between the inlet and outlet of the chamber; consequently, high accuracy and specificity of the analyzer are necessary. To meet these requirements, a highly specific NO/NO2 analyzer was used and the whole measurement system was optimized for enduring measurement precision.
Data analysis yielded a significant mcomp,NO2 only when the difference ∆mi itself was statistically significant.
Consequently, the significance of ∆mi was used as a data quality criterion. Photochemical reactions of the NO-NO2-O3 triad within the volume of the dynamic plant chamber must be considered when determining NO, NO2 and O3 exchange rates; otherwise the deposition velocity (vdep,NO2) and mcomp,NO2 will be overestimated. No significant mcomp,NO2 could be determined for spruce under laboratory conditions, but under field conditions mcomp,NO2 was identified between 0.17 and 0.65 ppb and vdep,NO2 between 0.07 and 0.42 mm s-1. For the oak field data no NO2 compensation point concentration could be determined; vdep,NO2 ranged between 0.6 and 2.71 mm s-1. There is increasing evidence that forests are mainly a sink for NO2 and that potential NO2 emissions are low. Only if high NO soil emissions are assumed can more NO2 be formed by reaction with O3 than the plants are able to take up; under these circumstances forests can be a source of NO2.
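The compensation point is the ambient concentration at which the net NO2 flux changes sign. Assuming a simple linear bidirectional flux model F = v_dep (m_comp - m_amb), a sketch with synthetic numbers (all values are assumptions, not the chamber data) shows how v_dep and m_comp can be read off a regression of flux against ambient concentration:

```python
# Recover deposition velocity and compensation point from a linear
# flux model: F = v_dep * (m_comp - m_amb) = a + b * m_amb,
# so v_dep = -b and m_comp = a / v_dep. Synthetic illustrative data.
import numpy as np

m_amb = np.array([0.5, 1.0, 2.0, 4.0, 8.0])   # ambient NO2 (ppb)
v_dep_true, m_comp_true = 0.3, 0.4            # assumed "true" values
flux = v_dep_true * (m_comp_true - m_amb)     # uptake above m_comp, emission below

slope, intercept = np.polyfit(m_amb, flux, 1) # linear fit F = a + b * m_amb
v_dep = -slope                                # deposition velocity
m_comp = intercept / v_dep                    # compensation point (ppb)
```

In practice the fitted fluxes carry the ∆mi measurement uncertainty discussed above, which is why a significance criterion on ∆mi is needed before m_comp can be trusted.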
Abstract:
The main objective of this thesis is to obtain a better understanding of the methods used to assess the stability of a slope. We illustrate the principal variants of the Limit Equilibrium (LE) method found in the literature, focusing our attention on the Minimum Lithostatic Deviation (MLD) method developed by Prof. Tinti and his collaborators (e.g. Tinti and Manucci, 2006, 2008). We had two main goals. The first was to test the MLD method on real cases: we selected the Vajont landslide, with the objective of reconstructing the conditions that caused the destabilization of Mount Toc, and two sites on the Norwegian margin, where failures have not occurred recently, with the aim of evaluating their present stability and assessing under which conditions they might be mobilized. The second goal was to study the stability charts by Taylor and by Michalowski, and to use the MLD method to investigate the correctness and adequacy of this engineering tool.
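As background to the LE variants discussed above, the simplest limit-equilibrium case, a dry infinite slope, can be sketched in a few lines. This is not the MLD method itself, and all parameter values are illustrative:

```python
# Factor of safety of a dry infinite slope: the ratio of resisting
# (Mohr-Coulomb) shear strength to driving shear stress on a slip
# plane parallel to the surface. Illustrative parameters only.
import math

def fs_infinite_slope(c, phi_deg, beta_deg, gamma, z):
    """Factor of safety for a dry infinite slope.

    c        effective cohesion (kPa)
    phi_deg  effective friction angle (degrees)
    beta_deg slope inclination (degrees)
    gamma    soil unit weight (kN/m^3)
    z        depth of the slip plane (m)
    """
    beta, phi = math.radians(beta_deg), math.radians(phi_deg)
    tau = gamma * z * math.sin(beta) * math.cos(beta)   # driving shear stress
    sigma_n = gamma * z * math.cos(beta) ** 2           # normal stress on plane
    return (c + sigma_n * math.tan(phi)) / tau

fs = fs_infinite_slope(c=0.0, phi_deg=30.0, beta_deg=20.0, gamma=18.0, z=5.0)
# for c = 0 this reduces to tan(phi) / tan(beta)
```

FS > 1 indicates stability in this idealized model; the LE variants compared in the thesis differ chiefly in how they treat interslice forces on non-planar surfaces.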
Abstract:
This dissertation studies the geometric static problem of under-constrained cable-driven parallel robots (CDPRs) supported by n cables, with n ≤ 6. The task consists of determining the overall robot configuration when a set of n variables is assigned. When variables relating to the platform posture are assigned, an inverse geometric static problem (IGP) must be solved; whereas, when cable lengths are given, a direct geometric static problem (DGP) must be considered. Both problems are challenging, as the robot continues to preserve some degrees of freedom even after n variables are assigned, with the final configuration determined by the applied forces. Hence, kinematics and statics are coupled and must be resolved simultaneously. In this dissertation, a general methodology is presented for modelling the aforementioned scenario with a set of algebraic equations. An elimination procedure is provided, aimed at solving the governing equations analytically and obtaining a least-degree univariate polynomial in the corresponding ideal for any value of n. Although an analytical procedure based on elimination is important from a mathematical point of view, providing an upper bound on the number of solutions in the complex field, it is not practical to compute these solutions as it would be very time-consuming. Thus, for the efficient computation of the solution set, a numerical procedure based on homotopy continuation is implemented. A continuation algorithm is also applied to find a set of robot parameters with the maximum number of real assembly modes for a given DGP. Finally, the end-effector pose depends on the applied load and may change due to external disturbances. An investigation into equilibrium stability is therefore performed.
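As a toy illustration of how geometry and statics combine in a direct problem, consider a planar point mass suspended by two cables. In this degenerate case the geometry (circle intersection) alone fixes the pose and the statics then yields the tensions, whereas in the under-constrained robots described above the two sets of equations remain coupled and must be solved together. All numbers below are illustrative:

```python
# Toy planar "DGP": point mass on two cables of given lengths.
import numpy as np

a1, a2 = np.array([0.0, 0.0]), np.array([4.0, 0.0])  # anchor points (m)
l1, l2 = 3.0, 3.0                                    # assigned cable lengths (m)
w = np.array([0.0, -9.81])                           # weight of a 1 kg mass (N)

# geometry: intersect circles |p - a1| = l1 and |p - a2| = l2,
# keeping the lower solution (the mass hangs below the anchors)
d = np.linalg.norm(a2 - a1)
x = (d**2 + l1**2 - l2**2) / (2.0 * d)
p = a1 + np.array([x, -np.sqrt(l1**2 - x**2)])       # platform position

# statics: t1*u1 + t2*u2 + w = 0, unit vectors pointing toward anchors
u1 = (a1 - p) / l1
u2 = (a2 - p) / l2
t = np.linalg.solve(np.column_stack([u1, u2]), -w)   # cable tensions (N)
```

A physically valid assembly mode additionally requires both tensions to be non-negative, which is one source of the multiple real assembly modes counted in the dissertation.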
Abstract:
In this thesis we present a broadly based computer simulation study of two-dimensional colloidal crystals under different external conditions. In order to fully understand the phenomena which occur when the system is compressed or when the walls are sheared, it proved necessary to also study the basic motion of the particles and the diffusion processes which occur without these external forces. In the first part of this thesis we investigate the transition in the number of crystalline rows which occurs when the crystal is compressed by placing the structured walls closer together. Previous attempts to locate this transition were impeded by huge hysteresis effects. We were able to determine the transition point with higher precision by applying both the Schmid-Schilling thermodynamic integration method and the phase switch Monte Carlo method to determine the free energies. These simulations showed not only that the phase switch method can be applied successfully to systems with a few thousand particles and a soft crystalline structure with a superimposed pattern of defects, but also that this method is far more efficient than thermodynamic integration when free energy differences are to be calculated. Additionally, the phase switch method enabled us to distinguish between several energetically very similar structures and to determine which of them was actually stable. Another aspect considered in the first results chapter is the ensemble inequivalence which can be observed when the structural transition is studied in the NpT and in the NVT ensemble. The second part of this work deals with the basic motion occurring in colloidal crystals confined by structured walls. Several cases are compared in which the walls are placed in different positions, thereby introducing an incommensurability into the crystalline structure.
The movement of the solitons created in the course of the structural transition is also investigated. Furthermore, we present results showing that not only the well-known mechanism of vacancies and interstitial particles leads to diffusion in our model system, but that cooperative ring rotation phenomena occur as well. In this part and the following one we applied Langevin dynamics simulations. In the last chapter of this work we present results on the effect of shear on the colloidal crystal. The shear was implemented by moving the walls with constant velocity. We observed shear banding and, depending on the shear velocity, the breaking of the inner part of the crystal into several domains with different orientations. At very high shear velocities, holes are created in the structure; they originate close to the walls but also diffuse into the inner part of the crystal.
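The Langevin dynamics used in the latter parts can be sketched as a single overdamped Euler-Maruyama update per time step; the parameters and the free-diffusion check below are illustrative, not the thesis's simulation code:

```python
# Overdamped Langevin (Brownian) dynamics step:
#   x(t+dt) = x + F(x)/gamma * dt + sqrt(2 kT dt / gamma) * N(0, 1)
# For F = 0 this reproduces free diffusion with D = kT / gamma.
import numpy as np

rng = np.random.default_rng(0)

def langevin_step(x, force, dt=1e-3, gamma=1.0, kT=1.0):
    """One Euler-Maruyama step of overdamped Langevin dynamics."""
    noise = rng.standard_normal(x.shape)
    return x + force(x) / gamma * dt + np.sqrt(2.0 * kT * dt / gamma) * noise

# free-diffusion check: after total time t, <x^2> -> 2 D t (1D)
x = np.zeros(100_000)
for _ in range(100):
    x = langevin_step(x, lambda y: np.zeros_like(y))
msd = float(np.mean(x ** 2))      # analytic value 2 * 1.0 * 0.1 = 0.2
```

In the actual simulations the force term would include the pair interactions and the structured-wall potential; shear enters through the boundary condition of walls moving at constant velocity.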
Abstract:
Globalization has influenced all economic sectors, and the demand for translation services has increased like never before. The videogame industry has become a worldwide phenomenon worth billions. Many people around the globe, male and female, children and adults alike, choose this leisure activity and enjoy it like reading or watching a film; it is a global phenomenon capable of producing as much revenue and anticipation as the film industry. Most games are developed in Japanese or English, and the new global market requires this product to be translated into many other languages. This scenario has brought about a new field of specialization in translation studies, commonly known as videogame localization. The emergence of this new field calls not only for a review of translation studies, but also for a shift in the role that some translators and translated products are expected to play within a globalized world. The aim of this dissertation is to provide an overview of videogame localization and its challenges under the guidance of a professional translator, Alexander O. Smith, who agreed to provide counsel through several Skype interviews; this gave first-hand insight into how translation decisions are made by game translators. Alexander O. Smith is a former translator for Square Enix, one of the biggest Japanese videogame development, publishing and distribution companies in the market. He now works as an independent translator, and in 2003 he founded the localization agency Kajiya Productions with his friend and fellow translator Joseph Reeder. The twelfth installment of the Final Fantasy series by Square Enix was chosen, together with Alexander O. Smith, as a very good example of the issues and challenges brought on by videogame localization; the game also revealed itself to be one of the most fun, challenging and rewarding professional experiences of Smith's career.
Abstract:
Reform is a word that, one might easily say, characterizes more than any other the history and development of Buddhism. Yet, it must also be said that reform movements in East Asian Buddhism have often taken on another goal—harmony or unification; that is, a desire not only to reconstruct a more worthy form of Buddhism, but to simultaneously bring together all existing forms under a single banner, in theory if not in practice. This paper explores some of the tensions between the desire for reform and the quest for harmony in modern Japanese Buddhist thought, by comparing two developments: the late 19th century movement towards ‘New Buddhism’ (shin Bukkyō) as exemplified by Murakami Senshō 村上専精 (1851–1929), and the late 20th century movement known as ‘Critical Buddhism’ (hihan Bukkyō), as found in the works of Matsumoto Shirō 松本史朗 and Hakamaya Noriaki 袴谷憲昭. In all that has been written about Critical Buddhism, in both Japanese and English, very little attention has been paid to the place of the movement within the larger traditions of Japanese Buddhist reform. Here I reconsider Critical Buddhism in relation to the concerns of the previous, much larger trends towards Buddhist reform that emerged almost exactly 100 years earlier—the so-called shin Bukkyō or New Buddhism of the late-Meiji era. Shin Bukkyō is a catch-all term that includes the various writings and activities of Inoue Enryō, Shaku Sōen, and Kiyozawa Manshi, as well as the so-called Daijō-hibussetsuron, a broad term used (often critically) to describe Buddhist writers who suggested that Mahāyāna Buddhism is not, in fact, the Buddhism taught by the ‘historical’ Buddha Śākyamuni. Of these, I will make a few general remarks about Daijō-hibussetsuron, before turning attention more specifically to the work of Murakami Senshō, in order to flesh out some of the similarities and differences between his attempt to construct a ‘unified Buddhism’ and the work of his late-20th century avatars, the Critical Buddhists.
Though a number of their aims and ideas overlap, I argue that there remain fundamental differences with respect to the ultimate purposes of Buddhist reform. This issue hinges on the implications of key terms such as ‘unity’ and ‘harmony’ as well as the way doctrinal history is categorized and understood, but it also relates to issues of ideology and the use and abuse of Buddhist doctrines in 20th-century politics.
Abstract:
Patients with penetrating eye injuries form a very heterogeneous group, both medically and economically. Since 2009, treatments involving sutures for open eye injuries and cases requiring amniotic membrane transplantation (AMT) have been allocated to DRG C01B of the German diagnosis-related group (G-DRG) system. However, given the significant clinical differences between these treatments, an inhomogeneity between costs and performance is postulated. This analysis describes case allocation problems within DRG C01B and presents solutions.
Abstract:
International migration has increased rapidly in the Czech Republic, with more than 150,000 legally registered foreign residents at the end of 1996. A large proportion of these are in Prague: 35% of the total in December 1996. The aim of this project was to enrich the fund of information concerning the "environment", reasons and "mechanisms" behind immigration to the Czech Republic. Mr. Drbohlav looked first at the empirical situation and on this basis set out to test certain well-known migration theories. He focused on four main areas: 1) a detailed description and explanation of the stock of foreign citizens legally settled in Czech territory, concentrating particularly on "economic" migrants; 2) a questionnaire survey targeting a total of 192 Ukrainian workers (98 in the fall of 1995 and 94 in the fall of 1996) working in Prague or its vicinity; 3) a second questionnaire survey of 40 "western" firms (20 in 1996 and 20 in 1997) operating out of Prague; 4) an opinion poll on how the Czech population reacts to foreign workers in the CR. Over 80% of economic immigrants at the end of 1996 were from European countries, 16% from Asia and under 2% from North America. The largest single nationalities were Ukrainians, Slovaks, Vietnamese and Poles. There has been a huge increase in the Ukrainian immigrant community over both space (by region) and time (a ten-fold increase since 1993), and at 40,000 persons this represents one third of all legal immigrants. Indications are that many more live and work there illegally. Young males with low educational and skill levels predominate, in contrast with the more heterogeneous immigration from the "West". The primary reason for this migration is the higher wages in the Czech Republic: in 1994 the GDP figures adjusted for purchasing power parity were US$ 8,095 for the Czech Republic versus US$ 3,330 for Ukraine as a whole and US$ 1,600 for the Zakarpatye region, from which 49% of the survey respondents came.
On an individual level, the average Czech wage is about US$ 330 per month, while 50% of the Ukrainian respondents put their last monthly wage before leaving for the Czech Republic at under US$ 27. The very low level of unemployment in the Czech Republic (fluctuating around 4%) was also mentioned as an important factor. Migration was seen as a way of diversifying the family's sources of income: 49% of the respondents had made their plans together with partners or close relatives, while 45% regularly send remittances to Ukraine (94% do so through friends or relatives). Looking at Ukrainian migration from the point of view of dual market theory, these migrants' type and conditions of work, work load and earnings were all significantly worse than in the primary sector, which employs well-educated people and offers them good earnings, job security and benefits. 53% of respondents were working and/or staying in the Czech Republic illegally at the time of the research, 73% worked as unqualified, unskilled or auxiliary workers, 62% worked more than 12 hours a day, and 40% evaluated their working conditions as hard. 51% had no days off, earnings were low in relation to the number of hours worked, and 85% said that their earnings did not increase over time. Nearly half the workers were recruited in Ukraine and only 4% expressed a desire to stay in the Czech Republic. Network theories were also borne out to some extent, as 33% of immigrants came together with friends from the same village, town or region in Ukraine. The number who have relatives working in the Czech Republic is rising, and many wish to invite relatives or children to visit them. The presence of organisations which organise cross-border migration, including some which resort to arranging illegal documents, also gives some support for institutional theory. Mr.
Drbohlav found that all the migration theories considered offered some insights on the situation, but that none was sufficient to explain it all. He also points out parallels with many other regions of the world, including Central America, South and North America, Melanesia, Indonesia, East Africa, India, the Middle East and Russia. For the survey of foreign and international firms, those chosen were largely from countries represented by more than one company and were mainly active in market services such as financial and trade services, marketing and consulting. While 48% of the firms had more than 10,000 employees spread through many countries, more than two thirds had fewer than 50 employees in the Czech Republic. Czechs formed over 80% of the general staff in these firms, although not more than 50% of senior management, and very few other "easterners" were employed. All companies absolutely denied employing people illegally. The average monthly wage of Czech staff was US$ 850, with that of top managers from the firm's "mother country" being US$ 6,350 and that of other western managers US$ 3,410. The foreign staff were generally highly mobile and were rarely accompanied by their families. Most saw their time in the Czech Republic as positive for their careers, but very few had any intention of remaining there. Factors in the local situation which were evaluated positively included market opportunities, the economic and political environment, the quality of technical and managerial staff, and cheap labour and low production costs. In contrast, the level of appropriate business ethics and conduct, the attitude of local and regional authorities, environmental production conditions, the legal environment, and financial markets and fiscal policy were rated very low. In the final section of his work Mr. Drbohlav looked at the opinions expressed by the local Czech population in a poll carried out at the beginning of 1997.
This confirmed that international labour migration has become visible in this country, with 43% of respondents knowing at least one foreigner employed by a Czech firm. Perceptions differ according to the region from which the workers come: those from "the West" are preferred to those coming from further east, with 49% describing their attitude towards the former as friendly but only 20% feeling thus towards the latter. Overall, attitudes towards migrant workers are neutral, although 38% said that such workers should not have the same rights as Czech citizens. Sympathy towards foreign workers tends to increase with education and standard of living, and the relatively positive attitudes towards foreigners in the South Bohemia region contradicted the frequent belief that a lack of experience of international migration lowers positive perceptions of it.
Abstract:
Erick Fahle Burman, a Swedish-born, Finnish-speaking labor and political activist, twice had cases argued on his behalf before the Michigan Supreme Court. In People vs. Burman, Burman, along with nine other defendants, had his conviction affirmed by the court, and all ten were forced to pay a fine of $25 each for disturbing the peace. In People vs. Immonen, Burman and his co-defendant, Unto Immonen, had their convictions overturned because improper evidence had been admitted in their lower court trial. Though the conviction was overturned, the two men had already spent several months as prisoners at hard labor in Marquette State Prison in Michigan's Upper Peninsula. Over twenty-five years separate Burman's two trips to Michigan's high court. On the first occasion, his arrest came less than five years after his arrival as an immigrant to the U.S.; on the second, less than two years after his return to the state after an absence of nearly two decades. On both occasions, Burman was arrested for his involvement with red flags. Though separated by decades, these cases, taken together, are important indicators of the state of Finnish-American radicalism in the years surrounding the red flag incidents and provide interesting insights into the delicacies of political suppression. Examination of these cases within the larger career of Fahle Burman points up his overlooked importance in the history of Finnish-American socialism and communism.
Abstract:
This dissertation has three parts: the first deals with general pedigree association testing incorporating continuous covariates; the second with association tests under population stratification using conditional likelihood tests; and the third with genome-wide association studies based on the real rheumatoid arthritis (RA) data sets from Genetic Analysis Workshop 16 (GAW16) Problem 1. Many statistical tests have been developed to test linkage and association using either case-control status or phenotype covariates for family data, but separately; such univariate analyses may not use all the information coming from the family members in practical studies. Moreover, human complex diseases do not have a clear inheritance pattern: genes may interact or act independently. In Part I, the newly proposed approach, MPDT, focuses on using both the case-control information and the phenotype covariates, and it can be applied to detect multiple marker effects. Building on two popular existing statistics in family studies, for case-control and quantitative traits respectively, the new approach can be used on simple family structures as well as general pedigrees. The combined statistic is calculated from the two statistics, and a permutation procedure is applied to assess the p-value, with a Bonferroni adjustment for multiple markers. We use simulation studies to evaluate the type I error rates and the power of the proposed approach. Our results show that the combined test using both case-control information and phenotype covariates not only has correct type I error rates but is also more powerful than existing methods. For multiple marker interactions, our proposed method is also very powerful.
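The permutation-plus-Bonferroni idea described above can be sketched with a toy correlation statistic at a single marker (this is an illustration of the generic procedure, not the actual MPDT statistics):

```python
# Permutation p-value for an association statistic: shuffle the phenotype
# to break any genotype-phenotype link, and count how often the shuffled
# statistic matches or exceeds the observed one. Toy data throughout.
import numpy as np

rng = np.random.default_rng(1)

def perm_pvalue(stat, geno, pheno, n_perm=999):
    """Permutation p-value for an observed association statistic."""
    observed = stat(geno, pheno)
    exceed = sum(stat(geno, rng.permutation(pheno)) >= observed
                 for _ in range(n_perm))
    return (exceed + 1) / (n_perm + 1)

# toy marker: absolute genotype-phenotype correlation as the statistic
stat = lambda g, y: abs(np.corrcoef(g, y)[0, 1])
geno = rng.integers(0, 3, size=200).astype(float)  # genotypic scores 0/1/2
pheno = 0.5 * geno + rng.standard_normal(200)      # truly associated trait
p = perm_pvalue(stat, geno, pheno)
p_bonf = min(1.0, p * 10)   # Bonferroni adjustment for, say, 10 markers
```

The +1 in numerator and denominator keeps the p-value strictly positive, the usual convention for Monte Carlo permutation tests.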
Selective genotyping is an economical strategy for detecting and mapping quantitative trait loci in the genetic dissection of complex disease. When the samples arise from different ethnic groups or an admixed population, all existing selective genotyping methods may result in spurious association due to different ancestry distributions. The problem can be more serious when the sample size is large, a general requirement for sufficient power to detect modest genetic effects for most complex traits. In Part II, I describe a useful strategy for selective genotyping in the presence of population stratification. Our procedure uses a principal-component-based approach to eliminate any effect of population stratification, and its performance is evaluated using both simulated data from an earlier study and HapMap data sets in a variety of population admixture models generated from empirical data. The rheumatoid arthritis data set of Problem 1 of the Genetic Analysis Workshop 16 (GAW16) contains one binary trait and two continuous traits: RA status, anti-CCP and IgM. To allow multiple traits, we propose a set of SNP-level F statistics, based on the concept of multiple correlation, to measure the genetic association between multiple trait values and SNP-specific genotypic scores, and we obtain their null distributions. We then perform six genome-wide association analyses using novel one- and two-stage approaches based on single, double and triple traits. Combining all six analyses, we successfully validate the SNPs that have been identified in the literature as responsible for rheumatoid arthritis and detect additional disease susceptibility SNPs for follow-up studies. Except for chromosomes 13 and 18, every chromosome is found to harbour genetic regions susceptible for rheumatoid arthritis or related diseases such as lupus erythematosus. This topic is discussed in Part III.
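The principal-component correction for stratification in Part II can be sketched as an EIGENSTRAT-style projection on a toy genotype matrix (the thesis's actual procedure may differ in detail; all data here are random):

```python
# PC-based stratification adjustment: take the top principal components
# of the centered genotype matrix as ancestry axes, then project them
# out of any genotype or phenotype vector before testing association.
import numpy as np

rng = np.random.default_rng(4)
n, m = 200, 50
G = rng.integers(0, 3, size=(n, m)).astype(float)   # toy genotype matrix

Gc = G - G.mean(axis=0)                             # center each SNP
U, s, Vt = np.linalg.svd(Gc, full_matrices=False)
pcs = U[:, :2]                                      # top 2 ancestry axes

def residual(v, pcs):
    """Project the PC (ancestry) subspace out of a vector."""
    return v - pcs @ (pcs.T @ v)
```

Association tests run on the residualized vectors are then, to first order, free of the confounding ancestry signal captured by the leading components.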
Resumo:
The present distribution of freshwater fish in the Alpine region has been strongly affected by colonization events occurring after the last glacial maximum (LGM), some 20,000 years ago. We use here a spatially explicit simulation framework to model and better understand their colonization dynamics in the Swiss Rhine basin. This approach is applied to the European bullhead (Cottus gobio), an ideal model organism for studying past demographic processes in fish since it has not been managed by humans. The molecular diversity of eight sampled populations is simulated and compared to observed data at six microsatellite loci under an approximate Bayesian computation framework to estimate the parameters of the colonization process. Our demographic estimates fit well with current knowledge about the biology of this species, but they suggest that the Swiss Rhine basin was colonized very recently, after the Younger Dryas some 6600 years ago. We discuss the implications of this result, as well as the strengths and limits of the spatially explicit approach coupled with the approximate Bayesian computation framework.
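The approximate Bayesian computation (ABC) step works by comparing simulated to observed summary statistics and keeping parameter draws that reproduce the data closely. The sketch below is a generic rejection-ABC loop with a toy simulator, not the spatially explicit colonization simulator or the microsatellite summaries used in the study; the tolerance threshold and Euclidean distance are assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)

def abc_rejection(observed_summary, prior_sampler, simulator,
                  n_sims=2000, tolerance=0.1):
    """Minimal ABC rejection sampler: keep parameter draws whose simulated
    summary statistics fall within `tolerance` of the observed ones."""
    accepted = []
    for _ in range(n_sims):
        theta = prior_sampler()                 # draw from the prior
        s = simulator(theta)                    # simulate summary statistics
        if np.linalg.norm(s - observed_summary) < tolerance:
            accepted.append(theta)
    return np.array(accepted)                   # approximate posterior sample

# Toy example: recover the mean of a normal from its sample mean.
obs = np.array([0.8])
posterior = abc_rejection(
    obs,
    prior_sampler=lambda: rng.uniform(-2, 2),
    simulator=lambda th: np.array([rng.normal(th, 1, 50).mean()]))
```

In the study itself the "simulator" is the spatially explicit colonization model and the summaries are microsatellite diversity statistics, but the accept/reject logic is the same.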
Resumo:
All optical systems that operate in or through the atmosphere suffer from turbulence-induced image blur. Both military and civilian surveillance, gun-sighting, and target-identification systems are interested in terrestrial imaging over very long horizontal paths, but atmospheric turbulence can blur the resulting images beyond usefulness. My dissertation explores the performance of a multi-frame blind-deconvolution (MFBD) technique applied under anisoplanatic conditions for both Gaussian and Poisson noise-model assumptions. The technique is evaluated for use in reconstructing images of scenes corrupted by turbulence in long horizontal-path imaging scenarios and compared to speckle-imaging techniques. I compare the mean-square-error (MSE) performance of speckle-imaging methods and a maximum-likelihood MFBD method, both used to reconstruct a common object from three sets of 1000 simulated, turbulence-degraded images representing low, moderate and severe anisoplanatic turbulence conditions. The MSE performance of the estimator is evaluated as a function of the number of images and the number of Zernike polynomial terms used to characterize the point spread function. The comparison shows that speckle-imaging techniques reduce the MSE by 46 percent, 42 percent and 47 percent on average for the low, moderate and severe cases, respectively, using 15 input frames under daytime conditions and moderate frame rates. Similarly, the MFBD method provides 40 percent, 29 percent and 36 percent improvements in MSE on average under the same conditions.
The comparison is repeated under low-light conditions (less than 100 photons per pixel), where improvements of 39 percent, 29 percent and 27 percent are obtained using speckle-imaging methods with 25 input frames, and 38 percent, 34 percent and 33 percent, respectively, for the MFBD method with 150 input frames. The MFBD estimator is also applied to three sets of field data and the results presented. Finally, a combined bispectrum-MFBD hybrid estimator is proposed and investigated. This technique consistently provides a lower MSE and smaller variance in the estimate under all three simulated turbulence conditions.
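The percent-MSE-improvement figure of merit quoted above is simple arithmetic, sketched below. This is illustrative only: the baseline choice (MSE of the raw degraded frames averaged per frame) is an assumption, not a statement of how the dissertation defines its baseline.

```python
import numpy as np

def mse(estimate, truth):
    """Mean-square error between a reconstructed image and the true scene."""
    return np.mean((estimate - truth) ** 2)

def percent_improvement(raw_frames, reconstruction, truth):
    """Percent MSE reduction of a reconstruction relative to the average
    per-frame MSE of the raw turbulence-degraded imagery (assumed baseline)."""
    baseline = np.mean([mse(f, truth) for f in raw_frames])
    return 100.0 * (baseline - mse(reconstruction, truth)) / baseline
```

With this definition, a reconstruction whose MSE is a quarter of the raw-frame MSE scores a 75 percent improvement, on the same scale as the 27-47 percent figures reported in the abstract.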
Resumo:
Due to the widespread development of anthelmintic resistance in equine parasites, recommendations for their control are currently undergoing marked changes, with a shift of emphasis toward more coprological surveillance and reduced treatment intensity. Denmark was the first nation to introduce prescription-only restrictions on anthelmintic drugs in 1999, and other European countries have implemented similar legislation in recent years. A questionnaire survey was performed in 2008 among Danish horse owners to provide a current status of practices and perceptions in relation to parasite control. Questions aimed to describe the current use of coprological surveillance and the resulting anthelmintic treatment intensities, to evaluate knowledge and perceptions about the importance of various attributes of parasite control, and to assess respondents' willingness to pay for advice and parasite surveillance services from their veterinarians. A total of 1060 respondents completed the questionnaire. A large majority of respondents (71.9%) were familiar with the concept of selective therapy. Results illustrated that respondents' self-evaluation of their knowledge about parasites and their control associated significantly with their level of interest in the topic and their type of education (P<0.0001). The large majority of respondents either dewormed their horses twice a year and/or performed two fecal egg counts per horse per year. This approach was almost equally pronounced in foals, horses aged 1-3 years, and adult horses. The respondents rated prevention of parasitic disease and prevention of drug resistance as the most important attributes, while cost and frequent fecal testing were rated least important. Respondents' actual spending on parasite control per horse in the previous year correlated significantly with the amount they declared themselves willing to spend (P<0.0001). However, 44.4% declared themselves willing to pay more than what they were spending.
Altogether, the results indicate that respondents were generally familiar with equine parasites and the concept of selective therapy, although there was some confusion over the terms small and large strongyles. They made extensive use of fecal surveillance in all age groups, with a majority of respondents sampling and/or treating around twice a year. Finally, respondents appeared willing to spend money on parasite control for their horses. It is of concern that the survey suggested foals and young horses are treated in a manner very similar to adult horses, which is against current recommendations. Thus, the survey illustrates the importance of clear communication of guidelines for equine parasite control.
Resumo:
A non-parametric method was developed and tested to compare the partial areas under two correlated receiver operating characteristic (ROC) curves. Based on the theory of generalized U-statistics, mathematical formulas were derived for computing the ROC area and the variance and covariance between portions of two ROC curves. A practical SAS application was also developed to facilitate the calculations. The accuracy of the non-parametric method was evaluated by comparing it to other methods. Applying our method to data from a published ROC analysis of CT images yielded results very close to those reported. A hypothetical example was used to demonstrate the effect of two crossing ROC curves: although the two total ROC areas are the same, each partial area between the two curves was found to be significantly different by the partial ROC-curve analysis. For computation of ROC curves on a large scale, such as for a logistic regression model, we applied our method to a breast cancer study with Medicare claims data; it yielded the same ROC-area computation as the SAS LOGISTIC procedure. Our method also provides an alternative to the global summary of ROC-area comparison, by directly comparing the true-positive rates of two regression models and determining the range of false-positive values where the models differ.
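A partial ROC area over a restricted false-positive range can be computed nonparametrically as sketched below. This is a simple trapezoidal stand-in for the generalized-U-statistic formulas derived in the dissertation (which also give the variance and covariance needed to test two correlated curves); the interpolation grid size is an arbitrary choice.

```python
import numpy as np

def partial_auc(scores_pos, scores_neg, fpr_lo=0.0, fpr_hi=1.0):
    """Nonparametric partial area under the empirical ROC curve over the
    false-positive-rate range [fpr_lo, fpr_hi], via the trapezoidal rule."""
    thresholds = np.sort(np.concatenate([scores_pos, scores_neg]))[::-1]
    tpr = [float(np.mean(scores_pos >= t)) for t in thresholds]
    fpr = [float(np.mean(scores_neg >= t)) for t in thresholds]
    fpr = np.array([0.0] + fpr + [1.0])   # ROC runs from (0,0) to (1,1)
    tpr = np.array([0.0] + tpr + [1.0])
    grid = np.linspace(fpr_lo, fpr_hi, 201)
    tpr_i = np.interp(grid, fpr, tpr)     # fpr is nondecreasing by construction
    # Trapezoidal integration of the interpolated curve over the FPR range.
    return float(np.sum((tpr_i[1:] + tpr_i[:-1]) / 2.0 * np.diff(grid)))
```

Comparing `partial_auc` for two models over the same FPR range mirrors the abstract's idea of localizing where two crossing ROC curves differ, rather than relying on the global area alone.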