22 results for old-order mining right
in CentAUR: Central Archive University of Reading - UK
Abstract:
At the Paris Peace Conferences of 1918-1919, new states aspiring to be nation-states were created for 60 million people, while at the same time 25 million people found themselves as ethnic minorities. This change of the old order in Europe had a considerable impact on one such group, the more than 3 million Bohemian German-speakers later referred to as Sudeten Germans. After the demise of the Habsburg Empire in 1918, they became part of the new state of Czechoslovakia. In 1938, the Munich Agreement – prelude to the Second World War – integrated them into Hitler’s Reich; in 1945-1946 they were expelled from the reconstituted state of Czechoslovakia. At the centre of this War Child case study are German children from the northern Bohemian town and district formerly known as Gablonz an der Neisse, famous for its exquisite glass art and now Jablonec nad Nisou in the Czech Republic. After their expulsion they found new homes in the post-war Federal Republic of Germany. In addition, testimonies of some Czech eyewitnesses from the same area have been drawn upon, providing a perspective from the other side, as it were. The result is an insightful case study of the fate of these communities, previously studied mainly within the context of the national struggle between Germans and Czechs. The inter-disciplinary research methodology adopted here combines historical and sociological research to demonstrate the effect of larger political and social developments on human lives, without shying away from sensitive political and historical issues where these are relevant to the study. The expellees started new lives in what became Neugablonz in post-war Bavaria, where they successfully re-established the industries they had had to leave behind in 1945-1946. Part 1 of the study sheds light on the complex Czech-German relationship in this important Central European region, addressing issues of democracy, ethnicity, race, nationalism, geopolitics, economics, human geography and ethnography. It also charts the developments leading to the expulsion of the Sudeten Germans from Czechoslovakia after 1945. What matters in this War Child study is how the expellees remember their history, both as children living in the Sudetenland and later. The testimony data indicate that certain stereotypes often repeated in the context of Sudeten issues, such as the confrontational nature of inter-ethnic relations, are not reflected in the testimonies of the respondents from Gablonz. In Part 2 the War Child study explores the memories of the former Sudeten war children using sociological research methods. It focuses on how they remember life in their Bohemian homeland and how they coped with the life-long effects of displacement after their expulsion. The study maps how they turned adversity into success, showing a remarkable degree of resilience and ingenuity in the face of the testing circumstances created by the abrupt break in their lives. The thesis examines the reasons for the relatively positive outcome of respondents’ lives and what transferable lessons can be deduced from the results of this study.
Abstract:
Excavations on the multi-period settlement at Old Scatness, Shetland, have uncovered a number of Iron Age structures with compacted, floor-like layers. Thin section analysis was undertaken in order to investigate and compare the characteristics of these layers. The investigation also draws on earlier analyses of the Iron Age agricultural soil around the settlement and the midden deposits that accumulated within the settlement, to create a 'joined-up' analysis which considers the way material from the settlement was used and then recycled as fertiliser for the fields. Peat was collected from the nearby uplands and was used for fuel and possibly also for flooring. It is suggested that organic-rich floors from the structures were periodically removed and the material was spread onto the fields as fertiliser. More organic-rich material may have been used selectively for fertiliser, while the less organic peat ash was allowed to accumulate in middens. Several of the structures may have functioned as byres, which suggests a prehistoric plaggen system.
Abstract:
We present a method to enhance fault localization for software systems based on a frequent pattern mining algorithm. Our method relies on a large set of test cases for a given set of programs in which faults can be detected. The test executions are recorded as function call trees. Based on test oracles, the tests are classified into successful and failing tests. A frequent pattern mining algorithm is used to identify frequent subtrees in successful and failing test executions. This information is used to rank functions according to their likelihood of containing a fault. The ranking suggests an order in which to examine the functions during fault analysis. We validate our approach experimentally using a subset of the Siemens benchmark programs.
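To make the ranking idea concrete, here is a minimal sketch. The paper itself mines frequent subtrees of recorded call trees; the sketch below simplifies this to scoring individual functions by how often they occur in failing versus passing executions, so the scoring formula, function names and test data are illustrative assumptions rather than the authors' algorithm.

```python
from collections import Counter

def rank_functions(passing_runs, failing_runs):
    """Rank functions by a simple suspiciousness score.

    passing_runs / failing_runs: lists of sets, each set holding the
    names of the functions that appeared in one recorded call tree.
    """
    fail_counts = Counter(f for run in failing_runs for f in run)
    pass_counts = Counter(f for run in passing_runs for f in run)

    scores = {}
    for func in set(fail_counts) | set(pass_counts):
        fail_ratio = fail_counts[func] / max(len(failing_runs), 1)
        pass_ratio = pass_counts[func] / max(len(passing_runs), 1)
        # Functions frequent in failing runs but rare in passing runs
        # are ranked as more likely to contain the fault.
        scores[func] = fail_ratio / (fail_ratio + pass_ratio + 1e-9)
    return sorted(scores.items(), key=lambda kv: kv[1], reverse=True)

# Hypothetical call-tree data from instrumented test executions.
passing = [{"parse", "evaluate"}, {"parse", "render"}]
failing = [{"parse", "evaluate", "round_half"}, {"evaluate", "round_half"}]
print(rank_functions(passing, failing))  # 'round_half' comes out on top
```

The resulting ordered list is the kind of examination order the abstract describes for fault analysis.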
Abstract:
A wireless sensor network (WSN) is a group of sensors linked by a wireless medium to perform distributed sensing tasks. WSNs have attracted wide interest from academia and industry alike due to their diversity of applications, including home automation, smart environments and emergency services in various buildings. The primary goal of a WSN is to collect data sensed by its sensors. These data are characteristically noisy and exhibit temporal and spatial correlation. In order to extract useful information from such data, as this paper demonstrates, a range of techniques must be used to analyse the data. Data mining is a process in which a wide spectrum of data analysis methods is used. It is applied in this paper to analyse data collected from WSNs monitoring an indoor environment in a building. A case study is given to demonstrate how data mining can be used to optimise the use of office space in a building.
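The abstract does not detail the case study, but a toy sketch of the kind of analysis involved might look like the following; the zone layout, occupancy figures and the choice of two clusters are assumptions made purely for illustration.

```python
import numpy as np
from sklearn.cluster import KMeans

# Hypothetical matrix: rows = office zones, columns = average hourly
# occupancy counts (09:00-17:00) derived from noisy motion-sensor data.
occupancy = np.array([
    [6, 7, 8, 8, 7, 8, 7, 6],   # busy zone
    [5, 6, 7, 8, 8, 7, 6, 5],   # busy zone
    [1, 0, 1, 2, 1, 1, 0, 1],   # rarely used zone
    [0, 1, 1, 1, 2, 1, 1, 0],   # rarely used zone
])

# Cluster zones by usage profile; under-used clusters are candidates
# for consolidation or re-purposing of the office space.
labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(occupancy)
for zone, label in enumerate(labels):
    print(f"zone {zone}: cluster {label}")
```

Zones falling in the low-occupancy cluster would be candidates for consolidation, which is the kind of space-use optimisation such a case study targets.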
Abstract:
This review highlights the importance of right hemisphere language functions for successful social communication and advances the hypothesis that the core deficit in psychosis is a failure of segregation of right from left hemisphere functions. Lesion studies of stroke patients and dichotic listening and functional imaging studies of healthy people have shown that some language functions are mediated by the right hemisphere rather than the left. These functions include discourse planning/comprehension, understanding humour, sarcasm, metaphors and indirect requests, and the generation/comprehension of emotional prosody. Behavioural evidence indicates that patients with typical schizophrenic illnesses perform poorly on tests of these functions, and aspects of these functions are disturbed in schizo-affective and affective psychoses. The higher order language functions mediated by the right hemisphere are essential to an accurate understanding of someone's communicative intent, and the deficits displayed by patients with schizophrenia may make a significant contribution to their social interaction deficits. We outline a bi-hemispheric theory of the neural basis of language that emphasizes the role of the sapiens-specific cerebral torque in determining the four-chambered nature of the human brain in relation to the origins of language and the symptoms of schizophrenia. Future studies of abnormal lateralization of left hemisphere language functions need to take account of the consequences of a failure of lateralization of language functions to the right as well as the left hemisphere.
Abstract:
This RTD project, 2007-2009, is partly funded by the European Commission under Framework Programme 6. It aims to assist elderly people to live well, independently and at ease. ENABLE will provide a number of services for elderly people based on the new technology provided by mobile phones. The project is developing a wrist unit with both integrated and external sensors, and with a radio frequency link to a mobile phone. Dedicated ENABLE software running on the wrist unit and the mobile phone makes these services fully accessible to elderly users. This paper outlines the fundamental motivation and the approach currently being taken to collect more detailed user needs and requirements. The general architecture and design of the ENABLE system are outlined.
Abstract:
Aircraft Maintenance, Repair and Overhaul (MRO) agencies rely largely on raw-data-based quotation systems to select the best suppliers for their customers (airlines). Data quantity and quality become a key issue in determining the success of an MRO job, since cost and quality benchmarks need to be achieved. This paper introduces a data mining approach to create an MRO quotation system that enhances data quantity and quality, and enables significantly more precise MRO job quotations. Regular expressions were used to analyse descriptive textual feedback (i.e. engineers’ reports) in order to extract more referable, highly normalised data for job quotation. A text-mining-based key influencer analysis function enables the user to proactively select sub-parts, defects and possible solutions to make queries more accurate. Implementation results show that the system’s data would improve cost quotation in 40% of MRO jobs and would reduce service cost without causing a drop in service quality.
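As a rough illustration of the key influencer idea, the toy sketch below counts which sub-part/defect pairs co-occur most often in extracted records, so that a user could pre-select them when refining a quotation query; the record structure and the example terms are assumed, not taken from the paper.

```python
from collections import Counter

# Hypothetical normalised records extracted from engineers' reports:
# each record holds (sub-part, defect found, action taken).
records = [
    ("fan blade", "crack", "replace"),
    ("fan blade", "erosion", "blend repair"),
    ("fan blade", "crack", "replace"),
    ("combustor liner", "burn-through", "weld repair"),
]

# A simple "key influencer" view: which sub-part/defect pairs occur most
# often, so the user can pre-select them when building a quotation query.
pair_counts = Counter((part, defect) for part, defect, _ in records)
for (part, defect), n in pair_counts.most_common(3):
    print(f"{part} + {defect}: {n} occurrence(s)")
```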
Abstract:
Aircraft Maintenance, Repair and Overhaul (MRO) feedback commonly includes an engineer’s complex text-based inspection report. Capturing and normalizing the content of these textual descriptions is vital to cost and quality benchmarking, and provides information to facilitate continuous improvement of the MRO process and analytics. As data analysis and mining tools require highly normalized data, raw textual data is inadequate. This paper offers a text-mining solution to efficiently analyse bulk textual feedback data. Despite replacement of the same parts and/or sub-parts, the actual service cost for the same repair is often distinctly different from similar previous jobs. Regular expression algorithms were combined with an aircraft MRO glossary dictionary in order to help provide additional information concerning the reasons for cost variation. Professional terms and conventions were included in the dictionary to avoid ambiguity and improve the results. Testing results show that most descriptive inspection reports can be appropriately interpreted, allowing extraction of highly normalized data. This additional normalized data strongly supports data analysis and data mining, whilst also increasing the accuracy of future quotation costing. This solution has been used effectively by a large aircraft MRO agency with positive results.
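A minimal sketch of the kind of regex-plus-glossary extraction described above; the report wording, the pattern and the glossary entries are invented for illustration and are not the paper's actual dictionary or patterns.

```python
import re

# Hypothetical glossary mapping shop-floor shorthand to standard terms,
# standing in for the aircraft MRO glossary dictionary described above.
GLOSSARY = {"FOD": "foreign object damage", "HPT": "high pressure turbine"}

# Hypothetical report form: "<part> found with <defect>, <action> carried out".
PATTERN = re.compile(
    r"(?P<part>[\w ]+?) found with (?P<defect>[\w ]+?), (?P<action>[\w ]+?) carried out",
    re.IGNORECASE,
)

def normalise(report: str) -> dict:
    """Extract part/defect/action fields and expand glossary shorthand."""
    match = PATTERN.search(report)
    if not match:
        return {}
    fields = {k: v.strip().lower() for k, v in match.groupdict().items()}
    for key in fields:
        for short, full in GLOSSARY.items():
            fields[key] = fields[key].replace(short.lower(), full)
    return fields

print(normalise("HPT blade found with FOD, blend repair carried out"))
# {'part': 'high pressure turbine blade', 'defect': 'foreign object damage',
#  'action': 'blend repair'}
```

Real reports are far less regular, so a production system would need many such patterns alongside the domain dictionary, but the sketch shows how free text can be turned into the normalised fields that analysis and quotation costing require.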
Abstract:
Purpose – The purpose of this paper is to investigate the effect of choices of model structure and scale in development viability appraisal. The paper addresses two questions concerning the application of development appraisal techniques to viability modelling within the UK planning system. The first relates to the extent to which, given intrinsic input uncertainty, the choice of model structure significantly affects model outputs. The second concerns the extent to which, given intrinsic input uncertainty, the level of model complexity significantly affects model outputs. Design/methodology/approach – Monte Carlo simulation procedures are applied to a hypothetical development scheme in order to measure the effects of model aggregation and structure on model output variance. Findings – It is concluded that, given the particular scheme modelled and unavoidably subjective assumptions of input variance, simple and simplistic models may produce similar outputs to more robust and disaggregated models. Evidence is found of equifinality in the outputs of a simple, aggregated model of development viability relative to more complex, disaggregated models. Originality/value – Development viability appraisal has become increasingly important in the planning system. Consequently, the theory, application and outputs of development appraisal are under intense scrutiny from a wide range of users. However, there has been very little published evaluation of viability models. This paper contributes to the limited literature in this area.
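As a sketch of the kind of Monte Carlo exercise involved, consider a very simple aggregated residual land value model; the abstract does not specify the scheme or the input distributions, so the figures, variances and residual formula below are purely illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100_000  # number of Monte Carlo draws

# Assumed input distributions for a hypothetical development scheme.
gdv = rng.normal(10_000_000, 800_000, n)        # gross development value
build_cost = rng.normal(6_000_000, 500_000, n)  # construction cost
fees = 0.10 * build_cost                        # professional fees
profit = 0.20 * gdv                             # developer's profit

# Simple aggregated residual model: land value = value less all costs.
residual = gdv - build_cost - fees - profit

print(f"mean residual land value: {residual.mean():,.0f}")
print(f"standard deviation:       {residual.std():,.0f}")
```

Comparing the spread of outputs from such an aggregated model against a more disaggregated version (rents, yields and cost elements modelled separately) is, in essence, the comparison the paper reports.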
Abstract:
Over the past 10-15 years, several governments have implemented an array of technology, support-related, sustainable livelihoods (SL) and poverty-reduction projects for artisanal and small-scale mining (ASM). In the majority of cases, however, these interventions have failed to facilitate improvements in the industry's productivity and raise the living standards of the sector's subsistence operators. This article argues that a poor understanding of the demographics of target populations has precipitated these outcomes. In order to strengthen policy and assistance in the sector, governments must determine, with greater precision, the number of people operating in ASM regions, their origins and ethnic backgrounds, ages, and educational levels. This can be achieved by carrying out basic and localized census work before promoting ambitious sector-specific projects aimed at improving working conditions in the industry.
Abstract:
This paper investigates the effect of choices of model structure and scale in development viability appraisal. The paper addresses two questions concerning the application of development appraisal techniques to viability modelling within the UK planning system. The first relates to the extent to which, given intrinsic input uncertainty, the choice of model structure significantly affects model outputs. The second concerns the extent to which, given intrinsic input uncertainty, the level of model complexity significantly affects model outputs. Monte Carlo simulation procedures are applied to a hypothetical development scheme in order to measure the effects of model aggregation and structure on model output variance. It is concluded that, given the particular scheme modelled and unavoidably subjective assumptions of input variance, simple and simplistic models may produce similar outputs to more robust and disaggregated models.
Abstract:
Recently, major processor manufacturers have announced a dramatic shift in how they intend to increase computing power over the coming years. Instead of focusing on faster clock speeds and more powerful single-core CPUs, the trend clearly goes towards multi-core systems. This will also result in a paradigm shift for the development of algorithms for computationally expensive tasks, such as data mining applications. Obviously, work on parallel algorithms is not new per se, but concentrated efforts in many application domains are still missing. Multi-core systems, as well as clusters of workstations and even large-scale distributed computing infrastructures, provide new opportunities and pose new challenges for the design of parallel and distributed algorithms. Since data mining and machine learning systems rely on high performance computing systems, research on the corresponding algorithms must be at the forefront of parallel algorithm research in order to keep pushing data mining and machine learning applications to be more powerful and, especially for the former, interactive. To bring together researchers and practitioners working in this exciting field, a workshop on parallel data mining was organized as part of PKDD/ECML 2006 (Berlin, Germany). The six contributions selected for the program describe various aspects of data mining and machine learning approaches featuring low to high degrees of parallelism. The first contribution addresses the classic problem of distributed association rule mining and focuses on communication efficiency to improve the state of the art. After this, a parallelization technique for speeding up decision tree construction by means of thread-level parallelism for shared memory systems is presented. The next paper discusses the design of a parallel approach to the frequent subgraph mining problem for distributed memory systems. This approach is based on a hierarchical communication topology to solve issues related to multi-domain computational environments. The fourth paper describes the combined use and customization of software packages to facilitate top-down parallelism in the tuning of Support Vector Machines (SVMs), and the next contribution presents an interesting idea concerning parallel training of Conditional Random Fields (CRFs) and motivates their use in labelling sequential data. The last contribution focuses on very efficient feature selection, describing a parallel algorithm for feature selection from random subsets. Selecting the papers included in this volume would not have been possible without the help of an international Program Committee that provided detailed reviews for each paper. We would also like to thank Matthew Otey, who helped with publicity for the workshop.
Abstract:
What can explain the strong euroscepticism of radical parties of both the right and the left? This article argues that the answer lies in the paradoxical role of nationalism as a central element in both party families, motivating opposition towards European integration. Conventionally, the link between nationalism and euroscepticism is understood solely as a prerogative of radical right-wing parties, whereas radical left-wing euroscepticism is associated with opposition to the neoliberal character of the European Union. This article contests this view. It argues that nationalism cuts across party lines and constitutes the common denominator of both radical right-wing and radical left-wing euroscepticism. It adopts a mixed-methods approach, combining intensive case study analysis with quantitative analysis of party manifestos. First, it traces the link between nationalism and euroscepticism in Greece and France in order to demonstrate the internal validity of the argument. It then undertakes a cross-country statistical estimation to assess the external validity of the argument and its generalisability across Europe.
Abstract:
OBJECTIVES: The prediction of protein structure and the precise understanding of protein folding and unfolding processes remain among the greatest challenges in structural biology and bioinformatics. Computer simulations based on molecular dynamics (MD) are at the forefront of the effort to gain a deeper understanding of these complex processes. Currently, these MD simulations are usually on the order of tens of nanoseconds, generate a large amount of conformational data and are computationally expensive. More and more groups run such simulations and generate a myriad of data, which raises new challenges in managing and analyzing these data. Because of the vast range of proteins researchers want to study and simulate, the computational effort needed to generate data, the large data volumes involved, and the different types of analyses scientists need to perform, it is desirable to provide a public repository allowing researchers to pool and share protein unfolding data. METHODS: To adequately organize, manage, and analyze the data generated by unfolding simulation studies, we designed a data warehouse system that is embedded in a grid environment to facilitate the seamless sharing of available computer resources and thus enable many groups to share complex molecular dynamics simulations on a more regular basis. RESULTS: To gain insight into the conformational fluctuations and stability of the monomeric forms of the amyloidogenic protein transthyretin (TTR), molecular dynamics unfolding simulations of the monomer of human TTR have been conducted. Trajectory data and meta-data of the wild-type (WT) protein and the highly amyloidogenic variant L55P-TTR represent the test case for the data warehouse. CONCLUSIONS: Web and grid services, especially pre-defined data mining services that can run on or 'near' the data repository of the data warehouse, are likely to play a pivotal role in the analysis of molecular dynamics unfolding data.
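The abstract does not spell out which analyses the pre-defined data mining services perform, but a typical trajectory-level quantity is the radius of gyration; the following sketch, with assumed array shapes and a toy trajectory, shows the sort of per-frame computation such a service might run near the repository.

```python
import numpy as np

def radius_of_gyration(frames, masses):
    """Per-frame radius of gyration for a trajectory.

    frames: array of shape (n_frames, n_atoms, 3) with atom coordinates.
    masses: array of shape (n_atoms,) with atomic masses.
    """
    total_mass = masses.sum()
    # Mass-weighted centre of mass of each frame.
    com = (frames * masses[None, :, None]).sum(axis=1) / total_mass
    # Mass-weighted squared distances of atoms from the centre of mass.
    sq_dist = ((frames - com[:, None, :]) ** 2).sum(axis=2)
    return np.sqrt((sq_dist * masses[None, :]).sum(axis=1) / total_mass)

# Toy trajectory: 5 frames, 10 atoms, random coordinates, unit masses.
traj = np.random.default_rng(1).normal(size=(5, 10, 3))
print(radius_of_gyration(traj, np.ones(10)))
```

A radius of gyration that grows along the trajectory is one simple indicator of unfolding that such a service could report.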
Abstract:
Distributed and collaborative data stream mining in a mobile computing environment is referred to as Pocket Data Mining (PDM). The large number of data streams to which smart phones can subscribe, or which they can sense directly, coupled with the increasing computational power of handheld devices, motivates the development of PDM as a decision-making system. This emerging area of study has been shown to be feasible in an earlier study using the technological enablers of mobile software agents and stream mining techniques [1]. A typical PDM process starts by having mobile agents roam the network to discover relevant data streams and resources. Then other (mobile) agents encapsulating stream mining techniques visit the relevant nodes in the network in order to build evolving data mining models. Finally, a third type of mobile agent roams the network, consulting the mining agents for a final collaborative decision when required by one or more users. In this paper, we propose the use of distributed Hoeffding trees and Naive Bayes classifiers in the PDM framework over vertically partitioned data streams. Mobile policing, health monitoring and stock market analysis are among the possible applications of PDM. An extensive experimental study is reported showing the effectiveness of the collaborative data mining with the two classifiers.
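To illustrate one half of this setup, here is a minimal sketch of an incrementally trained Naive Bayes classifier over a single vertical partition, plus one simple way to combine the evidence from several such "agents"; the class structure, the Welford-style updates and the log-likelihood combination are illustrative assumptions and omit the mobile-agent machinery and the Hoeffding tree option described in the paper.

```python
import numpy as np

class StreamingGaussianNB:
    """Incremental Gaussian Naive Bayes over one vertical partition,
    i.e. the subset of features a single device can observe."""

    def __init__(self, n_features, classes=(0, 1)):
        self.classes = classes
        self.count = {c: 0 for c in classes}
        self.mean = {c: np.zeros(n_features) for c in classes}
        self.m2 = {c: np.zeros(n_features) for c in classes}  # sum of squared deviations

    def update(self, x, y):
        """Welford-style online update with one labelled example."""
        self.count[y] += 1
        delta = x - self.mean[y]
        self.mean[y] += delta / self.count[y]
        self.m2[y] += delta * (x - self.mean[y])

    def log_likelihoods(self, x):
        """Per-class Gaussian log-likelihood of x (class priors omitted)."""
        out = {}
        for c in self.classes:
            var = self.m2[c] / max(self.count[c] - 1, 1) + 1e-6
            out[c] = float(np.sum(-0.5 * np.log(2 * np.pi * var)
                                  - 0.5 * (x - self.mean[c]) ** 2 / var))
        return out

def collaborative_predict(agents, partitions):
    """Combine agents' evidence by summing log-likelihoods, assuming the
    vertical partitions are conditionally independent given the class."""
    totals = {}
    for agent, x in zip(agents, partitions):
        for c, ll in agent.log_likelihoods(x).items():
            totals[c] = totals.get(c, 0.0) + ll
    return max(totals, key=totals.get)
```

In this simplified picture, each device would update its own classifier from the features it can observe and, when a decision is requested, the partial log-likelihoods would be pooled much as collaborative_predict does here.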