920 results for "Service limited State"
Abstract:
Introduction

1.1 Occurrence of polycyclic aromatic hydrocarbons (PAHs) in the environment

Worldwide, industrial and agricultural development has released a large number of natural and synthetic hazardous compounds into the environment due to careless waste disposal, illegal waste dumping and accidental spills. As a result, there are numerous sites in the world that require cleanup of soils and groundwater. Polycyclic aromatic hydrocarbons (PAHs) are one of the major groups of these contaminants (Da Silva et al., 2003). PAHs constitute a diverse class of organic compounds consisting of two or more aromatic rings in various structural configurations (Prabhu and Phale, 2003). Being derivatives of benzene, PAHs are thermodynamically stable. In addition, these chemicals tend to adhere to particle surfaces, such as soils, because of their low water solubility and strong hydrophobicity, which results in greater persistence under natural conditions. This persistence, coupled with their potential carcinogenicity, makes PAHs problematic environmental contaminants (Cerniglia, 1992; Sutherland, 1992). PAHs are widely found in high concentrations at many industrial sites, particularly those associated with the petroleum, gas production and wood preserving industries (Wilson and Jones, 1993).

1.2 Remediation technologies

Conventional techniques used for the remediation of soil polluted with organic contaminants include excavation of the contaminated soil and disposal to a landfill, or capping (containment) of the contaminated areas of a site. These methods have drawbacks. The first simply moves the contamination elsewhere and may create significant risks in the excavation, handling and transport of hazardous material; additionally, it is very difficult and increasingly expensive to find new landfill sites for the final disposal of the material.
The cap-and-containment method is only an interim solution, since the contamination remains on site, requiring monitoring and maintenance of the isolation barriers long into the future, with all the associated costs and potential liability. A better approach than these traditional methods is to destroy the pollutants completely, if possible, or to transform them into harmless substances. Technologies that have been used include high-temperature incineration and various types of chemical decomposition (for example, base-catalyzed dechlorination and UV oxidation). However, these methods have significant disadvantages, principally their technological complexity, high cost, and lack of public acceptance. Bioremediation, by contrast, is a promising option for the complete removal and destruction of contaminants.

1.3 Bioremediation of PAH-contaminated soil and groundwater

Bioremediation is the use of living organisms, primarily microorganisms, to degrade or detoxify hazardous wastes into harmless substances such as carbon dioxide, water and cell biomass. Most PAHs are biodegradable under natural conditions (Da Silva et al., 2003; Meysami and Baheri, 2003), and bioremediation for the cleanup of PAH wastes has been extensively studied at both laboratory and commercial levels. It has been implemented at a number of contaminated sites, including the cleanup of the Exxon Valdez oil spill in Prince William Sound, Alaska in 1989, the Mega Borg spill off the Texas coast in 1990 and the Burgan Oil Field, Kuwait in 1994 (Purwaningsih, 2002). Different strategies for PAH bioremediation, such as in situ, ex situ or on-site bioremediation, have been developed in recent years. In situ bioremediation is a technique applied to soil and groundwater at the site, without removing the contaminated soil or groundwater, based on the provision of optimum conditions for microbiological contaminant breakdown.
Ex situ bioremediation of PAHs, on the other hand, is applied to soil and groundwater that has been removed from the site via excavation (soil) or pumping (water); hazardous contaminants are then converted efficiently into harmless compounds in controlled bioreactors.

1.4 Bioavailability of PAHs in the subsurface

Frequently, PAH contamination in the environment occurs as contaminants sorbed onto soil particles rather than as a separate phase (NAPL, non-aqueous phase liquid). It is known that the biodegradation rate of most PAHs sorbed onto soil is far lower than the rates measured in solution cultures of microorganisms with pure solid pollutants (Alexander and Scow, 1989; Hamaker, 1972). It is generally believed that only the fraction of PAHs dissolved in the soil solution can be metabolized by microorganisms. The amount of contaminant that can be readily taken up and degraded by microorganisms is defined as its bioavailability (Bosma et al., 1997; Maier, 2000). Two phenomena have been suggested to cause the low bioavailability of PAHs in soil (Danielsson, 2000). The first is strong adsorption of the contaminants to the soil constituents, which leads to very slow release rates of contaminants to the aqueous phase. Sorption is often well correlated with soil organic matter content (Means, 1980) and significantly reduces biodegradation (Manilal and Alexander, 1991). The second phenomenon is slow mass transfer of pollutants, such as pore diffusion within soil aggregates or diffusion in the soil organic matter. The complex set of these physical, chemical and biological processes is schematically illustrated in Figure 1: biodegradation takes place in the soil solution, while diffusion occurs in the narrow pores in and between soil aggregates (Danielsson, 2000).
Seemingly contradictory studies in the literature indicate that the rate and final extent of metabolism may be either lower or higher for soil-sorbed PAHs than for pure PAHs (Van Loosdrecht et al., 1990). These contrasting results demonstrate that the bioavailability of organic contaminants sorbed onto soil is far from well understood. Besides bioavailability, several other factors influence the rate and extent of biodegradation of PAHs in soil, including microbial population characteristics, the physical and chemical properties of the PAHs, and environmental factors (temperature, moisture, pH, degree of contamination). Figure 1: Schematic diagram showing possible rate-limiting processes during bioremediation of hydrophobic organic contaminants in a contaminated soil-water system (not to scale) (Danielsson, 2000).

1.5 Increasing the bioavailability of PAHs in soil

Attempts to improve the biodegradation of PAHs in soil by increasing their bioavailability include the use of surfactants, solvents or solubility enhancers. However, the introduction of a synthetic surfactant may simply add one more pollutant (Wang and Brusseau, 1993). A study by Mulder et al. showed that the introduction of hydroxypropyl-β-cyclodextrin (HPCD), a well-known PAH solubility enhancer, significantly increased the solubilization of PAHs but did not improve their biodegradation rate (Mulder et al., 1998), indicating that further research is required to develop a feasible and efficient remediation method. Enhancing PAH mass transfer from the soil phase to the liquid phase might prove an efficient and environmentally low-risk way of addressing the problem of slow PAH biodegradation in soil.
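The desorption-limited kinetics described above can be illustrated with a minimal two-compartment sketch: first-order desorption releasing contaminant from the sorbed pool into solution, where first-order biodegradation consumes it. All rate constants are illustrative assumptions, not values from the cited studies:

```python
# Minimal sketch of desorption-limited biodegradation: sorbed mass S is slowly
# released (k_des) into the dissolved pool D, where microbes degrade it (k_bio).
# Rate constants are illustrative assumptions, not measured values.

def simulate(s0=100.0, d0=0.0, k_des=0.01, k_bio=0.5, dt=0.1, t_end=500.0):
    s, d, degraded = s0, d0, 0.0
    t = 0.0
    while t < t_end:
        flux_des = k_des * s * dt               # slow release from soil
        flux_bio = k_bio * d * dt               # fast degradation in solution
        flux_bio = min(flux_bio, d + flux_des)  # cannot degrade more than is dissolved
        s -= flux_des
        d += flux_des - flux_bio
        degraded += flux_bio
        t += dt
    return s, d, degraded

s, d, degraded = simulate()
# With k_des << k_bio the dissolved concentration stays near zero and overall
# removal is controlled by desorption: the bioavailability limit in miniature.
```

Because degradation in solution is much faster than release from the soil, the dissolved pool remains nearly empty and the cleanup time scale is set entirely by the desorption constant, mirroring the mass-transfer limitation discussed in section 1.4.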
Abstract:
Deformability is often crucial to the design of many civil-engineering structural elements, and design becomes all the more burdensome when both long- and short-term deformability must be considered. In this thesis, long- and short-term deformability has been studied from both the material and the structural modelling points of view, for two materials: pultruded composites and concrete. A new finite element model for thin-walled beams has been introduced. As its main assumption, cross-sections are considered rigid in their plane; this hypothesis replaces the classical beam-theory hypothesis of plane cross-sections in the deformed state. It also reduces the total number of degrees of freedom, making the analysis faster than with two-dimensional finite elements. Warping in the longitudinal direction is left free, allowing phenomena such as shear lag to be described. The new finite-element model was first applied to concrete thin-walled beams (such as high-span roof girders or bridge girders) subject to instantaneous service loadings. Concrete in its cracked state was considered through a smeared crack model for beams under bending. At a second stage, the FE model was extended to the viscoelastic field and applied to pultruded composite beams under sustained loadings; the generalized Maxwell model was adopted. As far as materials are concerned, long-term creep tests were carried out on pultruded specimens, in both tension and shear. Some specimens were strengthened with carbon fibre plies to reduce short- and long-term deformability. Tests were performed in a climate room, with specimens kept under constant load for 2 years. As for concrete, a model for tertiary creep has been proposed. The basic idea is to couple the UMLV linear creep model with a damage model in order to describe the nonlinearity. An effective strain tensor, weighting the total and elasto-damaged strain tensors, controls damage evolution through the damage loading function. Creep strains are related to the effective stresses (as defined by damage models) and are thus associated with the intact material.
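For reference, the generalized Maxwell model adopted for the viscoelastic extension corresponds to the standard Prony-series relaxation modulus (the textbook form, not an expression quoted from the thesis):

```latex
E(t) = E_\infty + \sum_{i=1}^{n} E_i \, e^{-t/\tau_i}
```

where $E_\infty$ is the long-term (fully relaxed) modulus and each Maxwell branch $i$ contributes a stiffness $E_i$ with relaxation time $\tau_i$; fitting the pairs $(E_i, \tau_i)$ to creep or relaxation data is what makes the model usable in a finite-element formulation.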
Abstract:
In the last decade, demand for structural health monitoring expertise has increased exponentially in the United States. The aging issues that most transportation structures are experiencing can put the economic system of a region, or of a country, in serious jeopardy. At the same time, the monitoring of structures is a central topic of discussion in Europe, where the preservation of historical buildings has been addressed over the last four centuries. More recently, various concerns have arisen about the security performance of civil structures after tragic events such as 9/11 or the 2011 Japan earthquake: engineers look for designs able to resist exceptional loadings due to earthquakes, hurricanes and terrorist attacks. After events of this kind, assessing the remaining life of the structure is at least as important as the initial performance design. Consequently, it is clear that the introduction of reliable and accessible damage assessment techniques is crucial for localizing issues and for correct and immediate rehabilitation. System Identification is a branch of the more general Control Theory. In civil engineering, this field addresses the techniques needed to find mechanical characteristics, such as stiffness or mass, starting from the signals captured by sensors. The objective of Dynamic Structural Identification (DSI) is to determine, from experimental measurements, the fundamental modal parameters of a generic structure in order to characterize its dynamic behavior via a mathematical model. Knowledge of these parameters is helpful in the Model Updating procedure, which allows theoretical models to be corrected through experimental validation. The main aim of this technique is to minimize the differences between the theoretical model results and in situ measurements of dynamic data.
Therefore, the updated model becomes a very effective control practice for the rehabilitation of structures and for damage assessment. Instrumenting a whole structure is sometimes unfeasible, because of the high cost involved or because it is not physically possible to reach every point of the structure. Numerous scholars have therefore tried to address this problem, and two main approaches are generally involved. Given a limited number of sensors, in the first case it is possible to gather time histories only at some locations, then move the instruments to other locations and repeat the procedure. Otherwise, if the number of sensors is sufficient and the structure does not present a complicated geometry, it is usually enough to detect only the first principal modes. These two problems are well presented in the works of Balsamo [1], for the application to a simple system, and Jun [2], for the analysis of a system with a limited number of sensors. Once system identification has been carried out, the actual system characteristics become accessible. A frequent practice is to create an updated FEM model and assess whether or not the structure fulfills the required functions. The objective of this work is to present a general methodology for analyzing large structures using limited instrumentation while, at the same time, obtaining the most information about the identified structure without resorting to methodologies of difficult interpretation. A general framework for the state-space identification procedure via the OKID/ERA algorithm is developed and implemented in Matlab. Some simple examples are then proposed to highlight the principal characteristics and advantages of this methodology, and a new algebraic manipulation for a prolific use of substructuring results is developed and implemented.
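The ERA half of the OKID/ERA pipeline mentioned above can be sketched in a few lines. The following is a minimal NumPy illustration on the impulse response of a synthetic single-mode system; the "true" system matrices and Hankel sizes are assumptions made up for the example, not the thesis's Matlab implementation:

```python
import numpy as np

# Eigensystem Realization Algorithm (ERA): recover a discrete-time state-space
# model (A, B, C) from impulse-response (Markov) parameters via an SVD of the
# block-Hankel matrix. Toy SISO example with one lightly damped mode.

def era(markov, n, rows=20, cols=20):
    """markov[k] = C A^k B. Returns an order-n realization (A, B, C)."""
    H0 = np.array([[markov[i + j] for j in range(cols)] for i in range(rows)])
    H1 = np.array([[markov[i + j + 1] for j in range(cols)] for i in range(rows)])
    U, s, Vt = np.linalg.svd(H0, full_matrices=False)
    Un, Vn = U[:, :n], Vt[:n, :].T
    sq = np.diag(np.sqrt(s[:n]))
    isq = np.diag(1.0 / np.sqrt(s[:n]))
    A = isq @ Un.T @ H1 @ Vn @ isq   # shifted-Hankel identity: H1 = O A R
    B = (sq @ Vn.T)[:, :1]           # first column of the controllability factor
    C = (Un @ sq)[:1, :]             # first row of the observability factor
    return A, B, C

# Synthetic single-mode system: pole pair at 0.98 * exp(+/- 0.3i) (an assumption).
rho, theta = 0.98, 0.3
A_true = rho * np.array([[np.cos(theta), np.sin(theta)],
                         [-np.sin(theta), np.cos(theta)]])
B_true = np.array([[0.0], [1.0]])
C_true = np.array([[1.0, 0.0]])
markov = [(C_true @ np.linalg.matrix_power(A_true, k) @ B_true)[0, 0]
          for k in range(60)]

A, B, C = era(markov, n=2)
eigs = np.linalg.eigvals(A)  # identified poles; should match the true pair
```

The eigenvalues of the identified `A` give the discrete-time poles, from which natural frequencies and damping ratios follow; in practice the Markov parameters come from OKID rather than a known model, and the singular value spectrum of `H0` is inspected to choose the model order `n`.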
Abstract:
The research is part of a survey, funded by the Reno River Basin Regional Technical Service of the Emilia-Romagna Region, for the detection of the hydraulic and geotechnical conditions of river embankments. The hydraulic safety of the Reno River, one of the main rivers in north-eastern Italy, is indeed of primary importance to the Emilia-Romagna regional administration. The large longitudinal extent of the banks (several hundred kilometres) has generated great interest in non-destructive geophysical methods, which, compared to other methods such as drilling, allow faster and often less expensive acquisition of high-resolution data. The present work aims to evaluate Ground Penetrating Radar (GPR) for the detection of local non-homogeneities (mainly stratigraphic contacts, cavities and conduits) inside the embankments of the Reno River and its tributaries, taking into account supplementary data collected with traditional destructive tests (boreholes, cone penetration tests, etc.). A comparison with other non-destructive methodologies, such as electrical resistivity tomography (ERT), multi-channel analysis of surface waves (MASW) and FDEM induction, was also carried out in order to verify the usability of GPR and to support the integration of various geophysical methods into the regular maintenance and checking of embankment conditions. The first part of this thesis explains the state of the art concerning the geographic, geomorphologic and geotechnical characteristics of the Reno River and its tributaries' embankments, and describes some geophysical applications on embankments of European and North American rivers that were used as the bibliographic basis for this thesis.
The second part is an overview of the geophysical methods employed in this research (with particular attention to GPR), reporting their theoretical basis and examining some techniques for geophysical data analysis and representation as applied to river embankments. The subsequent chapters, following the main scope of this research, namely to highlight the advantages and drawbacks of Ground Penetrating Radar applied to the embankments of the Reno River and its tributaries, show the results obtained by analyzing different cases that could lead to the formation of weakness zones and, subsequently, to embankment failure. Among the advantages, a considerable acquisition speed and a spatial resolution of the acquired data unmatched by the other methodologies were recorded. Among the drawbacks, some factors related to attenuation losses during wave propagation, due to different contents of clay, silt and sand, as well as surface effects, significantly limited the correlation between GPR profiles and geotechnical information and therefore hindered the embankment safety assessment. In summary, Ground Penetrating Radar can be a suitable tool for checking river dike conditions, but its use is significantly limited by the geometric and geotechnical characteristics of the Reno River and its tributaries' levees. In fact, only the shallower part of the embankment could be investigated, and the information obtained relates only to changes in electrical properties, without any numerical measurement. Consequently, GPR application is ineffective for a preliminary assessment of embankment safety conditions, whereas for detailed campaigns at shallow depth, aiming at immediate results with optimal precision, its use is highly recommended.
The cases where a multidisciplinary approach was tested revealed an effective interconnection of the various geophysical methodologies employed: qualitative results in the preliminary phase (FDEM), a quantitative and high-confidence description of the subsoil (ERT) and, finally, fast and highly detailed analysis (GPR). As a recommendation for future research, the simultaneous use of several geophysical devices to assess the safety conditions of river embankments is strongly suggested, especially when facing likely flood events, when the entire extent of the embankments themselves must be investigated.
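For reference, the depth information a GPR profile carries follows from the standard velocity-permittivity relation for low-loss soils (a textbook relation, not a result of this thesis):

```latex
v = \frac{c}{\sqrt{\varepsilon_r}}, \qquad d = \frac{v \, t_{2w}}{2}
```

where $c$ is the speed of light in vacuum, $\varepsilon_r$ the relative dielectric permittivity of the soil, $t_{2w}$ the two-way travel time of the reflected pulse and $d$ the reflector depth; higher clay and water contents raise both $\varepsilon_r$ and the attenuation, which is precisely the limitation on penetration depth reported above.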
Abstract:
This thesis is a collection of essays on the topic of innovation in the service sector. This structure serves the purpose of singling out some of the relevant issues and trying to tackle them, first reviewing the state of the literature and then proposing a way forward. Three relevant issues have therefore been selected: (i) the definition of innovation in the service sector and the connected question of the measurement of innovation; (ii) the issue of productivity in services; (iii) the classification of innovative firms in the service sector. Facing the first issue, chapter II shows how the initial breadth of the original Schumpeterian definition of innovation was narrowed and then transferred from manufacturing to the service sector in a reduced, technological form. Chapter III tackles the issue of productivity in services, discussing the difficulties of measuring productivity in a context where the output is often immaterial. We reconstruct the dispute over Baumol's cost disease argument and propose two ways forward for research on productivity in services: redefining the output along the lines of a characteristics approach; and redefining the inputs, in particular analysing which kinds of inputs are worth saving. Chapter IV derives an integrated taxonomy of innovative service and manufacturing firms, using data from the 2008 CIS survey for Italy. This taxonomy is based on the enlarged definition of "innovative firm" deriving from the Schumpeterian definition of innovation and classifies firms using cluster analysis techniques. The result is a four-cluster solution in which firms are differentiated by the breadth of the innovation activities in which they are involved. Chapter V reports the main conclusions of each of the previous chapters and the points worthy of further research.
Abstract:
Spinal cord injury (SCI) results not only in paralysis but is also associated with a range of autonomic dysregulation that can interfere with cardiovascular, bladder, bowel, temperature and sexual function. The extent of the autonomic dysfunction is related to the level and severity of injury to descending autonomic (sympathetic) pathways. For many years there was limited awareness of these issues, and the attention given to them by the scientific and medical community was scarce. Although a new system to document the impact of SCI on autonomic function has recently been proposed, the current standard of assessment of SCI (the American Spinal Injury Association (ASIA) examination) evaluates motor and sensory pathways but not the severity of injury to autonomic pathways. Besides its severe impact on quality of life, autonomic dysfunction in persons with SCI is associated with increased risk of cardiovascular disease and mortality. Therefore, obtaining information regarding autonomic function in persons with SCI is pivotal, and clinical examinations and laboratory evaluations to detect the presence of autonomic dysfunction and quantify its severity are mandatory. Furthermore, previous studies have demonstrated an intimate relationship between the autonomic nervous system and sleep from anatomical, physiological and neurochemical points of view. Although previous epidemiological studies have demonstrated that sleep problems are common in spinal cord injury (SCI), so far only limited polysomnographic (PSG) data are available. Finally, circadian and state-dependent autonomic regulation of blood pressure (BP), heart rate (HR) and body core temperature (BcT) have never been assessed in SCI patients. The aim of the current study was to establish the association between the autonomic control of cardiovascular function and thermoregulation, sleep parameters, and increased cardiovascular risk in SCI patients.
Abstract:
The first chapter examines the new technology of Cloud Computing, providing a simple analysis of its main characteristics, the actors involved, and the related deployment models and services offered. The second chapter introduces the notion of coordination as a service, discussing the abstractions that make up its logical architecture. The TuCSoN coordination model is then considered, defining what is meant by node, agent, tuple centre and agent coordination context, and analysing the coordination language through which they interact. In the third chapter, the previously acquired notions of TuCSoN are revised and extended to the Cloud Computing setting, and an abstract model and a possible architecture of TuCSoN in the Cloud are provided. The aspects of a possible service of this kind in a pay-per-use scenario are also analysed. Finally, the fourth and last chapter develops a case study in which an interface for the current TuCSoN CLI was implemented as an applet, which was then deployed to the Cloud through the Cloudify PaaS platform.
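The tuple-centre abstraction at the heart of models like TuCSoN builds on Linda-style coordination primitives. The following is a didactic Python sketch of a toy tuple space with `out`/`rd`/`in` operations, not the TuCSoN API (real tuple centres additionally support programmable reaction logic):

```python
import threading

# Toy Linda-style tuple space: agents coordinate by inserting (out), reading
# (rd) and consuming (in) tuples matched against templates, where None acts
# as a wildcard. Didactic sketch only; not the TuCSoN API.

class TupleSpace:
    def __init__(self):
        self._tuples = []
        self._cond = threading.Condition()

    def out(self, tup):
        """Insert a tuple and wake any agent blocked on a matching template."""
        with self._cond:
            self._tuples.append(tup)
            self._cond.notify_all()

    def _match(self, template):
        for tup in self._tuples:
            if len(tup) == len(template) and all(
                p is None or p == v for p, v in zip(template, tup)
            ):
                return tup
        return None

    def rd(self, template):
        """Blocking, non-destructive read of a matching tuple."""
        with self._cond:
            while (tup := self._match(template)) is None:
                self._cond.wait()
            return tup

    def in_(self, template):
        """Blocking, destructive take of a matching tuple."""
        with self._cond:
            while (tup := self._match(template)) is None:
                self._cond.wait()
            self._tuples.remove(tup)
            return tup

ts = TupleSpace()
ts.out(("task", 42))
assert ts.rd(("task", None)) == ("task", 42)   # still present after rd
assert ts.in_(("task", None)) == ("task", 42)  # removed by in
```

The blocking semantics of `rd` and `in` are what make the space a coordination medium rather than a plain shared store: producers and consumers synchronize implicitly through the presence or absence of tuples.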
Abstract:
The wide diffusion of cheap, small, portable sensors integrated into an unprecedentedly large variety of devices, together with the availability of almost ubiquitous Internet connectivity, makes it possible to collect an unprecedented amount of real-time information about the environment we live in. These data streams, if properly and promptly analyzed, can be exploited to build new intelligent and pervasive services with the potential to improve people's quality of life in a variety of domains, such as entertainment, health care, or energy management. The large heterogeneity of application domains, however, calls for a middleware-level infrastructure that can effectively support their different quality requirements. In this thesis we study the challenges related to the provisioning of differentiated quality of service (QoS) during the processing of data streams produced in pervasive environments. We analyze the trade-offs between guaranteed quality, cost, and scalability in stream distribution and processing by surveying existing state-of-the-art solutions and by identifying and exploring their weaknesses. We propose an original model for QoS-centric distributed stream processing in data centers and present Quasit, its prototype implementation, which offers a scalable and extensible platform that researchers can use to implement and validate novel QoS-enforcement mechanisms. To support our study, we also explore an original class of weaker quality guarantees that can reduce costs when application semantics do not require strict quality enforcement. We validate the effectiveness of this idea in a practical use-case scenario that investigates partial fault-tolerance policies in stream processing through a large experimental study on the prototype of our novel LAAR dynamic replication technique.
Our modeling, prototyping, and experimental work demonstrates that, by providing data distribution and processing middleware with application-level knowledge of the different quality requirements associated with different pervasive data flows, it is possible to improve system scalability while reducing costs.
Abstract:
Green roofs (GRs) increasingly represent a suitable technology for mitigating the problems connected with urbanization; however, knowledge of the performance of extensive GRs in a sub-Mediterranean climate is still limited. The present research is supported by 15 months of experimental analyses on two GRs located at the School of Engineering in Bologna. First, the hydrological and energy performances of the two GRs, characterized respectively by Sedum vegetation (SR) and by native perennial herbs (NR), are compared with each other and against a reference surface (RR). Both reduce runoff volumes and surface temperatures. The NR performs better than the SR both hydrologically and thermally: the physiology of the NR vegetation leads to daytime stomatal opening and, consequently, to higher evapotranspiration (ET). The daily variations of moisture in the SR substrate were then studied, showing that their amplitude is influenced by temperature, initial moisture and the vegetative phase. These variations were simulated with a hydrological model based on the water balance equation and on two conventional models for estimating potential ET, combined with a soil-moisture extraction function. Correction coefficients, obtained by calibration, were proposed to account for the differences between the reference crop and the GR vegetation during the growth phases. Finally, with the aid of a model implemented in SWMM 5.1.007 using the Low Impact Development (LID) module, continuous simulations (12 months) were used to evaluate the retention performance of the SR and RR plots. The calibrated and validated model proves able to reproduce satisfactorily the runoff volumes from the two plots.
Following a detailed calibration, the model could support engineers and administrations in evaluating the benefits deriving from the use of GRs.
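The substrate model described above rests on the usual bucket-type water balance; in generic form (a standard formulation, not an equation quoted from the thesis):

```latex
\frac{dS}{dt} = P - ET_a - R, \qquad ET_a = K_c \, f(S) \, ET_0
```

where $S$ is the moisture stored in the substrate, $P$ the precipitation, $R$ the runoff, $ET_0$ the reference-crop potential evapotranspiration given by a conventional model, $f(S)$ the soil-moisture extraction function, and $K_c$ the calibrated correction coefficient accounting for the differences between the reference crop and the green-roof vegetation during the growth phases.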
Abstract:
Mobile devices are now capable of supporting a wide range of applications, many of which demand ever-increasing computational power. To this end, mobile cloud computing (MCC) has been proposed to address the limited computation power, memory, storage, and energy of such devices. An important challenge in MCC is to guarantee seamless discovery of services. This thesis therefore proposes an architecture that provides user-transparent and low-latency service discovery, as well as automated service selection. Experimental results on a real cloud computing testbed demonstrate that the proposed work outperforms state-of-the-art approaches by achieving extremely low discovery delay.
Abstract:
Cognitive task performance differs considerably between individuals. Besides cognitive capacities, attention might be a source of such differences. The individual's EEG alpha frequency (IAF) is a putative marker of the subject's state of arousal and attention, and was found to be associated with task performance and cognitive capacities. However, little is known about the metabolic substrate (i.e. the network) underlying IAF. Here we aimed to identify this network. Correlation of IAF with regional Cerebral Blood Flow (rCBF) in fifteen young healthy subjects revealed a network of brain areas that are associated with the modulation of attention and preparedness for external input, which are relevant for task execution. We hypothesize that subjects with higher IAF have pre-activated task-relevant networks and thus are both more efficient in the task-execution, and show a reduced fMRI-BOLD response to the stimulus, not because the absolute amount of activation is smaller, but because the additional activation by processing of external input is limited due to the higher baseline.
Abstract:
Context During the past 2 decades, a major transition in the clinical characterization of psychotic disorders has occurred. The construct of a clinical high-risk (HR) state for psychosis has evolved to capture the prepsychotic phase, describing people presenting with potentially prodromal symptoms. The importance of this HR state has been increasingly recognized to such an extent that a new syndrome is being considered as a diagnostic category in the DSM-5. Objective To reframe the HR state in a comprehensive state-of-the-art review on the progress that has been made while also recognizing the challenges that remain. Data Sources Available HR research of the past 20 years from PubMed, books, meetings, abstracts, and international conferences. Study Selection and Data Extraction Critical review of HR studies addressing historical development, inclusion criteria, epidemiologic research, transition criteria, outcomes, clinical and functional characteristics, neurocognition, neuroimaging, predictors of psychosis development, treatment trials, socioeconomic aspects, nosography, and future challenges in the field. Data Synthesis Relevant articles retrieved in the literature search were discussed by a large group of leading worldwide experts in the field. The core results are presented after consensus and are summarized in illustrative tables and figures. Conclusions The relatively new field of HR research in psychosis is exciting. It has the potential to shed light on the development of major psychotic disorders and to alter their course. It also provides a rationale for service provision to those in need of help who could not previously access it and the possibility of changing trajectories for those with vulnerability to psychotic illnesses.
Abstract:
Through studying German, Polish and Czech publications on Silesia, Mr. Kamusella found that most of them, instead of trying to analyse the past objectively, are devoted to proving some essential "Germanness", "Polishness" or "Czechness" of this region. He believes that the terminology and thought-patterns of nationalist ideology are so deeply entrenched in the minds of researchers that they do not consider themselves nationalist. However, he notes that, owing to the spread of the results of the latest studies on ethnicity and nationalism (by Gellner, Hobsbawm, Smith, Eriksen and Billig, amongst others), German publications on Silesia have become quite objective since the 1980s, and the same process (impeded by underfunding) has been taking place in Poland and the Czech Republic since 1989. His own research totals some 500 pages, in English, presented on disc. So what are the traps into which historians have been inclined to fall? There is a tendency for them to treat Silesia as an entity which has existed forever, though Mr. Kamusella points out that it emerged as a region only at the beginning of the 11th century. These same historians speak of Poles, Czechs and Germans in Silesia, though Mr. Kamusella found that before the mid-19th century, identification was with an inhabitant's local area, religion or dynasty. In fact, a German national identity started to be forged in Prussian Silesia only during the Liberation War against Napoleon (1813-1815). It was concretised in 1861 in the form of the first Prussian census, when the language a citizen spoke was equated with his or her nationality. A similar census was carried out in Austrian Silesia only in 1881. The censuses forced the Silesians to choose their nationality despite their multiethnic, multicultural identities.
It was the active promotion of a German identity in Prussian Silesia, and Vienna's uneasy acceptance of the national identities in Austrian Silesia, which stimulated the development of Polish national, Moravian ethnic and Upper Silesian ethnic regional identities in Upper Silesia, and Polish national, Czech national, Moravian ethnic and Silesian ethnic identities in Austrian Silesia. While traditional historians speak of the "nationalist struggle" as though it were a permanent characteristic of Silesia, Mr. Kamusella points out that such a struggle only developed in earnest after 1918. What is more, he shows how it has been conveniently forgotten that, besides the national players, there were also significant ethnic movements of Moravians, Upper Silesians, Silesians and the tutejsi (i.e. those who still chose to identify with their locality). At this point Mr. Kamusella moves into the area of linguistics. While traditionally historians have spoken of the conflicts between the three national languages (German, Polish and Czech), Mr. Kamusella reminds us that the standardised forms of these languages, which we choose to dub "national", were developed only in the mid-18th century, after 1869 (when Polish became the official language in Galicia), and after the 1870s (when Czech became the official language in Bohemia). As for standard German, it was only widely promoted in Silesia from the mid-19th century onwards. In fact, the majority of the population of Prussian Upper Silesia and Austrian Silesia were bi- or even multilingual. What is more, the "Polish" and "Czech" that Silesians spoke were not the standard languages we know today, but a continuum of West-Slavic dialects in the countryside and a continuum of West-Slavic/German creoles in the urbanised areas.
Such was the linguistic confusion that, from time to time, some ethnic/regional and Church activists strove to create a distinctive Upper Silesian/Silesian language on the basis of these dialects/creoles, but their efforts were thwarted by the staunch promotion of standard German, and after 1918, of standard Polish and Czech. Still on the subject of language, Mr. Kamusella draws attention to a problem concerning place names and personal names. Polish historians use current Polish versions of the Silesian place names, Czechs use current Polish/Czech versions of the place names, and Germans use the German versions which were in use in Silesia up to 1945. Mr. Kamusella attempted to avoid this, as he sees it, nationalist tendency by using an appropriate version of a place name for a given period and providing its modern counterpart in parentheses. In the case of modern place names he gives the German version in parentheses. As for the names of historical figures, he strove to use the name entered on the birth certificate of the person involved, thereby avoiding such confusion as, for instance, surrounds the Austrian Silesian pastor L.J. Sherschnik, who in German became Scherschnick, in Polish, Szersznik, and in Czech, Sersnik. Indeed, the prospective Silesian scholar should, Mr. Kamusella suggests, know not only the three languages directly involved in the area itself, but also English and French, since many documents and books on the subject have been published in these languages, and even Latin when dealing in depth with the period before the mid-19th century. Mr. Kamusella divides the policies of ethnic cleansing into two categories. The first he classifies as soft, meaning that policy is confined to the educational system, army, civil service and the church, and the aim is that everyone learn the language of the dominant group. The second is the group of hard policies, which amount to what is popularly labelled as ethnic cleansing.
This category of policy aims at the total assimilation and/or physical liquidation of the non-dominant groups non-congruent with the ideal of homogeneity of a given nation-state. Mr. Kamusella found that soft policies were consciously and systematically employed by Prussia/Germany in Prussian Silesia from the 1860s to 1918, whereas in Austrian Silesia, Vienna quite inconsistently dabbled in them from the 1880s to 1917. In the inter-war period, the emergence of the nation-states of Poland and Czechoslovakia led to full application of the soft policies and partial application of the hard ones (curbed by the League of Nations minorities protection system) in Czechoslovakian Silesia, German Upper Silesia and the Polish parts of Upper and Austrian Silesia. In 1939-1945, Berlin started consistently using all the "hard" methods to homogenise Polish and Czechoslovakian Silesia, which fell, in their entirety, within the Reich's borders. After World War II Czechoslovakia regained its prewar part of Silesia, while Poland was given its prewar section plus almost the whole of the prewar German province. Subsequently, with the active involvement and support of the Soviet Union, Warsaw and Prague expelled the majority of Germans from Silesia in 1945-1948 (there were also instances of the Poles expelling Upper Silesian Czechs/Moravians, and of the Czechs expelling Czech Silesian Poles/pro-Polish Silesians). During the period of communist rule, the same two countries carried out a thorough Polonisation and Czechisation of Silesia, submerging this region into a new, non-historically based administrative division. Democratisation in the wake of the fall of communism, and a gradual retreat from the nationalist ideal of the homogeneous nation-state with a view to possible membership of the European Union, caused the abolition of the "hard" policies and the phasing out of the "soft" ones.
Consequently, limited revivals of various ethnic/national minorities have been observed in Czech and Polish Silesia, whereas Silesian regionalism has become popular in the westernmost part of Silesia, which remained part of Germany. Mr. Kamusella believes it is possible that, with the overcoming of the nation-state discourse in European politics, when the expression of multiethnicity and multilingualism has become the order of the day in Silesia, regionalism will hold sway in this region, uniting its ethnically/nationally variegated population in accordance with the principle of subsidiarity championed by the European Union.
Abstract:
The main goal of this project was to propose appropriate methods of analysing the effects of the privatisation of state-owned enterprises, methods which were then tested on a limited sample of 16 Polish and 8 German enterprises privatised in 1992. A considerable amount of information was collected covering the six-year period 1989-1994 and relating to most aspects of the companies' activities. The effects of privatisation were taken to be those changes within the enterprises which were the result of privatisation, in such areas as production, the productivity of labour and fixed assets, investments and innovations, employment and wages, economic incentives (especially for top managers), financing (internal and external sources), bad debts and economic effects (financial analysis). A second important goal was to identify the main factors which represent methodological obstacles in surveys of the effects of privatisation during a period of fundamental transformation of the entire economic system. The list of enterprises for the research was compiled in such a way as to allow for the differentiation of ownership structures of privatised firms and to permit (at least to a certain extent) the empirical verification of some hypotheses regarding the privatisation process. The enterprises selected were divided into the following three groups representing (as far as possible) various types of ownership structures or types of control: (1) enterprises controlled by strategic investors (domestic or foreign), (2) enterprises controlled by employees (employee-owned companies), (3) enterprises controlled by managers. Formal methods such as econometric models with varying parameters were used to separate pure privatisation effects from other factors which influence various aspects of an enterprise's operation, including policies on the productivity of labour and capital, average wages, the remuneration of top managers, etc.
While the group admits that their findings and conclusions cannot be treated as representative of all privatised enterprises in Poland and Germany, they found considerable convergence between their findings and those of other surveys conducted on a wider scale. The main hypotheses that were confirmed included that privatisation (especially in companies controlled by large investors and managers) leads to a significant increase in the effectiveness of the production process, and to growing pay differentials between different employee groups (e.g. between executives and rank-and-file employees) and between different jobs and positions within particular professional groups. They also confirmed the growing importance, among the incentives offered to top executives, of incentives linked to the company's economic results (particularly profit-related incentives), long-term incentives and the capital market.
Abstract:
A considerable portion of public lands in the United States is at risk of uncharacteristically severe wildfires due to a history of fire suppression. Wildfires already have detrimental impacts on the landscape and on communities in the wildland-urban interface (WUI) due to unnatural and overstocked forests. Strategies to mitigate wildfire risk include mechanical thinning and prescribed burning in areas with high wildfire risk. The material removed is often of little or no economic value. Woody biomass utilization (WBU) could offset the costs of hazardous fuel treatments if removed material could be used for wood products, heat, or electricity production. However, barriers due to transportation costs, removal costs, and physical constraints (such as steep slopes) hinder woody biomass utilization. Various federal and state policies attempt to overcome these barriers. WBU has the potential to aid in wildfire mitigation and meet growing state mandates for renewable energy. This research utilizes interview data from individuals involved with on-the-ground woody biomass removal and utilization to determine how federal and state policies influence woody biomass utilization. Results suggest that there is not one over-arching policy that hinders or promotes woody biomass utilization; rather, woody biomass utilization is hindered by organizational constraints related to the time, cost, and quality of land management agencies' actions. However, the use of stewardship contracting (a hybrid timber sale and service contract) shows promise for increased WBU, especially in states with favorable tax policies and renewable energy mandates. Policy recommendations to promote WBU include renewal of stewardship contracting legislation and a re-evaluation of land cover types suited for WBU. Potential future policies to consider include the indirect role of carbon dioxide emission reduction activities in promoting wood energy, and the future impacts of air quality regulations.