920 results for Reactive power limits of generation buses
Abstract:
This article examines the extent and limits of nonstate forms of authority in international relations. It analyzes how the information and communication technology (ICT) infrastructure for the tradability of services in a global knowledge-based economy relies on informal regulatory practices for the adjustment of ICT-related skills. By focusing on the challenge that highly volatile and short-lived cycles of demand for this type of knowledge pose for ensuring the right qualification of the labor force, the article explores how companies and associations provide training and certification programs as part of a growing market for educational services that sets its own standards. The existing literature on non-conventional forms of authority in the global political economy has emphasized that the consent of actors, subject to informal rules and some form of state support, remains crucial for the effectiveness of these new forms of power. However, analyses based on a limited sample of actors tend toward a narrow understanding of the issues concerned and fail to fully explore the differentiated space in which nonstate authority is emerging. This article develops a three-dimensional analytical framework that brings together the scope of the issues involved, the range of nonstate actors concerned, and the spatial scope of their authority. The empirical findings highlight the limits of these new forms of nonstate authority and shed light on the role of the state and international governmental organizations in this new context.
Abstract:
Abstract: The occupational health risk involved with handling nanoparticles is the probability that a worker will experience an adverse health effect: this is calculated as a function of the worker's exposure relative to the potential biological hazard of the material. Addressing the risks of nanoparticles therefore requires knowledge of occupational exposure and the release of nanoparticles into the environment, as well as toxicological data. However, information on exposure is currently not systematically collected; this risk assessment therefore lacks quantitative data. This thesis aimed, first, at creating the fundamental data necessary for a quantitative assessment and, second, at evaluating methods to measure occupational nanoparticle exposure. The first goal was to determine what is being used, and where, in Swiss industries. This was followed by an evaluation of the adequacy of existing measurement methods to assess workplace nanoparticle exposure to complex size distributions and concentration gradients. The study was conceived as a series of methodological evaluations aimed at better understanding nanoparticle measurement devices and methods. It focused on inhalation exposure to airborne particles, as respiration is considered to be the most important entrance pathway for nanoparticles into the body in terms of risk. The targeted survey (pilot study) was conducted as a feasibility study for a later nationwide survey on the handling of nanoparticles and the application of specific protection means in industry. The study consisted of targeted phone interviews with health and safety officers of Swiss companies that were believed to use or produce nanoparticles. This was followed by a representative survey on the level of nanoparticle usage in Switzerland, designed on the basis of the results of the pilot study.
The study was conducted among a representative selection of clients of the Swiss National Accident Insurance Fund (SUVA), covering about 85% of Swiss production companies. The third part of this thesis focused on the methods to measure nanoparticles. Several pre-studies were conducted to study the limits of commonly used measurement devices in the presence of nanoparticle agglomerates. This focus was chosen because several discussions with users and producers of the measurement devices raised questions about their accuracy in measuring nanoparticle agglomerates and because, at the same time, the two survey studies revealed that such powders are frequently used in industry. The first preparatory experiment focused on the accuracy of the scanning mobility particle sizer (SMPS), which showed an improbable size distribution when measuring powders of nanoparticle agglomerates. Furthermore, the thesis includes a series of smaller experiments that took a closer look at problems encountered with other measurement devices in the presence of nanoparticle agglomerates: condensation particle counters (CPC), a portable aerosol spectrometer (PAS), a device to estimate the aerodynamic diameter, as well as diffusion size classifiers. Some initial feasibility tests of the efficiency of filter-based sampling and subsequent counting of carbon nanotubes (CNT) were conducted last. The pilot study provided a detailed picture of the types and amounts of nanoparticles used and of the knowledge of the health and safety experts in the companies. Considerable maximal quantities (> 1,000 kg/year per company) of Ag, Al-Ox, Fe-Ox, SiO2, TiO2, and ZnO (mainly first-generation particles) were declared by the contacted Swiss companies. The median quantity of handled nanoparticles, however, was 100 kg/year. The representative survey was conducted by contacting by post a representative selection of 1,626 clients of SUVA (the Swiss National Accident Insurance Fund).
It allowed estimation of the number of companies and workers dealing with nanoparticles in Switzerland. The extrapolation from the surveyed companies to all companies of the Swiss production sector suggested that 1,309 workers (95% confidence interval: 1,073 to 1,545) of the Swiss production sector are potentially exposed to nanoparticles in 586 companies (145 to 1,027). These numbers correspond to 0.08% (0.06% to 0.09%) of all workers and to 0.6% (0.2% to 1.1%) of companies in the Swiss production sector. To measure airborne concentrations of sub-micrometre-sized particles, a few well-known methods exist. However, it was unclear how well the different instruments perform in the presence of the often quite large agglomerates of nanostructured materials. The evaluation of devices and methods focused on nanoparticle agglomerate powders. It allowed the identification of the following potential sources of inaccurate measurements at workplaces with considerably high concentrations of airborne agglomerates:
- A standard SMPS showed bimodal particle size distributions when measuring large nanoparticle agglomerates.
- Differences in the range of a factor of a thousand were shown between diffusion size classifiers and CPC/SMPS.
- The agreement between CPC/SMPS and the portable aerosol spectrometer (PAS) was much better but, depending on the concentration, size or type of the powders measured, the differences were still as large as an order of magnitude.
- Specific difficulties and uncertainties in the assessment of workplaces were identified: background particles can interact with particles created by a process, which makes the handling of the background concentration difficult.
- Electric motors produce high numbers of nanoparticles and confound the measurement of the process-related exposure.
Conclusion: The surveys showed that nanoparticle applications exist in many industrial sectors in Switzerland and that some companies already use high quantities of them.
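The extrapolation above is, at its core, proportional scaling from the surveyed companies to the whole production sector. A minimal sketch with invented illustrative numbers (the abstract gives neither the raw survey counts nor the thesis's actual estimator or confidence-interval method):

```python
def extrapolate(sample_count, sample_size, population_size):
    """Scale a raw survey count up to the whole population, assuming the
    sample is representative (simple proportional point estimate; the
    thesis's real weighting and CI method are not reproduced here)."""
    return sample_count * population_size / sample_size

# Invented numbers for illustration only: if 20 of the 1,626 surveyed
# companies reported nanoparticle use and the sector held ~47,000
# companies, the point estimate of using companies would be:
companies_est = extrapolate(20, 1626, 47000)  # about 578 companies
```

The sampling uncertainty around such a point estimate is what the reported 95% confidence intervals (e.g. 145 to 1,027 companies) express.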
The representative survey demonstrated a low prevalence of nanoparticle usage in most branches of Swiss industry and led to the conclusion that the introduction of applications using nanoparticles (especially outside industrial chemistry) is only beginning. Even though the number of potentially exposed workers was reportedly rather small, it nevertheless underscores the need for exposure assessments. Understanding exposure and how to measure it correctly is very important because the potential health effects of nanomaterials are not yet fully understood. The evaluation showed that many devices and methods of measuring nanoparticles need to be validated for nanoparticle agglomerates before large exposure assessment studies can begin. Summary: The occupational health risk of nanoparticles is the probability that a worker suffers a possible health effect when exposed to this material; it is usually calculated as the product of hazard and exposure. A thorough assessment of the possible risks of nanomaterials therefore requires information on the release of such materials into the environment on the one hand, and on the exposure of workers on the other. Much of this information is still not collected systematically and is therefore missing from risk analyses. The aim of this doctoral thesis was to create the basis for a quantitative estimate of occupational exposure to nanoparticles and to evaluate the methods needed to measure such exposure.
The study was to investigate the extent to which nanoparticles are already used in Swiss industry, how many workers potentially come into contact with them, and whether the available measurement technology suffices for the necessary workplace exposure measurements. It focused on exposure to airborne particles, because inhalation is regarded as the main entry route for particles into the body. The thesis builds on three phases: a qualitative survey (pilot study), a representative Swiss survey, and several technical studies serving the specific understanding of the possibilities and limits of individual measurement devices and techniques. The qualitative telephone survey was carried out as a preliminary study for a national, representative survey of Swiss industry. It aimed at information on the occurrence of nanoparticles and on the protective measures applied. The study consisted of targeted telephone interviews with occupational health and safety specialists of Swiss companies. The companies were selected on the basis of publicly available information indicating that they handle nanoparticles. The second part of the doctoral thesis was the representative study evaluating the prevalence of nanoparticle applications in Swiss industry. The study built on information from the pilot study and was conducted with a representative selection of companies insured by the Swiss National Accident Insurance Fund (SUVA). The majority of Swiss companies in the industrial sector were thereby covered. The third part of the doctoral thesis focused on the methodology for measuring nanoparticles. Several preliminary studies were carried out to probe the limits of commonly used nanoparticle measurement devices when they must measure larger quantities of nanoparticle agglomerates.
This focus was chosen for two reasons: because several discussions with users and also the producer of the measurement devices suggested a weak point there, raising doubts about the accuracy of the devices, and because the two survey studies showed that such nanoparticle agglomerates occur frequently. A first preliminary study addressed the accuracy of the Scanning Mobility Particle Sizer (SMPS). In the presence of nanoparticle agglomerates, this device displayed an implausible bimodal particle size distribution. A series of short experiments followed, concentrating on other measurement devices and their problems when measuring nanoparticle agglomerates: the condensation particle counter (CPC), the portable aerosol spectrometer (PAS), a device for estimating the aerodynamic diameter of particles, and the diffusion size classifier were tested. Finally, some first feasibility tests were carried out to determine the efficiency of filter-based measurement of airborne carbon nanotubes (CNT). The pilot study provided a detailed picture of the types and quantities of nanoparticles used in Swiss companies and showed the state of knowledge of the interviewed health and safety specialists. The following types of nanoparticles were reported by the contacted companies as maximal quantities (> 1,000 kg per year per company): Ag, Al-Ox, Fe-Ox, SiO2, TiO2, and ZnO (mainly first-generation nanoparticles). The quantities of nanoparticles used varied widely, with a median of 100 kg per year. In the quantitative questionnaire study, 1,626 companies, all clients of the Swiss National Accident Insurance Fund (SUVA), were contacted by post. The results of the survey allowed an estimate of the number of companies and workers using nanoparticles in Switzerland.
The extrapolation to the Swiss industrial sector yielded the following picture: in 586 companies (95% confidence interval: 145 to 1,027 companies), 1,309 workers are potentially exposed to nanoparticles (95% CI: 1,073 to 1,545). These figures correspond to 0.6% of Swiss companies (95% CI: 0.2% to 1.1%) and 0.08% of the workforce (95% CI: 0.06% to 0.09%). A few well-established technologies exist for measuring the airborne concentration of submicrometre particles. However, it is doubtful to what extent these technologies can also be used to measure engineered nanoparticles. For this reason, the preparatory studies for the workplace assessments focused on the measurement of powders containing nanoparticle agglomerates. They allowed the identification of the following possible sources of erroneous measurements at workplaces with elevated airborne concentrations of nanoparticle agglomerates:
- A standard SMPS showed an implausible bimodal particle size distribution when measuring larger nanoparticle agglomerates.
- Large differences, in the range of a factor of a thousand, were found between a diffusion size classifier and several CPCs (and the SMPS, respectively).
- The differences between CPC/SMPS and the PAS were smaller but, depending on the size or type of the powder measured, they were still in the range of a good order of magnitude.
- Specific difficulties and uncertainties in workplace measurements were identified: background particles can interact with particles released during a work process; such interactions make it difficult to account correctly for the background particle concentration in the measurement data.
- Electric motors produce large quantities of nanoparticles and can thus confound the measurement of the process-related exposure.
Conclusion: The surveys showed that nanoparticles are already a reality in Swiss industry and that some companies already use large quantities of them. The representative survey, however, moderated this striking finding somewhat by showing that the number of such companies across Swiss industry as a whole is relatively small. In most branches (especially outside the chemical industry) few or no applications were found, which suggests that the introduction of this new technology is only at the beginning of its development. Even if the number of potentially exposed workers is still relatively small, the study nevertheless underscores the need for exposure measurements at these workplaces. Knowledge of the exposure and of how to measure it correctly is very important, especially because the possible health effects are not yet fully understood. The evaluation of several devices and methods, however, showed that there is still ground to make up here: before larger measurement studies can be carried out, the devices and methods must be validated for use with nanoparticle agglomerates.
Abstract:
Aerobic exercise training performed at the intensity eliciting maximal fat oxidation (Fatmax) has been shown to improve the metabolic profile of obese patients. However, limited information is available on the reproducibility of Fatmax and related physiological measures. The aim of this study was to assess the intra-individual variability of: a) Fatmax measurements determined using three different data analysis approaches and b) fat and carbohydrate oxidation rates at rest and at each stage of an individualized graded test. Fifteen healthy males [body mass index 23.1±0.6 kg/m2, maximal oxygen consumption (VO2max) 52.0±2.0 ml/kg/min] completed a maximal test and two identical submaximal incremental tests on a cycle ergometer (30-min rest followed by 5-min stages with increments of 7.5% of the maximal power output). Fat and carbohydrate oxidation rates were determined using indirect calorimetry. Fatmax was determined with three approaches: the sine model (SIN), measured values (MV) and a 3rd-order polynomial curve (P3). Intra-individual coefficients of variation (CVs) and limits of agreement were calculated. The CV for Fatmax determined with SIN was 16.4% and tended to be lower than with P3 and MV (18.6% and 20.8%, respectively). Limits of agreement for Fatmax were -2±27% of VO2max with SIN, -4±32 with P3 and -4±28 with MV. CVs of oxygen uptake, carbon dioxide production and the respiratory exchange ratio were <10% at rest and <5% during exercise. Conversely, CVs of fat oxidation rates (20% at rest and 24-49% during exercise) and carbohydrate oxidation rates (33.5% at rest, 8.5-12.9% during exercise) were higher. The intra-individual variability of Fatmax and fat oxidation rates was high (CV>15%), regardless of the data analysis approach employed. Further research on the determinants of the variability of Fatmax and fat oxidation rates is required.
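As a hedged sketch of two of the approaches named above: the MV approach takes the stage with the highest measured fat oxidation rate, while the P3 approach fits a 3rd-order polynomial and takes its maximum over the measured intensity range. The data points below are invented for illustration, and the SIN model is omitted:

```python
import numpy as np

# Invented example data for one graded test: stage intensity (% of maximal
# power output) and the fat oxidation rate measured at each stage (g/min).
intensity = np.array([20.0, 30.0, 40.0, 50.0, 60.0, 70.0])
fat_ox = np.array([0.25, 0.33, 0.38, 0.36, 0.28, 0.15])

# MV approach: Fatmax is simply the stage with the highest measured value.
fatmax_mv = intensity[np.argmax(fat_ox)]

# P3 approach: fit a 3rd-order polynomial and locate its maximum on a fine
# grid restricted to the measured intensity range.
coefs = np.polyfit(intensity, fat_ox, 3)
grid = np.linspace(intensity.min(), intensity.max(), 1001)
fatmax_p3 = grid[np.argmax(np.polyval(coefs, grid))]

def cv_percent(day1, day2):
    """Intra-individual CV (%) between two test days, computed per pair
    as sample SD / mean and averaged over the pairs."""
    pairs = np.stack([np.asarray(day1), np.asarray(day2)])
    return float(np.mean(pairs.std(axis=0, ddof=1) / pairs.mean(axis=0)) * 100)
```

Because the polynomial smooths over measurement noise, the P3 estimate generally falls near, but not exactly at, the MV stage, which is one reason the study compares the reproducibility of the approaches.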
Abstract:
Lime sludge, an inert material mostly composed of calcium carbonate, is the result of softening hard water for distribution as drinking water. A large city such as Des Moines, Iowa, produces about 30,700 tons of lime sludge (dry weight basis) annually (Jones et al., 2005). Eight Iowa cities representing, according to the United States (U.S.) Census Bureau, 23% of the state's population of 3 million were surveyed. They estimated that they collectively produce 64,470 tons of lime sludge (dry weight basis) per year and currently have 371,800 tons (dry weight basis) stockpiled. Recently, the Iowa Department of Natural Resources directed those cities using lime softening in drinking water treatment to stop digging new lagoons to dispose of lime sludge. Five Iowa cities with stockpiles of lime sludge funded this research. The research goal was to find useful and economical alternatives for the use of lime sludge. Feasibility studies tested the efficacy of using lime sludge in cement production, power plant SOx treatment, dust control on gravel roads, wastewater neutralization, and in-fill materials for road construction. The applications in cement production, power plant SOx treatment, wastewater neutralization, and as a fill material for road construction showed positive results, but the dust control application did not. Since the fill material application showed the most promise in accomplishing the project's goal within the time limits of this research project, it was chosen for further investigation. Lime sludge is classified as inorganic silt with low plasticity. Since it has an unconfined compressive strength of only approximately 110 kPa, mixtures with fly ash and cement were developed to obtain higher strengths. When fly ash was added at a rate of 50% of the dry weight of the lime sludge, the unconfined strength increased to 1600 kPa.
Further, friction angles and California Bearing Ratios were higher than those published for soils of the same classification. However, the mixtures did not perform well in durability tests: they did not survive 12 cycles of freezing and thawing and of wetting and drying without excessive mass and volume loss. Thus, these mixtures must be placed at depths below the freezing line in the soil profile. The results demonstrated that chemically stabilized lime sludge can contribute bulk volume to embankments in road construction projects.
Abstract:
Pulse-width-modulated (PWM) rectifier technology is increasingly used in industrial applications like variable-speed motor drives, since it offers several desired features such as sinusoidal input currents, controllable power factor, bidirectional power flow and high-quality DC output voltage. To achieve these features, however, an effective control system with fast and accurate current and DC voltage responses is required. Of the various control strategies proposed to meet these control objectives, in most cases the commonly known principle of synchronous-frame current vector control along with some space-vector PWM scheme has been applied. Recently, however, new control approaches analogous to the well-established direct torque control (DTC) method for electrical machines have also emerged to implement a high-performance PWM rectifier. In this thesis the concepts of classical synchronous-frame current control and DTC-based PWM rectifier control are combined and a new converter-flux-based current control (CFCC) scheme is introduced. To achieve sufficient dynamic performance and to ensure stable operation, the proposed control system is thoroughly analysed and simple rules for the controller design are suggested. Special attention is paid to the estimation of the converter flux, which is the key element of converter-flux-based control. Discrete-time implementation is also discussed. Line-voltage-sensorless reactive power control methods for the L- and LCL-type line filters are presented. For the L-filter, an open-loop control law for the d-axis current reference is proposed. In the case of the LCL-filter, combined open-loop and feedback control is proposed. The influence of erroneous filter parameter estimates on the accuracy of the developed control schemes is also discussed. A new zero-vector selection rule for suppressing the zero-sequence current in parallel-connected PWM rectifiers is proposed.
With this method, truly standalone and independent control of the converter units is allowed, and traditional transformer isolation and synchronised-control-based solutions are avoided. The implementation requires only one additional current sensor. The proposed schemes are evaluated by simulations and laboratory experiments. A satisfactory performance and good agreement between theory and practice are demonstrated.
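The synchronous-frame current vector control mentioned above rests on transforming the measured three-phase currents into a rotating d-q frame, where steady-state sinusoids become DC quantities that simple controllers can regulate. A minimal sketch of the amplitude-invariant Park transform (illustrative only; the converter-flux estimation the thesis actually develops is not reproduced here):

```python
import math

def abc_to_dq(i_a, i_b, i_c, theta):
    """Amplitude-invariant Park transform: project three-phase currents
    onto a d-q frame rotating at angle theta (rad). With theta aligned to
    the converter flux, i_d and i_q are DC quantities in steady state."""
    i_d = (2.0 / 3.0) * (i_a * math.cos(theta)
                         + i_b * math.cos(theta - 2.0 * math.pi / 3.0)
                         + i_c * math.cos(theta + 2.0 * math.pi / 3.0))
    i_q = -(2.0 / 3.0) * (i_a * math.sin(theta)
                          + i_b * math.sin(theta - 2.0 * math.pi / 3.0)
                          + i_c * math.sin(theta + 2.0 * math.pi / 3.0))
    return i_d, i_q
```

For a balanced sinusoidal current of amplitude I in phase with the frame angle, the transform returns i_d = I and i_q = 0; the current controller then regulates these values against their references.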
Abstract:
The basic biological process underlying ageing was put forward by the free-radical theory of ageing in 1954: the reaction of the reactive free radicals physiologically produced in the organism with cellular constituents initiates the changes associated with ageing. The involvement of free radicals in ageing is related to their key role in the origin and evolution of life. The information available today shows that the specific composition of cellular macromolecules (proteins, nucleic acids, lipids and carbohydrates) in long-lived animal species is intrinsically highly resistant to oxidative modification, which probably contributes to the superior longevity of these species. Long-lived species also show reduced rates of free-radical production and oxidative damage. Moreover, dietary restriction lowers free-radical production and oxidative molecular damage. These changes are directly associated with the reduced protein intake of the restricted animals, which in turn appears to be specifically due to the reduced intake of methionine. This review emphasises that a low rate of generation of endogenous damage and an intrinsically high resistance to modification of cellular macromolecules are key traits of the longevity of animal species.
Abstract:
Brake wear dust is a significant component of traffic emissions and has been linked to adverse health effects. Previous research found a strong oxidative stress response in cells exposed to freshly generated brake wear dust. We characterized aged dust collected from passenger vehicles using microscopy and elemental analyses. Reactive oxygen species (ROS) generation was measured with acellular and cellular assays using 2′,7′-dichlorodihydrofluorescein dye. Microscopy analyses revealed the samples to be heterogeneous particle mixtures with few nanoparticles detected. Several metals, primarily iron and copper, were identified. High oxygen concentrations suggested that the elements were oxidized. ROS were detected in the cell-free fluorescence test, whereas exposed cells were not markedly activated at the concentrations used. The fact that aged brake wear samples have a lower oxidative stress potential than fresh ones may relate to the highly oxidized or aged state of these particles, as well as to their larger size and smaller reactive surface area.
Abstract:
This thesis is composed of three main parts. The first consists of a state of the art of the different notions that are significant for understanding the elements surrounding art authentication in general, and signatures in particular, and that the author deemed necessary to fully grasp the microcosm that makes up this particular market. Individuals with a solid knowledge of the art and expertise area who are particularly interested in the present study are advised to advance directly to Chapter 4. The expertise of the signature, its reliability, and the factors impacting the expert's conclusions are brought forward. The final aim of the state of the art is to offer a general list of recommendations based on an exhaustive review of the current literature and given in light of all of the exposed issues. These guidelines are specifically formulated for the expertise of signatures on paintings, but can also be applied to wider themes in the area of signature examination. The second part of this thesis covers the experimental stages of the research. It consists of the method developed to authenticate painted signatures on works of art. This method is articulated around several main objectives: defining measurable features on painted signatures and defining their relevance in order to establish the separation capacities between groups of authentic and simulated signatures. For the first time, numerical analyses of painted signatures have been obtained and used to attribute their authorship to given artists. An in-depth discussion of the developed method constitutes the third and final part of this study. It evaluates the opportunities and constraints of its application by signature and handwriting experts in forensic science. The outlines presented below summarize the aims and main themes addressed in each chapter, allowing a rapid overview of the study.
Part I - Theory. Chapter 1 exposes the legal aspects surrounding the authentication of works of art by art experts. The definition of what is legally authentic, the quality and types of experts that can express an opinion concerning the authorship of a specific painting, and standard deontological rules are addressed. The practices applied in Switzerland are specifically dealt with. Chapter 2 presents an overview of the different scientific analyses that can be carried out on paintings (from the canvas to the top coat). Scientific examinations of works of art have become more common as more and more museums equip themselves with laboratories, so an understanding of their role in the art authentication process is vital. The added value that a signature expertise can have in comparison to other scientific techniques is also addressed. Chapter 3 provides a historical overview of the signature on paintings throughout the ages, in order to offer the reader an understanding of the origin of the signature on works of art and its evolution through time. An explanation is given of the transitions that the signature went through from the 15th century on and how it progressively took on its widely known modern form. Both this chapter and Chapter 2 are presented to show the reader the rich sources of information that can be used to describe a painting, and how the signature is one of these sources. Chapter 4 focuses on the different hypotheses the forensic handwriting examiner (FHE) must keep in mind when examining a painted signature, since a number of scenarios can be encountered when dealing with signatures on works of art. The different forms of signatures, as well as the variables that may have an influence on painted signatures, are also presented. Finally, the current state of knowledge of the examination procedure of signatures in forensic science in general, and for painted signatures in particular, is exposed.
The state of the art of the assessment of the authorship of signatures on paintings is established and discussed in light of the theoretical facets mentioned previously. Chapter 5 considers key elements that can have an impact on the FHE during his or her examinations. This includes a discussion of elements such as the skill, confidence and competence of an expert, as well as the potential bias effects he or she might encounter. A better understanding of the elements surrounding handwriting examinations, in order to better communicate results and conclusions to an audience, is also undertaken. Chapter 6 reviews the judicial acceptance of signature analysis in court and closes the state-of-the-art section of this thesis. This chapter brings forward the current issues pertaining to the appreciation of this expertise by the non-forensic community, and discusses the increasing number of claims of the unscientific nature of signature authentication. The necessity of aiming for more scientific, comprehensive and transparent authentication methods is discussed. The theoretical part of this thesis is concluded by a series of general recommendations for forensic handwriting examiners, specifically for the expertise of signatures on paintings. These recommendations stem from the exhaustive review of the literature and the issues it exposed, and can also be applied to the traditional examination of signatures (on paper). Part II - Experimental part. Chapter 7 describes and defines the sampling, extraction and analysis phases of the research. The sampling stage of artists' signatures and their respective simulations is presented, followed by the steps that were undertaken to extract and determine sets of characteristics, specific to each artist, that describe their signatures. The method is based on a study of five artists and a group of individuals acting as forgers for the sake of this study.
Finally, the analysis procedure of these characteristics to assess the strength of evidence, based on a Bayesian reasoning process, is presented. Chapter 8 outlines the results concerning both the artist and simulation corpuses after their optical observation, followed by the results of the analysis phase of the research. The feature selection process and the likelihood ratio evaluation are the main themes addressed. The discrimination power between the two corpuses is illustrated through multivariate analysis. Part III - Discussion. Chapter 9 discusses the materials, the methods, and the obtained results of the research. The opportunities, but also the constraints and limits, of the developed method are exposed. Future work that can be carried out subsequent to the results of the study is also presented. Chapter 10, the last chapter of this thesis, proposes a strategy to incorporate the model developed in the preceding chapters into traditional signature expertise procedures. The strength of this expertise is thus discussed in conjunction with the traditional conclusions reached by forensic handwriting examiners. Finally, this chapter summarizes and advocates a list of formal recommendations for good practice for handwriting examiners. In conclusion, the research highlights the interdisciplinary aspect of the examination of signatures on paintings. The current state of knowledge of the judicial quality of art experts, along with the scientific and historical analysis of paintings and signatures, is overviewed to give the reader a feel for the different factors that have an impact on this particular subject. The temperamental acceptance of forensic signature analysis in court, also presented in the state of the art, explicitly demonstrates the necessity of a better recognition of signature expertise by courts of law.
This general acceptance, however, can only be achieved by producing high-quality results through a well-defined examination process. This research offers an original approach to attributing a painted signature to a certain artist: for the first time, a probabilistic model used to measure the discriminative potential between authentic and simulated painted signatures is studied. The opportunities and limits that lie within this method of scientifically establishing the authorship of signatures on works of art are thus presented. In addition, as the second key contribution of this work, a procedure is proposed to combine the developed method with that traditionally used by signature experts in forensic science. Such an implementation into holistic traditional signature examination casework is a large step toward providing the forensic, judicial and art communities with a solidly based reasoning framework for the examination of signatures on paintings. The framework and preliminary results associated with this research have been published (Montani, 2009a) and presented at international forensic science conferences (Montani, 2009b; Montani, 2012).
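The likelihood-ratio evaluation at the core of such a probabilistic model can be illustrated with a minimal sketch. This is illustrative only: the single feature, the Gaussian modelling of each corpus and all numbers below are assumptions for the example, not the model developed in the thesis.

```python
import statistics
from math import exp, pi, sqrt

def gaussian_pdf(x, mu, sigma):
    """Density of a normal distribution at x."""
    return exp(-((x - mu) ** 2) / (2 * sigma ** 2)) / (sigma * sqrt(2 * pi))

def likelihood_ratio(feature, authentic_refs, simulated_refs):
    """LR = p(feature | authentic) / p(feature | simulated),
    each hypothesis modelled as a univariate Gaussian fitted to
    reference data (a deliberately simplified stand-in for the
    multivariate analysis used in the research)."""
    mu_a, sd_a = statistics.mean(authentic_refs), statistics.stdev(authentic_refs)
    mu_s, sd_s = statistics.mean(simulated_refs), statistics.stdev(simulated_refs)
    return gaussian_pdf(feature, mu_a, sd_a) / gaussian_pdf(feature, mu_s, sd_s)

# Hypothetical feature values (e.g. a stroke-length ratio):
authentic = [0.92, 0.95, 0.90, 0.93, 0.94]   # reference signatures by the artist
simulated = [0.70, 0.78, 0.74, 0.72, 0.76]   # simulations produced by "forgers"
lr = likelihood_ratio(0.91, authentic, simulated)
# lr >> 1 supports the authorship hypothesis; lr << 1 supports simulation
```

An LR close to 1 is uninformative, which is why the feature selection step described above matters: only characteristics with real discrimination power between the two corpora are worth evaluating.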
Resumo:
The purpose of this work was to analyse the behaviour of fuel rods during load-following operation of the Loviisa nuclear power plant. The liberalisation of the electricity markets in the Nordic countries, and the fluctuating market price of electricity that has resulted from it, have driven electricity producers into a situation where production adapts to the market situation more than before. Hence, the participation of the Loviisa nuclear power plant in the regulation of electricity production may also become topical in the future. Before load-following operation of the reactor can be implemented, it must be ensured that the changes occurring in the fuel rod as a consequence of the power regulation do not cause unfavourable behaviour phenomena. The work examines the fuel rod behaviour in load-following cases for the first-core bundles of the two fuel suppliers of the Loviisa plant, British Nuclear Fuels plc and the Russian TVEL. The load-following cases examined were chosen so as to represent power regulation schemes that might be implemented in the future. The rod power histories of the calculation cases were generated from a four-year base-load power history computed with the HEXBU-3D core simulator code by adding the reactor power change caused by a control rod, the change in the axial power of the fuel bundle adjacent to the control rod, and the local power peak next to the control rod caused by the control rod structure. The work examines power regulation to different power levels and with varying numbers of power regulation cycles. The calculation cases are grouped according to the reactor operating mode as follows: base-load operation, weekend regulation and daily regulation. The calculations were performed with the ENIGMA-B 7.3.0 fuel behaviour code. The results of the calculations show that the first-core rods of both fuel suppliers withstand load-following operation without restriction in the calculation cases examined. 
The models in the ENIGMA code that predict the probability of fuel rod cladding failure through stress corrosion cracking or fatigue fracture show no signs of failure. The BNFL fuel rod, however, reaches higher values of fatigue failure probability. This is because the mechanical interaction between the fuel pellet and the cladding arises earlier in the BNFL rod, which in turn leads to a larger number of deformations straining the rod during power increases. The behaviour of the TVEL cladding, made of Zr1%Nb material, cannot, however, be directly assessed on the basis of these calculations, since the models of the ENIGMA code are based on experiments performed with Zircaloy cladding materials.
Resumo:
Osteoarthritis is a degenerative joint disease caused by progressive degradation of the cartilage. Calcification of the joint (essentially due to deposits of basic calcium phosphate crystals, BCP crystals) is a characteristic feature of this disease. However, the role of BCP crystals remains to be determined. Using primary chondrocyte cultures, we first established that BCP crystals induce the production of the cytokine IL-6 via intracellular signalling involving the kinases Syk, PI3 and Jak, and Stat3. BCP crystals also induce proteoglycan loss and IL-6 expression in human cartilage explants, and both effects can be blocked by an IL-6 inhibitor, Tocilizumab. Furthermore, we found that IL-6 added to chondrocytes promoted the formation of BCP crystals and increased the expression of genes involved in the mineralisation process: Ank (encoding a pyrophosphate transporter), Annexin5 (encoding a calcium channel) and Pit-1 (encoding a phosphate transporter). In vivo, BCP crystals injected into the mouse joint induce cartilage erosion. In a murine model of knee osteoarthritis induced by meniscectomy, we observed the progressive formation of BCP crystals. Interestingly, the presence of these crystals in the joint preceded the destruction of the cartilage. An agent capable of blocking calcification, sodium thiosulfate (STS), administered to meniscectomised mice, inhibited the intra-articular deposition of these crystals as well as cartilage erosion. We thus identified a vicious circle in osteoarthritis: the crystals induce interleukin-6, and interleukin-6 induces the formation of these crystals. We then investigated whether this BCP crystal-IL-6 loop could be blocked either by decalcifying agents or by IL-6 inhibitors. 
In vitro, anti-IL-6 antibodies or signalling inhibitors significantly inhibited IL-6 and IL-6-induced mineralisation. Likewise, STS inhibited the formation of these crystals and the production of IL-6. Most recently, we found that xanthine oxidoreductase inhibitors were also able to inhibit both IL-6 production and chondrocyte mineralisation. Finally, we were able to exclude a role for the IL-1 system in the meniscectomy-induced osteoarthritis model, as mice deficient for IL-1a/ß, MyD88 and the NLRP3 inflammasome were not protected in this model of osteoarthritis. Taken together, our results show that BCP crystals are pathogenic in osteoarthritis, and that a mineralisation inhibitor such as STS or an interleukin-6 inhibitor could constitute new therapies for osteoarthritis. -- Osteoarthritis (OA), the most common degenerative disorder of the joints, results from an imbalance between the breakdown and repair of the cartilage and surrounding articular structures. Joint calcification (essentially due to basic calcium phosphate (BCP) crystal deposition) is a characteristic feature of OA. However, the role of BCP crystal deposition in the pathogenesis of OA remains unclear. We first demonstrated that in primary murine chondrocytes exogenous BCP crystals led to IL-6 up-regulation and that BCP crystal signaling pathways involved Syk and PI3 kinases, as well as the gp130-associated molecules Jak2 and Stat3. BCP crystals also induced proteoglycan loss and IL-6 expression in human cartilage explants, both of which were significantly reduced by an IL-6 inhibitor. In addition, we found that in chondrocytes exogenous IL-6 promoted calcium-containing crystal formation and up-regulation of genes coding for proteins involved in the calcification process: the inorganic pyrophosphate transport channel Ank, the calcium channel Annexin5 and the sodium/phosphate cotransporter Pit-1. 
In vivo, BCP crystals injected into murine knee joints induced cartilage erosion. In the meniscectomy model, increasing deposits, identified as BCP crystals, were progressively observed around the joint before cartilage erosion. These deposits strongly correlated with cartilage degradation and IL-6 expression. These results demonstrated that BCP crystal deposition and IL-6 production are mutually reinforcing in the osteoarthritic pathogenic process. We then investigated whether we could block the BCP-IL-6 loop by targeting either IL-6 production or BCP crystal deposits. Treatment of chondrocytes with anti-IL-6 antibodies or inhibitors of the IL-6 signaling pathway significantly inhibited IL-6-induced crystal formation. Similarly, sodium thiosulfate (STS), a well-known systemic calcification inhibitor, decreased crystal deposition as well as HA-induced IL-6 secretion in chondrocytes and, in vivo, decreased crystal deposit size and cartilage erosion in meniscectomized knees. Interestingly, we also found that xanthine oxidoreductase (XO) inhibitors inhibited both IL-6 production and calcium crystal deposits in chondrocytes. We began to unravel the mechanisms involved in this coordinated modulation of IL-6 and mineralization. STS inhibited reactive oxygen species (ROS) generation, and we are currently investigating whether XO represents a major source of ROS in chondrocyte mineralization. Finally, we ruled out that IL-1 activation/signaling plays a role in the murine model of OA induced by meniscectomy, as mice deficient for IL-1a/ß, the IL-1R-associated molecule MyD88 and the NLRP3 inflammasome were not protected in this model of OA. Moreover, TLR-1, -2, -4 and -6 deficient mice had a phenotype similar to that of wild-type mice. Altogether, our results demonstrated a self-amplifying loop between BCP crystal deposition and IL-6 production, which represents an aggravating process in OA pathogenesis. 
As currently prescribed OA drugs address only OA symptoms, our results highlight a potential novel treatment strategy whereby inhibitors of calcium-containing crystal formation and of IL-6 could be combined to form the basis of a disease-modifying treatment and alter the course of OA.
Resumo:
Background: Probiotics appear to be beneficial in inflammatory bowel disease, but their mechanism of action is incompletely understood. We investigated whether probiotic-derived sphingomyelinase mediates this beneficial effect. Methodology/Principal Findings: Neutral sphingomyelinase (NSMase) activity was measured in sonicates of the probiotics L. brevis (LB) and S. thermophilus (ST) and the non-probiotics E. coli (EC) and E. faecalis (EF). Lamina propria mononuclear cells (LPMC) were obtained from patients with Crohn's disease (CD) and ulcerative colitis (UC), and peripheral blood mononuclear cells (PBMC) from healthy volunteers; LPMC and PBMC apoptosis susceptibility, reactive oxygen species (ROS) generation and JNK activation were analysed. In some experiments, sonicates were preincubated with GSH or GW4869, a specific NSMase inhibitor. NSMase activity of LB and ST sonicates was 10-fold that of EC and EF sonicates. LB and ST sonicates induced significantly more apoptosis of CD and UC than of control LPMC, whereas EC and EF sonicates failed to induce apoptosis. Pre-stimulation with anti-CD3/CD28 induced a significant and time-dependent increase in LB-induced apoptosis of LPMC and PBMC. Exposure to LB sonicates resulted in JNK activation and ROS production by LPMC. NSMase activity of LB sonicates was completely abrogated by GW4869, causing a dose-dependent reduction of LB-induced apoptosis. LB and ST selectively induced immune cell apoptosis, an effect dependent on the degree of cell activation and mediated by bacterial NSMase. Conclusions: These results suggest that induction of immune cell apoptosis is a mechanism of action of some probiotics, and that NSMase-mediated ceramide generation contributes to the therapeutic effects of probiotics.
Resumo:
In recent years, the vulnerability of power networks to natural hazards has become evident. Moreover, operating at the limits of the network transmission capabilities has resulted in major outages during the past decade. One of the reasons for operating at these limits is that the network has become outdated. Therefore, new technical solutions are studied that could provide more reliable and more energy-efficient power distribution, and also better profitability for the network owner. It is the development and price of power electronics that have made DC distribution an attractive alternative again. In this doctoral thesis, one type of low-voltage DC distribution system is investigated. More specifically, it is studied which current technological solutions, used at the customer-end, could provide better power quality for the customer when compared with the present system. To study the effect of a DC network on the customer-end power quality, a bipolar DC network model is derived. The model can also be used to identify the supply parameters when the V/kW ratio is approximately known. Although the model describes the average behavior, it is shown that the instantaneous DC voltage ripple should be limited. Guidelines are given for choosing an appropriate capacitance value for the capacitor located at the input DC terminals of the customer-end. The structure of the customer-end is also considered. A comparison between the most common solutions is made based on their cost, energy efficiency, and reliability. In the comparison, special attention is paid to passive filtering solutions, since the filter is considered a crucial element when the lifetime expenses are determined. It is found that the filter topology most commonly used today, namely the LC filter, does not provide an economic advantage over the hybrid filter structure. Finally, some of the typical control system solutions are introduced and their shortcomings are presented. 
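The capacitor-sizing guideline mentioned above can be sketched with a textbook energy-balance estimate. This is not the dimensioning rule derived in the thesis; it assumes a single-phase customer-end inverter whose pulsating power produces a 100 Hz ripple on the DC link, and all numbers are illustrative.

```python
from math import pi

def min_dc_link_capacitance(p_load, v_dc, ripple_pct, f_ripple=100.0):
    """Rough minimum DC-link capacitance [F] keeping the voltage ripple
    amplitude below ripple_pct of v_dc, for a load power p_load [W]
    pulsating at f_ripple [Hz] (100 Hz for a 50 Hz single-phase
    customer-end inverter). Classic estimate: C = P / (2*pi*f*V*dV)."""
    dv = ripple_pct / 100.0 * v_dc        # allowed ripple amplitude [V]
    return p_load / (2 * pi * f_ripple * v_dc * dv)

# Illustrative values (not from the thesis):
# 5 kW customer-end, 750 V DC link, 2 % allowed ripple
c_min = min_dc_link_capacitance(5e3, 750.0, 2.0)   # roughly 0.7 mF
```

The estimate shows why the ripple limit drives the cost of the customer-end: the required capacitance grows linearly with load power and inversely with the allowed ripple.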
As a solution to the customer-end voltage regulation problem, an observer-based control scheme is proposed. It is shown how different control system structures affect the performance. Performance meeting the requirements is achieved by using only one output measurement when operating in a rigid network. Similar performance can be achieved in a weak grid by means of a DC voltage measurement. A further improvement can be achieved when an adaptive gain-scheduling-based control is introduced. In conclusion, the final power quality is determined by the sum of various factors, and the thesis provides guidelines for designing a system that improves the power quality experienced by the customer.
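The idea of observer-based control from a single output measurement can be sketched with a minimal discrete-time Luenberger observer. The matrices below are illustrative placeholders, not the converter model identified in the thesis; the point is only that the unmeasured state is reconstructed from one measured output and then available for feedback.

```python
import numpy as np

# Assumed 2-state plant: only the first state (the output voltage)
# is measured; the observer reconstructs the full state vector.
A = np.array([[0.95, 0.10],
              [-0.10, 0.90]])      # state transition (assumed)
B = np.array([[0.0], [0.05]])      # input matrix (assumed)
C = np.array([[1.0, 0.0]])         # single output measurement
L = np.array([[0.4], [0.2]])       # observer gain (assumed, stabilising)

def observer_step(x_hat, u, y):
    """One Luenberger update: predict with the model, then correct
    with the innovation (measured output minus predicted output)."""
    y_hat = C @ x_hat
    return A @ x_hat + B * u + L * (y - y_hat)

x_hat = np.zeros((2, 1))                    # initial state estimate
for y_meas in [1.0, 1.02, 0.99, 1.01]:      # measured output samples
    x_hat = observer_step(x_hat, u=0.0, y=y_meas)
# x_hat now tracks the measured output level without a second sensor
```

Gain scheduling, as mentioned in the abstract, would correspond to adapting L (or the controller gains fed by x_hat) to the identified grid stiffness.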
Resumo:
Nowadays, power drives are an essential part of almost all technological processes. Improving efficiency and reducing losses require the development of semiconductor switches. This is particularly significant for the constantly growing market of renewable sources, especially wind turbines, which demand more powerful semiconductor devices as their power ratings grow. Semiconductor switches are also currently a key component in energy transmission, the optimization of generation, and network connection. The aim of this thesis is to survey contemporary semiconductor components, showing the differences in their structures, their advantages and disadvantages, and their most suitable applications. Up-to-date information on the voltage, frequency and current limits of different switches is given. The study also attempts to compare the dimensions and prices of different components. The main manufacturers of semiconductor components are presented with a review of the devices they produce, and a conclusion about their availability is drawn. The IGBT is selected as the main component in this study, because it is nowadays the most attractive component for use in power drives, especially at the lower levels of medium voltage. The history of the development of the IGBT structure and its static and dynamic characteristics are considered. The thesis also discusses the assembly and connection of components and the problems that can appear. The key question of semiconductor materials and their future development is considered. To compare the strong and weak sides of different switches, a calculation of the losses of the IGBT and its main competitor, the IGCT, is presented. This master's thesis attempts to answer the question of whether an accurate selection of switches for electrical drives of different power ratings is currently possible, and looks at possible future directions of development of the semiconductor market.
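The IGBT/IGCT loss comparison rests on the standard decomposition of device losses into conduction and switching terms. The sketch below uses that textbook formula with made-up datasheet-style numbers; it is not the calculation performed in the thesis.

```python
def switch_losses(v_on, i_c, duty, e_on, e_off, f_sw):
    """Rough average power loss [W] of a semiconductor switch:
    v_on  : on-state voltage drop [V] (V_CE(sat) for an IGBT)
    i_c   : conducted current [A]
    duty  : conduction duty cycle (0..1)
    e_on  : turn-on energy per pulse [J] at this operating point
    e_off : turn-off energy per pulse [J]
    f_sw  : switching frequency [Hz]"""
    p_conduction = v_on * i_c * duty          # on-state losses
    p_switching = (e_on + e_off) * f_sw       # frequency-dependent losses
    return p_conduction + p_switching

# Illustrative numbers (not from the thesis): 2.0 V drop, 100 A,
# 50 % duty, 10/12 mJ switching energies, 1 kHz switching
p_total = switch_losses(2.0, 100.0, 0.5, 10e-3, 12e-3, 1e3)
# 100 W conduction + 22 W switching = 122 W
```

The same function applied with IGCT-style parameters (lower on-state drop, higher switching energies) makes the usual trade-off visible: the IGCT tends to win at low switching frequency, the IGBT at high frequency.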
Resumo:
Green IT is a term that covers various tasks and concepts related to reducing the environmental impact of IT. At the enterprise level, Green IT has significant potential to generate sustainable cost savings: the total number of devices is growing and electricity prices are rising. The lifecycle of a computer can be made more environmentally sustainable using Green IT, e.g. by using energy-efficient components and by implementing device power management. The challenge in using power management at the enterprise level is how to measure and follow up the impact of power management policies. In this thesis, a power management feature was developed for a configuration management system. The feature can be used to automatically power down and power on PCs according to a pre-defined schedule and to estimate the total power usage of devices. Measurements indicate that with this feature the device power consumption can be monitored quite precisely and the power consumption can be reduced, which generates electricity cost savings and reduces the environmental impact of IT.
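The kind of savings estimate such a feature enables can be sketched as a simple energy-balance calculation. All figures below (fleet size, power draws, electricity price) are illustrative assumptions, not measurements from the thesis.

```python
def yearly_savings(n_pcs, p_active_w, p_off_w, off_hours_per_day,
                   price_eur_per_kwh):
    """Estimated yearly electricity cost savings [EUR] from powering
    down n_pcs PCs for off_hours_per_day hours each day instead of
    leaving them in the active state."""
    delta_kw = n_pcs * (p_active_w - p_off_w) / 1000.0   # avoided load [kW]
    kwh_saved = delta_kw * off_hours_per_day * 365       # energy per year
    return kwh_saved * price_eur_per_kwh

# Illustrative fleet: 500 PCs, 60 W active vs 2 W powered down,
# shut down 14 h/day, electricity at 0.15 EUR/kWh
savings = yearly_savings(500, 60.0, 2.0, 14, 0.15)
# about 148,000 kWh avoided, roughly 22,000 EUR per year
```

In practice the configuration management system supplies the inputs: the schedule determines the off-hours, and the monitored per-device consumption replaces the assumed wattages.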
Resumo:
Currently, a high penetration level of Distributed Generation (DG) has been observed in the Danish distribution systems, and even more DG is foreseen to be present in the upcoming years. How to utilize it to maintain the security of the power supply in emergency situations has therefore been of great interest for study. This master's project is intended to develop a control architecture for the study of distribution systems with large-scale integration of solar power. As part of the EcoGrid EU Smart Grid project, it focuses on the system modelling and simulation of a representative Danish LV network located on the island of Bornholm. Regarding the control architecture, two types of reactive power control techniques are implemented and compared. In addition, a network voltage control based on a tap-changer transformer is tested. After applying a genetic algorithm to five typical Danish domestic loads, the optimized results show lower power losses and voltage deviation with Q(U) control, especially at high consumption levels. Finally, a communication and information exchange system is developed with the objective of regulating the reactive power, and thereby the network voltage, remotely and in real time. Validation tests of the simulated parameters are performed as well.
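A Q(U) control of the kind compared above is typically a piecewise-linear droop mapping the local voltage to a reactive power set-point. The sketch below shows the general shape; the breakpoints and the Q limit are illustrative assumptions, not the EcoGrid EU settings.

```python
def q_u_droop(u_pu, q_max_pu, u_dead_low=0.98, u_dead_high=1.02,
              u_min=0.94, u_max=1.06):
    """Piecewise-linear Q(U) droop for an inverter-coupled DG unit.
    Returns the reactive power set-point in per unit:
    positive = inject (raise voltage), negative = absorb (lower it).
    Breakpoint values are illustrative."""
    if u_pu <= u_min:
        return q_max_pu
    if u_pu < u_dead_low:                     # undervoltage: inject Q
        return q_max_pu * (u_dead_low - u_pu) / (u_dead_low - u_min)
    if u_pu <= u_dead_high:                   # dead band: no action
        return 0.0
    if u_pu < u_max:                          # overvoltage: absorb Q
        return -q_max_pu * (u_pu - u_dead_high) / (u_max - u_dead_high)
    return -q_max_pu

# Overvoltage caused by high PV in-feed: halfway into the droop band
q = q_u_droop(1.04, q_max_pu=0.3)   # -0.15 pu (absorb reactive power)
```

Because the set-point depends only on the locally measured voltage, Q(U) works autonomously; the communication system described in the abstract is what allows the droop parameters or set-points to be adjusted remotely.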