893 results for Operating room technicians.
Abstract:
In this study, the concentration and size of airborne particles were measured and their sources identified during five orthopedic surgeries. The aerosol concentration and particle size distribution, ranging from 0.3 µm to 10 µm, were measured and related to the type of indoor activity. The handling of surgical linen and gowns, handling of the patient, use of electrosurgical apparatus, use of a bone saw, handling of equipment, and cleaning of the room were identified as the most important sources of particles, with each of these activities posing different risks to the health of patients and workers. The results showed that most of the particles were above 0.5 µm and that there was a strong correlation among all particles of sizes above 1 µm. Particles with diameters in the range of 0.3-0.5 µm correlated well only with particles in the ranges of 0.5-1.0 µm and 1.0-3.0 µm in three of the surgeries analyzed. The findings led to the conclusion that most of the events responsible for generating aerosol particles in an orthopedic surgery room are brief, intermittent, and highly variable, thus requiring specific instrumentation for their continuous identification and characterization.
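The channel-to-channel correlations reported above are ordinary Pearson correlations between the concentration time series of the optical counter's size bins. A minimal sketch with synthetic data (a shared activity signal plus channel noise; not the study's measurements):

```python
import numpy as np

# Hypothetical 1-minute concentration time series for four optical-counter
# size channels; real data would come from the particle counter itself.
rng = np.random.default_rng(0)
base = rng.lognormal(mean=8.0, sigma=0.5, size=120)   # shared activity signal
channels = {
    "0.3-0.5 um": base * 5.0 + rng.normal(0, 300, 120),
    "0.5-1.0 um": base * 2.0 + rng.normal(0, 100, 120),
    "1.0-3.0 um": base * 0.5 + rng.normal(0, 30, 120),
    "3.0-10 um":  base * 0.1 + rng.normal(0, 10, 120),
}

data = np.vstack(list(channels.values()))
corr = np.corrcoef(data)    # Pearson correlation between size channels

for name, row in zip(channels, corr):
    print(name, np.round(row, 2))
```

With a dominant shared source, all channels correlate strongly, mirroring the strong correlation the study reports among the larger size bins.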
Abstract:
The mixed ruthenium(II) complexes trans-[RuCl2(PPh3)2(bipy)] (1), trans-[RuCl2(PPh3)2(Me2bipy)] (2), cis-[RuCl2(dcype)(bipy)] (3), and cis-[RuCl2(dcype)(Me2bipy)] (4) (PPh3 = triphenylphosphine, dcype = 1,2-bis(dicyclohexylphosphino)ethane, bipy = 2,2'-bipyridine, Me2bipy = 4,4'-dimethyl-2,2'-bipyridine) were used as precursors to synthesize the corresponding vinylidene complexes. The complexes [RuCl(=C=CHPh)(PPh3)2(bipy)]PF6 (5), [RuCl(=C=CHPh)(PPh3)2(Me2bipy)]PF6 (6), [RuCl(=C=CHPh)(dcype)(bipy)]PF6 (7), and [RuCl(=C=CHPh)(dcype)(Me2bipy)]PF6 (8) were characterized and their spectral, electrochemical, photochemical, and photophysical properties were examined. The emission assigned to the π-π* excited state of the vinylidene ligand depends on the irradiation wavelength (340, 400, 430 nm) and the solvent (CH2Cl2, CH3CN, EtOH/MeOH). The cyclic voltammograms of (6) and (7) show a reversible metal oxidation peak and two successive ligand reductions in the +1.5 to -0.64 V range. Reduction of the vinylidene leads to the formation of the acetylide complex, but the process is irreversible owing to hydrogen abstraction. The studies described here suggest these complexes are promising for practical applications such as functional materials, nonlinear optics, building blocks, and supramolecular photochemistry. (C) 2011 Elsevier B.V. All rights reserved.
Abstract:
This work describes the ultrasound-assisted synthesis of saturated aliphatic esters from synthetic aliphatic acids and either methanol or ethanol. The products were isolated in good yields after short reaction times under mild conditions. (C) 2011 Elsevier B.V. All rights reserved.
Abstract:
A sensitive and fast-responding membrane-free amperometric gas sensor is described, consisting of a small filter-paper foil soaked with a room-temperature ionic liquid (RTIL), upon which three electrodes are screen-printed with carbon ink using a suitable mask. It takes advantage of the high electrical conductivity and negligible vapour pressure of RTILs, as well as their easy immobilization in a porous and inexpensive supporting material such as paper. Moreover, thanks to careful control of the preparation procedure, very close contact between the RTIL and the electrode material can be achieved, so that gaseous analytes undergo charge transfer as soon as they reach the three-phase sites where the electrode material, the paper-supported RTIL, and the gas phase meet. Thus, the adverse effect on the recorded currents of slow steps such as analyte diffusion and dissolution in a solvent is avoided. To evaluate the performance of this device, it was used as a wall-jet amperometric detector for flow injection analysis of 1-butanethiol vapours, adopted as the model gaseous analyte, present in headspace samples in equilibrium with aqueous solutions at controlled concentrations. For this purpose, the RTIL-soaked paper electrochemical detector (RTIL-PED) was assembled using 1-butyl-3-methylimidazolium bis(trifluoromethanesulfonyl)imide as the wicking RTIL and printing the working electrode with carbon ink doped with cobalt(II) phthalocyanine, to profit from its ability to electrocatalyze thiol oxidation. The results obtained were quite satisfactory (detection limit: 0.5 µM; dynamic range: 2-200 µM, both referred to solution concentrations; correlation coefficient: 0.998; repeatability: ±7% RSD; long-term stability: 9%), suggesting the possible use of this device for manifold applications.
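The figures of merit quoted above (detection limit, dynamic range, correlation coefficient) come from a linear calibration of the detector response. A minimal sketch of how such a calibration is evaluated, with made-up current readings and an assumed blank noise of 0.07 nA (not the paper's data):

```python
import numpy as np

# Hypothetical calibration: thiol solution concentration (uM) vs. peak
# current (nA), roughly 0.4 nA/uM. Values are illustrative only.
conc = np.array([2, 5, 10, 20, 50, 100, 200], dtype=float)
current = np.array([0.9, 2.1, 4.2, 8.1, 20.5, 40.8, 81.0])

slope, intercept = np.polyfit(conc, current, 1)   # linear fit
r = np.corrcoef(conc, current)[0, 1]              # correlation coefficient

# Detection limit estimated as 3*sigma of the blank over the sensitivity,
# assuming a blank standard deviation of 0.07 nA.
lod = 3 * 0.07 / slope
print(f"sensitivity = {slope:.3f} nA/uM, r = {r:.4f}, LOD ~ {lod:.2f} uM")
```

The 3σ/slope estimate is a common convention for amperometric detection limits; the authors' exact procedure may differ.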
Abstract:
Despite the severity of pneumonia in patients with pandemic influenza A (H1N1) infection, no risk scores had been validated for H1N1 pneumonia. In this prospective observational study, we analyzed data from consecutive patients in our emergency room hospitalized for pneumonia between July and August 2009 in a public hospital in Brazil. The following pneumonia scoring systems were applied: the SMART-COP rule, the Pneumonia Severity Index, and the CURB-65 rule. Of 105 patients with pneumonia, 53 had H1N1 infection. Among them, only 9.5% of those classified as low risk by SMART-COP were admitted to the ICU, compared with 36.8% of those with a Pneumonia Severity Index score of 1-2 and 49% of those with a CURB-65 score of 0-1. SMART-COP had an accuracy of 83% in predicting ICU admission and showed the best performance for indicating ICU admission in patients with H1N1 pneumonia. European Journal of Emergency Medicine 19: 200-202 (C) 2012 Wolters Kluwer Health | Lippincott Williams & Wilkins.
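The CURB-65 rule mentioned above assigns one point each for Confusion, Urea > 7 mmol/L, Respiratory rate ≥ 30/min, low Blood pressure (systolic < 90 or diastolic ≤ 60 mmHg), and age ≥ 65; a score of 0-1 marks the low-risk group cited in the abstract. A minimal sketch, with hypothetical patient values:

```python
def curb65(confusion: bool, urea_mmol_l: float, resp_rate: int,
           sys_bp: int, dia_bp: int, age: int) -> int:
    """One point for each of the five CURB-65 criteria."""
    return sum([
        confusion,
        urea_mmol_l > 7.0,
        resp_rate >= 30,
        sys_bp < 90 or dia_bp <= 60,
        age >= 65,
    ])

# A patient falling in the "low risk" (score 0-1) group:
score = curb65(confusion=False, urea_mmol_l=5.0, resp_rate=22,
               sys_bp=115, dia_bp=70, age=40)
print(score)  # -> 0
```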
Abstract:
OBJECTIVE: Urinary lithiasis is a common disease. The aim of the present study was to assess the knowledge regarding diagnosis, treatment, and recommendations given to patients with ureteral colic by professionals at an academic hospital. MATERIALS AND METHODS: Sixty-five physicians were interviewed about previous experience with guidelines on ureteral colic and how they manage patients with ureteral colic in terms of diagnosis, treatment, and the information provided to patients. RESULTS: Thirty-six percent of the interviewed physicians were surgeons and 64% were clinicians. Forty-one percent reported experience with ureterolithiasis guidelines. Seventy-two percent indicated that they use noncontrast CT scans for the diagnosis of lithiasis. All of the respondents prescribe hydration, primarily to improve stone elimination (39.3%). The average number of drugs used was 3.5. The combination of nonsteroidal anti-inflammatory drugs and opioids was reported by 54% of the physicians (i.e., 59% of surgeons and 25.6% of clinicians used this combination) (p = 0.014). Only 21.3% prescribe alpha blockers. CONCLUSION: Reported experience with guidelines had little impact on several habitual practices. For example, only 21.3% of the respondents indicated that they prescribed alpha blockers, although alpha blockers may increase stone elimination by up to 54%. Furthermore, although a meta-analysis demonstrated that hydration has no effect on stone transit time or on pain, the majority of the physicians reported prescribing more than 500 ml of fluid. Dipyrone, hyoscine, nonsteroidal anti-inflammatory drugs, and opioids were the most frequently prescribed drug combination. The information given regarding the time to passage of urinary stones was inconsistent. Continuing education programs on ureteral colic in the emergency room are needed.
Abstract:
Objective: The aim of this study was to evaluate, ex vivo, the precision of five electronic root canal length measurement devices (ERCLMDs) with different operating systems: the Root ZX, Mini Apex Locator, Propex II, iPex, and RomiApex A-15, and the possible influence of the positioning of the instrument tips short of the apical foramen. Material and Methods: Forty-two mandibular bicuspids had their real canal lengths (RL) previously determined. Electronic measurements were performed 1.0 mm short of the apical foramen (-1.0), followed by measurements at the apical foramen (0.0). The data resulting from the comparison of the ERCLMD measurements and the RL were evaluated by the Wilcoxon and Friedman tests at a significance level of 5%. Results: Considering the measurements performed at 0.0 and -1.0, the precision rates for the ERCLMDs were: 73.5% and 47.1% (Root ZX), 73.5% and 55.9% (Mini Apex Locator), 67.6% and 41.1% (Propex II), 61.7% and 44.1% (iPex), and 79.4% and 44.1% (RomiApex A-15), respectively, considering ±0.5 mm of tolerance. Regarding the mean discrepancies, no differences were observed at 0.0; however, in the measurements at -1.0, the iPex, a multi-frequency ERCLMD, had significantly more discrepant readings short of the apical foramen than the other devices, except for the Propex II, which had intermediate results. When the ERCLMDs measurements at -1.0 were compared with those at 0.0, the Propex II, iPex and RomiApex A-15 presented significantly higher discrepancies in their readings. Conclusions: Under the conditions of the present study, all the ERCLMDs provided acceptable measurements at the 0.0 position. However, at the -1.0 position, the ERCLMDs had a lower precision, with statistically significant differences for the Propex II, iPex, and RomiApex A-15.
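The precision rates above count a reading as acceptable when it falls within ±0.5 mm of the real canal length (RL). A minimal sketch of that computation, with hypothetical measurements:

```python
def precision_rate(readings, real_lengths, tolerance=0.5):
    """Percentage of electronic readings within +/- tolerance (mm) of RL."""
    hits = sum(1 for r, rl in zip(readings, real_lengths)
               if abs(r - rl) <= tolerance)
    return 100.0 * hits / len(readings)

# Hypothetical canal lengths (mm) and device readings:
real = [20.0, 21.5, 19.0, 22.0]
measured = [20.3, 21.4, 18.2, 22.6]   # two within tolerance, two outside
print(precision_rate(measured, real))  # -> 50.0
```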
Abstract:
This article analyzes the relationship among knowledge management, a company's market orientation, innovativeness, and organizational outcomes. The research was based on a survey of executives from 241 companies in Brazil. The evidence found indicates that knowledge management contributes directly to market orientation but requires a clearly defined strategic direction to achieve results and innovativeness. It was also concluded that knowledge, as a resource, leverages the company's other resources, while requiring direction in relation to organizational goals in order to be effective.
Abstract:
OBJECTIVE: To evaluate the impact of the routine use of a rapid antigen detection test on the diagnosis and treatment of acute pharyngotonsillitis in children. METHODS: This was a prospective, observational study with an established protocol, conducted at the Emergency Unit of the University Hospital of Universidade de São Paulo for the care of children and adolescents diagnosed with acute pharyngitis. RESULTS: 650 children and adolescents were enrolled. Based on clinical findings, antibiotics would have been prescribed for 389 patients (59.8%); using the rapid antigen detection test, they were prescribed for 286 patients (44.0%). Among the 261 children who would not have received antibiotics based on clinical evaluation alone, 111 (42.5%) had a positive rapid antigen detection test. Diagnosis based only on clinical evaluation showed 61.1% sensitivity, 47.7% specificity, 44.9% positive predictive value, and 57.5% negative predictive value. CONCLUSIONS: The clinical diagnosis of streptococcal pharyngotonsillitis had low sensitivity and specificity. The routine use of the rapid antigen detection test reduced antibiotic use and identified a risk group for complications of streptococcal infection, since 42.5% of patients with a positive test would not have received antibiotics based on clinical diagnosis alone.
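The accuracy figures above follow the standard definitions of sensitivity, specificity, PPV, and NPV, here taking the rapid antigen detection test as the reference standard for the clinical diagnosis. A minimal sketch with hypothetical counts (not the study's data):

```python
def diagnostic_metrics(tp, fp, fn, tn):
    """Standard 2x2 diagnostic-accuracy metrics from confusion-matrix counts."""
    return {
        "sensitivity": tp / (tp + fn),   # true positives among all positives
        "specificity": tn / (tn + fp),   # true negatives among all negatives
        "ppv": tp / (tp + fp),           # positive predictive value
        "npv": tn / (tn + fn),           # negative predictive value
    }

# Hypothetical counts, chosen so every metric is 0.6 by construction:
m = diagnostic_metrics(tp=60, fp=40, fn=40, tn=60)
print({k: round(v, 3) for k, v in m.items()})
```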
Abstract:
In recent years, an ever-increasing degree of automation has been observed in most industrial processes. This increase is motivated by the demand for systems with high performance in terms of the quality of the products and services generated, productivity, efficiency, and low costs in design, realization, and maintenance. This trend toward complex automation systems is rapidly spreading over automated manufacturing systems (AMS), where the integration of mechanical and electronic technology, typical of mechatronics, is merging with other technologies such as informatics and communication networks. An AMS is a very complex system that can be thought of as a set of flexible working stations and one or more transportation systems. To understand how important these machines are in our society, consider that every day most of us use bottles of water or soda and buy boxed products such as food or cigarettes. A further indication of their complexity is that the consortium of machine producers has estimated around 350 types of manufacturing machines. A large number of manufacturing-machine industries are present in Italy, notably the packaging-machine industry; a great concentration of this kind of industry is located in the Bologna area, which is therefore called the "packaging valley". Usually, the various parts of an AMS interact in a concurrent and asynchronous way, and coordinating the parts of the machine to obtain a desired overall behaviour is a hard task. Often this is the case in large-scale systems organized in a modular and distributed manner.
Even if the success of a modern AMS from a functional and behavioural point of view is still attributable to the design choices made in defining the mechanical structure and the electrical/electronic architecture, the system that governs the control of the plant is becoming crucial because of the large number of duties associated with it. Apart from the activity inherent in automating the machine cycles, the supervisory system is called on to perform other main functions: emulating the behaviour of traditional mechanical members, thus allowing a drastic constructive simplification of the machine and crucial functional flexibility; dynamically adapting the control strategies to different productive needs and operational scenarios; obtaining a high quality of the final product through verification of the correctness of the processing; guiding the machine operator to take, promptly and carefully, the actions needed to establish or restore optimal operating conditions; and managing diagnostic information in real time in support of machine maintenance operations. The kinds of facilities that designers can find directly on the market, in terms of software component libraries, provide adequate support for the implementation of either top-level or bottom-level functionalities, typically pertaining to the domains of user-friendly HMIs, closed-loop regulation and motion control, and fieldbus-based interconnection of remote smart devices. What is still lacking is a reference framework comprising a comprehensive set of highly reusable logic control components that, by focusing on the cross-cutting functionalities characterizing the automation domain, may help designers model and structure their applications according to their specific needs.
Historically, the design and verification process for complex automated industrial systems has been performed empirically, without a clear distinction between functional and technological/implementation concepts and without a systematic method to deal organically with the complete system. In the field of analog and digital control, by contrast, design and verification through formal and simulation tools have long been adopted, at least for multivariable and/or nonlinear controllers for complex time-driven dynamics, as in the fields of vehicles, aircraft, robots, electric drives, and complex power-electronics equipment. Moving to the field of logic control, typical of industrial manufacturing automation, the design and verification process is approached in a completely different, usually very "unstructured" way. No clear distinction is drawn between functions and implementations, or between functional architectures and technological architectures and platforms. This difference is probably due to the different "dynamical framework" of logic control with respect to analog/digital control. As a matter of fact, in logic control discrete-event dynamics replace time-driven dynamics; hence most of the formal and mathematical tools of analog/digital control cannot be directly migrated to logic control to highlight the distinction between functions and implementations. In addition, in the common view of application technicians, logic control design is strictly connected to the adopted implementation technology (relays in the past, software nowadays), leading again to deep confusion between the functional view and the technological view. In industrial-automation software engineering, concepts such as modularity, encapsulation, composability, and reusability are strongly emphasized and profitably realized in the so-called object-oriented methodologies.
Industrial automation has lately been adopting this approach, as testified by the IEC 61131-3 and IEC 61499 standards, which have been considered in commercial products only recently. On the other hand, many contributions have already been proposed in the scientific and technical literature to establish a suitable modelling framework for industrial automation. In recent years it has been possible to note considerable growth in the exploitation of innovative concepts and technologies from the ICT world in industrial automation systems. As for logic control design, Model-Based Design (MBD) is being imported into industrial automation from the software engineering field. Another key point in industrial automated systems is the growth of requirements in terms of availability, reliability, and safety. In other words, the control system should deal not only with nominal behaviour but also with other important duties, such as diagnosis and fault isolation, recovery, and safety management. Indeed, together with high performance, fault occurrences increase in complex systems. This is a consequence of the fact that, as typically occurs in reliable mechatronic systems, complex systems such as AMS contain, alongside reliable mechanical elements, an increasing number of electronic devices, which are more vulnerable by their own nature. The diagnosis and fault-isolation problem in a generic dynamical system consists in the design of an elaboration unit that, by appropriately processing the inputs and outputs of the dynamical system, is capable of detecting incipient faults in the plant devices and reconfiguring the control system so as to guarantee satisfactory performance.
The designer should be able to formally verify the product, certifying that in its final implementation it will perform its required function with the desired level of reliability and safety; the next step is to prevent faults and eventually reconfigure the control system so that faults are tolerated. On this topic, important contributions to the formal verification of logic control, fault diagnosis, and fault-tolerant control derive from Discrete Event Systems theory. The aim of this work is to define a design pattern and a control architecture that help the designer of control logic in industrial automated systems. The work starts with a brief discussion of the main characteristics and a description of industrial automated systems in Chapter 1. Chapter 2 surveys the state of the software-engineering paradigm applied to industrial automation. Chapter 3 presents an architecture for industrial automated systems based on the new concept of the Generalized Actuator, showing its benefits, while in Chapter 4 this architecture is refined using a novel entity, the Generalized Device, in order to obtain better reusability and modularity of the control logic. Chapter 5 presents a new approach, based on Discrete Event Systems, to the problem of formal software verification, together with an active fault-tolerant control architecture using online diagnostics. Finally, concluding remarks and some ideas on new directions to explore are given. Appendix A briefly reports some concepts and results about Discrete Event Systems that should help the reader understand some crucial points in Chapter 5, while Appendix B gives an overview of the experimental testbed of the Laboratory of Automation of the University of Bologna, used to validate the approaches presented in Chapters 3, 4, and 5. Appendix C reports some component models used in Chapter 5 for formal verification.
Abstract:
The DNA double helix is a relatively thick (Ø ≈ 2 nm), compact, and therefore, on short length scales, relatively stiff molecule (lp[dsDNA] ≈ 50-60 nm) with a well-defined structure that can be manipulated very precisely by biological methods. The influence of the primary sequence on three-dimensional structure formation is well understood and exactly predictable. Furthermore, DNA can be coupled to other molecules at various positions without disturbing its self-recognition. Owing to the helical structure, there is also a relationship between the position and the spatial orientation of introduced modifications. Modern synthesis methods allow arbitrary oligonucleotide sequences of up to about 150-200 bases to be produced relatively cheaply on the milligram scale. These properties make DNA an ideal candidate for the construction of complex structures formed by self-recognition of the corresponding sequences. In the work presented here, single-stranded DNA segments (ssDNA) were used as addressable linkers to join different molecular building blocks into discrete, non-periodic structures. The building blocks were flexible synthetic polymer blocks and semiflexible double-stranded DNA segments (dsDNA) "functionalized" at both ends with different oligonucleotide sequences. The oligonucleotide segments used for linking were chosen (n > 20 bases) such that their hybridization leads to duplex formation that is stable at room temperature. By combining phosphoramidite DNA synthesis with a solid-phase block-coupling reaction, a very efficient synthetic route to ssDNA1-PEO-ssDNA2 triblock copolymers was developed, demonstrated here for poly(ethylene oxide), which should be readily transferable to other polymers.
The lengths and base sequences of the two oligonucleotide segments can be chosen freely and independently of each other. This establishes the prerequisites for using the self-recognition of oligonucleotides to generate, by combining different triblock copolymers, multiblock copolymers that are not accessible by classical synthesis techniques. Semiflexible structural elements can be realized by synthesizing double-strand fragments with long single-stranded overhangs (sticky ends). The classical molecular-genetics approaches to generating sticky ends are not practicable here, since they impose restrictions on the length and sequence of the overhangs. Two variants of the polymerase chain reaction (PCR), based on partially complementary primers, proved to be the methods of choice. The actual primer sequences were linked at the 5' end, either via a 2'-deoxyuridine or via a short poly(ethylene oxide) spacer (n = 6), to a freely selectable sticky-end sequence. These methods give access to both 3' and 5' overhangs, and the length of the double-strand segments can be set very precisely over a wide molar-mass range. Combining such double-strand fragments with the biosynthetic triblock copolymers yields structures that can serve as model systems for studying various biomolecules that take the form of a multiply broken rod. In the last section it was shown that, by a suitable choice of the overhangs, or by hybridizing the double-strand fragments with matching oligonucleotides, branched DNA structures with arm lengths of several hundred nanometers become accessible.
Compared with previously published methods, this approach offers two decisive advantages: first, the synthetic effort could be reduced to a minimum; second, it allows the lengths of the individual arms to be varied independently of one another over a wide molar-mass range.
Abstract:
Schroeder's backward integration method is the most widely used method to extract the decay curve of an acoustic impulse response and to calculate the reverberation time from this curve. The limits and possible improvements of this method are widely discussed in the literature. In this work a new method is proposed for the evaluation of the energy decay curve. The new method has been implemented in a Matlab toolbox, and its performance has been tested against the most accredited method in the literature. The values of EDT and reverberation time extracted from the energy decay curves calculated with both methods have been compared, both in terms of the values themselves and in terms of their statistical representativeness. The main case study consists of nine Italian historical theatres in which acoustical measurements were performed. The comparison of the two extraction methods has also been applied to a critical case, i.e. the structural impulse responses of some building elements. The comparison shows that both methods return comparable values of T30. As the evaluation range decreases, they reveal increasing differences; in particular, the main differences lie in the first part of the decay, where the EDT is evaluated. This is a consequence of the fact that the new method returns a "locally" defined energy decay curve, whereas Schroeder's method accumulates energy from the tail to the beginning of the impulse response. Another characteristic of the new extraction method is its independence from the background-noise estimate. Finally, a statistical analysis is performed on the T30 and EDT values calculated from the impulse response measurements in the Italian historical theatres. The aim of this evaluation is to establish whether a subset of measurements can be considered representative for a complete characterization of these opera houses.
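Schroeder's backward integration, against which the new method is compared, integrates the squared impulse response from the tail backwards and fits the resulting decay curve in dB. A minimal sketch on a synthetic exponential impulse response with a known T60 of 1.5 s (T30 is extrapolated from the -5 to -35 dB portion):

```python
import numpy as np

# Synthetic impulse response: exponential amplitude decay whose squared
# (energy) envelope drops 60 dB in t60 seconds.
fs = 8000
t60 = 1.5
t = np.arange(0, 2.0, 1 / fs)
h = np.exp(-6.91 * t / t60)

# Schroeder integral: reversed cumulative sum of the squared response,
# normalized and expressed in dB.
edc = np.cumsum(h[::-1] ** 2)[::-1]
edc_db = 10 * np.log10(edc / edc[0])

# Linear fit over the -5 dB .. -35 dB range, extrapolated to -60 dB (T30).
mask = (edc_db <= -5) & (edc_db >= -35)
slope, intercept = np.polyfit(t[mask], edc_db[mask], 1)
t30 = -60 / slope
print(round(t30, 2))  # -> 1.5
```

The recovered T30 matches the imposed T60 because the synthetic decay is noiseless; with real measurements, tail truncation and background noise bias this fit, which is part of what the new method addresses.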
Abstract:
The development of High-Integrity Real-Time Systems has a high footprint in terms of human, material, and schedule costs. Factoring functional, reusable logic into the application favours incremental development and contains costs. Yet achieving incrementality in the timing behaviour is a much harder problem. Complex features at all levels of the execution stack, aimed at boosting average-case performance, exhibit timing behaviour highly dependent on execution history, which wrecks time composability, and incrementality with it. Our goal here is to restore time composability to the execution stack, working bottom-up across it. We first characterize time composability without making assumptions on the system architecture or the software deployed to it. Later, we focus on the role played by the real-time operating system in our pursuit. Initially we consider single-core processors and, becoming less permissive on the admissible hardware features, we devise solutions that restore a convincing degree of time composability. To show what can be done in practice, we developed TiCOS, an ARINC-compliant kernel, and re-designed ORK+, a kernel for Ada Ravenscar runtimes. In that work, we added support for limited preemption to ORK+, an absolute premiere in the landscape of real-world kernels. Our implementation allows resource sharing to coexist with limited-preemptive scheduling, which extends the state of the art. We then turn our attention to multicore architectures, first considering partitioned systems, for which we achieve results close to those obtained for single-core processors. Subsequently, we move away from the over-provisioning of those systems and consider less restrictive uses of homogeneous multiprocessors, where the scheduling algorithm is key to high schedulable utilization. To that end we single out RUN, a promising baseline, and extend it to SPRINT, which supports sporadic task sets and hence better matches real-world industrial needs. To corroborate our results, we present findings from real-world case studies in the avionics industry.
Abstract:
This thesis focuses on the study of Organic Semiconducting Single Crystals (OSSCs) and crystalline thin films. In particular, solution-grown OSSCs, e.g. 4-hydroxycyanobenzene (4HCB), have been characterized in view of their application as novel sensors of X-rays, gamma rays, and alpha particles, and as chemical sensors. In the field of ionizing-radiation detection, organic semiconductors have so far been proposed mainly as indirect detectors, i.e. as scintillators or photodiodes. I first study the performance of 4HCB single crystals as direct X-ray detectors, i.e. with direct photon conversion into an electrical signal, showing that they can operate at room temperature in atmosphere with a stable and linear response to increasing dose rate. A dedicated study of the collecting-electrode geometry, crystal thickness, and interaction volume allowed us to maximize the charge-collection efficiency and sensitivity, thus establishing how OSSCs perform at low operating voltages and their great potential for the development of novel ionizing-radiation sensors. To better understand the processes generating the observed X-ray signal, a comparative study is presented on OSSCs based on several small molecules: 1,5-dinitronaphthalene (DNN), 1,8-naphthaleneimide (NTI), rubrene, and TIPS-pentacene. In addition, proof-of-principle detection of gamma rays and alpha particles has been demonstrated for 4HCB single crystals. I have also investigated the electrical response of OSSCs exposed to vapours of volatile molecules, both polar and non-polar. The last chapter deals with rubrene, the highest-performing molecular crystal for electronic applications. We present an investigation of high-quality, millimetre-sized crystalline thin films (10-100 nm thick) realized by organic molecular beam epitaxy on water-soluble substrates. Space-Charge-Limited Current (SCLC) and photocurrent-spectroscopy measurements have been carried out. A thin-film transistor was fabricated onto a Cytop® dielectric layer; its FET mobility, exceeding 2 cm²/Vs, definitively attests to the quality of the RUB films.
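SCLC measurements of the kind mentioned above are commonly analyzed with the Mott-Gurney law, J = (9/8)·ε0·εr·μ·V²/L³, which in the trap-free space-charge-limited regime lets one extract the carrier mobility μ from the quadratic region of a J-V curve. The thesis may use a different analysis; the sketch below is a generic illustration, and the permittivity, thickness, and J-V point are all assumed values:

```python
EPS0 = 8.854e-12   # vacuum permittivity, F/m

def sclc_mobility(J, V, L, eps_r):
    """Mobility (m^2/Vs) from one point of the quadratic J-V region,
    by inverting the Mott-Gurney law J = (9/8)*eps0*eps_r*mu*V^2/L^3."""
    return 8 * J * L**3 / (9 * EPS0 * eps_r * V**2)

# Illustrative numbers: J = 50 A/m^2 at V = 2 V across a 100 nm film
# with an assumed relative permittivity of 3 (typical for organics).
mu = sclc_mobility(J=50.0, V=2.0, L=100e-9, eps_r=3.0)
print(f"mu = {mu:.3e} m^2/Vs ({mu * 1e4:.3e} cm^2/Vs)")
```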
Abstract:
This work covers the design and characterization of a new acoustically controlled listening room, starting from the requirements set by the technical standards ITU-R BS 1116-1 and EBU/UER Tech. Doc. 3276. A broad literature exists on approaches to the acoustic evaluation of listening rooms: it initially aimed at finding ideal ratios between the room dimensions, and research then shifted to the development of predictive models. Unfortunately, such methods often fail to guarantee the desired performance, whereas the experimental tests prescribed by the standards are of proven validity. The room under study was designed within the space of the CIRI laboratories. The construction technology is the result of an in-depth study; in particular, the choice of thermally bonded polyester fibres for the lining of the internal walls was evaluated through measurements in a reverberation chamber according to UNI-EN-ISO 354. An iterative methodology was then followed, involving installation, in-situ measurements, and evaluation by comparison with the parameters recommended by the technical standards cited above. To this end, monaural impulse responses and sound pressure levels were acquired to verify the quality of the sound insulation and the low-frequency behaviour. The validation gives positive indications for the intended uses of the room and is compatible with the stringent requirements of the technical standards.