Abstract:
This thesis examines the representation of the machine in Robida's work. The central part of our research seeks to reveal its meanings and interrogates its literary and visual staging in each of the novels of the author-illustrator's best-known trilogy of scientific anticipation. The quest becomes a continuous journey between the readable and the visible, the said and the unsaid, literary description and imagination, reality and fiction. We focus on the evolution of Robida's vision: in Le Vingtième siècle, the image of the benevolent machine, easing human life, saving time and money, and contributing greatly to happiness and entertainment, apart from a few very limited accidents, is conveyed through an advantageous complementarity between the text on the one hand and the vignettes, plates, and full-page illustrations found in the narrative on the other. In La Guerre au vingtième siècle, this image turns into anxiety over the instrumentalisation of the machine for war, expressed through a projection of the narrative toward the in-text illustration, which alerts the reader by showing the violent and offensive character of devices that are merely named. Finally, in La Vie électrique, it becomes synonymous with total pessimism about the machine's place in society and about the power of scientific knowledge in the future, displayed in dark and gloomy full-page illustrations. In this context, the illustrated machine demands an iconotextual reading, with attention paid to detail, to the elements present or absent, to the ways of passing from one mode of presentation to the other, to the anticipated or delayed placement of the illustration, to the relationship between the text, the drawing, and its caption, to the words that migrate into the drawing, and above all to the rest of the incomplete setting.
In Robida's work, praise that turns to criticism and humour that turns to cynicism are quite representative of the hopes and fears aroused by the discovery and application of electricity, by its virtues, but also by its uncontrollable side.
Abstract:
Background Appropriately conducted adaptive designs (ADs) offer many potential advantages over conventional trials. They make better use of accruing data, potentially saving time, trial participants, and limited resources compared to conventional, fixed sample size designs. However, one can argue that ADs are not implemented as often as they should be, particularly in publicly funded confirmatory trials. This study explored barriers, concerns, and potential facilitators to the appropriate use of ADs in confirmatory trials among key stakeholders. Methods We conducted three cross-sectional, online parallel surveys between November 2014 and January 2015. The surveys were based upon findings drawn from in-depth interviews of key research stakeholders, predominantly in the UK, and targeted Clinical Trials Units (CTUs), public funders, and private sector organisations. Response rates were as follows: 30 (55%) UK CTUs, 17 (68%) private sector, and 86 (41%) public funders. A Rating Scale Model was used to rank barriers and concerns in order of perceived importance for prioritisation. Results Top-ranked barriers included the lack of bridge funding accessible to UK CTUs to support the design of ADs, limited practical implementation knowledge, preference for traditional mainstream designs, difficulties in marketing ADs to key stakeholders, time constraints to support ADs relative to competing priorities, lack of applied training, and insufficient access to case studies of undertaken ADs to facilitate practical learning and successful implementation. Associated practical complexities and inadequate data management infrastructure to support ADs were reported as more pronounced in the private sector. For funders of public research, the inadequate description of the rationale, scope, and decision-making criteria to guide the planned AD in grant proposals by researchers were all viewed as major obstacles.
Conclusions There are still persistent and important perceptions of individual and organisational obstacles hampering the use of ADs in confirmatory trials research. Stakeholder perceptions about barriers are largely consistent across sectors, with a few exceptions that reflect differences in organisations' funding structures, experiences and characterisation of study interventions. Most barriers appear connected to a lack of practical implementation knowledge and applied training, and limited access to case studies to facilitate practical learning.
Keywords: Adaptive designs; flexible designs; barriers; surveys; confirmatory trials; Phase 3; clinical trials; early stopping; interim analyses
Abstract:
The aim of this work was to establish the relationship between photosynthetic pigments extracted in DMSO and the readings obtained with the portable chlorophyll meter ClorofiLOG® 1030, generating mathematical models capable of predicting chlorophyll and carotenoid contents in castor bean leaves. The work was carried out at the Brazilian Agricultural Research Corporation (EMBRAPA) Cotton unit, located in Campina Grande, state of Paraíba, in October 2010. For the indirect analysis, the portable device was used to take readings on leaf discs with different shades of green; chlorophyll was then determined in the same discs by the classical method. For chlorophyll extraction, 5 mL of dimethyl sulfoxide (DMSO) were used and kept in a water bath at 70 °C for 30 minutes, after which a 3 mL aliquot was taken for spectrophotometer readings at wavelengths of 470, 646, and 663 nm. The data were subjected to analysis of variance and polynomial regression, with the portable chlorophyll meter reading as the dependent variable and the photosynthetic pigments determined by the classical method as the independent variable. The results indicated that the portable chlorophyll meter ClorofiLOG® 1030, combined with mathematical models, made it possible to estimate the concentration of the photosynthetic pigments, except chlorophyll b, with high precision, saving both time and the reagents normally used in conventional procedures.
Abstract:
Statement of problem. According to manufacturers, bonding with self-adhesive resin cements can be achieved without any pretreatment steps such as etching, priming, or bonding. However, the benefit of saving time with these simplified luting systems may come at the expense of compromised bonding capacity. Purpose. The purpose of this study was to assess whether different dentin conditioning protocols influence the bond performance of self-adhesive resin cements to dentin. Material and methods. Flat dentin surfaces from 48 human molars were divided into 4 groups (n=12): 1) control, no conditioning; 2) H₃PO₄, etching with 37% H₃PO₄ for 15 seconds; 3) SEBond, bonding with a self-etching primer adhesive (Clearfil SE Bond); and 4) EDTA, etching with 0.1 M EDTA for 60 seconds. The specimens from each dentin pretreatment were bonded using the self-adhesive cements RelyX Unicem, Maxcem, or Multilink Sprint (n=4). The resin-cement-dentin specimens were stored in water at 37 °C for 7 days and serially sectioned to produce beam specimens of 1.0 mm² cross-sectional area. Microtensile bond strength (μTBS) testing was performed at 1.0 mm/min. Data (MPa) were analyzed by 2-way ANOVA and the Tukey multiple comparisons test (α=.05). Fractured specimens were examined with a stereomicroscope (×40) and classified as adhesive, mixed, or cohesive. Additional bonded interfaces were evaluated under a scanning electron microscope (SEM). Results. Cement-dentin μTBS was affected by the dentin conditioning approach (P<.001). RelyX Unicem attained statistically similar bond strengths on all pretreated dentin surfaces. H₃PO₄ etching prior to the application of Maxcem resulted in bond strength values significantly higher than those of the other groups. The lowest μTBS was attained when luting Multilink Sprint per the manufacturer's recommendations, while H₃PO₄ etching produced the highest values, followed by Clearfil SE bonding and EDTA.
SEM observations disclosed an enhanced potential of the self-adhesive cements to form a hybrid layer when applied following the manufacturer's instructions. Conclusions. When the evaluated self-adhesive resin cements are used, selectively etching dentin with H₃PO₄ prior to luting results in the most effective bonding. (J Prosthet Dent 2011;105:227-235)
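The statistical comparison described above (μTBS values analyzed by ANOVA at α=.05) can be sketched as follows. The sketch uses synthetic, hypothetical bond-strength values, not the study's data, and simplifies the study's two-way design (cement × conditioning) to a one-way comparison across conditioning protocols:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
# hypothetical µTBS values (MPa) for the four conditioning protocols,
# pooled over cements; invented for illustration only
groups = {
    "control": rng.normal(20, 3, 12),
    "H3PO4":   rng.normal(27, 3, 12),
    "SEBond":  rng.normal(24, 3, 12),
    "EDTA":    rng.normal(22, 3, 12),
}

# one-way ANOVA over conditioning alone (the study used a two-way design)
f, p = stats.f_oneway(*groups.values())
print(f"F = {f:.2f}, p = {p:.4f}")
for name, vals in groups.items():
    print(f"{name}: mean = {vals.mean():.1f} MPa")
```

A significant p-value here would justify the follow-up pairwise comparisons (Tukey's test in the study) to locate which protocols differ.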
Abstract:
Automatic inspection of petroleum well drilling has become paramount in recent years, mainly because of the crucial importance of saving time and operations during the drilling process in order to avoid problems such as the collapse of the well borehole walls. In this paper, we extend previous work by proposing fast petroleum well drilling monitoring through a modified version of the Optimum-Path Forest classifier. Given that the volume of cuttings at the vibrating shale shaker can provide considerable information about drilling, we used computer vision techniques to extract texture information from cutting images acquired by a digital camera. A collection of supervised classifiers was applied in order to compare their accuracy and efficiency. We used the Optimum-Path Forest (OPF), Efficient OPF (EOPF), an Artificial Neural Network using Multilayer Perceptrons (ANN-MLP), Support Vector Machines (SVM), and a Bayesian Classifier (BC) to assess the robustness of our proposed scheme for petroleum well drilling monitoring through cutting image analysis.
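A classifier comparison of this kind can be sketched with scikit-learn. OPF and EOPF are not available in scikit-learn, so the sketch below covers only the SVM, ANN-MLP, and Bayesian baselines, applied to synthetic stand-ins for the texture descriptors (the feature values are invented, not extracted from real cutting images):

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC
from sklearn.neural_network import MLPClassifier
from sklearn.naive_bayes import GaussianNB

rng = np.random.default_rng(42)
# synthetic 8-dimensional "texture descriptors" for two cutting classes
n = 200
X = np.vstack([rng.normal(0.0, 1.0, (n, 8)),
               rng.normal(1.5, 1.0, (n, 8))])
y = np.array([0] * n + [1] * n)
Xtr, Xte, ytr, yte = train_test_split(X, y, test_size=0.3, random_state=0)

# fit each baseline and report hold-out accuracy
for name, clf in [
    ("SVM", SVC()),
    ("ANN-MLP", MLPClassifier(max_iter=1000, random_state=0)),
    ("Bayes", GaussianNB()),
]:
    acc = clf.fit(Xtr, ytr).score(Xte, yte)
    print(f"{name}: {acc:.3f}")
```

In the actual pipeline, the feature vectors would come from texture operators applied to the camera images rather than from random draws.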
Abstract:
The design and production process of the bamboo workshop shed presented in this paper is the result of almost two years of cooperation between the Viverde Farmers Association, Universidade Estadual Paulista, and Unisol. The system consists of rounded aroeira pillars and a roof structure formed by in natura bamboo frames and metal connections. The detailed development of the parts and connections, through technical drawings of each component, enabled standardized precision fittings aimed at optimizing execution and saving time, energy, and raw material on the assembly line. In this process, the creation and improvement of templates was necessary to facilitate the assembly of parts. The component assembly phases were recorded at the Experimental Laboratory for Bamboo and Wood Processing - UNESP/FEB. The results showed good performance of the pre-fabrication process of bamboo components.
Abstract:
The aim of this work was to generate mathematical models capable of identifying photosynthetic pigments and soluble proteins in the leaves of Jatropha curcas using the relationship between classical readings performed by spectrophotometry and the chlorophyll meter ClorofiLOG® 1030. The work was conducted at Embrapa Cotton, in the city of Campina Grande, state of Paraíba, Brazil. For the indirect analysis, portable equipment was used to read leaf discs at different stages of development. The chlorophyll in these discs was then determined using a classical method, while the Bradford method was used to determine soluble proteins. The data were subjected to analysis of variance and regression analyses, in which the readings obtained using the portable chlorophyll meter were the dependent variables and the photosynthetic pigments and soluble protein determined by the classical methods the independent variables. The results indicated that, with the exception of chlorophyll b and soluble protein, the mathematical models obtained with the portable chlorophyll meter ClorofiLOG® 1030 can be used to estimate the concentration of photosynthetic pigments with high precision, thus saving time and the chemical reagents required for conventional procedures.
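A calibration of this kind can be sketched with an ordinary polynomial fit. The paired values below are hypothetical (they are not the study's measurements), and for illustration the meter reading is used as the predictor:

```python
import numpy as np

# hypothetical paired data: chlorophyll meter readings (index units)
# vs chlorophyll a determined spectrophotometrically (µg/cm²)
readings = np.array([10, 18, 25, 33, 41, 48, 55, 62])
chl_a = np.array([3.1, 6.0, 9.2, 13.5, 18.9, 24.0, 30.4, 37.1])

# second-order polynomial calibration curve
coeffs = np.polyfit(readings, chl_a, deg=2)
predict = np.poly1d(coeffs)

# coefficient of determination for the fit
ss_res = np.sum((chl_a - predict(readings)) ** 2)
ss_tot = np.sum((chl_a - chl_a.mean()) ** 2)
r2 = 1 - ss_res / ss_tot
print(f"R² = {r2:.4f}")
```

Once fitted, `predict(reading)` gives an estimated pigment content without wet-chemistry extraction, which is exactly the time- and reagent-saving use case the abstract describes.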
Abstract:
Mitochondrial DNA (mtDNA) analysis is usually a last resort in routine forensic DNA casework. However, it has become a powerful tool for the analysis of highly degraded samples or samples containing too little or no nuclear DNA, such as old bones and hair shafts. The gold-standard methodology is still the direct sequencing of polymerase chain reaction (PCR) products or cloned amplicons from the HVS-1 and HVS-2 (hypervariable segment) control region segments. Identifications using mtDNA are time consuming, expensive, and can be very complex, depending on the amount and nature of the material being tested. The main goal of this work is to develop a less labour-intensive and less expensive screening method for mtDNA analysis, to aid in the exclusion of non-matching samples and to serve as a presumptive test prior to final confirmatory DNA sequencing. We selected 14 highly discriminatory single nucleotide polymorphisms (SNPs), based on simulations performed by Salas and Amigo (2010) [1], to be typed using SNaPshot™ (Applied Biosystems, Foster City, CA, USA). The assay was validated by typing more than 100 HVS-1/HVS-2 sequenced samples. No differences were observed between SNP typing and DNA sequencing when the results were compared, with the exception of allelic dropouts observed in a few haplotypes. Haplotype diversity simulations were performed using 172 mtDNA sequences representative of the Brazilian population, and a score of 0.9794 was obtained when the 14 SNPs were used, showing that the theoretical prediction approach for the selection of highly discriminatory SNPs suggested by Salas and Amigo (2010) [1] was confirmed in the population studied. As the main goal of the work is to develop a screening assay that avoids sequencing every sample in a given case, a pair-wise comparison of the sequences was made using the selected SNPs.
When both HVS-1/HVS-2 SNPs were used for the simulations, at least two differences were observed in 93.2% of the comparisons performed. The assay was validated with casework samples. The results show that the method is straightforward and can be used for exclusionary purposes, saving time and laboratory resources. The assay confirms the theoretical prediction suggested by Salas and Amigo (2010) [1]. The forensic advantages, such as high sensitivity and power of discrimination, as well as the disadvantages, such as the occurrence of allele dropouts, are discussed throughout the article. © 2013 Elsevier B.V.
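The screening logic above (a pair of samples can be excluded as a match once it differs at two or more of the typed SNPs) can be sketched as follows; the profiles are invented placeholders, not real haplotypes:

```python
from itertools import combinations

# hypothetical base calls for five samples at the 14 screening SNPs
profiles = {
    "S1": "ACGTACGTACGTAC",
    "S2": "ACGTACGTACGTAC",
    "S3": "ATGTACGTACGAAC",
    "S4": "ACGAACGTTCGTAC",
    "S5": "ATGAACGTTCGAAC",
}

def n_differences(a, b):
    """Count positions where two SNP profiles disagree."""
    return sum(x != y for x, y in zip(a, b))

# exclude a pair when it differs at >= 2 SNPs; otherwise it still
# requires confirmatory sequencing
for (na, pa), (nb, pb) in combinations(profiles.items(), 2):
    d = n_differences(pa, pb)
    verdict = "exclude" if d >= 2 else "sequence to confirm"
    print(f"{na} vs {nb}: {d} difference(s) -> {verdict}")
```

Only the pairs that survive this triage (fewer than two differences) would proceed to full HVS-1/HVS-2 sequencing, which is how the assay saves time and reagents.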
Abstract:
Background: The problem of diagnosing whether a solitary pulmonary nodule is benign or malignant is even greater in developing countries due to a higher prevalence of infectious diseases. These infections generate a large number of patients who are generally asymptomatic and have a pulmonary nodule whose etiology cannot be accurately defined as benign or malignant. Purpose: To verify the percentages of benign versus malignant non-calcified nodules, to determine at what time after contrast agent injection spiral computed tomography (CT) is most sensitive and specific, and to assess whether three postcontrast phases are necessary. Material and Methods: We studied 23 patients with solitary pulmonary nodules identified on chest radiographs or CT. Spiral scans were obtained with the Swensen protocol, but at 3, 4, and 5 min after contrast injection onset. Nodules were classified as benign or malignant by histopathological examination or by the absence or presence of growth after 2 years of follow-up CT. Results: Of the 23 patients studied, 18 (78.2%) had a final diagnosis of benign nodules and five (21.7%) of malignant nodules. Despite the small sample size, we obtained results similar to those of Swensen et al., with 80.0% sensitivity, 55.5% specificity, and 60.8% accuracy. Four minutes gave the greatest mean enhancement in both malignant and benign lesions. Conclusion: Small non-calcified benign nodules were much more frequent than malignant nodules. The best time for dynamic contrast-enhanced CT density analysis was 4 min postcontrast. As well as saving time and money, this simplified Swensen protocol, with only precontrast and 4-min postcontrast phases, also reduces patient exposure to ionizing radiation.
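The reported metrics can be reconstructed from the stated counts (5 malignant and 18 benign nodules). The exact true/false-positive counts below are inferred from the percentages, so they are a consistent reconstruction rather than figures taken directly from the paper:

```python
# 23 patients: 5 malignant, 18 benign. 80.0% sensitivity implies
# 4 true positives; 55.5% specificity implies 10 true negatives.
tp, fn = 4, 1     # malignant nodules: detected / missed
tn, fp = 10, 8    # benign nodules: correctly ruled out / flagged

sensitivity = tp / (tp + fn)
specificity = tn / (tn + fp)
accuracy = (tp + tn) / (tp + fn + tn + fp)

print(f"sensitivity = {sensitivity:.1%}")
print(f"specificity = {specificity:.1%}")
print(f"accuracy    = {accuracy:.1%}")
```

Note that 10/18 = 55.6% and 14/23 = 60.9% when rounded, so the abstract's 55.5% and 60.8% appear to be truncated rather than rounded values.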
Abstract:
The brake system of a Formula SAE car plays a decisive role in the quality of the project: any flaw in brake design causes the vehicle to be rejected from the competition. A well-executed design, and its smooth operation, depends on variables that must be studied and linked to the brake components according to the needs of the vehicle. After the calculations, the components were selected according to commercial availability. It is therefore useful to simulate the braking conditions the vehicle will be subjected to before implementing the brake system, saving time and cost. This approach also enables the comparison of components from different brands. This work presents the study of a method to simulate and test the brake system of an upcoming project on a bench test rig.
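As a toy illustration of the kind of preliminary calculation such a design involves, the sketch below estimates grip-limited deceleration and per-wheel brake torque. All parameter values are hypothetical (none come from this work), and dynamic load transfer is deliberately ignored, which a full simulation would include:

```python
# Hypothetical Formula SAE parameters (not from the paper)
mass = 300.0          # vehicle + driver mass, kg
speed = 27.8          # initial speed, m/s (~100 km/h)
mu = 1.4              # tyre-road friction coefficient
g = 9.81              # gravitational acceleration, m/s^2
wheel_radius = 0.23   # loaded tyre radius, m
weight_front = 0.55   # static front weight fraction

# maximum deceleration limited by tyre grip
decel = mu * g                       # m/s^2
stop_dist = speed**2 / (2 * decel)   # m, from constant deceleration

# braking force and torque demand per front wheel (static load only)
total_force = mass * decel
front_force = total_force * weight_front
torque_per_front_wheel = front_force / 2 * wheel_radius

print(f"deceleration: {decel:.2f} m/s^2")
print(f"stopping distance from 100 km/h: {stop_dist:.1f} m")
print(f"torque per front wheel: {torque_per_front_wheel:.0f} N*m")
```

Numbers like these feed directly into component selection: the caliper, disc, and master cylinder must deliver at least the computed torque at each wheel.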
Abstract:
Graduate Program in Mechanical Engineering - FEG
Abstract:
Identifying opportunities for software parallelism is a task that takes a great deal of human time, but once certain code patterns amenable to parallelism are identified, software can accomplish the task quickly. Automating this process thus brings many benefits, such as saving time and reducing programmer errors [1]. This work aims at developing a software environment that identifies opportunities for parallelism in source code written in the C language and generates a program with the same behavior but a higher degree of parallelism, targeting graphics processors based on the CUDA architecture.
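As a toy illustration of pattern-based parallelism detection, the sketch below applies a naive heuristic to a hypothetical C fragment: a loop whose body indexes arrays only with the bare loop variable carries no loop-to-loop dependency and is a candidate for a CUDA kernel. A real tool would need genuine C parsing and dependence analysis; this regex version is only a sketch:

```python
import re

# toy C source: one trivially parallel loop and one with a
# cross-iteration dependency (hypothetical example)
c_source = """
for (i = 0; i < n; i++) { c[i] = a[i] + b[i]; }
for (i = 1; i < n; i++) { a[i] = a[i-1] * 2; }
"""

# match "for (var = ...; ...; ...) { body }" and capture var and body
loop_re = re.compile(r"for\s*\(\s*(\w+)\s*=[^;]*;[^;]*;[^)]*\)\s*\{([^}]*)\}")

verdicts = []
for var, body in loop_re.findall(c_source):
    # every array subscript must be exactly the loop variable
    indices = re.findall(r"\[([^\]]+)\]", body)
    parallel = all(idx.strip() == var for idx in indices)
    verdicts.append(parallel)
    print(f"loop over '{var}': {'parallelizable' if parallel else 'sequential'}")
```

The first loop would map one iteration per CUDA thread; the second reads `a[i-1]`, so it cannot be split across threads without restructuring.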
Abstract:
The subject of this thesis is the development of a gas chromatography (GC) system for non-methane hydrocarbons (NMHCs) and the measurement of samples within the project CARIBIC (Civil Aircraft for the Regular Investigation of the atmosphere Based on an Instrument Container, www.caribic-atmospheric.com). Air samples collected at cruising altitude from the upper troposphere and lowermost stratosphere contain hydrocarbons at low levels (ppt range), which imposes substantial demands on detection limits. Full automation made it possible to maintain constant conditions during sample processing and analysis; it also allows overnight operation, thus saving time. Gas chromatography using flame ionization detection (FID) together with a dual-column approach enables simultaneous detection with almost equal carbon-atom response for all hydrocarbons except ethyne. The first part of this thesis presents technical descriptions of the individual parts of the analytical system; apart from the sample treatment and calibration procedures, the sample collector is described. The second part deals with the analytical performance of the GC system by discussing the tests that were made. Finally, results for the measurement flights are assessed in terms of data quality, and two flights are discussed in detail. Analytical performance is characterized by detection limits and uncertainties for each compound, by tests of calibration-mixture conditioning and of the carbon dioxide trap to determine their influence on the analyses, and finally by comparing the responses of calibrated substances during the period when the flight analyses were made. Comparison of both systems shows good agreement. However, because of the insufficient capacity of the CO2 trap, the signal of one column was suppressed by carbon dioxide breakthrough to such an extent that its results appeared unreliable.
Plausibility tests for the internal consistency of the data sets are based on common patterns exhibited by tropospheric NMHCs. All tests show that samples from the first flights do not comply with the expected pattern. Additionally, detected alkene artefacts suggest potential problems with storage or contamination across all measurement flights. The last two flights, # 130-133 and # 166-169, comply with the tests, so a detailed analysis of them is made. Samples were analyzed in terms of their origin (troposphere vs. stratosphere, backward trajectories) and their aging (NMHC ratios), and detected plumes were compared with the chemical signatures of Asian outflows. The last chapter outlines future development of the presented system, with a focus on separation. An extensive appendix documents all important aspects of the dissertation, from a theoretical introduction through an illustration of sample treatment to overview diagrams for the measured flights.
Abstract:
Intense research is being done in the field of organic photovoltaics in order to synthesize low band-gap organic molecules. These molecules are electron donors which, in combination with acceptor molecules, typically fullerene derivatives, form an active blend. This active blend has a phase-separated bicontinuous morphology on a nanometer scale. The highest recorded power conversion efficiencies for such cells have been 10.6%. Organic semiconductors differ from inorganic ones due to the presence of tightly bound excitons (electron-hole pairs) resulting from their low dielectric constant (εr ≈ 2-4). An additional driving force is required to separate such Frenkel excitons, since their binding energy (0.3-1 eV) is too large for them to be dissociated by an electric field alone. This additional driving force arises from the energy difference between the lowest unoccupied molecular orbitals (LUMO) of the donor and acceptor materials. Moreover, the efficiency of the cells also depends on the difference between the highest occupied molecular orbital (HOMO) of the donor and the LUMO of the acceptor. Therefore, precise control and estimation of these energy levels are required. Furthermore, any external influence that changes the energy levels will cause a degradation of the power conversion efficiency of organic solar cell materials. In particular, the role of photo-induced degradation in the morphology and electrical performance is a major contribution to degradation and needs to be understood on a nanometer scale. Scanning Probe Microscopy (SPM) offers the resolution to image the nanometer-scale bicontinuous morphology. In addition, SPM can be operated to measure the local contact potential difference (CPD) of materials, from which energy levels in the materials can be derived. SPM is thus a unique method for the characterization of surface morphology, potential changes, and conductivity changes under operating conditions.
In the present work, I describe investigations of organic photovoltaic materials upon photo-oxidation, which is one of the major causes of degradation of these solar cell materials. SPM, Nuclear Magnetic Resonance (NMR), and UV-Vis spectroscopy studies allowed me to identify the chemical reactions occurring inside the active layer upon photo-oxidation. From the measured data, it was possible to deduce the energy levels and explain the various shifts, which gave a better understanding of the physics of the device. In addition, I was able to quantify the degradation by correlating the local changes in the CPD and conductivity to the device characteristics, i.e., open-circuit voltage and short-circuit current. Furthermore, time-resolved electrostatic force microscopy (tr-EFM) allowed us to probe dynamic processes such as the charging rate of the individual donor and acceptor domains within the active blend. Upon photo-oxidation, it was observed that the acceptor molecules were oxidized first, preventing the donor polymer from degrading. Work functions of electrodes can be tailored by modifying the interface with monomolecular thin layers of molecules, which are made by a chemical reaction in liquids. These modifications of the work function are particularly attractive for opto-electronic devices whose performance depends on the band alignment between the electrodes and the active material. In order to measure the shift in work function on a nanometer scale, I used Kelvin probe force microscopy (KPFM) in situ, that is, in liquids, to follow changes in the work function of Au upon hexadecanethiol adsorption from decane. All the above investigations give us a better understanding of the photo-degradation processes of the active material at the nanoscale. In addition, a method is proposed for comparing the stability of new materials for organic solar cells that eliminates the need to build fully functional devices, saving time and additional engineering effort.
Abstract:
Because the recommendation to use flowables for posterior restorations is still a matter of debate, the objective of this study was to determine, in a nationwide survey in Germany, how frequently, for what indications, and for what reasons German dentists use flowable composites in posterior teeth. In addition, the acceptance of a simplified filling technique for posterior restorations using a low-stress flowable composite was evaluated. Completed questionnaires from all over Germany were returned by 1,449 dentists, a response rate of 48.5%; 78.6% of them regularly used flowable composites for posterior restorations. The most frequent indications were cavity lining (80.1%) and small Class I fillings (74.2%). Flowables were less frequently used for small Class II fillings (22.7%) or other indications (13.6%). The most frequent reasons given for the use of flowables in posterior teeth were the prevention of voids (71.7%) and superior adaptation to cavity walls (72.9%), whereas saving time was considered less important (13.8%). Based on the subjective opinion of the dentists, the simplified filling technique seemed to deliver advantages compared with the methods used to date, particularly with regard to good cavity adaptation and ease of use. In conclusion, resin composites are the standard material type used for posterior restorations by general dental practitioners in Germany, and most dentists use flowable composites as liners.