953 results for Reliable Computations
Abstract:
Diagnostic techniques based on PCR have two major problems: false-positive reactions due to contamination with DNA fragments from previous PCRs (amplicons) and false-negative reactions caused by inhibitors that interfere with the PCR. We have improved our previously reported PCR based on the amplification of a fragment of the Mycobacterium tuberculosis complex-specific insertion element IS6110 with respect to both problems. False-positive reactions caused by amplicon contamination were prevented by the use of uracil-N-glycosylase and dUTP instead of dTTP. We selected a new set of primers outside the region spanned by the formerly used primers to avoid false-positive reactions caused by dTTP-containing amplicons still present in the laboratory. With this new primer set, 16 copies of the IS6110 insertion element, the equivalent of two bacteria, could be amplified 10¹⁰-fold in 40 cycles, resulting in a mean efficiency of 77% per cycle. To detect the presence of inhibitors of the Taq polymerase, which may cause false-negative reactions, part of each sample was spiked with M. tuberculosis DNA. The DNA purification method using guanidinium thiocyanate and diatoms effectively removed most or all inhibitors of the PCR. However, this method was not suitable for blood samples, for which we developed a proteinase K treatment followed by phenol-chloroform extraction. This method permitted detection of 20 M. tuberculosis bacteria per ml of whole blood. Various laboratory procedures were introduced to reduce failure or inhibition of the PCR and to avoid DNA cross-contamination. We have tested 218 different clinical specimens obtained from patients suspected of having tuberculosis. The samples included sputum (n=145), tissue biopsy samples (n=25), cerebrospinal fluid (n=15), blood (n=14), pleural fluid (n=9), feces (n=7), fluid from fistulae (n=2), and pus from a wound (n=1). The results obtained by PCR were consistent with those obtained with culture, which is the "gold standard." We demonstrate that PCR is a useful technique for the rapid diagnosis of tuberculosis at various sites.
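As a quick sanity check on the figures above (not part of the original abstract), the overall amplification A after n cycles at per-cycle efficiency e follows A = (1 + e)^n; a minimal Python sketch with the reported numbers:

```python
# Overall amplification after n PCR cycles at per-cycle efficiency e: A = (1 + e) ** n
cycles = 40
target_amplification = 1e10  # the ~10^10-fold amplification reported above

# Solve (1 + e) ** cycles = target_amplification for e
efficiency = target_amplification ** (1.0 / cycles) - 1
print(f"mean per-cycle efficiency: {efficiency:.1%}")  # ~77.8%, consistent with the ~77% quoted

# Forward check: 16 starting copies of IS6110 amplified at that efficiency
copies_after = 16 * (1 + efficiency) ** cycles
print(f"copies after {cycles} cycles: {copies_after:.2e}")  # ~1.6e11
```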
Abstract:
The Central Highlands region has a unique climate that presents both challenges and novel farming systems opportunities for cotton production. We have been re-examining the Emerald climate in a bid to identify opportunities that might enable the production of more consistent cotton yields and quality in what can be a highly variable climate. A detailed climatic analysis identified that spring and early summer is the optimal period for boll growth and maturation. However, unlocking this potential requires unseasonal winter sowing that is 4 to 6 weeks earlier than the traditional mid-September sowing. Our experiments have sought answers to two questions: i) how much earlier cotton can be sown for reliable crop establishment and high yield; and ii) whether degradable plastic film mulches can minimise the impact of potentially cold temperatures on crop establishment and early vigour. Initial data suggest that August sowing offers the potential to grow a high yield at a time of year with reduced risk of cloud and high night temperatures during boll growth. For the past two seasons, late winter sowing (with and without film) has resulted in a compact plant with high retention that physiologically matures by the beginning of January. Even with the spectre of replanting cotton in some seasons due to frost in August, early sowing would appear to offer the opportunity for more efficient crop input usage, simplified agronomic management and new crop rotation options during late summer and autumn. This talk will present an overview of results to date.
Abstract:
Secure Multi-party Computation (MPC) enables a set of parties to collaboratively compute, using cryptographic protocols, a function over their private data in a way that the participants do not see each other's data; they see only the final output. Typical MPC examples include statistical computations over joint private data, private set intersection, and auctions. While these applications are examples of monolithic MPC, richer MPC applications move between "normal" (i.e., per-party local) and "secure" (i.e., joint, multi-party secure) modes repeatedly, resulting overall in mixed-mode computations. For example, we might use MPC to implement the role of the dealer in a game of mental poker -- the game will be divided into rounds of local decision-making (e.g. bidding) and joint interaction (e.g. dealing). Mixed-mode computations are also used to improve performance over monolithic secure computations. Starting with the Fairplay project, several MPC frameworks have been proposed in the last decade to help programmers write MPC applications in a high-level language, while the toolchain manages the low-level details. However, these frameworks are either not expressive enough to allow writing mixed-mode applications or lack formal specification and reasoning capabilities, thereby diminishing the parties' trust in such tools and in the programs written using them. Furthermore, none of the frameworks provides a verified toolchain to run the MPC programs, leaving open the potential for security holes that can compromise the privacy of the parties' data. This dissertation presents language-based techniques to make MPC more practical and trustworthy. First, it presents the design and implementation of a new MPC domain-specific language, called Wysteria, for writing rich mixed-mode MPC applications. Wysteria provides several benefits over previous languages, including a conceptual single thread of control, generic support for more than two parties, high-level abstractions for secret shares, and a fully formalized type system and operational semantics. Using Wysteria, we have implemented several MPC applications, including, for the first time, a card dealing application. The dissertation next presents Wys*, an embedding of Wysteria in F*, a full-featured verification-oriented programming language. Wys* improves on Wysteria along three lines: (a) It enables programmers to formally verify the correctness and security properties of their programs. As far as we know, Wys* is the first language to provide verification capabilities for MPC programs. (b) It provides a partially verified toolchain to run MPC programs, and finally (c) It enables MPC programs to use, with no extra effort, standard language constructs from the host language F*, thereby making it more usable and scalable. Finally, the dissertation develops static analyses that help optimize monolithic MPC programs into mixed-mode MPC programs, while providing similar privacy guarantees as the monolithic versions.
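For readers unfamiliar with secret shares, the sketch below illustrates the underlying idea with plain additive secret sharing; it is not Wysteria or Wys* code, and all names and parameters are hypothetical. Three parties learn the sum of their private inputs without revealing the inputs themselves.

```python
import secrets

MODULUS = 2**61 - 1  # a large prime; all share arithmetic is modulo this value

def share(value: int, n_parties: int) -> list[int]:
    """Split `value` into n additive shares that sum to `value` mod MODULUS."""
    shares = [secrets.randbelow(MODULUS) for _ in range(n_parties - 1)]
    last = (value - sum(shares)) % MODULUS
    return shares + [last]

def reconstruct(shares: list[int]) -> int:
    return sum(shares) % MODULUS

# Each party secret-shares its private input with the others.
inputs = {"alice": 25, "bob": 40, "carol": 17}
all_shares = {p: share(v, len(inputs)) for p, v in inputs.items()}

# Party i locally adds the i-th share of every input ...
local_sums = [sum(all_shares[p][i] for p in inputs) % MODULUS
              for i in range(len(inputs))]

# ... and only the reconstructed total is revealed, never the individual inputs.
print(reconstruct(local_sums))  # 82
```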
Abstract:
As the semiconductor industry struggles to maintain its momentum along the path set by Moore's Law, three-dimensional integrated circuit (3D IC) technology has emerged as a promising solution to achieve higher integration density, better performance, and lower power consumption. However, despite its significant improvement in electrical performance, 3D IC presents several serious physical design challenges. In this dissertation, we investigate physical design methodologies for 3D ICs, with a primary focus on two areas: low-power 3D clock tree design, and reliability degradation modeling and management. Clock trees are essential parts of digital systems and dissipate a large amount of power due to their high capacitive loads. The majority of existing 3D clock tree designs focus on minimizing the total wire length, which produces sub-optimal results for power optimization. In this dissertation, we formulate a 3D clock tree design flow which directly optimizes for clock power. We also investigate the design methodology for clock gating a 3D clock tree, which uses shutdown gates to selectively turn off unnecessary clock activities. Unlike in 2D ICs, where shutdown gates are commonly assumed to be cheap and thus applicable at every clock node, shutdown gates in 3D ICs introduce additional control TSVs, which compete with clock TSVs for placement resources. We explore design methodologies to produce the optimal allocation and placement of clock and control TSVs so that the clock power is minimized. We show that the proposed synthesis flow saves significant clock power while accounting for available TSV placement area. Vertical integration also brings new reliability challenges, including TSV electromigration (EM) and several other reliability loss mechanisms caused by TSV-induced stress. These reliability loss models involve complex inter-dependencies between electrical and thermal conditions, which have not been investigated in the past. In this dissertation, we set up an electrical/thermal/reliability co-simulation framework to capture the transient behavior of reliability loss in 3D ICs. We further derive and validate an analytical reliability objective function that can be integrated into the 3D placement design flow. The reliability-aware placement scheme enables co-design and co-optimization of both the electrical and reliability properties, thus improving both the circuit's performance and its lifetime. Our electrical/reliability co-design scheme avoids unnecessary design cycles or application of ad-hoc fixes that lead to sub-optimal performance. Vertical integration also enables stacking DRAM on top of CPUs, providing high bandwidth and short latency. However, non-uniform voltage fluctuation and local thermal hotspots in the CPU layers are coupled into the DRAM layers, causing a non-uniform bit-cell leakage (and thereby bit-flip) distribution. We propose a performance-power-resilience simulation framework to capture DRAM soft errors in 3D multi-core CPU systems. In addition, a dynamic resilience management (DRM) scheme is investigated, which adaptively tunes the CPU's operating points to adjust the DRAM's voltage noise and thermal condition during runtime. The DRM uses dynamic frequency scaling to achieve a resilience borrow-in strategy, which effectively enhances DRAM's resilience without sacrificing performance. The proposed physical design methodologies should act as important building blocks for 3D ICs and push 3D ICs toward mainstream acceptance in the near future.
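The dissertation's actual cost model is not given in the abstract; purely as background on why capacitive load drives clock power, dynamic power is commonly estimated as P = α·C·V²·f over the net's total capacitance. A hypothetical sketch with placeholder numbers:

```python
# Illustrative dynamic-power estimate for a clock net: P = activity * C_total * Vdd^2 * f
# All numbers below are made-up placeholders, not values from the dissertation.

def clock_dynamic_power(c_wire_fF: float, c_sinks_fF: float,
                        vdd_V: float, freq_Hz: float, activity: float = 1.0) -> float:
    """Dynamic power (W) of a clock net; clock nets typically switch every cycle."""
    c_total_F = (c_wire_fF + c_sinks_fF) * 1e-15
    return activity * c_total_F * vdd_V**2 * freq_Hz

# Example: 2 pF of wire load plus 1 pF of sink load at 0.9 V and 1 GHz
p = clock_dynamic_power(c_wire_fF=2000, c_sinks_fF=1000, vdd_V=0.9, freq_Hz=1e9)
print(f"{p * 1e3:.2f} mW")  # ~2.43 mW
```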
Abstract:
Mechanical fatigue is a failure phenomenon that occurs due to repeated application of mechanical loads. Very High Cycle Fatigue (VHCF) is considered the domain of fatigue life greater than 10 million load cycles. Increasing numbers of structural components have service lives in the VHCF regime, for instance in automotive and high-speed train transportation, gas turbine disks, and components of paper production machinery. Safe and reliable operation of these components depends on knowledge of their VHCF properties. In this thesis, both experimental tools and theoretical modelling were utilized to develop a better understanding of VHCF phenomena. In the experimental part, ultrasonic fatigue testing at 20 kHz of cold rolled and hot rolled stainless steel grades was conducted and fatigue strengths in the VHCF regime were obtained. The mechanisms of fatigue crack initiation and short crack growth were investigated using electron microscopes. For the cold rolled stainless steels, crack initiation and early growth occurred through the formation of the Fine Granular Area (FGA), observed on the fracture surface and in TEM observations of cross-sections. Crack growth in the FGA appears to control more than 90% of the total fatigue life. For the hot rolled duplex stainless steels, fatigue crack initiation occurred due to accumulation of plastic fatigue damage at the external surface, and early crack growth proceeded through a crystallographic growth mechanism. Theoretical modelling of complex cracks involving kinks and branches in an elastic half-plane under static loading was carried out using the Distributed Dislocation Dipole Technique (DDDT). The technique was implemented for 2D crack problems. Both fully open and partially closed crack cases were analyzed. The main aim of the development of the DDDT was to compute the stress intensity factors. An accuracy of 2% in the computations was attainable compared with solutions obtained by the Finite Element Method.
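The DDDT formulation itself is not reproduced in the abstract; for orientation only, the quantity being computed is a stress intensity factor, which for the textbook reference case of a center crack of half-length a in an infinite plate under remote tension σ is K_I = σ√(πa). A hypothetical sketch of the kind of accuracy comparison mentioned above, with invented numbers:

```python
import math

def sif_center_crack(sigma_MPa: float, half_length_m: float) -> float:
    """Mode-I stress intensity factor K_I = sigma * sqrt(pi * a)
    for a center crack in an infinite plate (textbook reference solution)."""
    return sigma_MPa * math.sqrt(math.pi * half_length_m)  # units: MPa*sqrt(m)

# Made-up numbers for illustration, not the thesis data.
k_reference = sif_center_crack(sigma_MPa=200.0, half_length_m=0.002)
k_numerical = 0.985 * k_reference  # e.g. a numerical estimate that is 1.5% low

rel_error = abs(k_numerical - k_reference) / k_reference
print(f"K_ref = {k_reference:.2f} MPa*sqrt(m), relative error = {rel_error:.1%}")
```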
Abstract:
Metadata that is associated with either an information system or an information object for purposes of description, administration, legal requirements, technical functionality, use and usage, and preservation, plays a critical role in ensuring the creation, management, preservation and use and re-use of trustworthy materials, including records. Recordkeeping metadata, of which one key type is archival description, plays a particularly important role in documenting the reliability and authenticity of records and recordkeeping systems as well as the various contexts (legal-administrative, provenancial, procedural, documentary, and technical) within which records are created and kept as they move across space and time. In the digital environment, metadata is also the means by which it is possible to identify how record components – those constituent aspects of a digital record that may be managed, stored and used separately by the creator or the preserver – can be reassembled to generate an authentic copy of a record or reformulated per a user’s request as a customized output package. Issues relating to the creation, capture, management and preservation of adequate metadata are, therefore, integral to any research study addressing the reliability and authenticity of digital entities, regardless of the community, sector or institution within which they are being created. The InterPARES 2 Description Cross-Domain Group (DCD) examined the conceptualization, definitions, roles, and current functionality of metadata and archival description in terms of requirements generated by InterPARES 1. Because of the need to communicate the work of InterPARES in a meaningful way across not only other disciplines, but also different archival traditions; to interface with, evaluate and inform existing standards, practices and other research projects; and to ensure interoperability across the three focus areas of InterPARES 2, the Description Cross-Domain Group also addressed its research goals with reference to wider thinking about and developments in recordkeeping and metadata. InterPARES 2 addressed not only records, however, but a range of digital information objects (referred to as “entities” by InterPARES 2, but not to be confused with the term “entities” as used in metadata and database applications) that are the products and by-products of government, scientific and artistic activities that are carried out using dynamic, interactive or experiential digital systems. The nature of these entities was determined through a diplomatic analysis undertaken as part of extensive case studies of digital systems that were conducted by the InterPARES 2 Focus Groups. This diplomatic analysis established whether the entities identified during the case studies were records, non-records that nevertheless raised important concerns relating to reliability and authenticity, or “potential records.” To be determined to be records, the entities had to meet the criteria outlined by archival theory – they had to have a fixed documentary format and stable content. It was not sufficient that they be considered to be or treated as records by the creator. “Potential records” is a new construct that indicates that a digital system has the potential to create records upon demand, but does not actually fix and set aside records in the normal course of business.
The work of the Description Cross-Domain Group, therefore, addresses the metadata needs for all three categories of entities. Finally, since “metadata” as a term is used today so ubiquitously and in so many different ways by different communities that it is in peril of losing any specificity, part of the work of the DCD sought to name and type categories of metadata. It also addressed incentives for creators to generate appropriate metadata, as well as issues associated with the retention, maintenance and eventual disposition of the metadata that aggregates around digital entities over time.
Abstract:
Sea ice is a fundamental element of the global climate system, with numerous impacts on the polar environment. The ongoing drastic changes in the Earth’s sea ice cover highlight the necessity of monitoring the polar regions and systematically evaluating the quality of different numerical products. The main objective of this thesis is to improve our knowledge of the representation of Arctic and Antarctic sea ice in comprehensive global ocean reanalyses and coupled climate models. The dissertation explores (i) the Antarctic marginal ice zone (MIZ) and pack ice area in GREP, the ensemble mean of four global ocean reanalyses; (ii) the historical representation of the Arctic and Antarctic sea ice state in HighResMIP models; and (iii) the future evolution of Arctic sea ice in HighResMIP models. Global ocean reanalyses and GREP are found to adequately capture interannual and seasonal variability in both pack ice and MIZ areas at hemispheric and regional scales. The advantage of the ensemble-mean approach is demonstrated, as GREP smooths out the strengths and weaknesses of the single systems and provides the most consistent and reliable estimates. This work is intended to encourage the use of GREP in a wide range of applications. The analysis of sea ice representation in the coupled climate models shows no systematic impact of increased horizontal resolution. We argue that the few minor improvements in sea ice representation obtained with enhanced horizontal resolution are presumably not worth the major effort of costly computations. The thesis highlights the critical importance of distinguishing the MIZ from consolidated pack ice, both for investigating changes in sea ice distribution and for evaluating a product's performance. Considering that the MIZ is predicted to dominate the Arctic sea ice cover, the model physics parameterizations and sea ice rheology might require modifications. The results of this work can be useful for the modelling community.
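A conventional (though not universal) way to separate the MIZ from pack ice is by sea ice concentration, roughly 15-80% for the MIZ and above 80% for pack ice. The sketch below shows how an ensemble-mean concentration field and the two areas might be computed from gridded data; it uses synthetic fields rather than GREP output, and all thresholds and grid sizes are assumptions for illustration.

```python
import numpy as np

# Synthetic sea ice concentration fields (fraction 0-1) from four "reanalysis" members
rng = np.random.default_rng(0)
members = rng.uniform(0.0, 1.0, size=(4, 100, 100))   # 4 members on a 100x100 grid
cell_area_km2 = np.full((100, 100), 25.0)             # e.g. 5 km x 5 km grid cells

# Ensemble-mean concentration (a GREP-style product is an ensemble mean of members)
sic = members.mean(axis=0)

# Conventional thresholds: MIZ = 15-80% concentration, pack ice > 80%
miz_area = cell_area_km2[(sic >= 0.15) & (sic <= 0.80)].sum()
pack_area = cell_area_km2[sic > 0.80].sum()
print(f"MIZ area: {miz_area:.0f} km2, pack ice area: {pack_area:.0f} km2")
```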
Abstract:
The idea of Grid Computing originated in the nineties and found concrete applications in contexts like the SETI@home project, where a large number of computers (offered by volunteers) cooperated inside the Grid environment, performing distributed computations that analyzed radio signals in the search for extraterrestrial life. The Grid was composed of traditional personal computers but, with the emergence of the first mobile devices like Personal Digital Assistants (PDAs), researchers started theorizing the inclusion of mobile devices into Grid Computing; although impressive theoretical work was done, the idea was discarded due to the limitations (mainly technological) of mobile devices available at the time. Decades have passed, and mobile devices are now far more powerful and numerous than before, leaving a great amount of resources available on mobile devices, such as smartphones and tablets, untapped. Here we propose a solution for performing distributed computations over a Grid Computing environment that utilizes both desktop and mobile devices, exploiting the resources of day-to-day mobile users that would otherwise end up unused. The work starts with an introduction on what Grid Computing is, the evolution of mobile devices, the idea of integrating such devices into the Grid, and how to convince device owners to participate in the Grid. Then the tone becomes more technical, starting with an explanation of how Grid Computing actually works, followed by the technical challenges of integrating mobile devices into the Grid. Next, the model, which constitutes the solution offered by this study, is explained, followed by a chapter regarding the realization of a prototype that proves the feasibility of distributed computations over a Grid composed of both mobile and desktop devices. To conclude, future developments and ideas to improve this project are presented.
Abstract:
The present paper describes a novel, simple and reliable differential pulse voltammetric method for determining amitriptyline (AMT) in pharmaceutical formulations. It has been described by many authors that this antidepressant is electrochemically inactive at carbon electrodes. However, the procedure proposed herein consisted of electrochemically oxidizing AMT at an unmodified carbon nanotube paste electrode in the presence of 0.1 mol L⁻¹ sulfuric acid used as the electrolyte. At this concentration, the acid facilitated AMT electrooxidation through a one-electron transfer at 1.33 V vs. Ag/AgCl, as evidenced by the increase in peak current. Under optimized conditions (modulation time 5 ms, scan rate 90 mV s⁻¹, and pulse amplitude 120 mV), a linear calibration curve was constructed in the range of 0.0-30.0 μmol L⁻¹, with a correlation coefficient of 0.9991 and a limit of detection of 1.61 μmol L⁻¹. The procedure was successfully validated for intra- and inter-day precision and accuracy. Moreover, its feasibility was assessed through the analysis of commercial pharmaceutical formulations, and the results were compared with those of the UV-vis spectrophotometric method used as the standard analytical technique recommended by the Brazilian Pharmacopoeia.
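As a generic illustration of the calibration workflow described above (synthetic numbers, not the paper's data), a calibration line can be fit by least squares and a detection limit estimated with the common 3.3·s/slope rule, where s is the standard deviation of the regression residuals:

```python
import numpy as np

# Synthetic calibration data: concentration (umol/L) vs. peak current (uA)
conc = np.array([0.0, 5.0, 10.0, 15.0, 20.0, 25.0, 30.0])
current = np.array([0.02, 0.51, 1.04, 1.49, 2.03, 2.48, 3.01])

slope, intercept = np.polyfit(conc, current, 1)   # least-squares line
residuals = current - (slope * conc + intercept)
s_res = residuals.std(ddof=2)                     # standard error of the regression

r = np.corrcoef(conc, current)[0, 1]              # correlation coefficient
lod = 3.3 * s_res / slope                         # common LOD estimate

print(f"slope = {slope:.4f} uA per umol/L, r = {r:.4f}, LOD = {lod:.2f} umol/L")
```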
Abstract:
The purpose of this study was to assess the efficacy and reproducibility of the cytologic diagnosis of salivary gland tumors (SGTs) using fine-needle aspiration cytology (FNAC). The study aimed to determine diagnostic accuracy, sensitivity, and specificity, and to evaluate the extent of interobserver agreement. We retrospectively evaluated SGTs from the files of the Division of Pathology at the Clinics Hospital of São Paulo and Piracicaba Dental School between 2000 and 2006. We performed cytohistologic correlation in 182 SGTs. The sensitivity, specificity, positive predictive value, negative predictive value, and diagnostic accuracy were 94%, 100%, 100%, 100%, and 99%, respectively. The interobserver cytologic reproducibility showed significant statistical concordance (P < .0001). FNAC is an effective tool for performing a reliable preoperative diagnosis in SGTs and shows high diagnostic accuracy and consistent interobserver reproducibility. Further FNAC studies analyzing large samples of malignant SGTs and reactive salivary lesions are needed to confirm its accuracy for these lesions.
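For reference, the reported metrics all derive from a 2x2 table of cytologic result versus the histologic gold standard; the counts in the sketch below are invented for illustration and are not the study's data:

```python
# Hypothetical 2x2 table: FNAC result vs. histologic diagnosis (illustrative counts only)
tp, fp, fn, tn = 47, 0, 3, 132

sensitivity = tp / (tp + fn)              # true positives among diseased cases
specificity = tn / (tn + fp)              # true negatives among non-diseased cases
ppv = tp / (tp + fp)                      # positive predictive value
npv = tn / (tn + fn)                      # negative predictive value
accuracy = (tp + tn) / (tp + fp + fn + tn)

print(f"Se={sensitivity:.0%} Sp={specificity:.0%} "
      f"PPV={ppv:.0%} NPV={npv:.0%} Acc={accuracy:.0%}")
```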
Abstract:
Quantification of dermal exposure to pesticides in rural workers, which is used in risk assessment, can be performed with different techniques such as patches or whole-body evaluation. However, the wide variety of methods can jeopardize the process by producing disparate results, depending on the principles underlying sample collection. A critical review was thus performed on the main techniques for quantifying dermal exposure, calling attention to this issue and to the need to establish a single methodology for quantification of dermal exposure in rural workers. Such harmonization of different techniques should help achieve safer and healthier working conditions. Techniques that can provide reliable exposure data are an essential first step towards avoiding harm to workers' health.
Abstract:
Balsamic vinegar (BV) is a typical and valuable Italian product, appreciated worldwide thanks to its characteristic flavors and potential health benefits. Several studies have been conducted to assess the physicochemical and microbial compositions of BV, as well as its beneficial properties. Due to highly disseminated claims of antioxidant, antihypertensive and antiglycemic properties, BV is a known target for fraud and adulteration. For this reason, product authentication, certifying its origin (region or country) and thus its processing conditions, is becoming a growing concern. Striving for fraud reduction as well as quality and safety assurance, reliable analytical strategies to rapidly evaluate BV quality are of great interest, also from an economic point of view. This work employs silica plate laser desorption/ionization mass spectrometry (SP-LDI-MS) for fast chemical profiling of commercial BV samples with protected geographical indication (PGI) and for the identification of samples adulterated with low-priced vinegars, namely apple, alcohol and red/white wine vinegars.
Abstract:
Corynebacterium species (spp.) are among the most frequently isolated pathogens associated with subclinical mastitis in dairy cows. However, simple, fast, and reliable methods for the identification of species of the genus Corynebacterium are not currently available. This study aimed to evaluate the usefulness of matrix-assisted laser desorption/ionization time-of-flight mass spectrometry (MALDI-TOF MS) for identifying Corynebacterium spp. isolated from the mammary glands of dairy cows. Corynebacterium spp. were isolated from milk samples via microbiological culture (n=180) and were analyzed by MALDI-TOF MS and 16S rRNA gene sequencing. Using the MALDI-TOF MS methodology, 161 Corynebacterium spp. isolates (89.4%) were correctly identified at the species level, whereas 12 isolates (6.7%) were identified at the genus level. Most isolates identified as Corynebacterium bovis by 16S rRNA gene sequencing (n=156; 86.7%) were also identified as C. bovis with MALDI-TOF MS. Five Corynebacterium spp. isolates (2.8%) were not correctly identified at the species level with MALDI-TOF MS, and 2 isolates (1.1%) were considered unidentified because, despite having MALDI-TOF MS scores >2, only the genus level was correctly identified. Therefore, MALDI-TOF MS could serve as an alternative method for species-level diagnosis of bovine intramammary infections caused by Corynebacterium spp.
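As a quick check of the reported proportions (assuming, as the abstract implies, that all percentages are relative to the 180 cultured isolates):

```python
total = 180
counts = {
    "species-level identification": 161,
    "genus-level identification": 12,
    "misidentified at species level": 5,
    "unidentified despite score > 2": 2,
}
# 161/180 = 89.4%, 12/180 = 6.7%, 5/180 = 2.8%, 2/180 = 1.1%; the counts sum to 180
for label, n in counts.items():
    print(f"{label}: {n}/{total} = {n / total:.1%}")
```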