925 results for cache coherence protocols
Abstract:
This study compared dentine demineralization induced by in vitro and in situ models, and correlated dentine surface hardness (SH), cross-sectional hardness (CSH) and mineral content by transverse microradiography (TMR). Bovine dentine specimens (n = 15/group) were demineralized in vitro with the following: MC gel (6% carboxymethylcellulose gel and 0.1 M lactic acid, pH 5.0, 14 days); buffer I (0.05 M acetic acid solution with calcium, phosphate and fluoride, pH 4.5, 7 days); buffer II (0.05 M acetic acid solution with calcium and phosphate, pH 5.0, 7 days); and TEMDP (0.05 M lactic acid with calcium, phosphate and tetraethyl methyl diphosphonate, pH 5.0, 7 days). In the in situ study, 11 volunteers wore palatal appliances containing 2 bovine dentine specimens, protected with a plastic mesh to allow biofilm development. The volunteers dripped a 20% sucrose solution on each specimen 4 times a day for 14 days. In vitro and in situ lesions were analyzed using TMR and statistically compared by ANOVA. TMR and CSH/SH data were submitted to regression and correlation analysis (p < 0.05). The in situ model produced a deep lesion with a high R value, but with a thin surface layer. Regarding the in vitro models, MC gel produced only a shallow lesion, while buffers I and II as well as TEMDP induced a pronounced subsurface lesion with deep demineralization. The relationship between CSH and TMR was weak and not linear. The artificial dentine carious lesions induced by the different models differed significantly, which in turn might influence further de- and remineralization processes. Hardness analysis should not be interpreted with respect to dentine mineral loss.
Abstract:
Background: Catching an object is a complex movement that involves not only programming but also effective motor coordination. Such behavior is related to the activation and recruitment of cortical regions that participate in the sensorimotor integration process. This study aimed to elucidate the cortical mechanisms involved in anticipatory actions when performing the task of catching an object in free fall. Methods: Quantitative electroencephalography (qEEG) was recorded using a 20-channel EEG system while 20 healthy right-handed participants performed the catching ball task. We used EEG coherence analysis to investigate subdivisions of the alpha (8-12 Hz) and beta (12-30 Hz) bands, which are related to cognitive processing and sensorimotor integration. Results: We found main effects for the factor block: for alpha-1, coherence decreased from the first to the sixth block, whereas the opposite occurred for alpha-2 and beta-2, with coherence increasing across blocks. Conclusion: To perform our task successfully, which involved anticipatory processes (i.e. feedback mechanisms), subjects exhibited substantial involvement of sensorimotor and associative areas, possibly due to the organization of information needed to process visuospatial parameters and catch the falling object.
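To make the analysis concrete, the following is a minimal sketch of band-limited EEG coherence of the kind described above, assuming a 256 Hz sampling rate, illustrative band edges and surrogate signals in place of real electrode data; it is not the authors' pipeline.

```python
# Hedged sketch: magnitude-squared coherence between two EEG channels,
# averaged within alpha/beta sub-bands. Sampling rate, band edges and the
# surrogate signals are illustrative assumptions.
import numpy as np
from scipy.signal import coherence

fs = 256                                   # assumed sampling rate (Hz)
bands = {"alpha1": (8, 10), "alpha2": (10, 12), "beta2": (20, 30)}

def band_coherence(x, y, fs, bands, nperseg=512):
    """Mean magnitude-squared coherence of two signals within each band."""
    f, cxy = coherence(x, y, fs=fs, nperseg=nperseg)
    return {name: float(cxy[(f >= lo) & (f < hi)].mean())
            for name, (lo, hi) in bands.items()}

# Surrogate signals standing in for two electrode sites sharing a 10 Hz rhythm.
rng = np.random.default_rng(0)
t = np.arange(0, 10, 1 / fs)
common = np.sin(2 * np.pi * 10 * t)
site_a = common + 0.5 * rng.standard_normal(t.size)
site_b = common + 0.5 * rng.standard_normal(t.size)
print(band_coherence(site_a, site_b, fs, bands))
```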
Abstract:
The striatum, the largest component of the basal ganglia, is usually subdivided into associative, motor and limbic components. However, the electrophysiological interactions between these three subsystems during behavior remain largely unknown. We hypothesized that the striatum might be particularly active during exploratory behavior, which is presumably associated with increased attention. We investigated the modulation of local field potentials (LFPs) in the striatum during attentive wakefulness in freely moving rats. To this end, we implanted microelectrodes into different parts of the striatum of Wistar rats, as well as into the motor, associative and limbic cortices. We then used electromyograms to identify motor activity and analyzed the instantaneous frequency, power spectra and partial directed coherence during exploratory behavior. We observed fine modulation in the theta frequency range of striatal LFPs in 92.5 ± 2.5% of all epochs of exploratory behavior. Concomitantly, the theta power spectrum increased in all striatal channels (P < 0.001), and coherence analysis revealed strong connectivity (coefficients >0.7) between the primary motor cortex and the rostral part of the caudatoputamen nucleus, as well as among all striatal channels (P < 0.001). In conclusion, we observed a pattern of strong theta band activation in the entire striatum during attentive wakefulness, as well as strong coherence between the motor cortex and the entire striatum. We suggest that this activation reflects the integration of motor, cognitive and limbic systems during attentive wakefulness.
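As a small illustration of one ingredient of this analysis, the sketch below estimates the relative theta-band power of an LFP segment with Welch's method; the sampling rate, the 4-10 Hz theta range and the surrogate signal are assumptions, and partial directed coherence is not reproduced here.

```python
# Hedged sketch: fraction of LFP power in the theta band, via Welch's method.
# Sampling rate and band limits are illustrative assumptions.
import numpy as np
from scipy.signal import welch

def theta_power_ratio(lfp, fs, theta=(4.0, 10.0), total=(1.0, 100.0)):
    """Fraction of 1-100 Hz power falling inside the theta band."""
    f, pxx = welch(lfp, fs=fs, nperseg=int(2 * fs))
    band = pxx[(f >= theta[0]) & (f < theta[1])].sum()
    whole = pxx[(f >= total[0]) & (f < total[1])].sum()
    return band / whole

# Surrogate LFP: a 7 Hz oscillation buried in broadband noise.
fs = 1000
t = np.arange(0, 5, 1 / fs)
lfp = np.sin(2 * np.pi * 7 * t) + np.random.default_rng(1).standard_normal(t.size)
print(f"theta fraction: {theta_power_ratio(lfp, fs):.2f}")
```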
Abstract:
This study aimed to test different protocols for the extraction of microbial DNA from the coral Mussismilia harttii. Four commercial kits were tested, three of them based on methods for DNA extraction from soil (FastDNA SPIN Kit for Soil, MP Bio; PowerSoil DNA Isolation Kit, MoBio; ZR Soil Microbe DNA Kit, Zymo Research) and one kit for DNA extraction from plants (UltraClean Plant DNA Isolation Kit, MoBio). Five polyps of the same colony of M. harttii were macerated and aliquots were submitted to DNA extraction with the different kits. After extraction, the DNA was quantified and PCR-DGGE was used to study the molecular fingerprints of Bacteria and Eukarya. Among the four kits tested, the ZR Soil Microbe DNA Kit was the most efficient with respect to the amount of DNA extracted, yielding about three times more DNA than the other kits. We also observed a higher number and intensity of DGGE bands for both Bacteria and Eukarya with the same kit. Considering these results, we suggest that the ZR Soil Microbe DNA Kit is the best suited for the study of the microbial communities of corals.
Abstract:
The aim of the present study was to evaluate the effects of the PGF2α treatment given at the onset of a synchronization of ovulation protocol using a norgestomet (NORG) ear implant on ovarian follicular dynamics (Experiment 1) and pregnancy per AI (P/AI; Experiment 2) in cyclic (CL present) Bos indicus heifers. In Experiment 1, a total of 46 heifers were presynchronized using two consecutive doses of PGF2α 12 days apart. On the first day of the synchronization protocol the heifers received implants containing 3 mg of NORG and 2 mg of estradiol benzoate (EB). At the same time, heifers were randomly assigned to receive 150 mg of d-cloprostenol (n = 23; PGF2α) or no additional treatment (n = 23; Control). When the ear implants were removed 8 days later, all heifers received a PGF2α treatment and 1 mg of EB was given 24 h later. The follicular diameter and interval to ovulation were determined by transrectal ultrasonography. No effects of PGF2α treatment on the diameter of the largest follicle present were observed at implant removal (PGF2α = 9.8 ± 0.4 vs. Control = 10.0 ± 0.3 mm; P = 0.73) or after 24 h (PGF2α = 11.1 ± 0.4 vs. Control = 11.0 ± 0.4 mm; P = 0.83). No differences between treatments were observed in the time of ovulation after ear implant removal (PGF2α = 70.8 ± 1.2 vs. Control = 73.3 ± 0.9 h; P = 0.10) or in the ovulation rate (PGF2α = 87.0 vs. Control = 82.6%; P = 0.64). In Experiment 2, 280 cyclic heifers were synchronized using the same experimental design described above (PGF2α, n = 143; Control, n = 137), at a random day of the estrous cycle. All heifers received 300 IU of equine chorionic gonadotropin (eCG) and 0.5 mg of estradiol cypionate (as ovulatory stimulus) when the NORG ear implants were removed. Timed artificial insemination (TAI) was performed 48 h after implant removal and pregnancy diagnosis was conducted 30 days later. No effect of PGF2α treatment on P/AI was observed (PGF2α = 51.7 vs. Control = 57.7%; P = 0.29). In conclusion, PGF2α treatment at the onset of NORG-based protocols for the synchronization of ovulation did not alter the ovarian follicular responses or P/AI in cyclic Bos indicus beef heifers synchronized for TAI.
Abstract:
The process for obtaining polypyrrole-2-carboxylic acid (PPY-2-COOH) films in acetonitrile was investigated using cyclic voltammetry, electrochemical quartz crystal microgravimetry (EQCM) and infrared spectroscopy (FTIR). Different potential ranges were applied during the cyclic voltammetry experiments in order to obtain films both without and with controlled amounts of water added to the acetonitrile. The FTIR spectra of the films showed that cations and anions from the electrolyte solution were incorporated into the PPY-2-COOH structure, with a preferential adsorption of cations. After chemical immobilization of polyphenol oxidase (tyrosinase, PPO), PPY-2-COOH/PPO films were built for the amperometric detection of catechol, showing a linear response over the concentration range from 5.0 × 10⁻⁴ to 2.5 × 10⁻² mol L⁻¹.
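For readers interested in how such a sensor would be used quantitatively, here is a hedged sketch of a least-squares calibration over the reported linear range; the current values are invented placeholders, and only the concentration range comes from the abstract.

```python
# Hedged sketch: linear calibration of an amperometric catechol sensor.
# The concentration range is taken from the abstract; the currents are
# hypothetical placeholders, not measured data.
import numpy as np

conc = np.array([5.0e-4, 1.0e-3, 5.0e-3, 1.0e-2, 2.5e-2])   # mol L^-1
current = np.array([0.9, 1.8, 9.1, 18.2, 45.0])             # hypothetical µA

slope, intercept = np.polyfit(conc, current, 1)
print(f"sensitivity ≈ {slope:.1f} µA L mol⁻¹, intercept ≈ {intercept:.2f} µA")

# Estimate the concentration of an unknown sample from its measured current.
i_unknown = 12.0
c_unknown = (i_unknown - intercept) / slope
print(f"estimated concentration ≈ {c_unknown:.2e} mol L⁻¹")
```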
Abstract:
In this work we propose a new variational model for the consistent estimation of motion fields. The aim of this work is to develop appropriate spatio-temporal coherence models. In this sense, we propose two main contributions: a nonlinear flow constancy assumption, similar in spirit to the nonlinear brightness constancy assumption, which conveniently relates flow fields at different time instants; and a nonlinear temporal regularization scheme, which complements the spatial regularization and can cope with piecewise continuous motion fields. These contributions yield a consistent variational model, since all the energy terms except the spatial regularization are based on nonlinear warpings of the flow field. This model is more general than its spatial counterpart, provides more accurate solutions and preserves the continuity of optical flows in time. In the experimental results, we show that the method attains better results and, in particular, considerably improves the accuracy in the presence of large displacements.
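A generic spatio-temporal energy of the kind described can be written as follows; the notation, the penaliser Ψ and the weights α, β are assumptions for illustration, not the paper's exact formulation.

```latex
% Illustrative spatio-temporal variational energy (notation assumed):
% a nonlinear data term, a spatial regulariser, and a nonlinear temporal
% term that couples the flow at time t with the flow at time t+1 through
% a warping by the flow itself.
\[
\begin{aligned}
E(\mathbf{u}_t) = {} & \int_{\Omega} \Psi\!\left( \lvert I(\mathbf{x}+\mathbf{u}_t(\mathbf{x}),\, t+1) - I(\mathbf{x},\, t) \rvert^2 \right) d\mathbf{x}
  && \text{(data term)} \\
& {} + \alpha \int_{\Omega} \Psi\!\left( \lVert \nabla \mathbf{u}_t(\mathbf{x}) \rVert^2 \right) d\mathbf{x}
  && \text{(spatial regularisation)} \\
& {} + \beta \int_{\Omega} \Psi\!\left( \lVert \mathbf{u}_{t+1}(\mathbf{x}+\mathbf{u}_t(\mathbf{x})) - \mathbf{u}_t(\mathbf{x}) \rVert^2 \right) d\mathbf{x}
  && \text{(nonlinear temporal term)}
\end{aligned}
\]
```

The third term is the temporal part: the flow at time t+1 is compared with the flow at time t after warping, which is what allows the model to follow piecewise continuous motion rather than penalising every temporal change.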
Abstract:
Interaction protocols establish how different computational entities can interact with each other. The interaction can be aimed at the exchange of data, as in 'communication protocols', or at achieving some result, as in 'application protocols'. Moreover, with the increasing complexity of modern distributed systems, protocols are also used to control such complexity and to ensure that the system as a whole evolves with certain features. However, the extensive use of protocols has raised several issues, from the language for specifying them to the various verification aspects. Computational Logic provides models, languages and tools that can be effectively adopted to address such issues: its declarative nature can be exploited for a protocol specification language, while its operational counterpart can be used to reason upon such specifications. In this thesis we propose a proof-theoretic framework, called SCIFF, together with its extensions. SCIFF is based on Abductive Logic Programming and provides a formal specification language with a clear declarative semantics (based on abduction). The operational counterpart is given by a proof procedure that allows one to reason upon the specifications and to test the conformance of given interactions with respect to a defined protocol. Moreover, by suitably adapting the SCIFF framework, we propose solutions for addressing (1) the verification of protocol properties (g-SCIFF Framework), and (2) the a priori conformance verification of peers with respect to a given protocol (AlLoWS Framework). We also introduce an agent-based architecture, the SCIFF Agent Platform, where the same protocol specification can be used to program the interacting peers and to ease their implementation task.
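As a rough, non-authoritative illustration of expectation-based conformance checking in the spirit of SCIFF (not its abductive proof procedure), the toy sketch below checks a logged interaction against positive and negative expectations; the Event type and the example protocol are invented for the purpose.

```python
# Hedged sketch of run-time conformance checking, loosely inspired by
# SCIFF's positive (E) and negative (EN) expectations. This is a toy trace
# checker, not the SCIFF proof procedure.
from dataclasses import dataclass

@dataclass(frozen=True)
class Event:
    actor: str
    action: str

def conformant(trace, expected, forbidden):
    """A trace conforms if every expected event happened and no forbidden one did."""
    happened = set(trace)
    fulfilled = all(e in happened for e in expected)
    violated = any(e in happened for e in forbidden)
    return fulfilled and not violated

# Toy protocol: a request must eventually be answered by bob, and carol
# must not answer in his place.
trace = [Event("alice", "request"), Event("bob", "answer")]
expected = {Event("bob", "answer")}
forbidden = {Event("carol", "answer")}
print(conformant(trace, expected, forbidden))   # True
```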
Abstract:
The scaling down of transistor technology allows microelectronics manufacturers such as Intel and IBM to build ever more sophisticated systems on a single microchip. The classical interconnection solutions based on shared buses or direct connections between the modules of the chip are becoming obsolete, as they struggle to sustain the increasingly tight bandwidth and latency constraints that these systems demand. The most promising solution for future chip interconnects is the Network on Chip (NoC). NoCs are networks composed of routers and channels used to interconnect the different components installed on a single microchip. Examples of advanced processors based on NoC interconnects are the IBM Cell processor, composed of eight CPUs and installed in the Sony PlayStation 3, and the Intel Teraflops project, composed of 80 independent (simple) microprocessors. On-chip integration is becoming popular not only in the Chip Multi Processor (CMP) research area but also in the wider and more heterogeneous world of Systems on Chip (SoC). SoCs comprise all the electronic devices that surround us, such as cell phones, smartphones, home embedded systems, automotive systems, set-top boxes, etc. SoC manufacturers such as ST Microelectronics, Samsung and Philips, and also universities such as Bologna University, M.I.T. and Berkeley, are all proposing proprietary frameworks based on NoC interconnects. These frameworks help engineers in the switch of design methodology and speed up the development of new NoC-based systems on chip. In this thesis we propose an introduction to CMP and SoC interconnection networks. Then, focusing on SoC systems, we propose:
• a detailed, simulation-based analysis of the Spidergon NoC, an ST Microelectronics solution for SoC interconnects. The Spidergon NoC differs from many classical solutions inherited from the parallel computing world. We analyze this NoC topology and its routing algorithms in detail, and furthermore propose Equalized, a new routing algorithm designed to optimize the use of the resources of the network while also increasing its performance;
• a methodology flow based on modified publicly available tools that, combined, can be used to design, model and analyze any kind of System on Chip;
• a detailed analysis of an ST Microelectronics proprietary transport-level protocol that the author of this thesis helped to develop;
• a simulation-based comprehensive comparison of different network interface designs proposed by the author and the researchers at the AST lab, in order to integrate shared-memory and message-passing based components on a single System on Chip;
• a powerful and flexible solution to address the timing closure exception issue in the design of synchronous Networks on Chip. Our solution is based on relay-station repeaters and allows us to reduce the power and area demands of NoC interconnects while also reducing their buffer needs;
• a solution to simplify the design of NoCs while also increasing their performance and reducing their power and area consumption. We propose to replace complex and slow virtual channel-based routers with multiple, flexible, small Multi Plane routers. This solution allows us to reduce the area and power dissipation of any NoC while also increasing its performance, especially when resources are reduced.
This thesis has been written in collaboration with the Advanced System Technology laboratory in Grenoble, France, and the Computer Science Department at Columbia University in the City of New York.
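To give a flavour of the topology-aware routing analysed in the thesis, here is a simplified sketch of an across-first style routing decision on a Spidergon-like ring (N nodes, each also linked to the node N/2 away); it is a generic illustration under stated assumptions, not the ST Microelectronics implementation nor the proposed Equalized algorithm.

```python
# Hedged sketch of an "across-first" routing decision on a Spidergon-like
# topology: N nodes on a ring, each with an extra link to the node N/2 away.
# Generic illustration only, not the Spidergon or Equalized algorithms.
def next_hop(src: int, dst: int, n: int) -> int:
    """Return the neighbour of `src` to which a packet for `dst` is forwarded."""
    assert n % 4 == 0 and 0 <= src < n and 0 <= dst < n
    if src == dst:
        return src                      # already delivered
    cw = (dst - src) % n                # clockwise hop distance
    ccw = (src - dst) % n               # counter-clockwise hop distance
    if cw <= n // 4:
        return (src + 1) % n            # short clockwise path: stay on the ring
    if ccw <= n // 4:
        return (src - 1) % n            # short counter-clockwise path
    return (src + n // 2) % n           # far destination: take the across link first

# Example on a 16-node network: node 0 sends towards node 7 via the across link.
print(next_hop(0, 7, 16))   # 8; from node 8 the packet then walks the ring to 7
```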
Abstract:
The research performed during the PhD candidature was intended to evaluate the quality of white wines as a function of the reduction in SO2 use during the first steps of the winemaking process. In order to investigate the mechanism and intensity of the interactions occurring between lysozyme and the principal macro-components of musts and wines, a series of experiments on model wine solutions was undertaken, focusing attention on polyphenols, SO2, oenological tannins, pectins, ethanol and sugar components. In the second part of this research program, a series of conventional sulphite-added vinifications was compared to vinifications in which sulphur dioxide was replaced by lysozyme, in order to define potential winemaking protocols suitable for the production of SO2-free wines. To reach this final goal, the technological performance of two selected yeast strains with a low aptitude to produce SO2 during fermentation was also evaluated. The data obtained suggest that the addition of lysozyme and oenological tannins during alcoholic fermentation could represent a promising alternative to the use of sulphur dioxide and a reliable starting point for the production of SO2-free wines. The different vinification protocols studied influenced the volatile profile of the wines at the end of alcoholic fermentation, especially with regard to alcohols and ethyl esters, partly as a consequence of the yeast's response to the presence or absence of sulphites during fermentation, contributing in different ways to the sensory profiles of the wines. In fact, the amino acid analysis showed that lysozyme can affect nitrogen consumption as a function of the yeast strain used in fermentation. During bottle storage, the evolution of volatile compounds is affected by the presence of SO2 and oenological tannins, confirming their positive role in scavenging oxygen and keeping the amounts of esters above certain levels, thus avoiding a decline in wine quality. Even though a natural decrease in the phenolic profiles was found, due to oxidation caused by the oxygen dissolved in the medium during the storage period, the presence of SO2 together with tannins counteracted the decay of the phenolic content present at the end of fermentation. Tannins also played a central role in preserving the polyphenolic profile of the wines during storage, confirming their antioxidant properties and their action as reductants. Our study of the fundamental chemistry relevant to the oxidative phenolic spoilage of white wines has demonstrated the suitability of glutathione to inhibit the production of yellow xanthylium cation pigments generated from flavanols and glyoxylic acid at the concentrations at which it typically exists in wine. The ability of glutathione to bind glyoxylic acid rather than acetaldehyde may enable glutathione to be used as a 'switch' for glyoxylic acid-induced polymerisation mechanisms, as opposed to the equivalent acetaldehyde polymerisation, in processes such as micro-oxidation. Further research is required to assess the ability of glutathione to prevent xanthylium cation production during the in situ production of glyoxylic acid and in the presence of sulphur dioxide.
Abstract:
The aim of this thesis was to describe the development of motion analysis protocols for applications on the upper and lower limbs, using inertial sensor-based systems. Inertial sensor-based systems are relatively recent, so knowledge and development of methods and algorithms for their use for clinical purposes are limited compared with stereophotogrammetry. However, their advantages in terms of low cost, portability and small size are a valid reason to follow this direction. When developing motion analysis protocols based on inertial sensors, attention must be given to several aspects, such as the accuracy and reliability of inertial sensor-based systems. The need to develop specific algorithms/methods and software for using these systems in specific applications is as important as the development of the motion analysis protocols based on them. For this reason, the goal of the 3-year research project described in this thesis was pursued first of all by properly designing the protocols based on inertial sensors, exploring and developing the features suitable for each specific application of the protocols. The use of optoelectronic systems was necessary because they provided a gold-standard, accurate measurement, used as a reference for the validation of the protocols based on inertial sensors. The protocols described in this thesis can be particularly helpful for rehabilitation centers in which the high cost of instrumentation or the limited working areas do not allow the use of stereophotogrammetry. Moreover, many applications requiring upper and lower limb motion analysis to be performed outside the laboratory will benefit from these protocols, for example gait analysis performed along corridors. Outside the buildings, steady-state walking and the behavior of prosthetic devices when encountering slopes or obstacles during walking can also be assessed. The application of inertial sensors on lower limb amputees presents conditions that are challenging for magnetometer-based systems, due to the ferromagnetic materials commonly adopted for the construction of hydraulic components or motors. The INAIL Prostheses Centre stimulated and, together with Xsens Technologies B.V., supported the development of additional methods for improving the accuracy of the MTx in measuring the 3D kinematics of lower limb prostheses, with the results provided in this thesis. In the author's opinion, this thesis and the motion analysis protocols based on inertial sensors described here are a demonstration of how close collaboration between industry, clinical centers and research laboratories can improve knowledge and exchange know-how, with the common goal of developing new application-oriented systems.
Abstract:
Cost, performance and availability considerations are forcing even the most conservative high-integrity embedded real-time systems industry to migrate from simple hardware processors to ones equipped with caches and other acceleration features. This migration disrupts the practices and solutions that industry had developed and consolidated over the years to perform timing analysis. Industries that are confident in the efficiency and effectiveness of their verification and validation processes for old-generation processors do not have sufficient insight into the effects of the migration to cache-equipped processors. Caches are perceived as an additional source of complexity, which has the potential to shatter the guarantees of cost- and schedule-constrained qualification of their systems. The current industrial approach to timing analysis is ill-equipped to cope with the variability incurred by caches. Conversely, the application of advanced WCET analysis techniques to real-world industrial software, developed without analysability in mind, is hardly feasible. We propose a development approach aimed at minimising cache jitter, as well as at enabling the application of advanced WCET analysis techniques to industrial systems. Our approach builds on: (i) identification of those software constructs that may impede or complicate timing analysis in industrial-scale systems; (ii) elaboration of practical means, under the model-driven engineering (MDE) paradigm, to enforce the automated generation of software that is analysable by construction; (iii) implementation of a layout optimisation method to remove cache jitter stemming from the software layout in memory, with the intent of facilitating incremental software development, which is of high strategic interest to industry. The integration of those constituents in a structured approach to timing analysis achieves two interesting properties: the resulting software is analysable from the earliest releases onwards (as opposed to becoming so only when the system is final) and is more easily amenable to advanced timing analysis by construction, regardless of the system scale and complexity.
Abstract:
Biomedical analyses are becoming increasingly complex, with respect to both the type of data to be produced and the procedures to be executed. This trend is expected to continue in the future. The development of information and protocol management systems that can sustain this challenge is therefore becoming an essential enabling factor for all actors in the field. The use of custom-built solutions that require the biology domain expert to acquire or procure software engineering expertise for the development of the laboratory infrastructure is not fully satisfactory, because it incurs undesirable mutual knowledge dependencies between the two camps. We propose instead an infrastructure concept that enables the domain experts to express laboratory protocols using proper domain knowledge, free from the incidence and mediation of software implementation artefacts. In the system that we propose, this is made possible by basing the modelling language on an authoritative domain-specific ontology and then using modern model-driven architecture technology to transform the user models into software artefacts ready for execution on a multi-agent-based execution platform specialized for biomedical laboratories.
Abstract:
The Internet of Things (IoT) is the next industrial revolution: we will interact naturally with real and virtual devices as a key part of our daily life. This technology shift is expected to be greater than the Web and Mobile combined. As extremely different technologies are needed to build connected devices, the Internet of Things field is a junction between electronics, telecommunications and software engineering. Internet of Things application development happens in silos, often using proprietary and closed communication protocols. There is a common belief that only by solving the interoperability problem can we have a real Internet of Things. After a deep analysis of the IoT protocols, we identified a set of primitives for IoT applications. We argue that each IoT protocol can be expressed in terms of those primitives, thus solving the interoperability problem at the application protocol level. Moreover, the primitives are network- and transport-independent and make no assumption in that regard. This dissertation presents our implementation of an IoT platform: the Ponte project. Privacy issues follow the rise of the Internet of Things: it is clear that the IoT must ensure resilience to attacks, data authentication, access control and client privacy. We argue that it is not possible to solve the privacy issue without solving the interoperability problem: enforcing privacy rules implies the need to limit and filter the data delivery process. However, filtering data requires knowledge of the format and the semantics of the data. After an analysis of the possible data formats and representations for the IoT, we identify JSON-LD and the Semantic Web as the best solution for IoT applications. Finally, this dissertation presents our approach to increasing the throughput of semantic data filtering by a factor of ten.
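To make the idea of protocol-level primitives concrete, the sketch below expresses a toy backend through a small shared primitive interface; the primitive names and the in-memory 'broker' are illustrative assumptions, not the actual primitive set or API of the Ponte project.

```python
# Hedged sketch: expressing different IoT application protocols through a
# small set of shared primitives. The primitive names and the in-memory
# backend are illustrative assumptions, not the Ponte project's API.
from abc import ABC, abstractmethod

class IoTAdapter(ABC):
    """Protocol adapter exposing a protocol-independent primitive set."""

    @abstractmethod
    def publish(self, topic: str, payload: bytes) -> None: ...
    @abstractmethod
    def subscribe(self, topic: str, callback) -> None: ...
    @abstractmethod
    def read(self, topic: str) -> bytes: ...
    @abstractmethod
    def write(self, topic: str, payload: bytes) -> None: ...

class InMemoryBroker(IoTAdapter):
    """Toy backend showing how one concrete 'protocol' fills the primitives."""
    def __init__(self):
        self.retained, self.subscribers = {}, {}

    def publish(self, topic, payload):
        self.retained[topic] = payload
        for cb in self.subscribers.get(topic, []):
            cb(topic, payload)

    def subscribe(self, topic, callback):
        self.subscribers.setdefault(topic, []).append(callback)

    def read(self, topic):
        return self.retained.get(topic, b"")

    def write(self, topic, payload):
        self.publish(topic, payload)    # here, write simply maps onto publish

broker = InMemoryBroker()
broker.subscribe("home/temp", lambda t, p: print(t, p))
broker.publish("home/temp", b"21.5")
```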
Abstract:
Wireless sensor networks (WSNs) consist of a large number of sensor nodes characterized by low power constraints, limited transmission range and limited computational capabilities [1][2]. The cost of these devices is constantly decreasing, making it possible to use a large number of sensor devices in a wide array of commercial, environmental, military and healthcare fields. Some of these applications involve placing the sensors evenly spaced along a straight line, for example along roads, bridges, tunnels, water catchments and water pipelines, city drainages, and oil and gas pipelines, forming a special class of these networks which we define as Linear Wireless Networks (LWNs). In LWNs, data transmission happens hop by hop from the source to the destination, through a route composed of multiple relays. The peculiar topology of LWNs motivates the design of specialized protocols that take advantage of the linearity of such networks in order to increase reliability, communication efficiency, energy savings and network lifetime, and to minimize the end-to-end delay [3]. In this thesis a novel contention-based Medium Access Control (MAC) protocol called L-CSMA, specifically devised for LWNs, is presented. The basic idea of L-CSMA is to assign different priorities to nodes based on their position along the line. The priority is assigned in terms of sensing duration, whereby nodes closer to the destination are assigned a shorter sensing time compared to the rest of the nodes and hence a higher priority. This mechanism speeds up the transmission of packets which are already in the path, making the transmission flow more efficient. Using the NS-3 simulator, the performance of L-CSMA in terms of packet success rate (the percentage of packets that reach the destination) and throughput is compared with that of the IEEE 802.15.4 MAC protocol, the de facto standard for wireless sensor networks. In general, L-CSMA outperforms the IEEE 802.15.4 MAC protocol.
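A minimal sketch of the position-based priority idea: the carrier-sensing time grows with a node's hop distance from the destination, so nodes nearer the sink win contention and packets already close to delivery are forwarded first; the timing constants are illustrative assumptions, not the parameters evaluated in the thesis.

```python
# Hedged sketch of L-CSMA's position-based priority: nodes closer to the
# destination sense the channel for a shorter time and therefore tend to win
# contention. The timing constants are illustrative, not the thesis values.
def sensing_duration(hops_to_destination: int,
                     base_us: float = 128.0,
                     slot_us: float = 64.0) -> float:
    """Carrier-sensing time (µs) a node waits before it may transmit."""
    return base_us + slot_us * hops_to_destination

# On a 5-hop line, the relay one hop from the sink senses for the shortest time.
for hops in range(1, 6):
    print(f"{hops} hop(s) from sink -> sense for {sensing_duration(hops):.0f} µs")
```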