Abstract:
We performed an initial assessment of an alternative pressurized, intraventilated (PIV) caging system for laboratory mice that uses direct-current microfans to achieve cage pressurization and ventilation. Twenty-nine pairs of female SPF BALB/c mice were used, with 19 experimental pairs kept in PIV cages and 10 control pairs kept in regular filter-top (FT) cages. Both groups were housed in a standard housing room with a conventional atmospheric control system. For both systems, intracage temperatures were in equilibrium with ambient room temperature. PIV cages showed a significant difference in pressure between days 1 and 8. Air speed (and consequently airflow rate) and the number of hourly air changes in the PIV cages showed decreasing trends. In both systems, ammonia concentrations increased over time, with significant differences between groups starting on day 1. Overall, the data revealed that intracage pressurization and ventilation using microfans is a simple, reliable system with low cost, low maintenance requirements, and a low incidence of failures. Further experiments are needed to determine the potential influence of this system on reproductive performance and pulmonary integrity in mice.
Abstract:
Twenty years after the discovery of the first planets outside our solar system, the current exoplanetary population includes more than 700 confirmed planets around main-sequence stars. Approximately 50% belong to multiple-planet systems in very diverse dynamical configurations, from two-planet hierarchical systems to multiple resonances that could only have been attained as the consequence of a smooth large-scale orbital migration. The first part of this paper reviews the main techniques employed for the detection and orbital characterization of multiple-planet systems, from the (now) classical radial velocity (RV) method to the use of transit time variations (TTV) for the identification of additional planetary bodies orbiting the same star. In the second part we discuss the dynamical evolution of multi-planet systems due to their mutual gravitational interactions. We analyze possible modes of motion for hierarchical, secular, or resonant configurations, and the stability criteria that can be defined in each case. In some cases the dynamics can be well approximated by simple analytical expressions for the Hamiltonian function, while other configurations can only be studied with semi-analytical or numerical tools. In particular, we show how mean-motion resonances can generate complex structures in phase space, where different libration islands and circulation domains are separated by chaotic layers. In all cases we use real exoplanetary systems as working examples.
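For orientation on the RV technique reviewed above, the measurable stellar wobble is customarily summarized by the standard two-body semi-amplitude expression (a textbook result quoted here for context, not a formula specific to this paper):

```latex
% Radial-velocity semi-amplitude of a star of mass M_* hosting a planet of
% mass m_p on an orbit of period P, eccentricity e and inclination i;
% G is the gravitational constant.
K = \left(\frac{2\pi G}{P}\right)^{1/3}
    \frac{m_p \sin i}{\left(M_\ast + m_p\right)^{2/3}}
    \frac{1}{\sqrt{1 - e^2}}
```

The sin i degeneracy this expression exhibits is one reason TTV measurements are a valuable complement when characterizing multiple-planet systems.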
Abstract:
We present a simultaneous optical signal-to-noise ratio (OSNR) and differential group delay (DGD) monitoring method based on degree-of-polarization (DOP) measurements in optical communication systems. For the first time in the literature (to the best of our knowledge), the proposed scheme is demonstrated to independently and simultaneously extract OSNR and DGD values from DOP measurements. This is possible because the OSNR is related to the maximum DOP, while the DGD is related to the ratio between the maximum and minimum values of DOP. We experimentally measured OSNR and DGD in the ranges 10 to 30 dB and 0 to 90 ps, respectively, for a 10 Gb/s non-return-to-zero signal. A theoretical analysis of the DOP accuracy needed to measure low DGD values and high OSNRs is carried out, showing that current polarimeter technology is capable of yielding an OSNR measurement within 1 dB accuracy for OSNR values up to 34 dB, while the DGD error is limited to 1.5% for DGD values above 10 ps. For the first time to our knowledge, the technique was demonstrated to accurately measure first-order polarization mode dispersion (PMD) in the presence of a high value of second-order PMD (as high as 2071 ps²). (C) 2012 Optical Society of America
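The stated link between OSNR and the maximum DOP admits a simple first-order sketch. If the signal is modeled as fully polarized and the ASE noise as unpolarized within the measurement bandwidth, then DOP = Ps/(Ps + Pn), giving OSNR = DOP/(1 - DOP) in linear units. The snippet below illustrates only this idealized relation; it ignores PMD-induced depolarization and bandwidth normalization, which the paper's calibrated method would account for:

```python
import math

def osnr_db_from_dop(dop_max: float) -> float:
    """Estimate OSNR from the maximum measured DOP.

    Assumes a fully polarized signal plus unpolarized ASE noise within the
    measurement bandwidth, so DOP = Ps / (Ps + Pn) and hence
    OSNR_linear = DOP / (1 - DOP). Idealized: neglects PMD depolarization.
    """
    if not 0.0 < dop_max < 1.0:
        raise ValueError("DOP must lie strictly between 0 and 1")
    osnr_linear = dop_max / (1.0 - dop_max)
    return 10.0 * math.log10(osnr_linear)

# Example: a measured maximum DOP of 0.99 corresponds to roughly 20 dB OSNR.
print(f"{osnr_db_from_dop(0.99):.1f} dB")
```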
Abstract:
A new tri-electrode probe is presented and applied to local electrochemical impedance spectroscopy (LEIS) measurements. As opposed to two-probe systems, the three-probe one allows measurement of not only the normal but also the radial contributions of the local current density to the local impedance values. The cases of a blocking electrode and an electrode with a faradaic reaction are discussed from the theoretical point of view for a disk electrode. Numerical simulations and experimental results are compared for the ferri/ferrocyanide electrode reaction at a Pt working-electrode disk. At the centre of the disk, the impedance taking into account both normal and radial contributions was in good agreement with the local impedance measured in terms of only the normal contribution. At the periphery of the electrode, the impedance taking into account both contributions differed significantly from the local impedance measured in terms of only the normal contribution. The radial impedance results at the periphery are in good agreement with the usual explanation that the larger current density there is attributable to the geometry of the electrode, which exhibits greater accessibility at the electrode edge. (C) 2011 Elsevier Ltd. All rights reserved.
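As background on the measurement principle (a schematic of standard LEIS practice, not the paper's exact expressions): each local current-density component is inferred from the AC potential difference between two probe tips separated by a known distance in an electrolyte of conductivity κ, and the tri-electrode arrangement supplies one such difference along the normal direction and one along the radial direction:

```latex
% Schematic LEIS relations: subscripts n and r denote the normal and radial
% directions, \Delta\Phi the potential difference between probe tips a
% distance d apart, \kappa the electrolyte conductivity, and \tilde{V} the
% applied potential perturbation; the local impedance follows from the
% chosen combination of the two current-density components.
i_n = \kappa\,\frac{\Delta\Phi_n}{d_n}, \qquad
i_r = \kappa\,\frac{\Delta\Phi_r}{d_r}, \qquad
z_{\mathrm{loc}} = \frac{\tilde{V}}{\tilde{i}_{\mathrm{loc}}}
```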
Abstract:
Background and aims: Although studies have shown an association of birth weight (BW) and adult body mass index (BMI) with insulin sensitivity in adults, there is limited evidence that BW is associated with insulin secretion. We assessed the associations of BW and current BMI with insulin sensitivity and secretion in young Latin American adults. Methods and results: Two birth cohorts were studied, one from Ribeirao Preto, Brazil, comprising 1984 participants aged 23-25 years, and another from Limache, Chile, comprising 965 participants aged 22-28 years. Weight and height at birth and current fasting plasma glucose and insulin levels were measured. Insulin sensitivity (HOMA%S) and secretion (HOMA%beta) were estimated using the Homeostatic Model Assessment (HOMA2). Multiple linear regression analyses were carried out to test the associations of BW and adult BMI z-scores with log HOMA%S and log HOMA%beta. The BW z-score was associated with HOMA%S in both populations, and with HOMA%beta in Ribeirao Preto, when the adult BMI z-score was included in the model. The BW z-score was associated with decreasing insulin secretion even without adjusting for adult BMI, but only in Ribeirao Preto. The BMI z-score was associated with low HOMA%S and high HOMA%beta. No interactions between BW and BMI z-scores on insulin sensitivity were observed. Conclusions: This study supports the finding that BW may affect insulin sensitivity and secretion in young adults. The effect size of BW on insulin status is small in comparison with that of current BMI. (C) 2010 Elsevier B.V. All rights reserved.
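For readers unfamiliar with the indices: HOMA2 values are produced by the iterative HOMA2 Calculator rather than by a closed-form formula, but the original HOMA1 approximations convey the idea (FPI is fasting plasma insulin in µU/mL, FPG fasting plasma glucose in mmol/L); they are quoted here for intuition only:

```latex
% HOMA1 approximations (the study itself used the HOMA2 computer model)
\mathrm{HOMA\text{-}IR} = \frac{\mathrm{FPI} \times \mathrm{FPG}}{22.5}, \qquad
\mathrm{HOMA}\,\%S = \frac{100}{\mathrm{HOMA\text{-}IR}}, \qquad
\mathrm{HOMA}\,\%B = \frac{20 \times \mathrm{FPI}}{\mathrm{FPG} - 3.5}
```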
Abstract:
Current scientific applications produce large amounts of data, and the processing, handling, and analysis of such data require large-scale computing infrastructures such as clusters and grids. In this area, studies aim at improving the performance of data-intensive applications by optimizing data accesses. To achieve this goal, distributed storage systems have adopted techniques such as data replication, migration, distribution, and access parallelism. However, the main drawback of those studies is that they do not take application behavior into account when optimizing data access. This limitation motivated this paper, which applies strategies for the online prediction of application behavior in order to optimize data access operations on distributed systems, without requiring any information on past executions. To accomplish this, our approach organizes application behaviors as time series and then analyzes and classifies those series according to their properties. Based on these properties, the approach selects modeling techniques to represent the series and perform predictions, which are later used to optimize data access operations. This new approach was implemented and evaluated using the OptorSim simulator, sponsored by the LHC-CERN project and widely employed by the scientific community. Experiments confirm that this new approach reduces application execution time by about 50 percent, especially when handling large amounts of data.
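The pipeline just described (treat access behavior as a time series, classify it by its properties, fit a matching model, predict upcoming accesses) can be sketched compactly. The fragment below is a deliberately minimal illustration using a least-squares autoregressive model; the paper's classifier and model portfolio are richer, and the trace values are invented:

```python
import numpy as np

def fit_ar(series: np.ndarray, order: int = 3) -> np.ndarray:
    """Least-squares fit of an AR(order) model to a 1-D series."""
    X = np.column_stack([series[i:len(series) - order + i]
                         for i in range(order)])
    y = series[order:]
    coeffs, *_ = np.linalg.lstsq(X, y, rcond=None)
    return coeffs

def predict_next(series: np.ndarray, coeffs: np.ndarray) -> float:
    """One-step-ahead prediction from the last len(coeffs) samples."""
    return float(series[-len(coeffs):] @ coeffs)

# Toy trace of per-interval read volumes (MB); in the described approach the
# prediction would drive data-access optimizations, e.g. staging a replica
# ahead of a predicted access burst.
trace = np.array([10, 12, 15, 20, 18, 22, 27, 25, 30, 34], dtype=float)
coeffs = fit_ar(trace, order=3)
print(f"predicted next read volume: {predict_next(trace, coeffs):.1f} MB")
```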
Abstract:
The development of cloud computing services is speeding up the rate at which organizations outsource their computational services or sell their idle computational resources. Even though migrating to the cloud remains a tempting trend from a financial perspective, there are several other aspects that must be taken into account by companies before they decide to do so. One of the most important aspects is security: while some cloud computing security issues are inherited from the solutions adopted to create such services, many new security questions particular to these solutions also arise, including those related to how the services are organized and which kinds of services and data can be placed in the cloud. Aiming to give a better understanding of this complex scenario, in this article we identify and classify the main security concerns and solutions in cloud computing, and propose a taxonomy of security in cloud computing, giving an overview of the current status of security in this emerging technology.
Abstract:
Drug Reaction with Eosinophilia and Systemic Symptoms (DRESS) syndrome, also known as Drug-Induced Hypersensitivity Syndrome, presents clinically as an extensive mucocutaneous rash accompanied by fever, lymphadenopathy, hepatitis, and hematologic abnormalities with eosinophilia and atypical lymphocytes; it may involve other organs with eosinophilic infiltration, causing damage to several systems, especially the kidneys, heart, lungs, and pancreas. Recognition of this syndrome is of paramount importance, since the mortality rate is about 10% to 20% and specific therapy may be necessary. The pathogenesis is related to specific drugs (especially the aromatic anticonvulsants), altered immune response, sequential reactivation of herpes virus, and association with HLA alleles. Early recognition of the syndrome and withdrawal of the offending drug are the most important and essential steps in the treatment of affected patients. Corticosteroids are the basis of treatment and may be combined with intravenous immunoglobulin and, in selected cases, ganciclovir. This article reviews the current concepts involving this important manifestation of adverse drug reaction.
Abstract:
In recent years, Intelligent Tutoring Systems (ITSs) have proven a very successful way to improve the learning experience, but many issues must be addressed before this technology can be considered mature. One of the main problems within Intelligent Tutoring Systems is content authoring: the knowledge acquisition and manipulation processes are difficult tasks because they require specialised skills in computer programming and knowledge engineering. In this thesis we discuss a general framework for knowledge management in an Intelligent Tutoring System and propose a mechanism based on first-order data mining to partially automate the acquisition of the knowledge to be used by the ITS during the tutoring process. Such a mechanism can be applied in constraint-based tutors and in pseudo-cognitive tutors. We design and implement part of the proposed architecture, mainly the module for knowledge acquisition from examples based on first-order data mining. We then show that the algorithm can be applied to at least two different domains: first-order algebraic equations and some topics of the C programming language. Finally, we discuss the limitations of the current approach and possible improvements to the whole framework.
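To give a flavour of the knowledge being acquired: in a constraint-based tutor, domain knowledge is encoded as (relevance condition, satisfaction condition) pairs over student solutions, and the first-order data-mining module proposed here aims to learn such conditions from worked examples. The pair below is a hand-written illustration for linear equations of the form a*x + b = c, not output of the implemented system:

```python
# A constraint in a constraint-based tutor is a pair of predicates:
# whenever the relevance condition holds for a solution step, the
# satisfaction condition must hold too, otherwise the tutor flags an error.
# Illustrative only: hand-written for equations of the form a*x + b = c.

def relevance(step: dict) -> bool:
    """The student moved the constant term across the equals sign."""
    return step["action"] == "move_constant"

def satisfaction(step: dict) -> bool:
    """Moving a term across '=' must flip its sign."""
    return step["new_sign"] == -step["old_sign"]

step = {"action": "move_constant", "old_sign": +1, "new_sign": +1}
if relevance(step) and not satisfaction(step):
    print("Feedback: when moving a term across '=', change its sign.")
```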
Abstract:
The sustained demand for faster, more powerful chips has been met by the availability of chip manufacturing processes allowing for the integration of increasing numbers of computation units onto a single die. The resulting outcome, especially in the embedded domain, has often been called system-on-chip (SoC) or multi-processor system-on-chip (MPSoC). MPSoC design brings to the foreground a large number of challenges, one of the most prominent of which is the design of the chip interconnection. With the number of on-chip blocks presently ranging in the tens, and quickly approaching the hundreds, the novel issue of how best to provide on-chip communication resources is clearly felt. Networks-on-chip (NoCs) are the most comprehensive and scalable answer to this design concern. By bringing large-scale networking concepts to the on-chip domain, they guarantee a structured answer to present and future communication requirements. The point-to-point connection and packet switching paradigms they involve are also of great help in minimizing wiring overhead and physical routing issues. However, as with any technology of recent inception, NoC design is still an evolving discipline. Several main areas of interest require deep investigation for NoCs to become viable solutions:
• The design of the NoC architecture needs to strike the best tradeoff among performance, features and the tight area and power constraints of the on-chip domain.
• Simulation and verification infrastructure must be put in place to explore, validate and optimize the NoC performance.
• NoCs offer a huge design space, thanks to their extreme customizability in terms of topology and architectural parameters. Design tools are needed to prune this space and pick the best solutions.
• Even more so given their global, distributed nature, it is essential to evaluate the physical implementation of NoCs to assess their suitability for next-generation designs and their area and power costs.
This dissertation focuses on all of the above points, by describing a NoC architectural implementation called ×pipes; a NoC simulation environment within a cycle-accurate MPSoC emulator called MPARM; and a NoC design flow consisting of a front-end tool for optimal NoC instantiation, called SunFloor, and a set of back-end facilities for the study of NoC physical implementations. This dissertation proves the viability of NoCs for current and upcoming designs, by outlining their advantages (along with a few tradeoffs) and by providing a full NoC implementation framework. It also presents some examples of additional extensions of NoCs, allowing e.g. for increased fault tolerance, and outlines where NoCs may find further application scenarios, such as in stacked chips.
Abstract:
Over the last years of research, I focused my studies on different physiological problems. Together with my supervisors, I developed or improved different mathematical models in order to create valid tools for a better understanding of important clinical issues. The aim of all this work is to develop tools for learning and understanding cardiac and cerebrovascular physiology and pathology, generating research questions and developing clinical decision support systems useful for intensive care unit patients.

I. ICP Model Designed for Medical Education. We developed a comprehensive cerebral blood flow and intracranial pressure (ICP) model to simulate and study the complex interactions in cerebrovascular dynamics caused by multiple simultaneous alterations, including normal and abnormal functional states of cerebral autoregulation. Individual published equations (derived from prior animal and human studies) were implemented into a comprehensive simulation program. The normal physiological modelling included intracranial pressure, cerebral blood flow, blood pressure, and carbon dioxide (CO2) partial pressure. We also added external and pathological perturbations, such as head-up position and intracranial haemorrhage. The model performed in a clinically realistic manner given inputs from published traumatized patients and cases encountered by clinicians, and the pulsatile nature of the output graphics was easy for clinicians to interpret. The manoeuvres simulated include changes of basic physiological inputs (e.g. blood pressure, central venous pressure, CO2 tension, head-up position, and respiratory effects on vascular pressures) as well as pathological inputs (e.g. acute intracranial bleeding and obstruction of cerebrospinal fluid outflow). Based on the results, we believe the model would be useful for teaching the complex relationships of brain haemodynamics and for studying clinical research questions such as the optimal head-up position, the effects of intracranial haemorrhage on cerebral haemodynamics, and the best CO2 concentration to reach the optimal compromise between intracranial pressure and perfusion. We believe this model would be useful for both beginners and advanced learners, and it could be used by practicing clinicians to model individual patients (entering the effects of needed clinical manipulations, and then running the model to test for optimal combinations of therapeutic manoeuvres).

II. A Heterogeneous Cerebrovascular Mathematical Model. Cerebrovascular pathologies are extremely complex, owing to the multitude of factors acting simultaneously on cerebral haemodynamics. In this work, the mathematical model of cerebral haemodynamics and intracranial pressure dynamics described in point I is extended to account for heterogeneity in cerebral blood flow. The model includes the circle of Willis, six regional districts independently regulated by autoregulation and CO2 reactivity, distal cortical anastomoses, the venous circulation, the cerebrospinal fluid circulation, and the intracranial pressure-volume relationship. Results agree with data in the literature and highlight the existence of a monotonic relationship between the transient hyperaemic response and the autoregulation gain. During unilateral internal carotid artery stenosis, local blood flow regulation is progressively lost in the ipsilateral territory, with the presence of a steal phenomenon, while the anterior communicating artery plays the major role in redistributing the available blood flow. Conversely, the distal collateral circulation plays the major role during unilateral occlusion of the middle cerebral artery. In conclusion, the model is able to reproduce several different pathological conditions characterized by heterogeneity in cerebrovascular haemodynamics; it can not only explain generalized results in terms of the physiological mechanisms involved but also, by individualizing parameters, represent a valuable tool to help with difficult clinical decisions.

III. Effect of the Cushing Response on Systemic Arterial Pressure. During cerebral hypoxic conditions, the sympathetic system causes an increase in arterial pressure (the Cushing response), creating a link between the cerebral and the systemic circulation. This work investigates the complex relationships among cerebrovascular dynamics, intracranial pressure, the Cushing response, and short-term systemic regulation during plateau waves, by means of an original mathematical model. The model incorporates the pulsating heart, the pulmonary circulation, and the systemic circulation, with an accurate description of the cerebral circulation and the intracranial pressure dynamics (the same model as in point I). Various regulatory mechanisms are included: cerebral autoregulation, local blood flow control by oxygen (O2) and/or CO2 changes, and sympathetic and vagal regulation of cardiovascular parameters by several reflex mechanisms (chemoreceptors, lung-stretch receptors, baroreceptors). The Cushing response has been described by assuming a dramatic increase in sympathetic activity to the vessels during a fall in brain O2 delivery. With this assumption, the model is able to simulate the cardiovascular effects experimentally observed when intracranial pressure is artificially elevated and maintained at a constant level (arterial pressure increase and bradycardia). According to the model, these effects arise from the interaction between the Cushing response and the baroreflex response (secondary to the arterial pressure increase). Patients with severe head injury were then simulated by reducing intracranial compliance and cerebrospinal fluid reabsorption; with these changes, oscillations with plateau waves developed. In these conditions, model results indicate that the Cushing response may have both positive effects, reducing the duration of the plateau phase via an increase in cerebral perfusion pressure, and negative effects, increasing the intracranial pressure plateau level, with a risk of greater compression of the cerebral vessels. This model may be of value in assisting clinicians to find the balance between the clinical benefits of the Cushing response and its shortcomings.

IV. Comprehensive Cardiopulmonary Simulation Model for the Analysis of Hypercapnic Respiratory Failure. We developed a new comprehensive cardiopulmonary model that takes into account the mutual interactions between the cardiovascular and respiratory systems, along with their short-term regulatory mechanisms. The model includes the heart, the systemic and pulmonary circulations, lung mechanics, gas exchange and transport equations, and cardio-ventilatory control. Results show good agreement with published patient data for normoxic and hyperoxic hypercapnia simulations. In particular, simulations predict a moderate increase in mean systemic arterial pressure and heart rate, with almost no change in cardiac output, paralleled by a relevant increase in minute ventilation, tidal volume, and respiratory rate. The model can represent a valid tool for clinical practice and medical research, providing an alternative to purely experience-based clinical decisions.

In conclusion, models are capable not only of summarizing current knowledge but also of identifying missing knowledge. In the former case they can serve as training aids for teaching the operation of complex systems, especially if the model can be used to demonstrate the outcome of experiments; in the latter case they suggest experiments to be performed to gather the missing data.
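The summary above gives no equations; purely as a classroom-style sketch of the kind of dynamics these models build on, the following integrates a single-compartment intracranial pressure model of the Marmarou type (exponential pressure-volume relationship, constant CSF outflow resistance). It is vastly simpler than the comprehensive models described above, and all parameter values are merely typical textbook magnitudes:

```python
# Single-compartment Marmarou-type ICP model (illustrative only, far simpler
# than the comprehensive models described in the abstract).
K_E = 0.11        # elastance coefficient [1/mL]
R_OUT = 8.0       # CSF outflow resistance [mmHg*min/mL]
Q_FORM = 0.35     # CSF formation rate [mL/min]
P_SINUS = 6.0     # dural sinus pressure [mmHg]

def dP_dt(p: float, inj: float) -> float:
    """Volume balance: compliance C = 1/(K_E*p), so dP/dt = K_E*p*(net inflow)."""
    return K_E * p * (inj + Q_FORM - (p - P_SINUS) / R_OUT)

# Euler integration of a mock 2 mL/min infusion test between t=10 and t=20 min.
dt, t_end = 0.01, 40.0
p = P_SINUS + Q_FORM * R_OUT          # steady-state baseline (~8.8 mmHg)
for step in range(int(t_end / dt)):
    t = step * dt
    inj = 2.0 if 10.0 <= t < 20.0 else 0.0
    p += dt * dP_dt(p, inj)
print(f"final ICP: {p:.1f} mmHg")    # relaxes back toward baseline
```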
Abstract:
In the present study we analyzed new neuroprotective therapeutic strategies in PD (Parkinson's disease) and AD (Alzheimer's disease). Current therapeutic strategies for treating PD and AD offer mainly transient symptomatic relief; it is still impossible to block the loss of neurons and thus the progression of PD and AD. There is considerable consensus that the increased production and/or aggregation of α-synuclein (α-syn) and β-amyloid peptide (Aβ) plays a central role in the pathogenesis of PD, related synucleinopathies, and AD. Therefore, we identified antiamyloidogenic compounds and tested their effect as neuroprotective drug-like molecules against α-syn and β-amyloid cytotoxicity in PC12 cells. Herein, we show that two nitro-catechol compounds (entacapone and tolcapone) and five catechol-containing compounds (dopamine, pyrogallol, gallic acid, caffeic acid, and quercetin) with antioxidant and anti-inflammatory properties are potent inhibitors of α-syn and β-amyloid oligomerization and fibrillization. Subsequently, we show that the inhibition of α-syn and β-amyloid oligomerization and fibrillization is correlated with the neuroprotection these compounds afford against α-syn- and β-amyloid-induced cytotoxicity in PC12 cells. Finally, we focused on the neuroprotective role of microglia and on the possibility that the neuroprotective properties of these cells could be used as a therapeutic strategy in PD and AD. We used an in vitro model to demonstrate the neuroprotection conferred by a 48 h microglial conditioned medium (MCM) on cerebellar granule neurons (CGNs) challenged with the neurotoxin 6-hydroxydopamine (6-OHDA), which induces a Parkinson-like neurodegeneration; with Aβ42, which induces an Alzheimer-like neurodegeneration; and with glutamate, which is involved in the major neurodegenerative diseases. We show that MCM protects CGNs nearly completely from 6-OHDA neurotoxicity and partially from glutamate excitotoxicity, but not from Aβ42 toxicity.
Abstract:
Cost, performance, and availability considerations are forcing even the most conservative high-integrity embedded real-time systems industry to migrate from simple hardware processors to ones equipped with caches and other acceleration features. This migration disrupts the practices and solutions that industry had developed and consolidated over the years to perform timing analysis. Industries that are confident in the efficiency and effectiveness of their verification and validation processes for old-generation processors do not have sufficient insight into the effects of the migration to cache-equipped processors. Caches are perceived as an additional source of complexity, with the potential to shatter the guarantees of cost- and schedule-constrained qualification of their systems. The current industrial approach to timing analysis is ill-equipped to cope with the variability incurred by caches. Conversely, the application of advanced WCET analysis techniques to real-world industrial software, developed without analysability in mind, is hardly feasible. We propose a development approach aimed at minimising cache jitter, as well as at enabling the application of advanced WCET analysis techniques to industrial systems. Our approach builds on: (i) the identification of those software constructs that may impede or complicate timing analysis in industrial-scale systems; (ii) the elaboration of practical means, under the model-driven engineering (MDE) paradigm, to enforce the automated generation of software that is analysable by construction; (iii) the implementation of a layout optimisation method to remove cache jitter stemming from the software layout in memory, with the intent of facilitating incremental software development, which is of high strategic interest to industry. The integration of those constituents in a structured approach to timing analysis achieves two interesting properties: the resulting software is analysable from the earliest releases onwards (as opposed to becoming so only when the system is final) and is more easily amenable to advanced timing analysis by construction, regardless of the system scale and complexity.
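Point (iii) is not detailed in this summary; as a hedged sketch of the general idea behind layout optimisation, the fragment below greedily places procedures at memory offsets so that they occupy disjoint instruction-cache sets, which is one common way to remove layout-induced cache jitter. The cache geometry, procedure names, and sizes are invented, and this is not the thesis's actual algorithm:

```python
# Greedy procedure placement to reduce instruction-cache conflicts.
# Illustrative of layout optimisation in general; all parameters invented.

CACHE_SETS = 64
LINE_BYTES = 32          # bytes covered by one cache line

def cache_sets_of(offset: int, size: int) -> set[int]:
    """Cache sets touched by a procedure placed at `offset` (direct-mapped view)."""
    first = offset // LINE_BYTES
    last = (offset + size - 1) // LINE_BYTES
    return {line % CACHE_SETS for line in range(first, last + 1)}

def place(procs: list[tuple[str, int]]) -> dict[str, int]:
    """Place each procedure at the lowest offset whose cache sets do not
    clash with already-placed ones (assumes each fits within the cache)."""
    layout, used_sets, cursor = {}, set(), 0
    for name, size in sorted(procs, key=lambda p: -p[1]):   # largest first
        offset = cursor
        while cache_sets_of(offset, size) & used_sets:
            offset += LINE_BYTES
        layout[name] = offset
        used_sets |= cache_sets_of(offset, size)
        cursor = max(cursor, offset + size)
    return layout

hot_procs = [("control_loop", 480), ("sensor_read", 256), ("actuate", 320)]
for name, off in place(hot_procs).items():
    print(f"{name:12s} -> offset {off}")
```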
Abstract:
The evolution of embedded electronics applications forces electronic system designers to match ever-increasing requirements. This evolution pushes up the computational power demanded of digital signal processing systems, as well as the energy required to accomplish the computations, owing to the increasing mobility of such applications. Current approaches to matching these requirements rely on the adoption of application-specific signal processors. Such devices exploit powerful accelerators, which are able to meet both performance and energy requirements. On the other hand, the high specificity of such accelerators often results in a lack of flexibility that affects non-recurring engineering costs, time to market, and market volumes too. The state of the art mainly proposes two solutions to overcome these issues with the ambition of delivering reasonable performance and energy efficiency: reconfigurable computing and multi-processor computing. Both of these solutions benefit from post-fabrication programmability, which results in increased flexibility. Nevertheless, the gap between these approaches and dedicated hardware is still too wide for many application domains, especially when targeting the mobile world. In this scenario, flexible and energy-efficient acceleration can be achieved by merging these two computational paradigms in order to address all the constraints introduced above. This thesis focuses on the exploration of the design and application spectrum of reconfigurable computing, exploited as application-specific accelerators for multi-processor systems-on-chip. More specifically, it introduces a reconfigurable digital signal processor featuring a heterogeneous set of reconfigurable engines, and a homogeneous multi-core system exploiting three different flavours of reconfigurable and mask-programmable technologies as implementation platforms for application-specific accelerators. In this work, the various trade-offs concerning the utilization of multi-core platforms and the different configuration technologies are explored, characterizing the design space of the proposed approach in terms of programmability, performance, energy efficiency, and manufacturing costs.
Abstract:
The continuous advancement and enhancement of wireless systems are enabling new compelling scenarios where mobile services can adapt to the current execution context, represented by the computational resources available at the local device, the current physical location, the people in physical proximity, and so forth. Such services, called context-aware services, require the timely delivery of all relevant information describing the current context, and that introduces several unsolved complexities, spanning from low-level context data transmission up to context data storage and replication in the mobile system. In addition, to ensure correct and scalable context provisioning, it is crucial to integrate and interoperate with different wireless technologies (WiFi, Bluetooth, etc.) and modes (infrastructure-based and ad hoc), and to use decentralized solutions to store and replicate context data on mobile devices. These challenges call for novel middleware solutions, here called Context Data Distribution Infrastructures (CDDIs), capable of delivering relevant context data to mobile devices while hiding all the issues introduced by data distribution in heterogeneous and large-scale mobile settings. This dissertation thoroughly analyzes CDDIs for mobile systems, with the main goal of achieving a holistic approach to the design of such middleware solutions. We discuss the main functions needed by context data distribution in large mobile systems, and we argue for the precise definition and strict enforcement of quality-based contracts between context consumers and the CDDI, used to reconfigure the main middleware components at runtime. We present the design and implementation of our proposals, in both simulation-based and real-world scenarios, along with an extensive evaluation that confirms the technical soundness of the proposed CDDI solutions. Finally, we consider three highly heterogeneous scenarios, namely disaster areas, smart campuses, and smart cities, to highlight the broad technical validity of our analysis and solutions under different network deployments and quality constraints.
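The quality-based contracts argued for above can be pictured as explicit, machine-checkable objects that the CDDI inspects to reconfigure its components at runtime. The dataclass below is a hypothetical shape for such a contract; all field names and the toy policy are invented for illustration:

```python
from dataclasses import dataclass

# Hypothetical quality-based contract between a context consumer and the CDDI.
# Field names are invented for illustration; a real CDDI would define its own.

@dataclass(frozen=True)
class ContextQualityContract:
    data_type: str            # e.g. "location", "proximity"
    max_staleness_s: float    # deliver data no older than this
    min_coverage: float       # fraction of relevant sources to reach (0..1)
    max_battery_drain_mw: float  # acceptable device energy budget

def choose_distribution_mode(c: ContextQualityContract) -> str:
    """Toy reconfiguration policy: tight freshness or coverage pushes the
    middleware toward infrastructure-based flooding; looser contracts
    allow cheaper ad hoc gossiping."""
    if c.max_staleness_s < 1.0 or c.min_coverage > 0.9:
        return "infrastructure-flooding"
    return "ad-hoc-gossip"

contract = ContextQualityContract("location", max_staleness_s=5.0,
                                  min_coverage=0.6, max_battery_drain_mw=50.0)
print(choose_distribution_mode(contract))   # -> ad-hoc-gossip
```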