906 results for open system
Abstract:
Abstract Background Accurate malaria diagnosis is mandatory for the treatment and management of severe cases. Moreover, individuals with asymptomatic malaria are not usually screened by health care facilities, which further complicates disease control efforts. The present study compared the performances of a malaria rapid diagnostic test (RDT), the thick blood smear method and nested PCR for the diagnosis of symptomatic malaria in the Brazilian Amazon. In addition, an innovative computational approach was tested for the diagnosis of asymptomatic malaria. Methods The study was divided into two parts. For the first part, passive case detection was performed in 311 individuals with malaria-related symptoms from a recently urbanized community in the Brazilian Amazon. A cross-sectional investigation compared the diagnostic performance of the RDT Optimal-IT, nested PCR and light microscopy. The second part of the study involved active case detection of asymptomatic malaria in 380 individuals from riverine communities in Rondônia, Brazil. The performances of microscopy, nested PCR and an expert computational system based on artificial neural networks (MalDANN) using epidemiological data were compared. Results Nested PCR was confirmed as the gold standard for the diagnosis of both symptomatic and asymptomatic malaria because it detected the greatest number of cases and presented the highest specificity. Surprisingly, the RDT was superior to microscopy in the diagnosis of cases with low parasitaemia. Nevertheless, the RDT could not discriminate the Plasmodium species in 12 cases of mixed infection (Plasmodium vivax + Plasmodium falciparum). Moreover, microscopy performed poorly in the detection of asymptomatic cases (61.25% of correct diagnoses), and the MalDANN system using epidemiological data performed even worse than light microscopy (56% of correct diagnoses).
However, when information regarding plasma levels of interleukin-10 and interferon-gamma was input, the MalDANN performance increased considerably (80% correct diagnoses). Conclusions An RDT for malaria diagnosis may find promising use in the Brazilian Amazon as part of a rational diagnostic approach. Despite the low performance of the MalDANN test using solely epidemiological data, an approach based on neural networks may be feasible in cases where simpler methods for discriminating individuals below and above threshold cytokine levels are available.
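A classifier of the MalDANN kind described above can be sketched as a small feed-forward neural network fed with epidemiological features plus the two cytokine levels. The feature set, the synthetic data and the scikit-learn model below are purely illustrative assumptions of this sketch, not the actual MalDANN system:

```python
# Hypothetical MalDANN-style sketch: a small multilayer perceptron that
# predicts malaria status from epidemiological features plus plasma
# IL-10 and IFN-gamma levels. Data and feature choices are invented.
import numpy as np
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(0)
n = 200
# Assumed features: age, years in area, previous episodes, IL-10, IFN-gamma
X = rng.normal(size=(n, 5))
# Synthetic rule: infection status loosely tied to the cytokine columns
y = (X[:, 3] + X[:, 4] > 0).astype(int)

clf = MLPClassifier(hidden_layer_sizes=(8,), max_iter=2000, random_state=0)
clf.fit(X, y)
acc = clf.score(X, y)  # training accuracy on this toy data set
```

On this toy, linearly separable rule the network fits the training set easily; the point is only to show the shape of the approach, in which cytokine levels enter as two extra input features.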
Abstract:
Abstract Background The authors have developed a small portable device for the objective measurement of the transparency of corneas stored in preservative medium, for use by eye banks in evaluation prior to transplantation. Methods The optical system consists of a white light, lenses, and pinholes that collimate the white light beams and illuminate the cornea in its preservative medium, and an optical filter (400–700 nm) that selects the wavelength range of interest. A sensor detects the light that passes through the cornea, and the average corneal transparency is displayed. In order to measure only the tissue transparency, an electronic circuit was built to record a baseline reading of the preservative medium prior to the measurement of corneal transparency. The operation of the system involves three steps: adjusting the "0 %" transmittance of the instrument, determining the "100 %" transmittance of the system, and finally measuring the transparency of the preserved cornea inside the storage medium. Results Fifty selected corneas were evaluated. Each cornea was submitted to three evaluation methods: subjective classification of transparency through a slit lamp, quantification of the transmittance of light using a previously developed corneal spectrophotometer, and measurement of transparency with the portable device. Conclusion By comparing the three methods and drawing on the expertise of trained eye bank personnel, a table for quantifying corneal transparency with the new device has been developed. The correlation factor between the corneal spectrophotometer and the new device is 0.99813, leading to a system that is able to standardize transparency measurements of preserved corneas, a task currently done subjectively.
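The three-step calibration described above amounts to a linear rescaling of the sensor reading between the 0 % point (beam blocked) and the 100 % point (preservative medium alone), so that the medium's own attenuation drops out. A minimal sketch, with all variable names assumed for illustration:

```python
# Illustrative transparency calculation for a two-point calibrated sensor:
# v_dark sets the 0 % transmittance point, v_medium (medium without tissue)
# sets the 100 % point, and the cornea-in-medium reading is scaled between.
def transparency_percent(v_dark: float, v_medium: float, v_cornea: float) -> float:
    """Average corneal transparency in percent, with the preservative
    medium's attenuation removed by the baseline calibration."""
    return 100.0 * (v_cornea - v_dark) / (v_medium - v_dark)

t = transparency_percent(v_dark=0.02, v_medium=1.02, v_cornea=0.77)
# (0.77 - 0.02) / (1.02 - 0.02) = 0.75, i.e. 75 % transparency
```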
Abstract:
Public health strategies to reduce cardiovascular morbidity and mortality should focus on global cardiometabolic risk reduction. The efficacy of lifestyle changes to prevent type 2 diabetes has been demonstrated, but low-cost interventions to reduce cardiometabolic risk in Latin America have rarely been reported. Our group developed 2 programs to promote the health of high-risk individuals attending a primary care center in Brazil. This study compared the effects on cardiometabolic parameters of two 9-month lifestyle interventions: one based on medical consultations (traditional) and another with 13 multi-professional group sessions in addition to the medical consultations (intensive). Adults were eligible if they had pre-diabetes (according to the American Diabetes Association) and/or metabolic syndrome (International Diabetes Federation criteria for Latin America). Data were expressed as means and standard deviations or percentages and compared between groups or testing visits. A p-value < 0.05 was considered significant. Results: 180 individuals agreed to participate (35.0% men, mean age 54.7 ± 12.3 years, 86.1% overweight or obese); 83 were allocated to the traditional and 97 to the intensive program. Both interventions reduced body mass index, waist circumference and tumor necrosis factor-α. Only the intensive program reduced 2-hour plasma glucose and blood pressure and increased adiponectin values, whereas HDL-cholesterol increased only in the traditional program. Responses were also better in the intensive than in the traditional program in terms of blood pressure and adiponectin improvements. No new case of diabetes was detected in the intensive program, whereas 3 cases and one myocardial infarction were detected in the traditional program. Both programs induced metabolic improvement in the short term, but whether the better results of the intensive program are due to higher risk awareness and self-motivation deserves further investigation.
In conclusion, these low-cost interventions are able to minimize cardiometabolic risk factors involved in the progression to type 2 diabetes and/or cardiovascular disease.
Abstract:
Background: In epidemiological surveys, good reliability among examiners regarding the caries detection method is essential. However, training and calibrating those examiners is an arduous task because it involves several patients who are examined many times. To facilitate this step, we aimed to propose a laboratory methodology to simulate the examinations performed to detect caries lesions using the International Caries Detection and Assessment System (ICDAS) in epidemiological surveys. Methods: A benchmark examiner conducted all training sessions. A total of 67 exfoliated primary teeth, varying from sound to extensively cavitated, were set in seven arch models to simulate complete mouths in primary dentition. Sixteen examiners (graduate students) evaluated all surfaces of the teeth under illumination, using buccal mirrors and a ball-ended probe, on two occasions, using only the coronal primary caries scores of the ICDAS. As the reference standard, two different examiners assessed the proximal surfaces by direct visual inspection, classifying them as sound, with non-cavitated lesions or with cavitated lesions. Afterwards, the teeth were sectioned in the bucco-lingual direction and the examiners assessed the sections under a stereomicroscope, classifying the occlusal and smooth surfaces according to lesion depth. Inter-examiner reproducibility was evaluated using weighted kappa. Sensitivities and specificities were calculated at two thresholds: all lesions and advanced lesions (cavitated lesions on proximal surfaces and lesions reaching the dentine on occlusal and smooth surfaces). Conclusion: The methodology proposed for the training and calibration of several examiners designated for epidemiological surveys of dental caries in preschool children using the ICDAS is feasible, permitting the assessment of the reliability and accuracy of the examiners prior to the survey's development.
Abstract:
Abstract Introduction Pelvicalyceal cysts are common findings in autopsies and can manifest with a variety of patterns. These cystic lesions are usually a benign entity with no clinical significance unless they enlarge enough to cause compression of the adjacent collecting system and consequently obstructive uropathy. Few cases of the spontaneous rupture of pelvicalyceal renal cysts have been published and, to the best of our knowledge, there is no report of a combined rupture into the collecting system and retroperitoneal space documented during a multiphase computed tomography. Case presentation We report a case of a 'real-time' spontaneous rupture of a pelvicalyceal cyst into the collecting system with fistulization into the retroperitoneum. The patient was a 78-year-old Caucasian man with a previous history of renal stones and a large pelvicalyceal renal cyst who was admitted to our emergency department with acute right flank pain. A multiphase computed tomography was performed, and the pre-contrast images demonstrated a right pelvicalyceal renal cyst measuring 12.0 × 6.1 cm in the lower pole causing moderate dilation of the upper right renal collecting system. In addition, a partially obstructive stone in the left distal ureter with mild left hydronephrosis was noted. The nephrographic phase did not add any new information. The excretory phase (10-minute delay) demonstrated a spontaneous rupture of the cyst into the pelvicalyceal system with posterior fistulization into the retroperitoneal space. Conclusion In this case study we present the time-related changes of a rare pelvicalyceal cyst complication which, to the best of our knowledge, has not been previously documented. Analysis of the sequential images and comparison with an earlier scan allowed us to better understand the physiopathological process of the rupture and the clinical presentation, and to elaborate hypotheses for its etiopathogenesis.
Abstract:
Remanufacturing is the process of rebuilding used products so that the quality of remanufactured products is equivalent to that of new ones. Although the theme is gaining ground, it is still little explored, owing to lack of knowledge and to the difficulty of visualizing it systemically and implementing it effectively. Few models treat remanufacturing as a system; most studies still treat it as an isolated process, preventing it from being seen in an integrated manner. Therefore, the aim of this work is to organize the knowledge about remanufacturing, offering a vision of the remanufacturing system and contributing to an integrated view of the theme. The methodology employed was a literature review, adopting the General Theory of Systems to characterize the remanufacturing system. This work consolidates and organizes the elements of this system, enabling a better understanding of remanufacturing and assisting companies in adopting the concept.
Abstract:
The quantification of ammonia (NH3) losses from sugarcane straw fertilized with urea can be performed with collectors that recover the NH3 in acid-treated absorbers. The use of an open NH3 collector with a polytetrafluoroethylene (PTFE)-wrapped absorber is an interesting option, since its cost is low, its handling is easy and microclimatic conditions are irrelevant. The aim of this study was to evaluate the efficiency of an open collector for quantifying the NH3-N volatilized from urea applied over sugarcane straw. The experiment was carried out in a sugarcane field located near Piracicaba, São Paulo, Brazil. The NH3-N losses were estimated using a semi-open static collector calibrated with 15N (reference method) and an open collector with an absorber wrapped in PTFE film. Urea was applied to the soil surface in treatments corresponding to rates of 50, 100, 150 and 200 kg ha-1 N. Applying urea-N fertilizer on sugarcane straw resulted in NH3-N losses of up to 24 % of the applied rate. The amounts of volatilized NH3-N measured with the open and the semi-open static collectors did not differ. The effectiveness of the collection system varied non-linearly, with an average value of 58.4 % over the range of 100 to 200 kg ha-1 of urea-N. The open collector showed significant potential for use; however, further research is needed to verify the suitability of the proposed method.
Abstract:
This study aims to develop and implement a tool, an intelligent tutoring system, in an online course to support formative evaluation and thereby improve student learning. According to Bloom et al. (1971, p. 117), formative evaluation is a systematic evaluation to improve the process of teaching and learning. The intelligent tutoring system may provide timely, high-quality feedback that not only indicates the correctness of the solution to the problem, but also informs students about the accuracy of their response relative to their current knowledge about the solution. Constructive and supportive feedback should be given to students to reveal the right and wrong answers immediately after taking the test. Feedback about the right answers is a way to reinforce positive behaviors. Identifying possible errors and relating them to the instructional material may help students to strengthen their grasp of the content under consideration. A remedial suggestion should be given for each answer, with a detailed description of the relevant materials and instructional procedures, before taking the next step. The main idea is to inform students about what they have learned and what they still have to learn. The open-source LMS Moodle was extended to accomplish the formative evaluation, high-quality feedback, and the communal knowledge presented here within a short online financial math course offered at a large university in Brazil. The preliminary results show that the intelligent tutoring system using high-quality feedback helped students to improve their knowledge about the solutions to the problems, based on the errors of their past cohorts. The results and suggestions for future work are presented and discussed.
Abstract:
The continental margin off SE South America hosts one of the world's most energetic hydrodynamic regimes but also the second largest drainage system of the continent. Both the ocean current system and the fluvial runoff are strongly controlled by the atmospheric circulation modes over the region. The distribution pattern of particular types of sediments on the shelf and slope and the long-term build-up of depositional elements within the overall margin architecture are thus the product of seasonal to millennial variability as well as long-term environmental trends. This talk presents how the combination of different methodological approaches can be used to obtain a comprehensive picture of the variability of a shelf and upper-slope hydrodynamic system during Holocene times. The particular methods applied are: (a) Margin-wide stratigraphic information to elucidate the role of sea level for the oceanographic and sedimentary systems since the last glacial maximum; (b) Palaeoceanographic sediment proxies combined with palaeo-temperature-indicating isotopes of bivalve shells to trace lateral shifts in the coastal oceanography (particularly of the shelf front) during the Holocene; (c) Neodymium isotopes to identify the shelf sediment transport routes resulting from the current regime; (d) Sedimentological/geochemical data to show the efficient mechanism of sand export from the shelf to the open ocean; (e) Diatom assemblages and sediment element distributions indicating palaeo-salinity and the changing marine influence to illustrate the Plata runoff history. Sea level has controlled not only the overall configuration of the shelf but also the position of the main sediment routes from the continent towards the ocean. The shelf front has shifted frequently since the last glacial times, probably as a result of changes both in the intensity of the Westerly Winds and in the shelf width itself.
A remarkable southward shift of this front during the past two centuries is possibly related to anthropogenic influences on the atmosphere. The oceanographic regime with its prominent hydrographic boundaries has led to a clear separation of sedimentary provinces since shelf drowning. It is especially the shelf front that enhances shelf sediment export through a continuous high sand supply to the uppermost slope. Finally, the Plata River does not provide sediment to the shelf continuously but shows significant climate-related changes in discharge over the past centuries. Starting from these findings, three major fields of research should, in general, be further developed in the future: (i) the immediate interaction of the hydrodynamic and sedimentary systems, to close the gaps between deposit information and modern oceanographic dynamics; (ii) material budget calculations for the marginal ocean system in terms of material fluxes, storage/retention capacities, and critical thresholds; (iii) the role of human activity on the atmospheric, oceanographic and solid material systems, to unravel natural vs. anthropogenic effects and feedback mechanisms.
Abstract:
This paper describes a wildfire forecasting application based on a 3D virtual environment and a fire simulation engine. A novel open-source framework is presented for the development of 3D graphics applications over large geographic areas, offering high-performance 3D visualization and powerful interaction tools for the Geographic Information Systems (GIS) community. The application includes a remote module that allows the simultaneous connection of several users for monitoring a real wildfire event.
Abstract:
In the past decade, the advent of efficient genome sequencing tools and high-throughput experimental biotechnology has led to enormous progress in the life sciences. Among the most important innovations is microarray technology, which allows the expression of thousands of genes to be quantified simultaneously by measuring the hybridization of a tissue of interest to probes on a small glass or plastic slide. The characteristics of these data include a fair amount of random noise, a predictor dimension in the thousands, and a sample size in the dozens. One of the most exciting areas to which microarray technology has been applied is the challenge of deciphering complex diseases such as cancer. In these studies, samples are taken from two or more groups of individuals with heterogeneous phenotypes, pathologies, or clinical outcomes. These samples are hybridized to microarrays in an effort to find a small number of genes which are strongly correlated with the groups of individuals. Even though today the methods to analyse these data are well developed and close to reaching a standard organization (through the effort of proposed international projects like the Microarray Gene Expression Data (MGED) Society [1]), it is not infrequent to stumble upon a clinician's question for which no compelling statistical method exists to answer it. The contribution of this dissertation to deciphering disease is the development of new approaches aimed at handling open problems posed by clinicians in specific experimental designs. Chapter 1, starting from a necessary biological introduction, reviews microarray technologies and all the important steps of an experiment, from the production of the array to the quality controls, ending with the preprocessing steps used in the data analysis in the rest of the dissertation.
Chapter 2 provides a critical review of standard analysis methods, stressing their main open problems. Chapter 3 introduces a method to address the issue of unbalanced designs in microarray experiments. In microarray experiments, experimental design is a crucial starting point for obtaining reasonable results. In a two-class problem, an equal or similar number of samples should be collected for the two classes. However, in some cases, e.g. rare pathologies, the approach to be taken is less evident. We propose to address this issue by applying a modified version of SAM [2]. MultiSAM consists of a reiterated application of a SAM analysis, comparing the less populated class (LPC) with 1,000 random samplings of the same size from the more populated class (MPC). A list of the differentially expressed genes is generated for each SAM application. After 1,000 reiterations, each single probe is given a "score" ranging from 0 to 1,000 based on its recurrence as differentially expressed in the 1,000 lists. The performance of MultiSAM was compared to the performance of SAM and LIMMA [3] over two data sets simulated via beta and exponential distributions. The results of all three algorithms over low-noise data sets seem acceptable. However, on a real unbalanced two-channel data set regarding Chronic Lymphocytic Leukemia, LIMMA finds no significant probe and SAM finds 23 significantly changed probes but cannot separate the two classes, while MultiSAM finds 122 probes with a score >300 and separates the data into two clusters by hierarchical clustering. We also report extra-assay validation in terms of differentially expressed genes. Although the standard algorithms perform well over low-noise simulated data sets, MultiSAM seems to be the only one able to reveal subtle differences in gene expression profiles on real unbalanced data. Chapter 4 describes a method to address the evaluation of similarities in a three-class problem by means of the Relevance Vector Machine [4].
In fact, looking at microarray data in a prognostic and diagnostic clinical framework, differences are not the only thing that can play a crucial role: in some cases similarities can give useful and sometimes even more important information. Given three classes, the goal could be to establish, with a certain level of confidence, whether the third one is similar to the first or to the second. In this work we show that the Relevance Vector Machine (RVM) [2] could be a possible solution to the limitations of standard supervised classification. In fact, RVM offers many advantages compared, for example, with its well-known precursor, the Support Vector Machine (SVM) [3]. Among these advantages, the estimate of the posterior probability of class membership represents a key feature for addressing the similarity issue. This is a highly important, but often overlooked, capability of any practical pattern recognition system. We focused on a tumor-grade three-class problem, with 67 samples of grade I (G1), 54 samples of grade III (G3) and 100 samples of grade II (G2). The goal is to find a model able to separate G1 from G3, and then evaluate the third class, G2, as a test set to obtain the probability for samples of G2 to be members of class G1 or class G3. The analysis showed that breast cancer samples of grade II have a molecular profile more similar to that of grade I samples. This result had been conjectured in the literature, but no measure of significance had been given before.
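The MultiSAM resampling scheme described in this abstract (repeatedly comparing the less populated class against equal-sized random subsamples of the more populated class and scoring each probe by its recurrence as differentially expressed) can be sketched as follows. A plain t-test stands in for the SAM statistic here, which is an assumption of this sketch, not the authors' implementation:

```python
# Sketch of a MultiSAM-style resampling scheme. Each iteration draws a
# random subsample of the more populated class (MPC) of the same size as
# the less populated class (LPC), runs a per-probe test, and increments
# the score of every probe that comes out significant.
import numpy as np
from scipy import stats

def multisam_scores(lpc, mpc, n_iter=1000, alpha=0.01, seed=0):
    """lpc: (n_lpc, n_probes) array; mpc: (n_mpc, n_probes) array.
    Returns per-probe scores in 0..n_iter counting how often each probe
    was flagged as differentially expressed across the iterations."""
    rng = np.random.default_rng(seed)
    n_lpc, n_probes = lpc.shape
    scores = np.zeros(n_probes, dtype=int)
    for _ in range(n_iter):
        sub = mpc[rng.choice(mpc.shape[0], size=n_lpc, replace=False)]
        _, p = stats.ttest_ind(lpc, sub, axis=0)  # stand-in for SAM
        scores += (p < alpha).astype(int)
    return scores

# Toy data: probe 0 truly differs between the classes, probe 1 does not.
rng = np.random.default_rng(1)
lpc = rng.normal(loc=[3.0, 0.0], scale=0.5, size=(10, 2))
mpc = rng.normal(loc=[0.0, 0.0], scale=0.5, size=(50, 2))
scores = multisam_scores(lpc, mpc, n_iter=200)
```

On this toy data the truly differential probe accumulates a high score while the null probe stays near zero, mirroring the 0-to-1,000 recurrence score described in the abstract.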
Abstract:
Providing support for multimedia applications on low-power mobile devices remains a significant research challenge. This is primarily due to two reasons: • Portable mobile devices have modest sizes and weights, and therefore inadequate resources, low CPU processing power, reduced display capabilities, and limited memory and battery lifetimes as compared to desktop and laptop systems. • On the other hand, multimedia applications tend to have distinctive QoS and processing requirements which make them extremely resource-demanding. This innate conflict introduces key research challenges in the design of multimedia applications and device-level power optimization. Energy efficiency in this kind of platform can be achieved only via a synergistic hardware and software approach. In fact, while Systems-on-Chip are more and more programmable, thus providing functional flexibility, hardware-only power reduction techniques cannot keep consumption within acceptable bounds. It is well understood both in research and in industry that system configuration and management cannot be controlled efficiently relying only on low-level firmware and hardware drivers: at this level there is a lack of information about user application activity and, consequently, about the impact of power management decisions on QoS. Even though operating system support and integration is a requirement for effective performance and energy management, more effective and QoS-sensitive power management is possible if power awareness and hardware configuration control strategies are tightly integrated with domain-specific middleware services. The main objective of this PhD research has been the exploration and integration of a middleware-centric energy management with applications and the operating system. We chose to focus on the CPU-memory and video subsystems, since they are the most power-hungry components of an embedded system.
A second main objective has been the definition and implementation of software facilities (such as toolkits, APIs, and run-time engines) to improve the programmability and performance efficiency of such platforms. Enhancing energy efficiency and programmability of modern Multi-Processor Systems-on-Chip (MPSoCs). Consumer applications are characterized by tight time-to-market constraints and extreme cost sensitivity. The software that runs on modern embedded systems must be high performance, real time and, even more important, low power. Although much progress has been made on these problems, much remains to be done. Multi-Processor Systems-on-Chip (MPSoCs) are increasingly popular platforms for high-performance embedded applications. This leads to interesting challenges in software development, since efficient software development is a major issue for MPSoC designers. An important step in deploying applications on multiprocessors is to allocate and schedule concurrent tasks to the processing and communication resources of the platform. The problem of allocating and scheduling precedence-constrained tasks on processors in a distributed real-time system is NP-hard, and there is a clear need for deployment technology that addresses these multiprocessing issues. This problem can be tackled by means of specific middleware which takes care of allocating and scheduling tasks on the different processing elements and which also tries to optimize the power consumption of the entire multiprocessor platform. This dissertation is an attempt to develop insight into efficient, flexible and optimal methods for allocating and scheduling concurrent applications on multiprocessor architectures. It is a well-known problem in the literature: optimization problems of this kind are very complex even in much simplified variants, and therefore most authors propose simplified models and heuristic approaches to solve them in reasonable time.
Model simplification is often achieved by abstracting away platform implementation "details". As a result, optimization problems become more tractable, even reaching polynomial time complexity. Unfortunately, this approach creates an abstraction gap between the optimization model and the real HW-SW platform. The main issue with heuristics or, more generally, with incomplete search is that they introduce an optimality gap of unknown size: they provide very limited or no information on the distance between the best computed solution and the optimal one. The goal of this work is to address both the abstraction and the optimality gap, formulating accurate models which account for a number of "non-idealities" in real-life hardware platforms, developing novel mapping algorithms that deterministically find optimal solutions, and implementing the software infrastructures required by developers to deploy applications on the target MPSoC platforms. Energy-efficient LCD backlight autoregulation on a real-life multimedia application processor. Despite the ever-increasing advances in Liquid Crystal Display (LCD) technology, the power consumption of LCDs is still one of the major limitations to the battery life of mobile appliances such as smart phones, portable media players, and gaming and navigation devices. There is a clear trend towards the increase of LCD size to exploit the multimedia capabilities of portable devices that can receive and render high-definition video and pictures. Multimedia applications running on these devices require LCD screen sizes of 2.2 to 3.5 inches and more to display video sequences and pictures with the required quality. LCD power consumption depends on the backlight and pixel matrix driving circuits and is typically proportional to the panel area. As a result, this contribution is also likely to be considerable in future mobile appliances.
To address this issue, companies are proposing low-power technologies suitable for mobile applications, supporting low-power states and image control techniques. On the research side, several power saving schemes and algorithms can be found in the literature. Some of them exploit software-only techniques that change the image content to reduce the power associated with the crystal polarization; others aim at decreasing the backlight level while compensating for the resulting luminance reduction and user-perceived quality degradation using pixel-by-pixel image processing algorithms. The major limitation of these techniques is that they rely on the CPU to perform pixel-based manipulations, and their impact on CPU utilization and power consumption has not been assessed. This PhD dissertation shows an alternative approach that exploits, in a smart and efficient way, the hardware image processing unit integrated in almost every current multimedia application processor to implement a hardware-assisted image compensation that allows dynamic scaling of the backlight with a negligible impact on QoS. The proposed approach overcomes CPU-intensive techniques by saving system power without requiring either a dedicated display technology or hardware modification. Thesis overview. The remainder of the thesis is organized as follows. The first part is focused on enhancing the energy efficiency and programmability of modern Multi-Processor Systems-on-Chip (MPSoCs). Chapter 2 gives an overview of architectural trends in embedded systems, illustrating the principal features of new technologies and the key challenges still open. Chapter 3 presents a QoS-driven methodology for optimal allocation and frequency selection for MPSoCs, based on functional simulation and full-system power estimation. Chapter 4 targets the allocation and scheduling of pipelined stream-oriented applications on top of distributed memory architectures with messaging support.
We tackled the complexity of the problem by means of decomposition and no-good generation, and proved the increased computational efficiency of this approach with respect to traditional ones. Chapter 5 presents a cooperative framework to solve the allocation, scheduling and voltage/frequency selection problem to optimality for energy-efficient MPSoCs, while in Chapter 6 applications with conditional task graphs are taken into account. Finally, Chapter 7 proposes a complete framework, called Cellflow, to help programmers with efficient software implementation on a real architecture, the Cell Broadband Engine processor. The second part is focused on energy-efficient software techniques for LCD displays. Chapter 8 gives an overview of portable device display technologies, illustrating the principal features of LCD video systems and the key challenges still open. Chapter 9 surveys several energy-efficient software techniques from the literature, while Chapter 10 illustrates in detail our method for saving significant power in an LCD panel. Finally, conclusions are drawn, reporting the main research contributions discussed throughout this dissertation.
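The backlight autoregulation idea summarized above (dim the backlight, then boost pixel values to compensate so perceived brightness is roughly preserved) can be sketched as follows. The linear pixel/backlight luminance model and all names here are simplifying assumptions of this sketch, not the dissertation's hardware-assisted implementation:

```python
# Minimal sketch of backlight dimming with pixel compensation: the
# backlight is scaled by s and each pixel value is boosted by 1/s,
# clipped at full scale. Displayed luminance is modeled as pixel * s.
import numpy as np

def compensate(frame, s):
    """frame: pixel values in [0, 1]; s: backlight scale in (0, 1].
    Returns the compensated frame to display with the dimmed backlight."""
    return np.clip(frame / s, 0.0, 1.0)

frame = np.array([0.2, 0.5, 0.9])
s = 0.6                      # backlight at 60 % of full power
out = compensate(frame, s)
perceived = out * s          # luminance seen under the dimmed backlight
```

Midtones are reproduced exactly, while the brightest pixels saturate (here 0.9 is rendered as 0.6), which is the QoS degradation that adaptive schemes try to keep negligible by choosing s from the frame content.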