930 results for Lab-On-A-Chip Devices
Abstract:
Citric acid was produced from crude glycerol, a by-product of the biodiesel industry, in batch cultures of Yarrowia lipolytica W29 grown in a lab-scale stirred tank bioreactor, in order to assess the effect of the oxygen mass transfer rate on this bioprocess. An empirical correlation was proposed to describe the oxygen volumetric mass transfer coefficient (kLa) as a function of the operating conditions (stirring speed and specific air flow rate) and cellular density. kLa increased as a power function of the specific power input and the superficial gas velocity, and decreased slightly with cellular density. Increasing the initial kLa from 7 h-1 to 55 h-1 led to a 7.8-fold increase in the final citric acid concentration. Experiments were also performed at controlled dissolved oxygen (DO), and the citric acid concentration increased with DO up to 60% of saturation. Since setting an optimal kLa is operationally simpler than controlling DO, it can be concluded that kLa is an adequate parameter for optimizing citric acid production from crude glycerol by Y. lipolytica and should be considered in bioprocess scale-up. Our empirical correlation, which accounts for the operating conditions and cellular density, is a valid tool for this purpose.
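The abstract does not report the fitted constants, but correlations of this kind usually take the following power-law form; the constant C and the exponents alpha, beta and gamma below are illustrative assumptions, not the study's fitted values:

    \[ k_L a \;=\; C \left(\frac{P_g}{V}\right)^{\alpha} v_s^{\,\beta}\, C_X^{-\gamma} \]

where P_g/V is the specific power input, v_s the superficial gas velocity, and C_X the cellular density; a small positive gamma captures the slight decrease of kLa with cell concentration reported above.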
Abstract:
PhD thesis in Electronic and Computer Engineering.
Abstract:
Master's dissertation in Advanced Optometry.
Abstract:
PhD in Sciences, specialty in Physics.
Abstract:
OBJECTIVE: To determine the technical procedures and criteria used by Brazilian physicians for measuring blood pressure and diagnosing hypertension. METHODS: A questionnaire with 5 questions about practices and behaviors regarding blood pressure measurement and the diagnosis of hypertension was sent to 25,606 physicians in all Brazilian regions through a mailing list. The responses were compared with the recommendations of a specific consensus statement and analyzed descriptively. RESULTS: Of the 3,621 (14.1%) responses obtained, 57% came from the southeastern region of Brazil. The following items were reported: use of an aneroid device by 67.8%; use of a mercury column device by 14.6%; 11.9% of the participants never calibrated their devices; 35.7% calibrated them at intervals of less than 1 year; 85.8% measured blood pressure in 100% of medical visits; 86.9% measured blood pressure more than once and on more than one occasion. For the diagnosis of hypertension, 55.7% considered the patient's age, and only one third relied on consensus statements. CONCLUSION: Although both practices were reported at adequate frequencies, these frequencies fell short of expectations, and some contradictions were found between the diagnostic criterion for hypertension and the number of blood pressure measurements. The results suggest that, to reach the great majority of medical professionals, dissemination of consensus statements and blood pressure measurement techniques should go beyond the boundaries of medical events and specialized journals.
Abstract:
Today's advances in high-performance computing are driven by the parallel processing capabilities of available hardware architectures. These architectures enable the acceleration of algorithms when the algorithms are properly parallelized and exploit the specific processing power of the underlying architecture. Most current processors are targeted at general purposes and integrate several processor cores on a single chip, resulting in what is known as a Symmetric Multiprocessing (SMP) unit. Nowadays even desktop computers make use of multicore processors, and the industry trend is to increase the number of integrated processor cores as technology matures. On the other hand, Graphics Processor Units (GPU), originally designed to handle only video processing, have emerged as interesting alternatives for algorithm acceleration. Currently available GPUs are able to run from 200 to 400 threads in parallel. Scientific computing can be implemented on this hardware thanks to the programmability of new GPUs, which have come to be known as General Processing Graphics Processor Units (GPGPU). However, GPGPUs offer little memory compared with that available to general-purpose processors, so the implementation of algorithms needs to be addressed carefully. Finally, Field Programmable Gate Arrays (FPGA) are programmable devices that can implement hardware logic with low latency, high parallelism and deep pipelines. These devices can be used to implement specific algorithms that need to run at very high speeds. However, programming them is harder than software approaches and debugging is typically time-consuming. In this context, where several alternatives for speeding up algorithms are available, our work aims at determining the main features of these architectures and developing the know-how required to accelerate algorithm execution on them. We look at identifying the algorithms that may fit best on a given architecture, as well as combining architectures so that they complement each other beneficially. In particular, we consider the degree of data dependence, the need for synchronization during parallel processing, the size of the data to be processed, and the complexity of parallel programming on each type of hardware.
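As a toy illustration of the kind of algorithm that maps well onto SMP multicore hardware, the sketch below runs an embarrassingly parallel workload with no data dependence and no synchronization between tasks; it is a generic Python example, not code from the thesis:

    from multiprocessing import Pool

    def work(n):
        # independent per-element computation: no data dependence, no synchronization
        return sum(i * i for i in range(n))

    if __name__ == "__main__":
        with Pool() as pool:  # one worker per available core by default
            results = pool.map(work, [200_000] * 8)
        print(sum(results))

Workloads with heavy data dependence or frequent synchronization lose this near-linear scaling, which is exactly the algorithm property the thesis uses to match algorithms to architectures.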
Abstract:
The adoption of a sustainable approach to meeting the energy needs of society has recently taken on a more central and urgent place in the minds of many people. There are many reasons for this, including ecological, environmental and economic concerns. One particular area where a sustainable approach has become very relevant is the production of electricity. The contribution of renewable sources to the energy mix supplying the electricity grid is nothing new, but the focus has begun to move away from the more conventional renewable sources such as wind and hydro. Exploring new and innovative sources of renewable energy is now seen as imperative as the older forms (i.e. hydro) reach the saturation point of their possible exploitation. One such innovative source of energy currently beginning to be utilised is tidal energy. The purpose of this thesis is to isolate one specific drawback of tidal energy, which could be considered a roadblock to this energy source becoming a major contributor to the Irish national grid. This drawback is the inconsistent manner in which a tidal device generates energy over the course of a 24-hour period. This inconsistency of supply can result in the cycling of conventional power plants in order to even out the supply, subsequently leading to additional costs. The thesis includes a review of literature relevant to the area of tidal and other marine energy sources, with an emphasis on the state-of-the-art devices currently in development or production. The research carried out included analysis of tidal data and its manipulation into a model of the power-generating potential at specific sites. A solution to the drawback of inconsistency of supply is then proposed, which involves positioning various tidal generation installations at specifically selected locations around the Irish coast. The temporal shift achieved in the power supply profiles of the individual sites by placing the installations at the correct locations produced an overall power supply profile with a smoother curve and a consistent base-load energy supply. Some limitations of the method employed were also outlined, and suggestions for further improvements to the method were made.
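To make the temporal-shift idea concrete, here is a toy Python sketch that sums phase-staggered tidal power profiles; the idealized cubic power law, the single M2 tidal constituent, and the phase offsets are illustrative assumptions, not the thesis model. The aggregate profile is flatter than any single site's output:

    import math

    PERIOD_H = 12.42  # period of the M2 tidal constituent, in hours

    def site_power(t_h, phase_h, peak_mw=1.0):
        # idealized profile: power scales with the cube of the tidal current speed
        return peak_mw * abs(math.sin(2 * math.pi * (t_h - phase_h) / PERIOD_H)) ** 3

    phases_h = [0.0, 2.07, 4.14]  # hypothetical sites staggered by one sixth of a period
    for t in range(25):
        total = sum(site_power(t, p) for p in phases_h)
        print(f"t = {t:2d} h   aggregate = {total:.2f} MW")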
Abstract:
The purpose of this study was to evaluate the determinism of the AS-Interface network and of the 3 main families of control systems that may use it, namely PLC, PC and RTOS. During the course of this study the PROFIBUS and Ethernet field-level networks were also considered, in order to ensure that they would not introduce unacceptable latencies into the overall control system. This research demonstrated that an incorrectly configured Ethernet network introduces unacceptable latencies of variable duration into the control system, so care must be exercised if the determinism of a control system is not to be compromised. This study introduces a new concept of using statistics and process capability metrics, in the form of Cpk values, to specify how suitable a control system is for a given control task. The PLC systems that were tested demonstrated extremely deterministic responses, but when a large number of iterations was introduced in the user program, the mean control system latency was much too great for an AS-i network. Thus the PLC was found to be unsuitable for an AS-i network if a large, complex user program is required. The PC systems that were tested were non-deterministic and had latencies of variable duration. These latencies became extremely exaggerated when a graphing ActiveX control was included in the control application. These PC systems also exhibited a non-normal frequency distribution of control system latencies, and as such are unsuitable for implementation with an AS-i network. The RTOS system that was tested overcame the problems identified with the PLC systems and produced an extremely deterministic response, even when a large number of iterations was introduced in the user program. The RTOS system that was tested is capable of providing a suitably deterministic control system response, even when an extremely large, complex user program is required.
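A minimal sketch of the standard Cpk calculation applied to control-loop latencies follows; the specification limits and latency samples are hypothetical, and note that Cpk presumes an approximately normal distribution, which the study found does not hold for the PC systems:

    import statistics

    def cpk(samples, lsl, usl):
        # process capability index: distance from the mean to the nearer
        # specification limit, in units of three standard deviations
        mu = statistics.mean(samples)
        sigma = statistics.stdev(samples)
        return min((usl - mu) / (3 * sigma), (mu - lsl) / (3 * sigma))

    latencies_ms = [4.9, 5.1, 5.0, 5.2, 4.8, 5.0, 5.1]  # hypothetical measured latencies
    print(round(cpk(latencies_ms, lsl=0.0, usl=10.0), 2))  # hypothetical spec limits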
Abstract:
Multi-core processing is a design philosophy that has become mainstream in scientific and engineering applications. The increasing performance and gate capacity of recent FPGA devices have permitted complex logic systems to be implemented on a single programmable device. Here we present a VHDL implementation of a multi-core processor based on the PLASMA IP core, which implements most of the MIPS I ISA, give an overview of the processor architecture, and share the execution results.
Abstract:
In IP networks, most dropped packets are recovered after the expiration of retransmission timeouts, which can result in unnecessary retransmissions and a needless reduction of the congestion window. An inappropriate retransmission timeout has a huge impact on TCP performance. In this paper we show that the CSMA/CA mechanism itself can cause TCP retransmissions. To this end we observed three wireless connections that use CSMA/CA: one with good link quality, one with poor link quality, and one in the presence of cross traffic. The measurements were performed using real devices. By tracking each transmitted packet it is possible to analyze the relation between one-way delay (OWD) and packet loss probability, as well as the cumulative distribution of the distances between OWD peaks. The distribution of OWDs and the distances between OWD peaks are the most important parameters for tuning the TCP retransmission timeout on CSMA/CA networks. This provides a new perspective on enhancing TCP performance by investigating the dynamic relation between one-way delay and packet loss ratio as a function of link quality.
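For context, the timer in question is the standard adaptive retransmission timeout; a minimal sketch of the RFC 6298-style estimator follows, with hypothetical RTT samples whose spikes mimic the CSMA/CA-induced delay peaks discussed above (this is background, not the paper's tuning method):

    ALPHA, BETA, K = 1 / 8, 1 / 4, 4  # RFC 6298 constants

    def update_rto(srtt, rttvar, sample):
        # update the smoothed RTT and its variation, then derive the timeout
        rttvar = (1 - BETA) * rttvar + BETA * abs(srtt - sample)
        srtt = (1 - ALPHA) * srtt + ALPHA * sample
        return srtt, rttvar, srtt + K * rttvar

    srtt, rttvar = 0.100, 0.050  # initial estimates, in seconds
    for sample in [0.090, 0.180, 0.095, 0.400]:  # hypothetical RTTs with delay peaks
        srtt, rttvar, rto = update_rto(srtt, rttvar, sample)
        print(f"sample = {sample:.3f} s   rto = {rto:.3f} s")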
Abstract:
(1) In the period 1965/77 fertilizer consumption in Brazil increased nearly fifteenfold, from circa 200,000 tons of N + P2O5 + K2O to 3 million tons. During the fifteen years extending from 1950 to 1964, usage of the primary macronutrients was raised by a factor of only 2. (2) Several explanations are given for this remarkable increase, namely: an experimental background which supplied data for recommendations of rates, time and type of application; a convenient governmental policy for minimum prices and rural credit; the capacity of the industry to meet the demand of the fertilizer market; and an adequate mechanism for diffusing the practice of fertilizer use to farmers. (3) The extension work, which has caused a permanent change in the attitude towards fertilization, was carried out in the traditional way by salesmen supported by a technical staff, as well as by agronomists of the official services. (4) Two new programs were started and conducted in a rather short time, both putting emphasis on the relatively new technology of fertilizer use. (5) The first program, conducted in the southern part of the country, extended lab and greenhouse work, supplemented by a few field trials, to small landowners - the so-called "operação tatú" (operation armadillo). (6) The second program, covering a larger problem area in the Northeast and in Central Brazil, began directly in the field with thousands of demonstrations and simple experiments with the participation of local people, whose involvement was essential for the success of the initiative; in this case the official extension services, both foreign and national sources of funds, and universities participated under the leadership of the Brazilian Association for the Diffusion of Fertilizers (ANDA). (7) It is felt that the experience gained in Brazil could be useful to other countries under similar conditions.
Abstract:
Lean meat percentage (LMP) is the criterion for carcass classification and must be measured on line objectively. The aim of this work was to compare the root mean square error of prediction (RMSEP) of the LMP measured with the following devices: Fat-O-Meat'er (FOM), UltraFOM (UFOM), AUTOFOM and VCS2000. To this end the same 99 carcasses were measured with all 4 apparatus and dissected according to the European Reference Method. Moreover, a subsample of the carcasses (n=77) was fully scanned with an X-ray Computed Tomography (CT) device. The RMSEP calculated with leave-one-out cross-validation was lower for FOM and AUTOFOM (1.8% and 1.9%, respectively) and higher for UFOM and VCS2000 (2.3% for both devices). The error obtained with CT was the lowest (0.96%), in accordance with previous results, but CT cannot be used on line. It can be concluded that FOM and AUTOFOM presented better accuracy than UFOM and VCS2000.
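A minimal sketch of RMSEP under leave-one-out cross-validation follows, using a simple one-variable linear predictor of LMP; the data values, the single predictor, and the linear model are illustrative assumptions, not the paper's calibration:

    import math

    def rmsep_loo(x, y):
        # leave one carcass out, fit a least-squares line on the rest,
        # predict the held-out value, and accumulate the prediction error
        errors = []
        for i in range(len(x)):
            xs = [v for j, v in enumerate(x) if j != i]
            ys = [v for j, v in enumerate(y) if j != i]
            mx, my = sum(xs) / len(xs), sum(ys) / len(ys)
            b = sum((a - mx) * (c - my) for a, c in zip(xs, ys)) / \
                sum((a - mx) ** 2 for a in xs)
            a0 = my - b * mx
            errors.append(y[i] - (a0 + b * x[i]))
        return math.sqrt(sum(e * e for e in errors) / len(errors))

    fat_mm = [12.0, 15.5, 10.2, 18.1, 14.0, 11.3]    # hypothetical probe readings
    lmp_pct = [61.0, 57.5, 63.2, 54.0, 58.9, 62.1]   # hypothetical dissected LMP
    print(round(rmsep_loo(fat_mm, lmp_pct), 2))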
Abstract:
SUMMARY: Eukaryotic DNA interacts with nuclear proteins via non-covalent ionic interactions. Proteins can recognize specific nucleotide sequences based on steric interactions with the DNA, and these specific protein-DNA interactions are the basis of many nuclear processes, e.g. gene transcription, chromosomal replication, and recombination. A new technology termed ChIP-Seq has recently been developed for the analysis of protein-DNA interactions on a whole-genome scale; it is based on immunoprecipitation of chromatin followed by high-throughput DNA sequencing. ChIP-Seq is a novel technique with great potential to replace older techniques for mapping protein-DNA interactions. In this thesis, we bring some new insights into ChIP-Seq data analysis. First, we point out some common and so far unknown artifacts of the method. The distribution of sequence tags in the genome is not uniform, and we found extreme hot-spots of tag accumulation over specific loci in the human and mouse genomes. These artifactual sequence tag accumulations will create false peaks in every ChIP-Seq dataset, and we propose different filtering methods to reduce the number of false positives. Next, we propose random sampling as a powerful analytical tool in ChIP-Seq data analysis that can be used to infer biological knowledge from massive ChIP-Seq datasets. We created an unbiased random sampling algorithm and used this methodology to reveal some important biological properties of Nuclear Factor I (NFI) DNA binding proteins. Finally, by analyzing the ChIP-Seq data in detail, we revealed that Nuclear Factor I transcription factors mainly act as activators of transcription, and that they are associated with specific chromatin modifications that are markers of open chromatin. We speculate that NFI factors only interact with DNA wrapped around the nucleosome. We also found multiple loci that indicate possible chromatin barrier activity of NFI proteins, which could suggest the use of NFI binding sequences as chromatin insulators in biotechnology applications.
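A minimal sketch of the two ideas in this abstract, hot-spot filtering and unbiased random subsampling of tags, follows; the binning scheme, the median-fold threshold, and the data are illustrative assumptions, not the thesis algorithms:

    import random

    def filter_hotspots(bin_counts, max_fold=10):
        # drop genomic bins whose tag count exceeds max_fold times the median,
        # a simple stand-in for the hot-spot filtering discussed above
        counts = sorted(bin_counts.values())
        median = counts[len(counts) // 2]
        return {b: c for b, c in bin_counts.items() if c <= max_fold * median}

    def sample_tags(tags, n, seed=0):
        # unbiased random subsample of n sequence tags
        return random.Random(seed).sample(tags, n)

    bins = {"chr1:0-1000": 12, "chr1:1000-2000": 9, "chr1:2000-3000": 950}  # hypothetical
    print(filter_hotspots(bins))           # the 950-tag hot-spot bin is removed
    print(sample_tags(list(range(100)), 5))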
Abstract:
OBJECTIVE: To reach a consensus on the clinical use of ambulatory blood pressure monitoring (ABPM). METHODS: A task force on the clinical use of ABPM wrote this overview in preparation for the Seventh International Consensus Conference (23-25 September 1999, Leuven, Belgium). This article was amended to account for opinions aired at the conference and to reflect the common ground reached in the discussions. POINTS OF CONSENSUS: The Riva-Rocci/Korotkoff technique, although prone to error, is easy and cheap to perform and remains the standard procedure for measuring blood pressure worldwide. ABPM should be performed only with properly validated devices, as an accessory to conventional measurement of blood pressure. Ambulatory recording of blood pressure requires considerable investment in equipment and training, and its use for screening purposes cannot be recommended. ABPM is most useful for identifying patients with white-coat hypertension (WCH), also known as isolated clinic hypertension, which is arbitrarily defined as a clinic blood pressure of more than 140 mmHg systolic or 90 mmHg diastolic in a patient with a daytime ambulatory blood pressure below 135 mmHg systolic and 85 mmHg diastolic. Some experts consider a daytime blood pressure below 130 mmHg systolic and 80 mmHg diastolic optimal. Whether WCH predisposes subjects to sustained hypertension remains debated. However, outcome is better correlated with the ambulatory blood pressure than with the conventional blood pressure. Antihypertensive drugs lower the clinic blood pressure in patients with WCH but not the ambulatory blood pressure, and also do not improve prognosis. Nevertheless, WCH should not be left unattended. If no previous cardiovascular complications are present, treatment could be limited to follow-up and hygienic measures, which should also account for risk factors other than hypertension. ABPM is superior to conventional measurement of blood pressure not only for selecting patients for antihypertensive drug treatment but also for assessing the effects of both non-pharmacological and pharmacological therapy. The ambulatory blood pressure should be reduced by treatment to below the thresholds applied for diagnosing sustained hypertension. ABPM makes the diagnosis and treatment of nocturnal hypertension possible and is especially indicated for patients with borderline hypertension, the elderly, pregnant women, patients with treatment-resistant hypertension and patients with symptoms suggestive of hypotension. In centres with sufficient financial resources, ABPM could become part of the routine assessment of patients with clinic hypertension. For patients with WCH, it should be repeated at 6-monthly or annual intervals. Variation of blood pressure throughout the day can be monitored only by ABPM, but several of its advantages can also be obtained by self-measurement of blood pressure, a less expensive method that is probably better suited to primary practice and use in developing countries. CONCLUSIONS: ABPM or equivalent methods for tracing the white-coat effect should become part of the routine diagnostic and therapeutic procedures applied to treated and untreated patients with elevated clinic blood pressures. Results of long-term outcome trials should better establish the advantage of further integrating ABPM, as an accessory to conventional sphygmomanometry, into the routine care of hypertensive patients and should provide more definite information on its long-term cost-effectiveness.
Because such trials are not likely to be funded by the pharmaceutical industry, governments and health insurance companies should take responsibility in this regard.
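The WCH definition quoted above is an explicit decision rule; a minimal sketch of it in code follows, using only the arbitrary consensus thresholds stated in the abstract, with a hypothetical reading:

    def is_white_coat_hypertension(clinic_sys, clinic_dia, amb_sys, amb_dia):
        # clinic BP above 140/90 mmHg with daytime ambulatory BP below 135/85 mmHg
        clinic_high = clinic_sys > 140 or clinic_dia > 90
        ambulatory_normal = amb_sys < 135 and amb_dia < 85
        return clinic_high and ambulatory_normal

    print(is_white_coat_hypertension(152, 94, 128, 80))  # hypothetical reading: True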
Abstract:
Ventricular assist devices (VADs) are used in the treatment of terminal heart failure or as a bridge to transplantation. We created a biVAD using artificial muscles (AMs) that supports both ventricles at the same time. We developed a test bench (TB) as an in vitro evaluation system to enable the measurement of its performance. The biVAD exerts different pressures on the left and right ventricles, as the heart does physiologically. A heart model based on a child's heart was constructed in silicone and fitted with the biVAD. Two water-filled pipettes, each topped with an ultrasonic sensor and attached to one ventricle, reproduced the preload and afterload of each ventricle through real-time measurement of the fluid height, which varies proportionally to the exerted pressure. LabVIEW software extrapolated the displaced volume and the pressure generated by each side of our biVAD. The development of a standardized protocol permitted the validation of the TB for in vitro evaluation, the measurement of the performance of the AM biVAD described herein, and the reproducibility of the data.
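A minimal sketch of the physics the test bench relies on, converting the ultrasonically measured fluid height in a pipette into pressure and displaced volume, follows; the pipette cross-section and heights are hypothetical values, not the LabVIEW implementation:

    RHO = 1000.0    # density of water, kg/m^3
    G = 9.81        # gravitational acceleration, m/s^2
    AREA_M2 = 1e-4  # hypothetical pipette cross-section, m^2

    def pressure_pa(height_m):
        # hydrostatic pressure of the water column: P = rho * g * h
        return RHO * G * height_m

    def displaced_volume_m3(delta_height_m):
        # volume pushed into the pipette by the ventricle: V = A * delta_h
        return AREA_M2 * delta_height_m

    h0, h1 = 0.10, 0.16  # hypothetical fluid heights before and after a stroke, m
    print(f"pressure: {pressure_pa(h1):.0f} Pa")
    print(f"stroke volume: {displaced_volume_m3(h1 - h0) * 1e6:.1f} mL")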