947 results for Low-Power Image Sensors
Abstract:
Long-term electrocardiogram (ECG) signals can suffer from substantial baseline disturbances during physical activity. Motion artifacts in particular are more pronounced with the dry surface or esophageal electrodes that are dedicated to prolonged ECG recording. In this paper we present a method called baseline wander tracking (BWT) that tracks and rejects strong baseline disturbances and avoids concurrent saturation of the analog front-end. The proposed algorithm shifts the baseline level of the ECG signal to the middle of the dynamic input range. Because these fast offset shifts produce much steeper signal portions than the normal ECG waves, the true ECG signal can be reconstructed offline and filtered using computationally intensive algorithms. Based on Monte Carlo simulations, we observed reconstruction errors caused mainly by the non-linearity of the DAC. However, when a synthetic ECG signal was used, the signal-to-error ratio of the BWT was higher than that of an analog front-end featuring a dynamic input range above 15 mV. The BWT is additionally able to suppress (electrode) offset potentials without introducing long transients. Owing to its structural simplicity, memory efficiency and DC coupling capability, the BWT is well suited to the high integration required in long-term, low-power ECG recording systems.
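The tracking idea can be sketched in a few lines: whenever the DC-coupled signal drifts toward the edge of the dynamic input range, apply a discrete offset step back toward mid-range and log it, so the true signal can be recovered offline by adding the logged offsets back. This is a minimal sketch with assumed thresholds and step sizes, not the paper's front-end implementation.

```python
import numpy as np

def bwt_track(signal, full_scale=1.0, margin=0.25, step=0.5):
    """Baseline wander tracking sketch: re-centre the baseline whenever the
    shifted sample leaves the safe band, and log the applied offset so the
    original signal can be reconstructed offline. Parameters are illustrative."""
    offset = 0.0
    shifted, offsets = [], []
    for x in signal:
        y = x - offset
        if abs(y) > full_scale * (1 - margin):
            # Fast offset shift back toward mid-range (much steeper than
            # any ECG wave, so it is identifiable offline).
            offset += np.sign(y) * step * full_scale
            y = x - offset
        shifted.append(y)
        offsets.append(offset)
    return np.array(shifted), np.array(offsets)

def bwt_reconstruct(shifted, offsets):
    # Offline reconstruction: add the logged offsets back.
    return shifted + offsets
```

Feeding a slowly drifting signal through `bwt_track` keeps the tracked output inside the input range while the reconstruction recovers the drifting original exactly.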
Abstract:
Living at high altitude is one of the most difficult challenges that humans had to cope with during their evolution. Whereas several genomic studies have revealed some of the genetic bases of adaptations in Tibetan, Andean, and Ethiopian populations, relatively little evidence of convergent evolution to altitude in different continents has accumulated. This lack of evidence can be due to truly different evolutionary responses, but it can also be due to the low power of former studies that have mainly focused on populations from a single geographical region or performed separate analyses on multiple pairs of populations to avoid problems linked to shared histories between some populations. We introduce here a hierarchical Bayesian method to detect local adaptation that can deal with complex demographic histories. Our method can identify selection occurring at different scales, as well as convergent adaptation in different regions. We apply our approach to the analysis of a large SNP data set from low- and high-altitude human populations from America and Asia. The simultaneous analysis of these two geographic areas allows us to identify several candidate genome regions for altitudinal selection, and we show that convergent evolution among continents has been quite common. In addition to identifying several genes and biological processes involved in high-altitude adaptation, we identify two specific biological pathways that could have evolved in both continents to counter toxic effects induced by hypoxia.
Abstract:
Genetic anticipation is defined as a decrease in the age of onset, or an increase in severity, as a disorder is transmitted through subsequent generations. Anticipation has been noted in the literature for over a century. Recently, anticipation in several diseases, including Huntington's Disease, Myotonic Dystrophy and Fragile X Syndrome, was shown to be caused by expansion of triplet repeats. Anticipation effects have also been observed in numerous mental disorders (e.g. Schizophrenia, Bipolar Disorder), cancers (Li-Fraumeni Syndrome, Leukemia) and other complex diseases. Several statistical methods have been applied to determine whether anticipation is a true phenomenon in a particular disorder, including standard statistical tests and newly developed affected-parent/affected-child pair methods. These methods have been shown to be inappropriate for assessing anticipation for a variety of reasons, including familial correlation and low power. We have therefore developed family-based likelihood modeling approaches that model the underlying transmission of the disease gene and the penetrance function, and hence detect anticipation. These methods can be applied in extended families, improving the power to detect anticipation compared with existing methods based only upon parents and children. The first proposed method is based on the regressive logistic hazard model and models anticipation through a generational covariate. The second method allows alleles to mutate as they are transmitted from parents to offspring, and is appropriate for modeling the known triplet-repeat diseases, in which the disease alleles can become more deleterious as they are transmitted across generations. To evaluate the new methods, we performed extensive simulation studies on data simulated under different conditions, assessing the effectiveness of the algorithms in detecting genetic anticipation.
Analysis by the first method yielded empirical power greater than 87%, based on the 5% type I error critical value identified in each simulation, depending on the method of data generation and the current-age criteria. Analysis by the second method was not possible due to the current formulation of the software. The application of this method to Huntington's Disease and Li-Fraumeni Syndrome data sets revealed evidence for a generation effect in both cases.
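For intuition, anticipation is simply a negative trend in onset age across generations. The toy check below simulates such a trend and recovers it with a naive regression; it only illustrates the phenomenon, not the family-based likelihood methods the dissertation develops (which exist precisely because naive trend tests are distorted by familial correlation and ascertainment). All numbers are invented.

```python
import numpy as np

rng = np.random.default_rng(1)
gen = np.repeat([1, 2, 3], 40)                            # three generations
onset = 60 - 5 * (gen - 1) + rng.normal(0, 6, gen.size)   # onset 5 yr earlier/gen

# Naive check: least-squares slope of onset age against generation number.
slope = np.polyfit(gen, onset, 1)[0]
print(f"estimated anticipation: {slope:.1f} years/generation")
```

A clearly negative slope is what anticipation looks like in aggregate; the dissertation's generational-covariate hazard model plays the analogous role inside a full family likelihood.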
Abstract:
Next-generation DNA sequencing platforms can effectively detect the entire spectrum of genomic variation and are emerging as a major tool for systematic exploration of the universe of variants and interactions across the genome. However, the data produced by next-generation sequencing technologies suffer from three basic problems: sequence errors, assembly errors, and missing data. Current statistical methods for genetic analysis are well suited to detecting the association of common variants, but are less suitable for rare variants. This poses a great challenge for sequence-based genetic studies of complex diseases. This dissertation used the genome continuum model as a general principle, and stochastic calculus and functional data analysis as tools, to develop novel and powerful statistical methods for the next generation of association studies of both qualitative and quantitative traits in the context of sequencing data, ultimately shifting the paradigm of association analysis from the current locus-by-locus analysis to collectively analyzing genome regions. In this project, functional principal component (FPC) methods coupled with high-dimensional data-reduction techniques were used to develop novel and powerful methods for testing the association of the entire spectrum of genetic variation within a segment of the genome or a gene, regardless of whether the variants are common or rare. Classical quantitative genetics suffers from high type I error rates and low power for rare variants. To overcome these limitations for resequencing data, this project used functional linear models with scalar response to develop statistics for identifying quantitative trait loci (QTLs) for both common and rare variants. To illustrate their application, the functional linear models were applied to five quantitative traits in the Framingham heart studies.
This project also proposed a novel concept of gene-gene co-association, in which a gene or a genomic region is taken as the unit of association analysis, and used stochastic calculus to develop a unified framework for testing the association of multiple genes or genomic regions for both common and rare alleles. The proposed methods were applied to gene-gene co-association analysis of psoriasis in two independent GWAS datasets, leading to the discovery of networks significantly associated with psoriasis.
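The region-level idea can be illustrated with the ordinary multivariate analogue of the FPC approach: reduce all variants in a region to a few principal components and measure how much trait variance those components explain. Everything below (the simulation, the component count, the use of plain R-squared instead of a formal test statistic) is an invented illustration, not the dissertation's functional methodology.

```python
import numpy as np

def region_pc_r2(genotypes, trait, n_pc=10):
    """Score a genome region by regressing the trait on the region's top
    principal-component scores and returning the explained R^2."""
    G = genotypes - genotypes.mean(axis=0)
    U, S, _ = np.linalg.svd(G, full_matrices=False)
    X = np.column_stack([np.ones(len(trait)), U[:, :n_pc] * S[:n_pc]])
    beta, *_ = np.linalg.lstsq(X, trait, rcond=None)
    resid = trait - X @ beta
    return 1.0 - resid.var() / trait.var()

# Invented example: a region containing two causal variants versus a null region.
rng = np.random.default_rng(3)
causal = rng.binomial(2, 0.3, (300, 30)).astype(float)
null = rng.binomial(2, 0.3, (300, 30)).astype(float)
trait = causal[:, 0] + causal[:, 1] + rng.normal(0, 0.5, 300)
```

The causal region scores visibly higher than the null region, which is the point of region-level testing: the signal is picked up without knowing which individual variants (common or rare) carry it.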
Abstract:
The performance of the Hosmer-Lemeshow global goodness-of-fit statistic for logistic regression models was explored in a wide variety of conditions not previously fully investigated. Computer simulations, each consisting of 500 regression models, were run to assess the statistic in 23 different situations. The factors varied among the situations included the number of observations used in each regression, the number of covariates, the degree of dependence among the covariates, the combination of continuous and discrete variables, and the generation of the values of the dependent variable for model fit or lack of fit. The study found that the $\hat{C}_g^*$ statistic was adequate in tests of significance for most situations. However, when testing data that deviate from a logistic model, the statistic has low power to detect such deviation. Although grouping of the estimated probabilities into from 8 to 30 quantiles was studied, the deciles-of-risk approach was generally sufficient. Subdividing the estimated probabilities into more than 10 quantiles when there are many covariates in the model is not necessary, despite theoretical reasons suggesting otherwise. Because it does not follow a $\chi^2$ distribution, the statistic is not recommended for use in models containing only categorical variables with a limited number of covariate patterns. The statistic performed adequately when there were at least 10 observations per quantile. Large numbers of observations per quantile did not lead to the incorrect conclusion that the model did not fit the data when it actually did. However, the statistic failed to detect lack of fit when it existed and should be supplemented with further tests for the influence of individual observations.
Careful examination of the parameter estimates is also essential, since the statistic did not perform as desired when there was moderate to severe collinearity among covariates. Two methods studied for handling tied values of the estimated probabilities made only a slight difference in conclusions about model fit. Neither method split observations with identical probabilities into different quantiles. Approaches that create equal-sized groups by separating ties should be avoided.
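The statistic itself is easy to state: group the fitted probabilities into quantiles ("deciles of risk" for 10 groups) and compare observed versus expected event counts in each group. A minimal numpy sketch follows, with simple equal-count grouping; note that it does not implement the tie handling the study recommends, which would keep identical probabilities in one group.

```python
import numpy as np

def hosmer_lemeshow(y, p, groups=10):
    """Hosmer-Lemeshow goodness-of-fit statistic with quantile grouping.
    For a fitted logistic model it is compared against a chi-square
    distribution with groups - 2 degrees of freedom."""
    order = np.argsort(p)
    y, p = np.asarray(y)[order], np.asarray(p)[order]
    stat = 0.0
    for idx in np.array_split(np.arange(len(p)), groups):
        n, obs, exp = len(idx), y[idx].sum(), p[idx].sum()
        pbar = exp / n
        stat += (obs - exp) ** 2 / (n * pbar * (1 - pbar))
    return stat
```

With a correctly specified model the statistic stays modest; gross lack of fit inflates it, though, as the abstract notes, its power against mild deviations from the logistic model is low.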
Abstract:
Global change leads to a multitude of simultaneous modifications in the marine realm, among which shoaling of the upper mixed layer, leading to enhanced surface-layer light intensities, and increased carbon dioxide (CO2) concentration are some of the most critical environmental alterations for phytoplankton. In this study, we investigated the responses of growth, photosynthetic carbon fixation and calcification of the coccolithophore Gephyrocapsa oceanica to elevated pCO2 (51 Pa, 105 Pa, and 152 Pa; 1 Pa ~ 10 µatm) at a variety of light intensities (50-800 µmol photons/m²/s). By fitting the light response curve, our results showed that rising pCO2 reduced the maximum rates of growth, photosynthetic carbon fixation and calcification. Increasing light intensity enhanced the sensitivity of these rate responses to pCO2 and shifted the pCO2 optima toward lower levels. Combining the results of this study with those of a previous study (Sett et al. 2014) on the same strain indicates that both limiting low pCO2 and inhibiting high pCO2 levels (this study) induce similar responses, reducing growth, carbon fixation and calcification rates of G. oceanica. At limiting low light intensities the pCO2 optima for maximum growth, carbon fixation and calcification are shifted toward higher levels. Interacting effects of simultaneously occurring environmental changes, such as increasing light intensity and ocean acidification, need to be considered when trying to assess metabolic rates of marine phytoplankton under future ocean scenarios.
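Extracting maximum rates by fitting a light-response curve can be sketched with a common saturating form, r(I) = r_max * tanh(alpha * I / r_max). The functional form, the grid-search fit, and all numbers below are illustrative assumptions, not the study's actual parameterization (which also resolves the pCO2 dependence of the fitted maxima).

```python
import numpy as np

def fit_light_response(I, r):
    """Grid-search fit of r(I) = r_max * tanh(alpha * I / r_max), where
    r_max is the light-saturated maximum rate and alpha the initial slope."""
    best = (np.inf, None, None)
    for r_max in np.linspace(0.1, 3.0, 120):
        for alpha in np.linspace(0.001, 0.1, 120):
            sse = np.sum((r - r_max * np.tanh(alpha * I / r_max)) ** 2)
            if sse < best[0]:
                best = (sse, r_max, alpha)
    return best[1], best[2]

# Synthetic 'measurements' spanning the study's 50-800 umol photons/m^2/s range.
I = np.linspace(50, 800, 12)
r = 1.2 * np.tanh(0.02 * I / 1.2)
```

Repeating such a fit at each pCO2 level yields the set of maximum rates whose decline with rising pCO2 the abstract describes.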
Abstract:
This work describes the probabilistic modelling of a Bayesian-based mechanism that improves the location estimates of an already deployed location system by fusing its outputs with low-cost binary sensors. This mechanism takes advantage of the localization capabilities of the different technologies usually present in smart-environment deployments. The performance of the proposed algorithm over a real sensor deployment is evaluated using simulated and real experimental data.
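The fusion step is a direct Bayes update: the deployed system's estimate plays the role of the prior, and the binary sensor contributes a detection likelihood. A one-dimensional grid sketch with invented detection and false-alarm probabilities:

```python
import numpy as np

# Prior: the deployed location system's estimate, modelled here as a
# Gaussian over a 1-D position grid (an assumption for illustration).
x = np.linspace(0.0, 10.0, 201)                  # positions in metres
prior = np.exp(-0.5 * ((x - 4.0) / 1.5) ** 2)
prior /= prior.sum()

# Likelihood: a binary presence sensor at 6 m with ~2 m range fired.
# 0.9 detection probability in range, 0.05 false-alarm rate outside (assumed).
p_detect = np.where(np.abs(x - 6.0) < 2.0, 0.9, 0.05)

# Bayes' rule: posterior is the normalized product.
posterior = prior * p_detect
posterior /= posterior.sum()

fused = (x * posterior).sum()    # posterior-mean location estimate
print(f"fused estimate: {fused:.2f} m")
```

The posterior mean moves from the prior's 4 m toward the sensor's coverage region, which is exactly the refinement the abstract describes; a real deployment would repeat this update for every binary sensor reading.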
Abstract:
We describe a compact lightweight impulse radar for radio-echo sounding of subsurface structures designed specifically for glaciological applications. The radar operates at frequencies between 10 and 75 MHz. Its main advantages are that it has a high signal-to-noise ratio and a corresponding wide dynamic range of 132 dB, due mainly to its ability to perform real-time stacking (up to 4096 traces) as well as to the high transmitted power (peak voltage 2800 V). The maximum recording time window, 40 µs at 100 MHz sampling frequency, results in possible radar returns from as deep as 3300 m. It is a versatile radar, suitable for different geophysical measurements (common-offset profiling, common midpoint, transillumination, etc.) and for different profiling set-ups, such as a snowmobile and sledge convoy or carried in a backpack and operated by a single person. Its low power consumption (6.6 W for the transmitter and 7.5 W for the receiver) allows the system to operate under battery power for more than 7 hours, with a total weight of less than 9 kg for all equipment, antennas and batteries.
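The dynamic-range figure rests largely on stacking: averaging N traces with independent noise raises the power SNR by a factor of N, i.e. 10*log10(4096) ≈ 36 dB for the full stack. A quick numerical check with a synthetic echo (all signal parameters invented):

```python
import numpy as np

rng = np.random.default_rng(42)
t = np.linspace(0, 1, 500)
echo = np.sin(2 * np.pi * 25 * t) * np.exp(-5 * t)    # synthetic radar return
traces = echo + rng.normal(0, 2.0, (4096, t.size))    # 4096 noisy repeats
stacked = traces.mean(axis=0)                         # real-time stack

def snr_db(x, truth):
    noise = x - truth
    return 10 * np.log10((truth ** 2).mean() / (noise ** 2).mean())

gain = snr_db(stacked, echo) - snr_db(traces[0], echo)
print(f"stacking gain: {gain:.1f} dB")  # close to 10*log10(4096) ~ 36 dB
```

The gain is close to the theoretical 36 dB, which is how a low-power transmitter can still deliver a 132 dB dynamic range.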
Abstract:
Modern FPGAs with run-time reconfiguration allow the implementation of complex systems that offer the flexibility of software-based solutions combined with the performance of hardware. This combination of characteristics, together with the development of new specific methodologies, makes it feasible to reach new points of the system design space, and embedded systems built on these platforms are acquiring more and more importance. However, the practical exploitation of this technique in fields that have traditionally relied on resource-restricted embedded systems is mainly limited by strict power-consumption requirements, cost, and the high dependence of DPR techniques on the specific features of the underlying device technology. In this work we tackle the previously reported problems by designing a reconfigurable platform based on the low-cost, low-power Spartan-6 FPGA family. The full process of developing the platform from scratch is detailed in the paper. In addition, the implementation of the reconfiguration mechanism, including two profiles, is reported. The first profile is a low-area, low-speed reconfiguration engine based mainly on software functions running on the embedded processor, while the other is a hardware version of the same engine implemented in the FPGA logic. This reconfiguration hardware block was originally designed for the Virtex-5 family, and its porting process is also described in this work, addressing the interoperability problem among different families.
Abstract:
There are many requirements that modern power converters should fulfill. Most of the applications where these converters are used demand smaller converters with high efficiency, improved power density and a fast dynamic response. For instance, loads like microprocessors demand aggressive current steps with very high slew rates (100 A/µs and higher); moreover, during these load steps the supply voltage of the microprocessor must be kept within tight limits in order to ensure its correct performance. Meeting these requirements is not an easy task; complex solutions such as advanced topologies (for example, multiphase converters) as well as advanced control strategies are often needed. It is also necessary to operate the converter at high switching frequencies and to use capacitors with high capacitance and low ESR. Improving the dynamic response of power converters does not rely only on the control strategy; the power topology must also be suited to enable a fast dynamic response. Moreover, in recent years, a fast dynamic response has come to mean not only handling fast load steps; output voltage steps are gaining importance as well. At least two applications that require fast voltage changes can be named. The first is low-power microprocessors: in these devices the supply voltage is changed according to the workload, and the operating frequency of the microprocessor is changed at the same time. An important reduction in voltage-dependent losses can be achieved with such changes. This technique is known as Dynamic Voltage Scaling (DVS). Another application where important energy savings can be achieved by changing the supply voltage is radio-frequency power amplifiers. For example, RF architectures based on 'Envelope Tracking' and 'Envelope Elimination and Restoration' techniques can take advantage of supply-voltage modulation and accomplish important energy savings in the power amplifier.
However, in order to achieve these efficiency improvements, a power converter with high efficiency and high enough bandwidth (hundreds of kHz or even tens of MHz) is necessary to ensure an adequate supply voltage. The main objective of this Thesis is to improve the dynamic response of DC-DC converters from the point of view of the power topology. Here the term dynamic response refers both to load steps and to voltage steps; it is also interesting to modulate the output voltage of the converter with a specific bandwidth. In order to accomplish this, the question of what limits the dynamic response of power converters must be answered. Analyzing this question leads to the conclusion that the dynamic response is limited by the power topology and, specifically, by the filter inductance of the converter, which lies in series between the input and the output. The series inductance determines the gain of the converter and provides its regulation capability. Although the energy stored in the filter inductance enables regulation and the filtering of the output voltage, it imposes the limitation that is the concern of this Thesis: the series inductance stores energy and prevents the current from changing quickly, limiting the slew rate of the current through the inductor. Different solutions have been proposed in the literature to reduce the limit imposed by the filter inductor. Many publications propose new topologies and improvements to known topologies, and complex control strategies have also been proposed with the objective of improving the dynamic response of power converters. In the proposed topologies the energy stored in the series inductor is reduced; examples are multiphase converters, Buck converters operating at very high frequency, and adding a low-impedance path in parallel with the series inductance.
Control techniques proposed in the literature focus on adjusting the output voltage as fast as the power stage allows; examples of these control techniques are hysteresis control, V² control, and minimum-time control. In some of the proposed topologies a reduction in the value of the series inductance is achieved and, with it, a reduction of the energy stored in this magnetic element; less stored energy means a faster dynamic response. However, in some cases (as in the high-frequency Buck converter) the dynamic response is improved at the cost of worsening the efficiency. In this Thesis a drastic solution is proposed: to completely eliminate the series inductance of the converter. This is a more radical solution than those proposed in the literature. If the series inductance is eliminated, the regulation capability of the converter is limited, which can make it difficult to use the topology in single-converter solutions; however, the topology is suitable for power architectures where the energy conversion is done by more than one converter. When the series inductor is eliminated, the current slew rate is no longer limited, and the dynamic response of the converter becomes independent of the switching frequency. This is the main advantage of eliminating the series inductor. The main objective is to propose an energy conversion strategy that works without a series inductance. Without a series inductance, no energy is stored between the input and the output of the converter, and the dynamic response would be instantaneous if all the devices were ideal. If the energy transfer from the input to the output of the converter were done instantaneously when a load step occurs, conceptually it would not be necessary to store energy at the output of the converter (no output capacitor COUT would be needed), and if the input source were ideal, the input capacitor CIN would not be necessary either.
This last feature (no CIN with an ideal VIN) is common to all power converters. However, when the concept is actually implemented, parasitic inductances such as the leakage inductance of the transformer and the parasitic inductance of the PCB cannot be avoided, because they are inherent to the implementation of the converter. These parasitic elements do not significantly affect the proposed concept. In this Thesis it is proposed to operate the converter without series inductance in order to improve its dynamic response; on the other hand, the continuous regulation capability of the converter is lost. It is called continuous because, as will be explained throughout the Thesis, it is indeed possible to achieve discrete regulation: a converter without a filter inductance, and therefore without energy stored in the magnetic element, is capable of producing a limited number of output voltages, and the changes between these output-voltage levels are fast. The proposed energy conversion strategy is implemented by means of a multiphase converter in which the coupling of the phases is done by discrete two-winding transformers instead of coupled inductors, since transformers are, ideally, non-energy-storing elements. This idea is the main contribution of this Thesis. The feasibility of this energy conversion strategy is first analyzed and then verified by simulation and by the implementation of experimental prototypes. Once the strategy is proved valid, different options for implementing the magnetic structure are analyzed; three different discrete transformer arrangements are studied and implemented. A converter based on this energy conversion strategy is designed with a different approach than that used for classic converters, since an additional design degree of freedom is available: the switching frequency can be chosen according to the design specifications without penalizing the dynamic response or the efficiency.
Low operating frequencies can be chosen to favor efficiency; on the other hand, high operating frequencies (MHz) can be chosen to favor the size of the converter. For this reason, a particular design procedure is proposed for the 'inductorless' conversion strategy. Finally, applications where the features of the proposed conversion strategy (high efficiency with fast dynamic response) are advantageous are proposed. One example is two-stage power architectures in which a high-efficiency converter is needed as the first stage and a second stage provides the fine regulation. Another example is RF power amplifiers, where the voltage is modulated following an envelope reference in order to save power; this application requires a high-efficiency converter capable of fast voltage steps. The main contributions of this Thesis are the following: the proposal of a conversion strategy that, ideally, stores no energy in the magnetic element; the validation and implementation of the proposed energy conversion strategy; the study of different magnetic structures based on discrete transformers for its implementation; the elaboration and validation of a design procedure; and the identification and validation of applications for the proposed strategy. It is important to remark that this work was done in collaboration with Intel. The particular features of the proposed conversion strategy open the possibility of solving the problems related to microprocessor powering in a different way. For example, the high efficiency achieved with the proposed conversion strategy makes it a good candidate for power conditioning, as the first stage in a two-stage power architecture for powering microprocessors.
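The inductor limit that motivates the Thesis can be made concrete: the current slew rate through the filter inductor is bounded by dI/dt = (Vin - Vout)/L. A quick calculation with illustrative numbers (not taken from the Thesis):

```python
# All values are illustrative assumptions, not taken from the thesis.
V_in, V_out = 3.3, 1.0      # input / output voltage, volts
L = 220e-9                  # filter inductance, henries (220 nH)

# Maximum current slew rate imposed by the series inductor.
slew_A_per_us = (V_in - V_out) / L / 1e6
print(f"max current slew rate: {slew_A_per_us:.1f} A/us")
```

With these numbers the inductor caps the slew rate at roughly 10 A/µs, an order of magnitude below the 100 A/µs load steps mentioned above, which is the gap the inductorless strategy is designed to close.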
Abstract:
Objectives: To analyze the distribution of energy deposited in a tissue when it is irradiated with a low-power laser, and to study the minimum characteristics that manufacturers of low-power laser therapy equipment should specify so that the dosage can be estimated. Material and methods: Monte Carlo simulation was performed to determine the absorption location of the laser energy.
Diffusion theory was used to estimate the penetration depth and the mean free path. Variation of this distribution was studied for three different skin types (Caucasian, Asian and Afro-American) and for two different anatomic locations: the palm and the volar forearm. The information given by several manufacturers of low-power laser therapy equipment was also analyzed. Results: Infrared (810 nm) laser radiation is mainly absorbed in a skin layer of thickness 1.9±0.2 mm for Caucasians, from 1.73±0.08 mm (volar forearm) to 1.80±0.11 mm (palm) for Asians, and from 1.25±0.09 mm (volar forearm) to 1.65±0.2 mm (palm) for Afro-Americans. The light mean free path is below 0.69±0.09 mm in all cases. The laser-beam characteristics (beam shape, energy distribution over a transverse section, divergence, incidence angle, etc.) are not usually specified by the manufacturers; only the beam size (ranging from 0.08 to 1 cm²) is given in some cases. Discussion and conclusions: Depending on the low-power laser therapy equipment, the patient, and the anatomic area to be treated, the staff should adapt the recommended dosage for each individual case.
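The Monte Carlo estimate of where the energy is deposited can be sketched at its simplest: sample photon interaction depths from the Beer-Lambert law using an effective attenuation coefficient. The coefficient below is an assumed round number, not the paper's fitted skin optics; with it, about 90% of the energy lands within roughly 1.9 mm, the same order of magnitude as the thicknesses reported above.

```python
import numpy as np

# Assumed effective attenuation coefficient for 810 nm light in skin (1/mm).
# Illustrative only -- the real value depends on skin type and site.
rng = np.random.default_rng(7)
mu_eff = 1.2

# Beer-Lambert sampling: interaction depths are exponentially distributed.
depths = rng.exponential(1 / mu_eff, 100_000)   # absorption depths in mm

penetration = depths.mean()            # Monte Carlo estimate of 1/mu_eff
within = np.quantile(depths, 0.9)      # depth containing 90% of the energy
print(f"penetration depth ~ {penetration:.2f} mm, 90% within {within:.2f} mm")
```

A full simulation like the study's would also track scattering and skin-layer boundaries, but the exponential sampling above is the core of the dose-deposition estimate.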
Abstract:
Respiratory motion is a major source of reduced quality in positron emission tomography (PET). In order to minimize its effects, the use of respiratory synchronized acquisitions, leading to gated frames, has been suggested. Such frames, however, are of low signal-to-noise ratio (SNR) as they contain reduced statistics. Super-resolution (SR) techniques make use of the motion in a sequence of images in order to improve their quality. They aim at enhancing a low-resolution image belonging to a sequence of images representing different views of the same scene. In this work, a maximum a posteriori (MAP) super-resolution algorithm has been implemented and applied to respiratory gated PET images for motion compensation. An edge preserving Huber regularization term was used to ensure convergence. Motion fields were recovered using a B-spline based elastic registration algorithm. The performance of the SR algorithm was evaluated through the use of both simulated and clinical datasets by assessing image SNR, as well as the contrast, position and extent of the different lesions. Results were compared to summing the registered synchronized frames on both simulated and clinical datasets. The super-resolution image had higher SNR (by a factor of over 4 on average) and lesion contrast (by a factor of 2) than the single respiratory synchronized frame using the same reconstruction matrix size. In comparison to the motion corrected or the motion free images a similar SNR was obtained, while improvements of up to 20% in the recovered lesion size and contrast were measured. Finally, the recovered lesion locations on the SR images were systematically closer to the true simulated lesion positions. These observations concerning the SNR, lesion contrast and size were confirmed on two clinical datasets included in the study. 
In conclusion, the use of SR techniques applied to respiratory-motion-synchronized images leads to motion compensation combined with improved image SNR and contrast, without any increase in the overall acquisition time.
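The edge-preserving behaviour of the Huber regularizer used in the MAP objective comes from its hybrid shape: quadratic for small neighbour differences (smoothing noise) and linear for large ones (so true edges are not over-penalized). A sketch with an illustrative threshold:

```python
import numpy as np

def huber(r, delta=1.0):
    """Huber penalty: 0.5*r^2 for |r| <= delta, linear beyond.
    delta is an illustrative threshold, not the study's setting."""
    a = np.abs(r)
    return np.where(a <= delta, 0.5 * r * r, delta * (a - 0.5 * delta))
```

In the MAP super-resolution objective this penalty is summed over differences between neighbouring voxels: noise produces small differences that are smoothed quadratically, while lesion boundaries produce large differences that incur only a linear cost and are therefore preserved.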
Abstract:
Wireless sensor networks are posed as the new communication paradigm in which small, low-complexity, low-power devices are preferred over costly centralized systems. The spectrum of potential applications of sensor networks is very wide, including monitoring, surveillance, and localization, among others. Localization is a key application in sensor networks, and the use of simple, efficient, and distributed algorithms is of paramount practical importance. Combining convex optimization tools with consensus algorithms, we propose a distributed localization algorithm for scenarios where received-signal-strength-indicator (RSSI) readings are used. We approach the localization problem by formulating an alternative problem that uses distance estimates locally computed at each node. The formulated problem is solved in a relaxed version using the semidefinite relaxation technique. Conditions under which the relaxed problem yields the same solution as the original problem are given, and a distributed consensus-based implementation of the algorithm is proposed, based on an augmented Lagrangian approach and primal-dual decomposition methods. Although suboptimal, the proposed approach is very suitable for implementation in real sensor networks: it is scalable, robust against node failures, and requires only local communication among neighboring nodes. Simulation results show that running an additional local search around the found solution can yield performance close to the maximum-likelihood estimate.
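The local distance estimates the algorithm consumes are typically obtained by inverting a log-distance path-loss model, RSSI = P0 - 10*n*log10(d/d0). The calibration constants below are illustrative assumptions, not values from the paper:

```python
def rssi_to_distance(rssi_dbm, p0=-40.0, n=2.5, d0=1.0):
    """Invert RSSI = p0 - 10*n*log10(d/d0) for the distance d (metres).
    p0: RSSI at reference distance d0 (dBm); n: path-loss exponent.
    Both are assumed calibration values for illustration."""
    return d0 * 10 ** ((p0 - rssi_dbm) / (10 * n))
```

For example, with these constants a -65 dBm reading maps to a 10 m range estimate; each node converts its neighbours' RSSI readings this way before the consensus-based localization step.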
Abstract:
LEDs are replacing fluorescent and incandescent bulbs as illumination sources thanks to their low power consumption and long lifetime. Visible Light Communications (VLC) makes use of the LEDs' short switching times to transmit information. Although the LEDs' switching speed supports data rates around the Mbps range, higher speeds (hundreds of Mbps) can be reached by using high-bandwidth-efficiency modulation techniques. However, the use of these techniques requires a more complex driver, which drastically increases its power consumption. In this work an energy-efficiency analysis of the different VLC modulation techniques and drivers is presented. In addition, the design of new VLC driver schemes is described.
Abstract:
In this paper, we present the design and implementation of a prototype Smart Parking Services system based on Wireless Sensor Networks (WSNs) that allows vehicle drivers to effectively find free parking spaces. The proposed scheme consists of wireless sensor networks, an embedded web server, a central web server, and a mobile phone application. In the system, low-cost wireless sensor network modules are deployed so that each parking slot is equipped with one sensor node. The state of the parking slot is detected by the sensor node and reported periodically to the embedded web server via the deployed wireless sensor network. This information is sent to the central web server over Wi-Fi networks in real time, so that vehicle drivers can find vacant parking slots using standard mobile devices.