949 results for Sparse arrays
Abstract:
Our objective was to validate a new device dedicated to measuring the light disturbances surrounding bright sources of light under different sources of potential variability. Twenty subjects were involved in the study. Light distortion was measured using an experimental prototype (light distortion analyzer, CEORLab, University of Minho, Portugal) comprising a panel of twenty-four LED arrays at 2 m. Sources of variability included: intrasession and intersession repeated measures, pupil size (3 versus 6 mm), defocus (+0.50) correction for the working distance, angular resolution (15 deg versus 30 deg), speed of stimulus presentation, and intensity of the central glare source. Size, shape, location, and irregularity parameters were obtained. At a low speed of stimulus presentation, changes in angular resolution did not affect the parameters measured. Results did not change with pupil size. Intensity of the central glare source significantly influenced the outcomes. Examination time was reduced by 30% when a 30 deg angular resolution was used instead of 15 deg. Measurements were fast and repeatable under the same experimental conditions. Size and shape parameters showed the highest consistency, whereas location and irregularity parameters showed lower consistency. The system was sensitive to changes in the intensity of the central glare source but not to pupil changes in this sample of healthy subjects.
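For readers unfamiliar with how such parameters are derived, the sketch below shows one plausible way to compute size, location, and irregularity metrics from the distances at which light disturbance limits are detected around the central source. The 30 deg sampling, the variable names, and the example radii are assumptions for illustration, not the prototype's actual output.

```python
import numpy as np

# Hypothetical disturbance limits: the distance (mm on the panel) at which
# peripheral LEDs stop being masked by the central glare source, sampled at
# one meridian every 30 deg. Values are invented for illustration.
angles_deg = np.arange(0, 360, 30)
radii_mm = np.array([22.0, 25.0, 24.0, 28.0, 26.0, 23.0,
                     21.0, 24.0, 27.0, 25.0, 22.0, 24.0])
theta = np.radians(angles_deg)

# Size: radius of the circle with the same mean squared extent.
size_mm = np.sqrt(np.mean(radii_mm ** 2))

# Location: centroid of the disturbance outline relative to the glare source.
x, y = radii_mm * np.cos(theta), radii_mm * np.sin(theta)
location_mm = (x.mean(), y.mean())

# Irregularity: dispersion of the limits around the equivalent circle.
irregularity_mm = radii_mm.std()

print(f"size {size_mm:.1f} mm, centre offset {location_mm}, "
      f"irregularity {irregularity_mm:.2f} mm")
```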
Abstract:
Alzheimer's disease (AD) is commonly associated with marked memory deficits; however, nonamnestic variants have been consistently described as well. Posterior cortical atrophy (PCA) is a progressive degenerative condition in which posterior regions of the brain are predominantly affected, resulting in a pattern of distinctive and marked visuospatial symptoms, such as apraxia, alexia, and spatial neglect. Despite the growing number of studies on the cognitive and neural bases of the visual variant of AD, intervention studies remain relatively sparse. Current pharmacological treatments offer modest efficacy, and there is a scarcity of complementary nonpharmacological interventions, with only two previous studies in PCA. Here we describe a highly educated 57-year-old patient diagnosed with a visual variant of AD who participated in a cognitive intervention program (comprising reality orientation, cognitive stimulation, and cognitive training exercises). Neuropsychological assessment was performed at three time points (baseline, postintervention, follow-up) and focused mainly on verbal and visual memory. Baseline neuropsychological assessment showed deficits in perceptive and visual-constructive abilities, learning and memory, and temporal orientation. After neuropsychological rehabilitation, we observed small improvements in the patient's cognitive functioning, namely in verbal memory, attention, and psychomotor abilities. This study shows evidence of small beneficial effects of cognitive intervention in PCA and is the first report of this approach with a highly educated patient in a moderate stage of the disease. Controlled studies are needed to assess the potential efficacy of cognition-focused approaches in these patients and, if relevant, to grant their availability as a complementary therapy to pharmacological treatment and visual aids.
Abstract:
Integrated master's dissertation in Biomedical Engineering (specialization area: Medical Electronics)
Abstract:
Olive oil quality grading is traditionally assessed by human sensory evaluation of positive and negative attributes (olfactory, gustatory, and final olfactory-gustatory sensations). However, it is not guaranteed that trained panelists can correctly classify monovarietal extra-virgin olive oils according to olive cultivar. In this work, the potential application of human (sensory panelists) and artificial (electronic tongue) sensory evaluation of olive oils was studied, aiming to discriminate eight single-cultivar extra-virgin olive oils. Linear discriminant, partial least squares discriminant, and sparse partial least squares discriminant analyses were evaluated. The best predictive classification was obtained using linear discriminant analysis with a simulated annealing selection algorithm. A low-level data fusion approach (18 electronic tongue signals and nine sensory attributes) enabled 100% correct leave-one-out cross-validation classification, improving on the discrimination capability of sensor profiles or sensory attributes used individually (70% and 57% leave-one-out correct classifications, respectively). Human sensory evaluation and electronic tongue analysis may therefore be used as complementary tools, allowing successful monovarietal olive oil discrimination.
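As an illustration of the classification protocol described (low-level fusion of 18 electronic-tongue signals with 9 sensory attributes, validated by leave-one-out cross-validation), here is a minimal Python sketch using scikit-learn. The data are synthetic stand-ins and the simulated-annealing variable-selection step is omitted.

```python
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import LeaveOneOut, cross_val_score

# Hypothetical fused feature matrix: 18 e-tongue signals + 9 sensory
# attributes per oil sample, with 8 cultivar labels. Shapes and values are
# illustrative only, not the paper's data.
rng = np.random.default_rng(0)
n_samples = 64
X_tongue = rng.normal(size=(n_samples, 18))   # electronic-tongue signals
X_sensory = rng.normal(size=(n_samples, 9))   # panel sensory attributes
X = np.hstack([X_tongue, X_sensory])          # low-level data fusion
y = rng.integers(0, 8, size=n_samples)        # cultivar labels

# Leave-one-out cross-validated classification accuracy.
acc = cross_val_score(LinearDiscriminantAnalysis(), X, y,
                      cv=LeaveOneOut()).mean()
print(f"LOO-CV accuracy: {acc:.2f}")
```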
Abstract:
International master's dissertation in Sustainability of the Built Environment
Abstract:
We propose a novel hanging spherical drop system for anchoring arrays of cell-suspension droplets, based on biomimetic superhydrophobic flat substrates with controlled positional adhesion and minimal contact with the solid substrate. By turning the platform face down, independent spheroid bodies could be generated in a high-throughput manner, mimicking in vivo tumour models at the lab-on-a-chip scale. To validate this system for drug screening purposes, the toxicity of the anti-cancer drug doxorubicin was tested in cell spheroids and compared to cells in 2D culture. The advantages of this platform, such as the feasibility of the system and the ability to control spheroid size uniformity, emphasize its potential as a new low-cost toolbox for high-throughput drug screening and for cell or tissue engineering.
Abstract:
"Tissue engineering: part A", vol. 21, suppl. 1 (2015)
Abstract:
Today's advances in computing power are driven by the parallel processing capabilities of available hardware architectures. These architectures can accelerate algorithms when the algorithms are properly parallelized and exploit the specific processing power of the underlying architecture; however, parallelizing an algorithm correctly is complex, and the resulting parallel form is specific to each type of parallel hardware. Most current general-purpose processors integrate several cores on a single chip, forming what is known as a Symmetric Multiprocessing (SMP) unit. Today it is hard to find a desktop processor without some degree of SMP-style parallelism, and the industry trend is to integrate ever more cores as technology matures. Graphics Processor Units (GPU), originally designed to handle only video processing, have meanwhile emerged as interesting alternatives for algorithm acceleration: current GPUs can run on the order of 200 to 400 parallel processing threads. Because this kind of processing has much in common with scientific computing, these devices have been reoriented as General Processing Graphics Processor Units (GPGPU). Unlike the SMP processors mentioned above, however, GPGPUs are not general-purpose devices: the limited memory available on each board and the style of parallel processing they require mean that algorithm implementations must be designed carefully to be productive. Finally, Field Programmable Gate Arrays (FPGA) are programmable devices capable of implementing hardware logic with low latency, high parallelism, and deep pipelines, and can therefore be used to implement specific algorithms that need to run at very high speed; their drawback is that programming and testing an algorithm instantiated on the device is harder and more time-consuming than software approaches. Given this diversity of parallel processors, our work aims at analyzing the specific characteristics of each, determining the properties a parallel algorithm must have in order to be accelerated, and identifying which of these architectures best fits a given algorithm, so that the resources used translate into commensurate processing performance and the architectures can be combined to complement each other. In particular, we consider the degree of data dependence, the need for synchronization during parallel processing, the size of the data to be processed, and the complexity of parallel programming on each type of hardware.
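To make the data-dependence point concrete, here is a small Python sketch (not from the thesis) contrasting an element-wise computation with no dependences between iterations, which splits cleanly across SMP cores, with the single sequential reduction step needed to combine the results.

```python
import multiprocessing as mp
import numpy as np

def heavy_kernel(chunk: np.ndarray) -> np.ndarray:
    # Independent per-element work: no iteration depends on another,
    # so chunks can be processed on different cores with no synchronisation.
    return np.sin(chunk) ** 2 + np.cos(chunk) ** 2

if __name__ == "__main__":
    data = np.linspace(0, 100, 1_000_000)
    chunks = np.array_split(data, mp.cpu_count())
    with mp.Pool() as pool:                      # one worker per core (SMP)
        parts = pool.map(heavy_kernel, chunks)   # embarrassingly parallel map
    result = np.concatenate(parts)
    total = result.sum()   # the reduction is the only sequential combine step
    print(total)
```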
Abstract:
A composting Heat Extraction Unit (HEU) was designed to utilise waste heat from decaying organic matter for a variety of heating applications. The aim was to construct an insulated, small-scale, sealed container filled with organic matter. In this vessel, a process fluid within embedded pipes would absorb thermal energy from the hot compost and transport it to an external heat exchanger. Experiments were conducted on the constituent parts, and the final design comprised a 2046 litre container insulated with polyurethane foam and Kingspan, with two arrays of Qualpex piping embedded in the compost to extract heat. The thermal energy was used in horticultural trials by heating polytunnels with a radiator system during a winter/spring period. The compost-derived energy was compared with conventional and renewable energy in the form of an electric fan heater and a solar panel. The compost-derived energy was able to raise polytunnel temperatures 2-3°C above the control, the solar panel contributed no thermal energy during the winter trial, and the electric heater was the most efficient, maintaining its preset temperature of 10°C. Plants cultivated as performance indicators showed no significant difference in growth rates between the heat sources. A follow-on experiment using special growing mats to distribute compost thermal energy directly under the plants (radish, cabbage, spinach and lettuce) displayed more successful growth patterns than the control. The compost HEU was also used for more traditional space heating and hot water heating applications. A test space was successfully heated over two trials with varying insulation levels: maximum internal temperature increases of 7°C and 13°C were recorded for building U-values of 1.6 and 0.53 W/m²K, respectively. The HEU successfully heated a 60 litre hot water cylinder for 32 days, with a maximum water temperature increase of 36.5°C recorded. Total energy recovered from the 435 kg of compost within the HEU during the polytunnel growth trial was 76 kWh, i.e. 3 kWh/day over the 25 days when the HEU was activated. With a mean coefficient of performance of 6.8 calculated for the HEU, the technology is energy efficient. The compost HEU developed here could therefore be a useful renewable energy technology, particularly for small-scale rural dwellers and growers with access to significant quantities of organic matter.
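As a quick consistency check on the reported energy figures (assuming the standard definition of coefficient of performance, since the abstract does not spell out the input-energy accounting):

\[
\frac{76\ \text{kWh}}{25\ \text{days}} \approx 3\ \text{kWh/day},
\qquad
\mathrm{COP} = \frac{\text{heat recovered}}{\text{energy input to run the unit}} = 6.8 .
\]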
Abstract:
Liquid separation efficiency, liquid penetration, modeling, arrays of temperature distribution, fluidized bed, two-phase nozzle
Abstract:
Object-oriented simulation, mechatronic systems, non-iterative algorithm, electric components, piezo-actuator, symbolic computation, Maple, Sparse-Tableau, library of components
Abstract:
Speaker Recognition, Speaker Verification, Sparse Kernel Logistic Regression, Support Vector Machine
Abstract:
Background: There are sparse data on the performance of different types of drug-eluting stents (DES) in the acute, real-life setting. Objective: The aim of the study was to compare the safety and efficacy of first- versus second-generation DES in patients with acute coronary syndromes (ACS). Methods: This all-comer registry enrolled consecutive patients diagnosed with ACS and treated with percutaneous coronary intervention with implantation of first- or second-generation DES, with one-year follow-up. The primary efficacy endpoint was a major adverse cardiac and cerebrovascular event (MACCE), a composite of all-cause death, nonfatal myocardial infarction, target-vessel revascularization, and stroke. The primary safety outcome was definite stent thrombosis (ST) at one year. Results: Of the 1916 patients enrolled in the registry, 1328 were diagnosed with ACS. Of them, 426 were treated with first- and 902 with second-generation DES. There was no significant difference in the incidence of MACCE between the two types of DES at one year. The rate of acute and subacute ST was higher with first- vs. second-generation DES (1.6% vs. 0.1%, p < 0.001, and 1.2% vs. 0.2%, p = 0.025, respectively), but there was no difference in late ST (0.7% vs. 0.2%, p = 0.18) or gastrointestinal bleeding (2.1% vs. 1.1%, p = 0.21). In Cox regression, first-generation DES was an independent predictor of cumulative ST (HR 3.29 [1.30-8.31], p = 0.01). Conclusions: In an all-comer registry of ACS, the one-year rate of MACCE was comparable between groups treated with first- and second-generation DES. The use of first-generation DES was associated with higher rates of acute and subacute ST and was an independent predictor of cumulative ST.
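For illustration, a hazard ratio of this kind could be estimated with a Cox proportional-hazards model as sketched below. The data frame, column names, and covariates are synthetic stand-ins for the registry, not the study's variables.

```python
import numpy as np
import pandas as pd
from lifelines import CoxPHFitter

# Synthetic data shaped like a one-year follow-up registry; values are random
# and carry no clinical meaning.
rng = np.random.default_rng(1)
n = 1328
df = pd.DataFrame({
    "first_gen_des": rng.integers(0, 2, n),          # exposure of interest
    "age": rng.normal(65, 10, n),                    # example covariate
    "time_days": rng.exponential(300, n).clip(max=365),  # follow-up time
    "stent_thrombosis": rng.integers(0, 2, n),       # event indicator
})

cph = CoxPHFitter()
cph.fit(df, duration_col="time_days", event_col="stent_thrombosis")
cph.print_summary()  # the exp(coef) column is the hazard ratio per covariate
```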
Abstract:
BACKGROUND: Only a few studies have explored the relation between coffee and tea intake and head and neck cancers, with inconsistent results. METHODS: We pooled individual-level data from nine case-control studies of head and neck cancers, including 5,139 cases and 9,028 controls. Logistic regression was used to estimate odds ratios (OR) and 95% confidence intervals (95% CI), adjusting for potential confounders. RESULTS: Caffeinated coffee intake was inversely related to the risk of cancer of the oral cavity and pharynx: the ORs were 0.96 (95% CI, 0.94-0.98) for an increment of 1 cup per day and 0.61 (95% CI, 0.47-0.80) in drinkers of >4 cups per day versus nondrinkers. This latter estimate was consistent for different anatomic sites (OR, 0.46; 95% CI, 0.30-0.71 for oral cavity; OR, 0.58; 95% CI, 0.41-0.82 for oropharynx/hypopharynx; and OR, 0.61; 95% CI, 0.37-1.01 for oral cavity/pharynx not otherwise specified) and across strata of selected covariates. No association of caffeinated coffee drinking was found with laryngeal cancer (OR, 0.96; 95% CI, 0.64-1.45 in drinkers of >4 cups per day versus nondrinkers). Data on decaffeinated coffee were too sparse for detailed analysis, but indicated no increased risk. Tea intake was not associated with head and neck cancer risk (OR, 0.99; 95% CI, 0.89-1.11 for drinkers versus nondrinkers). CONCLUSIONS: This pooled analysis of case-control studies supports the hypothesis of an inverse association between caffeinated coffee drinking and risk of cancer of the oral cavity and pharynx. IMPACT: Given the widespread use of coffee and the relatively high incidence and low survival of head and neck cancers, the observed inverse association may have appreciable public health relevance.
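A minimal sketch of how an OR per cup/day with a 95% CI is obtained from logistic regression follows. The data are simulated, and the real pooled analysis adjusted for confounders not modelled here.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm

# Simulated case-control data: 5,139 cases + 9,028 controls = 14,167 records.
rng = np.random.default_rng(2)
n = 14_167
coffee_cups = rng.poisson(2, n)        # cups of caffeinated coffee per day
case = rng.integers(0, 2, n)           # 1 = case, 0 = control

# Logistic regression of case status on intake; the exponentiated coefficient
# is the odds ratio per additional cup per day.
X = sm.add_constant(pd.DataFrame({"cups_per_day": coffee_cups}))
fit = sm.Logit(case, X).fit(disp=False)
or_per_cup = np.exp(fit.params["cups_per_day"])
ci_low, ci_high = np.exp(fit.conf_int().loc["cups_per_day"])
print(f"OR per cup/day: {or_per_cup:.2f} (95% CI {ci_low:.2f}-{ci_high:.2f})")
```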
Abstract:
Several studies have reported high levels of inflammatory biomarkers in hypertension, but data from the general population are sparse, and sex differences have been little explored. The CoLaus Study is a cross-sectional examination survey in a random sample of 6067 Caucasians aged 35-75 years in Lausanne, Switzerland. Blood pressure (BP) was assessed using a validated oscillometric device. Anthropometric parameters were also measured, including body composition using electrical bioimpedance. Crude serum levels of interleukin-6 (IL-6), tumor necrosis factor α (TNF-α), and ultrasensitive C-reactive protein (hsCRP) were positively, and interleukin-1β (IL-1β) negatively, associated with BP (P<0.001 for all values). For IL-6, IL-1β, and TNF-α, the association disappeared in multivariable analysis, largely explained by differences in age and body mass index, in particular fat mass. By contrast, hsCRP remained independently and positively associated with systolic (β (95% confidence interval): 1.15 (0.64; 1.65); P<0.001) and diastolic (0.75 (0.42; 1.08); P<0.001) BP. The relationships of hsCRP, IL-6, and TNF-α with BP tended to be stronger in women than in men, partly related to the difference in fat mass, yet the interaction between sex and IL-6 persisted after correction for all tested confounders. In the general population, the associations between inflammatory biomarkers and rising levels of BP are mainly driven by age and fat mass. The stronger associations in women suggest that sex differences might exist in the complex interplay between BP and inflammation.
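The multivariable associations and the sex-by-IL-6 interaction reported here could be tested with models of the form sketched below. This is a sketch on simulated data with assumed variable names, not the CoLaus analysis code.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Simulated stand-in for the survey sample; values are random.
rng = np.random.default_rng(3)
n = 6067
df = pd.DataFrame({
    "sbp": rng.normal(125, 15, n),       # systolic BP, mmHg
    "hscrp": rng.lognormal(0, 1, n),     # ultrasensitive C-reactive protein
    "il6": rng.lognormal(0, 1, n),       # interleukin-6
    "age": rng.uniform(35, 75, n),
    "fat_mass": rng.normal(20, 6, n),    # from bioimpedance
    "female": rng.integers(0, 2, n),
})

# Beta (with 95% CI) for hsCRP on systolic BP, adjusted for age and fat mass.
print(smf.ols("sbp ~ hscrp + age + fat_mass + female", df).fit()
      .conf_int().loc["hscrp"])

# p-value of the sex-by-IL-6 interaction term.
print(smf.ols("sbp ~ il6 * female + age + fat_mass", df).fit()
      .pvalues["il6:female"])
```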