981 results for Orthogonal Arrays
Abstract:
We propose a novel hanging spherical drop system for anchoring arrays of cell-suspension droplets, based on the use of biomimetic superhydrophobic flat substrates with controlled positional adhesion and minimal contact with the solid substrate. By facing the platform downwards, independent spheroid bodies could be generated in a high-throughput manner, mimicking in vivo tumour models at the lab-on-chip scale. To validate this system for drug screening purposes, the toxicity of the anti-cancer drug doxorubicin towards cell spheroids was tested and compared with cells in 2D culture. The advantages of this platform, such as the feasibility of the system and the ability to control spheroid size uniformity, emphasize its potential as a new low-cost toolbox for high-throughput drug screening and for cell or tissue engineering.
Abstract:
"Tissue engineering: part A", vol. 21, suppl. 1 (2015)
Abstract:
Here we focus on factor analysis from a best-practices point of view, by investigating the factor structure of neuropsychological tests and using the results to illustrate how to choose a reasonable solution. The sample (n = 1051 individuals) was randomly divided into two groups: one for exploratory factor analysis (EFA) and principal component analysis (PCA), to investigate the number of factors underlying the neurocognitive variables; the second to test the "best fit" model via confirmatory factor analysis (CFA). For the exploratory step, three extraction methods (maximum likelihood, principal axis factoring and principal components) and two rotation methods (orthogonal and oblique) were used. This methodology allowed us to explore how the different cognitive/psychological tests correlated with, and discriminated between, dimensions, indicating that to capture latent structures in similar sample sizes and measures, with approximately normally distributed data, reflective models with oblimin rotation might prove the most adequate.
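A minimal sketch of the exploratory step described above, assuming the neurocognitive scores for one half of the split sample sit in a NumPy array `X` (rows = individuals, columns = tests) and that the third-party `factor_analyzer` package is available; the package, the placeholder data, and the Kaiser criterion for the number of factors are illustrative choices, not the paper's own pipeline.

```python
# Exploratory step: eigenvalue screen, then ML extraction with an oblique (oblimin) rotation.
import numpy as np
from factor_analyzer import FactorAnalyzer

rng = np.random.default_rng(0)
X = rng.normal(size=(525, 12))          # placeholder for one half of the split sample

# Eigenvalues of the correlation matrix help decide how many factors to retain.
eigvals = np.linalg.eigvalsh(np.corrcoef(X, rowvar=False))[::-1]
n_factors = int((eigvals > 1.0).sum())  # Kaiser criterion, one common heuristic

# Maximum-likelihood extraction with an oblimin rotation, as favoured in the abstract.
fa = FactorAnalyzer(n_factors=n_factors, method="ml", rotation="oblimin")
fa.fit(X)
print(fa.loadings_)                     # pattern matrix of factor loadings
```

The retained "best fit" structure would then be re-estimated on the second half of the sample with a confirmatory factor analysis tool.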
Abstract:
OBJECTIVE - To evaluate the performance of the QRS voltage-duration product (VDP) for detection of left ventricular hypertrophy (LVH) in spontaneously hypertensive rats (SHR). METHODS - Orthogonal electrocardiograms (ECG) were recorded in male SHR at the ages of 12 and 20 weeks, when systolic blood pressure (sBP) reached average values of 165±3 mmHg and 195±12 mmHg, respectively. Age- and sex-matched normotensive Wistar Kyoto (WKY) rats were used as controls. VDP was calculated as the product of the maximum QRS spatial vector magnitude and QRS duration. Left ventricular mass (LVM) was weighed after the rats were sacrificed. RESULTS - LVM in SHR at 12 and 20 weeks of age (0.86±0.05 g and 1.05±0.07 g, respectively) was significantly higher than in WKY (0.65±0.07 g and 0.70±0.02 g). The increase in LVM correlated closely with the increase in sBP. VDP did not reflect the increase in LVM in SHR. VDP was lower in SHR than in WKY, and the difference was significant at the age of 20 weeks (18.2 mV·ms compared with 10.7 mV·ms, p<0.01). In contrast, a significant increase in VDP was observed in the control WKY at the age of 20 weeks without changes in LVM. The changes in VDP were driven mainly by changes in QRSmax. CONCLUSION - LVM was not the major determinant of the QRS voltage changes and, consequently, of the VDP. These data point to the importance of the nonspatial determinants of the recorded QRS voltage in terms of the solid angle theory.
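A worked illustration of the VDP definition quoted above (maximum QRS spatial vector magnitude times QRS duration), with the spatial magnitude taken from three orthogonal leads; the lead samples below are fabricated purely to exercise the function and are not data from the study.

```python
import numpy as np

def qrs_vdp(x_mV, y_mV, z_mV, qrs_duration_ms):
    """Voltage-duration product: max spatial QRS vector magnitude times QRS duration.

    x_mV, y_mV, z_mV: simultaneously sampled orthogonal-lead voltages over the
    QRS complex (sequences of equal length, in mV).
    """
    magnitude = np.sqrt(np.asarray(x_mV) ** 2 +
                        np.asarray(y_mV) ** 2 +
                        np.asarray(z_mV) ** 2)
    return magnitude.max() * qrs_duration_ms     # units: mV*ms

# Illustrative (made-up) QRS samples in the three orthogonal leads:
x = [0.1, 0.6, 0.9, 0.4, 0.1]
y = [0.0, 0.3, 0.5, 0.2, 0.0]
z = [0.1, 0.2, 0.4, 0.3, 0.1]
print(qrs_vdp(x, y, z, qrs_duration_ms=18.0))    # VDP in mV*ms
```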
Abstract:
Today's advances in high-performance computing are driven by the parallel processing capabilities of available hardware architectures. These architectures enable the acceleration of algorithms when the algorithms are properly parallelized and exploit the specific processing power of the underlying architecture. Most current processors are targeted at general-purpose use and integrate several processor cores on a single chip, resulting in what is known as a Symmetric Multiprocessing (SMP) unit. Nowadays even desktop computers use multicore processors, and the industry trend is to increase the number of integrated processor cores as technology matures. On the other hand, Graphics Processor Units (GPU), originally designed to handle only video processing, have emerged as interesting alternatives for algorithm acceleration. Currently available GPUs can run from 200 to 400 threads in parallel. Scientific computing can be implemented on this hardware thanks to the programmability of new GPUs, which have come to be known as General Processing Graphics Processor Units (GPGPU). However, GPGPUs offer little memory compared with general-purpose processors, so the implementation of algorithms needs to be addressed carefully. Finally, Field Programmable Gate Arrays (FPGA) are programmable devices that can implement hardware logic with low latency, high parallelism and deep pipelines. These devices can be used to implement specific algorithms that need to run at very high speed. However, they are harder to program than software approaches, and debugging is typically time-consuming. In this context, where several alternatives for speeding up algorithms are available, our work aims at determining the main features of these architectures and developing the know-how required to accelerate algorithm execution on them. Starting from the characteristics of the hardware, we seek to determine the properties a parallel algorithm must have in order to be accelerated, to identify which of these architectures fits a given algorithm best, and to combine the architectures so that they complement each other. In particular, we consider the degree of data dependence, the need for synchronization during parallel processing, the size of the data to be processed, and the complexity of parallel programming on each type of hardware.
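A small sketch, in Python purely for illustration, of the data-dependence point made above: a loop whose iterations are independent can be distributed across the cores of an SMP machine, whereas a loop in which each step depends on the previous result cannot be parallelized this way.

```python
# Illustrative only: independent iterations parallelize across SMP cores;
# a carried data dependence forces sequential execution.
from multiprocessing import Pool

def heavy(x):
    # stand-in for an expensive computation on a single element
    return sum(i * i for i in range(x % 1000))

def independent(data):
    # no dependence between iterations -> map them over a pool of worker processes
    with Pool() as pool:
        return pool.map(heavy, data)

def dependent(data):
    # each step needs the previous result -> inherently sequential
    acc, out = 0, []
    for x in data:
        acc = heavy(x) + acc      # carried dependence on acc
        out.append(acc)
    return out

if __name__ == "__main__":
    data = list(range(10_000))
    independent(data)   # benefits from multiple cores
    dependent(data)     # does not
```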
Abstract:
A composting Heat Extraction Unit (HEU) was designed to utilise waste heat from decaying organic matter for a variety of heating applications. The aim was to construct an insulated, sealed, small-scale container filled with organic matter. In this vessel a process fluid within embedded pipes would absorb thermal energy from the hot compost and transport it to an external heat exchanger. Experiments were conducted on the constituent parts, and the final design comprised a 2046 litre container insulated with polyurethane foam and Kingspan board, with two arrays of Qualpex piping embedded in the compost to extract heat. The thermal energy was used in horticultural trials by heating polytunnels through a radiator system during a winter/spring period. The compost-derived energy was compared with conventional and renewable energy in the form of an electric fan heater and a solar panel. The compost-derived energy was able to raise polytunnel temperatures 2-3°C above the control; the solar panel contributed no thermal energy during the winter trial, while the electric heater was the most effective, maintaining its preset temperature of 10°C. Plants cultivated as performance indicators showed no significant difference in growth rates between the heat sources. A follow-on experiment using special growing mats to distribute compost thermal energy directly under the plants (radish, cabbage, spinach and lettuce) displayed more successful growth patterns than the control. The compost HEU was also used for more traditional space heating and hot water heating applications. A test space was successfully heated over two trials with varying insulation levels: maximum internal temperature increases of 7°C and 13°C were recorded for building U-values of 1.6 and 0.53 W/m²K, respectively. The HEU successfully heated a 60 litre hot water cylinder for 32 days, with a maximum water temperature increase of 36.5°C recorded. The total energy recovered from the 435 kg of compost within the HEU during the polytunnel growth trial was 76 kWh, or 3 kWh/day over the 25 days the HEU was active. With a mean coefficient of performance of 6.8 calculated for the HEU, the technology is energy efficient. The compost HEU developed here could therefore be a useful renewable energy technology, particularly for small-scale rural dwellers and growers with access to significant quantities of organic matter.
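A back-of-the-envelope check of two of the figures quoted above, assuming the standard specific heat capacity of water; the second number is the sensible heat held in the cylinder at its peak temperature rise, not the total delivered over the 32 days.

```python
# Quick arithmetic checks of the quoted figures (standard water properties assumed).
C_WATER = 4.186e3        # J/(kg*K), specific heat of water
RHO_WATER = 1.0          # kg/L

# Average daily recovery over the polytunnel trial: 76 kWh over 25 active days.
total_kWh, active_days = 76.0, 25
print(total_kWh / active_days)          # ~3.0 kWh/day, as stated

# Sensible heat corresponding to a 36.5 degC rise in the 60 L cylinder.
mass_kg = 60 * RHO_WATER
energy_J = mass_kg * C_WATER * 36.5
print(energy_J / 3.6e6)                 # ~2.5 kWh stored at peak temperature
```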
Abstract:
Liquid separation efficiency, liquid penetration, modeling, arrays of temperature distribution, fluidized bed, two-phase nozzle
Abstract:
The authors studied the rainfall in Pesqueira (Pernambuco, Brasil) over a period of 48 years (1910 through 1957) by the method of orthogonal polynomials, degrees up to the fourth having been tried. None of them was significant, so it seems that no trend is present. The observed mean was 679.00 mm, with a standard deviation of 205.5 mm and a 30.3% coefficient of variation. The 95% probability level would include annual rainfall from 263.9 mm up to 1094.1 mm.
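A sketch of the kind of trend test described above, assuming the 48 annual totals are in a NumPy array `rain` (here filled with placeholder values, not the Pesqueira record). Nested polynomial fits of increasing degree yield the same sequential sums of squares as orthogonal polynomials, and each added degree is judged with an F-test.

```python
# Orthogonal-polynomial style trend test: does adding degree k improve the fit?
import numpy as np
from numpy.polynomial import polynomial as P
from scipy import stats

rng = np.random.default_rng(1)
years = np.arange(1910, 1958)                       # 48 years, as in the study
rain = rng.normal(679.0, 205.5, size=years.size)    # placeholder data (mm)

x = years - years.mean()                            # centred predictor
n = rain.size
rss_prev = np.sum((rain - rain.mean()) ** 2)        # degree-0 (mean-only) model

for degree in range(1, 5):                          # degrees 1..4, as in the abstract
    coefs = P.polyfit(x, rain, degree)
    resid = rain - P.polyval(x, coefs)
    rss = np.sum(resid ** 2)
    df_resid = n - (degree + 1)
    F = (rss_prev - rss) / (rss / df_resid)         # one extra parameter per step
    p = stats.f.sf(F, 1, df_resid)
    print(f"degree {degree}: F = {F:.2f}, p = {p:.3f}")
    rss_prev = rss
```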
Abstract:
This paper deals with the study, by orthogonal polynomials, of trends in the mean annual and mean monthly temperatures (in degrees Centigrade) in Campinas (State of São Paulo, Brasil) from 1890 up to 1956. Only four months were studied (January, April, July and October), taken as typical of their respective seasons. For the annual averages both the linear and the quadratic components were significant, the regression equation being y = 19.95 - 0.0219x + 0.00057x², where y is the temperature (in degrees Centigrade) and x is the number of years after 1889; thus 1890 corresponds to x = 1, 1891 to x = 2, etc. The equation shows a minimum for the year 1908, with a calculated mean of y = 19.74. The annual means expected from the regression equation for Campinas (SP, Brasil) are: 1890, 19.93; 1900, 19.78; 1908, 19.74 (minimum); 1910, 19.75; 1920, 19.82; 1930, 20.01; 1940, 20.32; 1950, 20.74; 1956, 21.05 degrees Centigrade. The mean for the 67 years was 20.08°C, with a standard error of the mean of 0.08°C. For January the regression equation was y = 23.08 - 0.0661x + 0.00122x², with a minimum of 22.19°C in 1916. The average for the 67 years was 22.70°C, with standard error 0.12°C. For April no regression component was significant; the average was 20.42°C, with standard error 0.13°C. For July the regression equation was of first degree, y = 16.01 + 0.0140x. The average for the 67 years was 16.49°C, with standard error of the mean 0.14°C. Finally, for October the regression equation was y = 20.55 - 0.0362x + 0.00078x², with a minimum of 20.13°C in 1912. The average was 20.52°C, with standard error of the mean equal to 0.14°C.
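The minima quoted above follow directly from the fitted quadratics: for y = a + b·x + c·x² the minimum lies at x* = -b/(2c), with x counted in years after 1889. A short sketch using the coefficients reported in the abstract:

```python
# Locate the minimum of each fitted quadratic y = a + b*x + c*x^2 (x = years after 1889).
def quad_minimum(a, b, c):
    x_star = -b / (2 * c)
    return x_star, a + b * x_star + c * x_star ** 2

for label, a, b, c in [("annual",  19.95, -0.0219, 0.00057),
                       ("January", 23.08, -0.0661, 0.00122),
                       ("October", 20.55, -0.0362, 0.00078)]:
    x_star, y_min = quad_minimum(a, b, c)
    print(f"{label}: minimum {y_min:.2f} degC in {1889 + round(x_star)}")

# annual  -> ~19.74 degC in 1908
# January -> ~22.18 degC in 1916
# October -> ~20.13 degC in 1912
# essentially the values reported above (tiny differences reflect rounding of the coefficients)
```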
Abstract:
We consider the Kudla-Millson lift from elliptic modular forms of weight (p+q)/2 to closed q-forms on locally symmetric spaces corresponding to the orthogonal group O(p,q). We study the L²-norm of the lift following the Rallis inner product formula. We compute the contribution at the Archimedean place. For locally symmetric spaces associated to even unimodular lattices, we obtain an explicit formula for the L²-norm of the lift, which often implies that the lift is injective. For O(p,2) we discuss how such injectivity results imply the surjectivity of the Borcherds lift.
Abstract:
There is recent interest in the generalization of classical factor models in which the idiosyncratic factors are assumed to be orthogonal and there are identification restrictions on cross-sectional and time dimensions. In this study, we describe and implement a Bayesian approach to generalized factor models. A flexible framework is developed to determine the variations attributed to common and idiosyncratic factors. We also propose a unique methodology to select the (generalized) factor model that best fits a given set of data. Applying the proposed methodology to the simulated data and the foreign exchange rate data, we provide a comparative analysis between the classical and generalized factor models. We find that when there is a shift from classical to generalized, there are significant changes in the estimates of the structures of the covariance and correlation matrices while there are less dramatic changes in the estimates of the factor loadings and the variation attributed to common factors.
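A minimal numerical illustration of the covariance structures being compared above, under the usual factor decomposition Σ = ΛΛᵀ + Ψ: in the classical model the idiosyncratic covariance Ψ is diagonal (orthogonal idiosyncratic terms), while a generalized model lets Ψ carry off-diagonal structure. The numbers are arbitrary and only show how the variance split between common and idiosyncratic factors is read off.

```python
# Implied covariance under a factor model: Sigma = Lambda @ Lambda.T + Psi.
# Classical model: Psi diagonal. Generalized model: Psi may have off-diagonal entries.
import numpy as np

Lambda = np.array([[0.9, 0.0],
                   [0.8, 0.1],
                   [0.1, 0.7],
                   [0.0, 0.9]])                 # loadings on two common factors

psi_classical = np.diag([0.2, 0.3, 0.4, 0.2])   # diagonal idiosyncratic covariance

psi_generalized = psi_classical.copy()
psi_generalized[0, 1] = psi_generalized[1, 0] = 0.1   # correlated idiosyncratic terms

for name, psi in [("classical", psi_classical), ("generalized", psi_generalized)]:
    sigma = Lambda @ Lambda.T + psi
    common_share = np.diag(Lambda @ Lambda.T) / np.diag(sigma)
    print(name, "variance share from common factors:", np.round(common_share, 2))
```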
Abstract:
BACKGROUND AND PURPOSE: Accurate placement of an external ventricular drain (EVD) for the treatment of hydrocephalus is of paramount importance for its functionality and in order to minimize morbidity and complications. The aim of this study was to compare two different drain insertion assistance tools with the traditional free-hand anatomical landmark method, and to measure efficacy, safety and precision. METHODS: Ten cadaver heads were prepared by opening large bone windows centered on Kocher's points on both sides. Nineteen physicians, divided into two groups (trainees and board-certified neurosurgeons), performed EVD insertions. The target for the ventricular drain tip was the ipsilateral foramen of Monro. Each participant inserted the external ventricular catheter in three different ways: 1) free-hand by anatomical landmarks, 2) neuronavigation-assisted (NN), and 3) XperCT-guided (XCT). The number of ventricular hits and dangerous trajectories, procedure time, radiation exposure of patients and physicians, distance of the catheter tip to target, and size of deviations projected in the orthogonal planes were measured and compared. RESULTS: Insertion using XCT increased the probability of ventricular puncture from 69.2 to 90.2% (p = 0.02). Non-assisted placements were significantly less precise (catheter tip to target distance 14.3 ± 7.4 mm versus 9.6 ± 7.2 mm, p = 0.0003). The procedure time increased from 3.04 ± 2.06 min to 7.3 ± 3.6 min (p < 0.001). The X-ray exposure for XCT was 32.23 mSv, but could be reduced to 13.9 mSv if patients were initially imaged in the hybrid operating suite. No supplementary radiation exposure is needed for NN if patients are imaged according to a navigation protocol initially. CONCLUSION: This ex vivo study demonstrates significantly improved accuracy and safety using either NN or XCT-assisted methods. Therefore, efforts should be undertaken to implement these new technologies into daily clinical practice. However, the accuracy versus urgency of an EVD placement has to be balanced, as the image-guided insertion technique implies a longer preparation time due to the specific image acquisition and trajectory planning.
Abstract:
Spatial heterogeneity, spatial dependence and spatial scale constitute key features of spatial analysis of housing markets. However, the common practice of modelling spatial dependence as being generated by spatial interactions through a known spatial weights matrix is often not satisfactory. While existing estimators of spatial weights matrices are based on repeat sales or panel data, this paper takes this approach to a cross-section setting. Specifically, based on an a priori definition of housing submarkets and the assumption of a multifactor model, we develop maximum likelihood methodology to estimate hedonic models that facilitate understanding of both spatial heterogeneity and spatial interactions. The methodology, based on statistical orthogonal factor analysis, is applied to the urban housing market of Aveiro, Portugal at two different spatial scales.
Abstract:
Functional RNA structures play an important role both in the context of noncoding RNA transcripts and as regulatory elements in mRNAs. Here we present a computational study to detect functional RNA structures within the ENCODE regions of the human genome. Since structural RNAs in general lack characteristic signals in primary sequence, comparative approaches evaluating evolutionary conservation of structures are most promising. We have used three recently introduced programs based on either phylogenetic stochastic context-free grammars (EvoFold) or energy-directed folding (RNAz and AlifoldZ), yielding several thousand candidate structures (corresponding to approximately 2.7% of the ENCODE regions). EvoFold has its highest sensitivity in highly conserved and relatively AU-rich regions, while RNAz favors slightly GC-rich regions, resulting in a relatively small overlap between methods. Comparison with the GENCODE annotation points to functional RNAs in all genomic contexts, with a slightly increased density in 3'-UTRs. While we estimate a significant false discovery rate of approximately 50%-70%, many of the predictions can be further substantiated by additional criteria: 248 loci are predicted by both RNAz and EvoFold, and an additional 239 RNAz or EvoFold predictions are supported by the (more stringent) AlifoldZ algorithm. Five hundred seventy RNAz structure predictions fall into regions that show signs of selection pressure also on the sequence level (i.e., conserved elements). More than 700 predictions overlap with noncoding transcripts detected by oligonucleotide tiling arrays. One hundred seventy-five selected candidates were tested by RT-PCR in six tissues, and expression could be verified in 43 cases (24.6%).
Abstract:
In recent years, electrical methods have often been used for the investigation of subsurface structures. Electrical resistivity tomography (ERT) is a useful, non-invasive and spatially integrative prospecting technique. The ERT method has seen significant improvements with the development of new inversion algorithms and increasingly efficient data collection techniques: multichannel technology and powerful computers allow resistivity data to be collected and processed within a few hours. Application domains are numerous and varied: geology and hydrogeology, civil engineering and geotechnics, archaeology and environmental studies. In particular, electrical methods are commonly used in hydrological studies of the vadose zone. The aim of this study was to develop a multichannel, automatic, non-invasive, reliable and inexpensive 3D monitoring system designed to follow electrical resistivity variations in soil during rainfall. Because of technical limitations, and in order not to disturb the subsurface, the proposed measurement device uses a non-conventional electrode set-up in which all the current electrodes are located near the edges of the survey grid. Using numerical modelling, the most appropriate arrays were selected to detect vertical and lateral variations of electrical resistivity within the framework of a permanent surveying installation. The results show that a pole-dipole array has better resolution than a pole-pole array and can successfully follow vertical and lateral resistivity variations despite the non-conventional electrode configuration used. Field data were then collected at a test site to assess the efficiency of the proposed monitoring technique. The system allows the 3D infiltration process to be followed during a rainfall event. A good correlation is observed between the numerical modelling results and the field data, with the field pole-dipole data again giving a better-resolved image than the pole-pole data. The new device and technique make it possible to better characterize zones of preferential flow and to quantify the role of lithology and pedology in flood-generating hydrological processes.
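As a small aside on the electrode arrays compared above (not part of the thesis itself): apparent resistivity is obtained from the measured transfer resistance through an array-dependent geometric factor, and for a homogeneous half-space the textbook factors are k = 2πa for a pole-pole array and k = 2πn(n+1)a for a pole-dipole array with dipole length a and separation factor n. A sketch of that conversion, with illustrative values:

```python
# Apparent resistivity from a measured transfer resistance R = V/I (half-space assumed).
import math

def rho_a_pole_pole(resistance_ohm, a_m):
    # geometric factor k = 2*pi*a
    return 2 * math.pi * a_m * resistance_ohm

def rho_a_pole_dipole(resistance_ohm, a_m, n):
    # geometric factor k = 2*pi*n*(n+1)*a
    return 2 * math.pi * n * (n + 1) * a_m * resistance_ohm

# Example: the same 0.5 ohm reading, 1 m spacing, separation factor n = 4
print(rho_a_pole_pole(0.5, a_m=1.0))          # ~3.1 ohm*m
print(rho_a_pole_dipole(0.5, a_m=1.0, n=4))   # ~62.8 ohm*m
```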