983 results for Modified Berlekamp-Massey algorithm


Relevance:

20.00%

Publisher:

Abstract:

Large amplitude oscillatory shear (LAOS) coupled with Fourier transform rheology (FTR) was used for the first time to characterize the large-deformation behavior of selected bituminous binders at 20 °C. Two polymer-modified bitumens (PMB) containing recycled EVA and HDPE and two unmodified bitumens were tested with LAOS-FTR. The LAOS-FTR response of all binders was compared at the same frequency, at the same Deborah number (by tuning the frequency to the relaxation time of each binder) and at the same phase-shift angle δ (by tuning the frequency to that corresponding to δ = 50° in the SAOS response of each sample). Under all three approaches, the LAOS-FTR results made it possible to differentiate the nonlinear mechanical characteristics of the tested binders. All binders show LAOS-FTR patterns reminiscent of colloidal dispersions and emulsions. The EVA PMB was less prone to strain-induced microstructural changes than the HDPE PMB, which showed larger values of the nonlinear FTR parameters over the range of shear strains tested in LAOS.
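FTR quantifies nonlinearity through the relative intensities of higher harmonics in the stress response. The sketch below is illustrative only (the abstract does not state which FTR parameters were computed); it extracts the I3/I1 intensity ratio, the most common FTR nonlinearity measure, from a sampled stress signal.

```python
import numpy as np

# Minimal FTR-style analysis (illustrative): compute the intensity ratio
# I3/I1 of a steady-state oscillatory stress signal sampled over whole cycles.
def ftr_harmonic_ratio(stress, sample_rate, excitation_freq, harmonic=3):
    n = len(stress)
    spectrum = np.abs(np.fft.rfft(stress)) / n          # one-sided amplitudes
    freqs = np.fft.rfftfreq(n, d=1.0 / sample_rate)
    i1 = spectrum[np.argmin(np.abs(freqs - excitation_freq))]
    ik = spectrum[np.argmin(np.abs(freqs - harmonic * excitation_freq))]
    return ik / i1

# Synthetic check: a response with a 5% third harmonic should give ~0.05.
fs, f0 = 1000.0, 1.0                                    # Hz
t = np.arange(0, 20.0, 1.0 / fs)                        # 20 full cycles
stress = np.sin(2 * np.pi * f0 * t) + 0.05 * np.sin(3 * 2 * np.pi * f0 * t)
print(ftr_harmonic_ratio(stress, fs, f0))               # ≈ 0.05
```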

Relevance:

20.00%

Publisher:

Abstract:

Cardiopulmonary arrest is a medical emergency in which the time elapsed between event onset and the initiation of basic and advanced support measures, together with correct care based on specific protocols for each clinical situation, are decisive factors for successful therapy. Because of its fulminant nature, cardiopulmonary arrest care cannot be restricted to the hospital setting. This has necessitated new concepts, strategies and structures, such as the life-chain concept, cardiopulmonary resuscitation courses for professionals who work in emergency medical services, the automated external defibrillator, the implantable cardioverter-defibrillator, and mobile intensive care units, among others. New concepts, strategies and structures motivated by new advances have also modified the treatment and improved the results of cardiopulmonary resuscitation in the hospital setting. Among them are the concept of cerebral resuscitation, the application of the life chain, the creation of the universal life-support algorithm, the adjustment of drug doses, new techniques (measurement of end-tidal carbon dioxide levels and of coronary perfusion pressure), and new drugs under research.

Relevance:

20.00%

Publisher:

Abstract:

Authors' version of this publication.

Relevance:

20.00%

Publisher:

Abstract:

The general objective of this research project is to design, develop and optimize surfaces with specific properties for use as sensors and biosensors, biocompatible materials, columns for capillary electrophoresis separations, matrices for controlled drug release, and sorbents for environmental remediation. To achieve this objective, we specifically propose to modify surfaces or particles so as to optimize a concrete system relevant to pharmaceutical, environmental or biomedical applications:

1. Modification of natural or synthetic clays to develop drug-carrier matrices or sorbents for environmental remediation: 1.1 Study illites modified with Fe(III) to maximize their adsorption of anionic contaminants such as arsenic. 1.2 Synthesize Al-Mg layered double hydroxides (LDH) modified with compounds of pharmaceutical interest to design controlled-release systems.

2. Modification of chip channels and electrodes to optimize the separation, detection and quantification of pharmaceutical compounds: 2.1 Design and build microchips for the CE separation of phenol-based compounds. 2.2 Evaluate polymers that improve the response and/or stability of carbon electrodes for use as amperometric detectors of phenol-based compounds in FIA and miniaturized integrated analysis systems.

3. Modification of solid surfaces with biomolecules to develop and optimize bio-recognition surfaces: 3.1 Evaluate the behaviour of titanium surfaces modified with TiO2 and inorganic deposits in their interaction with plasma proteins (PP), to analyse surface biocompatibility. 3.2 Design and develop biofunctional surfaces for the specific recognition of D-amino acids, antibodies in Chagas disease patients, and single-stranded DNA.

The techniques employed will depend on the system under study. The studies under objective 1 will use chemical and thermal analysis, XRD, SEM, IR and BET, as well as potentiometric acid-base titrations, electrophoretic mobilities, and adsorption kinetics and isotherms. For objective 2, classical electrochemical techniques will be used to characterize the electrodes, which will then serve as detectors in an amperometric FIA system, while the microchips will be used in capillary electrophoresis to separate different compounds of pharmaceutical interest. Finally, objective 3 will be carried out, on the one hand, by modifying titanium electrodes with different TiO2 and hydroxyapatite deposits (electrochemical, sol-gel, thermal) and evaluating their interaction with plasma proteins to analyse the biocompatibility of the prepared materials; on the other hand, the adsorption-desorption of D-amino acid oxidase, T. cruzi antigens and single-stranded DNA will be studied to optimize the surface bio-recognition of D-amino acids, antibodies from Chagas patients, and complementary DNA strands. Electrochemical, spectroscopic and microscopy techniques will be used to this end. Owing to the multidisciplinary character of this research project, it will be carried out through the collaboration of researchers from different areas of chemistry, and it will support the training of human resources through doctoral theses and postdoctoral stays.

Relevance:

20.00%

Publisher:

Abstract:

Today's advances in high-performance computing are driven by the parallel processing capabilities of available hardware architectures. These architectures enable the acceleration of algorithms when the algorithms are properly parallelized to exploit the specific processing power of the underlying architecture. However, converting an algorithm into its parallel form is complex, and that form is specific to each type of parallel hardware. Most current general-purpose processors integrate several cores on a single chip, resulting in what is known as a Symmetric Multiprocessing (SMP) unit; nowadays even desktop computers use multicore processors, and the industry trend is to integrate ever more cores as technology matures. Graphics Processing Units (GPU), originally designed to handle only video processing, have emerged as interesting alternatives for algorithm acceleration: currently available GPUs can run roughly 200 to 400 threads in parallel. Scientific computing can be implemented on this hardware thanks to the programmability of newer GPUs, which have come to be known as General-Purpose Graphics Processing Units (GPGPU). However, GPGPUs offer little memory compared to general-purpose processors, and the style of parallel processing they demand means implementations must be designed carefully to be productive. Finally, Field-Programmable Gate Arrays (FPGA) are programmable devices capable of performing large numbers of operations in parallel, implementing hardware logic with low latency, high parallelism and deep pipelines; they can be used for specific algorithms that must run at very high speed, but their programming is harder than software approaches and debugging is typically time-consuming. In this context, where several alternatives for speeding up algorithms are available, our work aims at determining the main features of these architectures and developing the know-how required to accelerate algorithm execution on them: identifying which algorithms fit best on a given architecture, so that processing performance is commensurate with the resources used, and combining the architectures so that they complement each other beneficially. In particular, we consider the degree of data dependency, the need for synchronization during parallel processing, the size of the data to be processed, and the complexity of parallel programming on each type of hardware.
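As a minimal illustration of the SMP case described above (my example, not code from the thesis), the sketch below maps an independent, compute-bound task across the available cores with Python's standard multiprocessing module; the absence of data dependencies and synchronization is precisely what makes this the easy case for SMP hardware.

```python
import math
import multiprocessing as mp

# The easy SMP workload: CPU-bound tasks with no data dependencies and
# no synchronization, distributed over all available cores.
def heavy_kernel(x):
    # Stand-in for an independent, compute-bound unit of work.
    return sum(math.sin(x + i) for i in range(100_000))

if __name__ == "__main__":
    inputs = range(32)
    with mp.Pool() as pool:          # defaults to one worker per core
        results = pool.map(heavy_kernel, inputs)
    print(len(results))
```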

Relevance:

20.00%

Publisher:

Abstract:

Background: Vascular remodeling, the dynamic dimensional change of a vessel in the face of stress, can assume different directions as well as magnitudes in atherosclerotic disease. Classical measurements rely on reference segments at a distance, risking inappropriate comparison between dissimilar vessel portions. Objective: To explore a new method for quantifying vessel remodeling, based on the comparison between a given target segment and its inferred normal dimensions. Methods: Geometric parameters and plaque composition were determined in 67 patients using three-vessel intravascular ultrasound with virtual histology (IVUS-VH). Coronary vessel remodeling at the cross-section (n = 27,639) and lesion (n = 618) levels was assessed using classical metrics and a novel analytic algorithm based on the fractional vessel remodeling index (FVRI), which quantifies the total change in arterial wall dimensions relative to the estimated normal dimension of the vessel. A prediction model was built to estimate the normal dimension of the vessel for calculation of the FVRI. Results: According to the new algorithm, the "Ectatic" remodeling pattern was least common, "Complete compensatory" remodeling was present in approximately half of the instances, and the "Negative" and "Incomplete compensatory" remodeling types were detected in the remainder. Compared to a traditional diagnostic scheme, the FVRI-based classification seemed to better discriminate plaque composition by IVUS-VH. Conclusion: Quantitative assessment of coronary remodeling using target-segment dimensions offers a promising approach to evaluating the vessel response to plaque growth or regression.
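The abstract defines the FVRI only loosely. One plausible reading, assumed here purely for illustration (the function name and the exact formula are my assumptions, not the paper's), is the wall-dimension change expressed as a fraction of the vessel's estimated normal dimension:

```python
# Hedged sketch: one plausible reading of the FVRI. The formula is an
# assumption for illustration; the paper derives the "normal" dimension
# from a prediction model not reproduced in the abstract.
def fvri(observed_area_mm2: float, predicted_normal_area_mm2: float) -> float:
    """Wall-dimension change as a fraction of the inferred normal dimension."""
    return (observed_area_mm2 - predicted_normal_area_mm2) / predicted_normal_area_mm2

# Example: a cross-section of 18.0 mm^2 where the model predicts 15.0 mm^2
# gives FVRI = +0.20, i.e. 20% outward (compensatory) enlargement.
print(fvri(18.0, 15.0))
```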

Relevance:

20.00%

Publisher:

Abstract:

Magdeburg, University, Faculty of Computer Science, dissertation, 2015

Relevance:

20.00%

Publisher:

Abstract:

The parameterized expectations algorithm (PEA) involves a long simulation and a nonlinear least squares (NLS) fit, both embedded in a loop. Both steps are natural candidates for parallelization. This note shows that parallelization can lead to substantial speedups for the PEA. I provide example code for a simple model that can serve as a template for the parallelization of more interesting models, as well as a download link for an image of a bootable CD that allows a cluster to be created and the example code executed in minutes, with no need to install any software.
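To make the loop structure concrete, here is a hedged skeleton of the PEA on a toy stand-in model (this is not the note's example code, and the dynamics are invented for illustration). The conditional expectation is parameterized as exp(b0 + b1*log(x_t)); the long simulation and the NLS fit inside the loop are the two steps the note identifies as candidates for parallelization.

```python
import numpy as np
from scipy.optimize import least_squares

def simulate(beta, T=5000):
    rng = np.random.default_rng(0)        # shocks held fixed across iterations
    x = np.empty(T)
    y = np.empty(T)
    x[0] = 1.0
    for t in range(T - 1):                # the long simulation step
        e_approx = np.exp(beta[0] + beta[1] * np.log(x[t]))
        x[t + 1] = max(0.9 * x[t] + 0.1 / e_approx
                       + 0.01 * rng.standard_normal(), 1e-6)
        y[t] = x[t + 1]                   # realized value the expectation targets
    return x[:-1], y[:-1]

beta = np.array([0.0, 0.0])
for it in range(50):                      # outer fixed-point loop
    x, y = simulate(beta)
    fit = least_squares(lambda b: np.exp(b[0] + b[1] * np.log(x)) - y, beta)
    if np.max(np.abs(fit.x - beta)) < 1e-6:
        break
    beta = 0.5 * beta + 0.5 * fit.x       # damped update toward the fixed point
print(beta)
```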

Relevance:

20.00%

Publisher:

Abstract:

The objective of this project is to develop an optimization algorithm that allows, by means of a least-squares data fit, the extraction of the equivalent-circuit parameters that make up the theoretical model of an FBAR resonator from measured S-parameters. To carry out this work, the necessary theory of FBAR resonators is first developed, starting with their operation and structure and paying particular attention to modelling these resonators with the Mason, Butterworth Van-Dyke and Modified BVD models. Second, the theory of optimization and nonlinear programming is reviewed. Once the theory has been presented, the implemented algorithm is described. The algorithm uses a multi-step strategy that speeds up the extraction of the resonator parameters.
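A hedged, single-step toy version of this kind of fit is sketched below: it extracts Butterworth Van-Dyke parameters (C0, Rm, Lm, Cm) from synthetic impedance magnitudes using scipy. The thesis fits measured S-parameters with a multi-step strategy; the component values, starting point and noise level here are illustrative only.

```python
import numpy as np
from scipy.optimize import least_squares

def bvd_impedance(f, C0, Rm, Lm, Cm):
    w = 2 * np.pi * f
    z_motional = Rm + 1j * w * Lm + 1 / (1j * w * Cm)   # series RLC branch
    z_static = 1 / (1j * w * C0)                         # shunt capacitance
    return z_motional * z_static / (z_motional + z_static)

f = np.linspace(1.9e9, 2.1e9, 800)                       # sweep near 2 GHz
true_params = (1e-12, 1.0, 80e-9, 80e-15)
rng = np.random.default_rng(1)
z_meas = np.abs(bvd_impedance(f, *true_params)) * (1 + 0.01 * rng.standard_normal(f.size))

def residuals(p):
    # Log-magnitude residuals tame the large dynamic range near resonance.
    return np.log(np.abs(bvd_impedance(f, *p))) - np.log(z_meas)

fit = least_squares(residuals, x0=(2e-12, 2.0, 70e-9, 90e-15),
                    x_scale=(1e-12, 1.0, 1e-8, 1e-13),
                    bounds=([1e-14, 1e-3, 1e-10, 1e-16],
                            [1e-10, 1e2, 1e-6, 1e-12]))
print(fit.x)     # should land near true_params
```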

Relevance:

20.00%

Publisher:

Abstract:

Intraoperative cardiac imaging plays a key role during transcatheter aortic valve replacement. In recent years, new techniques and tools for improved image quality and virtual navigation have been proposed in order to simplify and standardize stent-valve positioning and implantation. However, routine use of these new techniques may require major economic investment or specific knowledge and skills and, for this reason, they may not be accessible to the majority of cardiac centres involved in transcatheter valve replacement projects. Additionally, they still require injections of contrast medium to obtain computed images. We have therefore developed, and describe here, a very simple and intuitive method of positioning balloon-expandable stent valves, which represents the evolution of the 'dumbbell' technique for echocardiography-guided transcatheter valve replacement without angiography. This method, based on partial inflation of the balloon catheter during positioning, traps the crimped valve in the aortic valve orifice and, consequently, very near the ideal landing zone. It does not require specific echocardiographic knowledge; it does not require angiographies, which increase the risk of postoperative kidney failure in elderly patients; and it can also be performed in centres not equipped with a hybrid operating room.

Relevance:

20.00%

Publisher:

Abstract:

Report on the scientific sojourn carried out at the Institut de Biologia Molecular de Barcelona of the CSIC (a state agency) from April to September 2007. Topoisomerase I is an essential nuclear enzyme that modulates the topological status of DNA, facilitating DNA helix unwinding during replication and transcription. We have prepared the oligonucleotide-peptide conjugate Ac-NLeu-Asn-Tyr(p-3'TTCAGAAGC5')-LeuC-CONH-(CH2)6-OH as a model compound for NMR studies of the Topoisomerase I-DNA complex. Special attention was paid to the synthetic aspects of preparing this challenging compound, especially the solid supports and protecting groups. The desired peptide was obtained, although we did not achieve the amount of conjugate needed for the NMR studies. Most probably the low yield is due to the intrinsic sensitivity to hydrolysis of the phosphate bond between the oligonucleotide and the tyrosine. We have started the synthesis and structural characterization of oligonucleotides carrying intercalating compounds; at present we have obtained model duplex and quadruplex sequences modified with acridine, and NMR studies are under way. In addition to this project, we have successfully resolved the structure of a fusion peptide derived from the hepatitis C virus envelope, synthesized by the group of Dr. Haro, and we have synthesized and started the characterization of a modified G-quadruplex.

Relevance:

20.00%

Publisher:

Abstract:

Matrix effects, which represent an important issue in liquid chromatography coupled to mass spectrometry or tandem mass spectrometry detection, should be closely assessed during method development. For quantitative analysis, the use of a stable isotope-labelled internal standard with physico-chemical properties and ionization behaviour similar to those of the analyte is recommended. In this paper, an example of the choice of a co-eluting deuterated internal standard to compensate for short-term and long-term matrix effects in chiral (R,S)-methadone plasma quantification is reported. The method was fully validated over a concentration range of 5-800 ng/mL for each methadone enantiomer, with satisfactory relative bias (-1.0 to 1.0%), repeatability (0.9-4.9%) and intermediate precision (1.4-12.0%). From the results obtained during validation, a control-chart process was established over 52 series of routine analyses, using both the intermediate-precision standard deviation and the FDA acceptance criteria. The results of routine quality-control samples generally fell within the ±15% variability around the target value, and mainly within the two-standard-deviation interval, illustrating the long-term stability of the method. The intermediate-precision variability estimated during method validation was found to be consistent with the routine use of the method. During this period, 257 trough-concentration and 54 peak-concentration plasma samples from patients undergoing (R,S)-methadone treatment were successfully analysed for routine therapeutic drug monitoring.
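The two acceptance checks mentioned above can be stated compactly in code. The sketch below is a hedged illustration (the function and the example numbers are mine, not the paper's): it flags QC results against the FDA ±15% window around the nominal value and against a ±2 SD control-chart interval built from the intermediate-precision estimate.

```python
import numpy as np

def qc_flags(measured, nominal, sd_intermediate):
    measured = np.asarray(measured, dtype=float)
    within_fda = np.abs(measured - nominal) <= 0.15 * nominal      # FDA ±15%
    within_2sd = np.abs(measured - nominal) <= 2.0 * sd_intermediate  # control chart
    return within_fda, within_2sd

# Example: a 50 ng/mL QC level with an intermediate-precision SD of 2.5.
fda_ok, chart_ok = qc_flags([48.1, 52.3, 61.0], nominal=50.0, sd_intermediate=2.5)
print(fda_ok)    # [ True  True False]
print(chart_ok)  # [ True  True False]
```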