973 results for Roundness errors
Abstract:
Interviewing preschool victims of sexual abuse and/or family maltreatment: effectiveness of forensic interview models. Interviewing preschool children who have lived through a traumatic situation is a complex task that, within forensic psychological assessment, requires a clearly delimited, explicit, and time-structured protocol. Three interview protocols were therefore selected: the Protocol for Minors (PM) of Bull and Birch; the National Institute of Child Health and Human Development (NICHD) model of Michael Lamb, from which the EASI (Evaluación del Abuso Sexual Infantojuvenil) was developed; and the Cognitive Interview (CI) of Fisher and Geiselman. The starting hypothesis was that these models yield different amounts of information from preschool children. The objectives were therefore to determine which interview model produces the account with the most correct details and the fewest errors, to design our own interview model, and to reach consensus on it. The work also includes practical outlines to facilitate the opening, development, and closing of the forensic interview. The methodology reproduced the child/traumatic-event pairing by showing and explaining an emotionally significant event that children can easily identify with: a bicycle accident in which a child falls, gets hurt, bleeds, and is treated by his father. We then interviewed 135 children from P3, P4, and P5 (classes for 3-, 4-, and 5-year-olds) with the three interview models, presenting them with a specific demand: to recall and narrate this event. We conclude that, when an appropriate interview model is used with preschool children, correct recall ranges between 70% and 90%, which supports confidence in children's memories, and that the percentage of incorrect statements by preschoolers is minimal, around 5-6%. The study stresses the need to establish the interview rules precisely and highlights the ineffectiveness of the cognitive interview memory techniques with P3 and P4 children; with P5 children, benefits begin to appear thanks to the context reinstatement (CR) technique, the other techniques remaining beyond the comprehension and use of children of these ages.
Abstract:
Between 1927 and 1931, Marie Bonaparte had her clitoris operated on three times. She did so against the advice of Freud, with whom she was in analysis. Among psychoanalysts these operations are still often regarded as "errors" or aberrations. But for Marie Bonaparte, who was in various ways familiar with physics and with a somatic approach, surgery was the first choice and psychoanalysis only a possible alternative. She was not impressed by her colleagues' skepticism and adhered all the more emphatically to her own strategy.
Abstract:
The aim of this project is to develop a web application that serves and manages a music store, covering both its physical shop and its online shop. The web application is managed by "administrator" users and used by two types of users: administrators and customers. Its main functions are: entering and editing items; managing product inflows and outflows; managing orders; providing data for company management; minimizing management errors; improving the company's image; expanding business areas; displaying items correctly; and making it easier to search for and purchase items.
Improving the performance of positive selection inference by filtering unreliable alignment regions.
Abstract:
Errors in the inferred multiple sequence alignment may lead to false prediction of positive selection. Recently, methods for detecting unreliable alignment regions were developed and were shown to accurately identify incorrectly aligned regions. While removing unreliable alignment regions is expected to increase the accuracy of positive selection inference, such filtering may also significantly decrease the power of the test, as positively selected regions are fast evolving, and those same regions are often those that are difficult to align. Here, we used realistic simulations that mimic sequence evolution of HIV-1 genes to test the hypothesis that the performance of positive selection inference using codon models can be improved by removing unreliable alignment regions. Our study shows that the benefit of removing unreliable regions exceeds the loss of power due to the removal of some of the true positively selected sites.
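As a hedged illustration of the filtering step described above, the sketch below masks alignment columns whose reliability score falls below a threshold before any downstream positive-selection test is run. The scoring source, threshold value, and helper names are assumptions for illustration, not the authors' pipeline.

```python
# Minimal sketch: drop alignment columns whose reliability score falls below a
# threshold before downstream positive-selection inference. Column scores are
# assumed to come from an external alignment-reliability tool; the cut-off and
# the toy data are hypothetical, not the study's actual settings.
from typing import List

def filter_unreliable_columns(alignment: List[str],
                              column_scores: List[float],
                              min_score: float = 0.93) -> List[str]:
    """Return the alignment with unreliable columns removed.

    alignment     -- equal-length aligned sequences (codon handling assumed upstream).
    column_scores -- one reliability score per column, higher = more reliable.
    min_score     -- hypothetical cut-off; would be tuned against simulations.
    """
    keep = [i for i, s in enumerate(column_scores) if s >= min_score]
    return ["".join(seq[i] for i in keep) for seq in alignment]

# Toy usage: three aligned sequences, one low-scoring column is removed.
aln = ["ATGAAA", "ATGCAA", "ATG-AA"]
scores = [1.0, 1.0, 1.0, 0.4, 1.0, 1.0]
print(filter_unreliable_columns(aln, scores))
```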
Abstract:
QUESTIONS UNDER STUDY AND PRINCIPLES: Estimating glomerular filtration rate (GFR) in hospitalised patients with chronic kidney disease (CKD) is important for drug prescription, but it remains a difficult task. The purpose of this study was to investigate the reliability of selected algorithms based on serum creatinine, cystatin C and beta-trace protein to estimate GFR, and the potential added advantage of measuring muscle mass by bioimpedance. In a prospective unselected group of patients with CKD hospitalised in a general internal medicine ward, GFR was evaluated using inulin clearance as the gold standard and the algorithms of Cockcroft, MDRD, Larsson (cystatin C), White (beta-trace) and MacDonald (creatinine and muscle mass by bioimpedance). 69 patients were included in the study. Median age (interquartile range) was 80 years (73-83); weight 74.7 kg (67.0-85.6), appendicular lean mass 19.1 kg (14.9-22.3), serum creatinine 126 μmol/l (100-149), cystatin C 1.45 mg/l (1.19-1.90), beta-trace protein 1.17 mg/l (0.99-1.53) and GFR measured by inulin 30.9 ml/min (22.0-43.3). The errors in the estimation of GFR and the areas under the ROC curves (95% confidence interval) relative to inulin were, respectively: Cockcroft 14.3 ml/min (5.55-23.2) and 0.68 (0.55-0.81), MDRD 16.3 ml/min (6.4-27.5) and 0.76 (0.64-0.87), Larsson 12.8 ml/min (4.50-25.3) and 0.82 (0.72-0.92), White 17.6 ml/min (11.5-31.5) and 0.75 (0.63-0.87), MacDonald 32.2 ml/min (13.9-45.4) and 0.65 (0.52-0.78). Currently used algorithms overestimate GFR in hospitalised patients with CKD. As a consequence, eGFR-targeted prescriptions of renally cleared drugs might expose patients to overdosing. The best results were obtained with the Larsson algorithm. The determination of muscle mass by bioimpedance did not contribute significantly.
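For orientation, the sketch below implements the Cockcroft-Gault estimate in its commonly published form (with a μmol/l to mg/dl conversion); it is a generic illustration, not necessarily the exact variant evaluated in the study. Fed with values close to the study medians, it returns roughly 44 ml/min, well above the inulin-measured median of 30.9 ml/min, which is a rough check consistent with the reported overestimation.

```python
# Hedged sketch of the Cockcroft-Gault creatinine clearance estimate as commonly
# published; not necessarily the implementation used in the study.
def cockcroft_gault(age_years: float, weight_kg: float,
                    creatinine_umol_l: float, female: bool) -> float:
    """Estimated creatinine clearance in ml/min."""
    creatinine_mg_dl = creatinine_umol_l / 88.4          # unit conversion
    crcl = ((140 - age_years) * weight_kg) / (72 * creatinine_mg_dl)
    return crcl * 0.85 if female else crcl

# Illustrative values close to the study medians (80 y, 74.7 kg, 126 umol/l).
print(round(cockcroft_gault(80, 74.7, 126, female=False), 1))
```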
Abstract:
The paper presents an approach for mapping of precipitation data. The main goal is to perform spatial predictions and simulations of precipitation fields using geostatistical methods (ordinary kriging, kriging with external drift) as well as machine learning algorithms (neural networks). More practically, the objective is to reproduce simultaneously both the spatial patterns and the extreme values. This objective is best reached by models integrating geostatistics and machine learning algorithms. To demonstrate how such models work, two case studies have been considered: first, a 2-day accumulation of heavy precipitation and second, a 6-day accumulation of extreme orographic precipitation. The first example is used to compare the performance of two optimization algorithms (conjugate gradients and Levenberg-Marquardt) of a neural network for the reproduction of extreme values. Hybrid models, which combine geostatistical and machine learning algorithms, are also treated in this context. The second dataset is used to analyze the contribution of radar Doppler imagery when used as external drift or as input in the models (kriging with external drift and neural networks). Model assessment is carried out by comparing independent validation errors as well as analyzing data patterns.
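As a hedged sketch of one hybrid scheme in the spirit of those described (a machine learning trend model combined with kriging), the example below fits a small neural network to station coordinates and then kriges its residuals. The library choices (scikit-learn, pykrige), the synthetic data, and the network optimizer are assumptions, not the paper's setup, which tuned conjugate gradient and Levenberg-Marquardt training.

```python
# Hedged sketch: neural network captures the large-scale precipitation trend,
# ordinary kriging of the residuals restores local spatial structure.
import numpy as np
from sklearn.neural_network import MLPRegressor
from pykrige.ok import OrdinaryKriging

rng = np.random.default_rng(0)
x, y = rng.uniform(0, 100, 200), rng.uniform(0, 100, 200)
rain = 5 + 0.3 * x - 0.1 * y + rng.normal(0, 2, 200)      # synthetic field

# 1) Trend model: neural network on station coordinates.
nn = MLPRegressor(hidden_layer_sizes=(20,), max_iter=5000, random_state=0)
nn.fit(np.column_stack([x, y]), rain)
residuals = rain - nn.predict(np.column_stack([x, y]))

# 2) Spatial model: ordinary kriging of the residuals.
ok = OrdinaryKriging(x, y, residuals, variogram_model="spherical")

# 3) Hybrid prediction at new locations = NN trend + kriged residual.
xi, yi = np.array([25.0, 75.0]), np.array([40.0, 60.0])
res_pred, _ = ok.execute("points", xi, yi)
print(nn.predict(np.column_stack([xi, yi])) + res_pred)
```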
Abstract:
High-field (>or=3 T) cardiac MRI is challenged by inhomogeneities of both the static magnetic field (B(0)) and the transmit radiofrequency field (B(1)+). The inhomogeneous B fields not only demand improved shimming methods but also impede the correct determination of the zero-order terms, i.e., the local resonance frequency f(0) and the radiofrequency power to generate the intended local B(1)+ field. In this work, dual echo time B(0)-map and dual flip angle B(1)+-map acquisition methods are combined to acquire multislice B(0)- and B(1)+-maps simultaneously covering the entire heart in a single breath hold of 18 heartbeats. A previously proposed excitation pulse shape dependent slice profile correction is tested and applied to reduce systematic errors of the multislice B(1)+-map. Localized higher-order shim correction values including the zero-order terms for frequency f(0) and radiofrequency power can be determined based on the acquired B(0)- and B(1)+-maps. This method has been tested in 7 healthy adult human subjects at 3 T and improved the B(0) field homogeneity (standard deviation) from 60 Hz to 35 Hz and the average B(1)+ field from 77% to 100% of the desired B(1)+ field when compared to more commonly used preparation methods.
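The sketch below shows, under simplifying assumptions, how the two zero-order quantities mentioned above can be computed per voxel: the off-resonance (B0) frequency from a dual echo time phase difference, and the relative B1+ from a dual flip angle magnitude ratio (double-angle relation). The synthetic values and the omission of the slice-profile correction are simplifications, not the paper's processing chain.

```python
# Minimal sketch of dual-TE B0 mapping and dual-flip-angle B1+ mapping.
import numpy as np

def b0_map_hz(phase_te1, phase_te2, te1_s, te2_s):
    """Off-resonance frequency in Hz from two phase images (radians)."""
    dphi = np.angle(np.exp(1j * (phase_te2 - phase_te1)))  # wrap to [-pi, pi)
    return dphi / (2 * np.pi * (te2_s - te1_s))

def relative_b1_map(mag_alpha, mag_2alpha, nominal_alpha_rad):
    """Actual/nominal flip angle ratio via S(2a) = 2*S(a)*cos(a_actual)."""
    cos_a = np.clip(mag_2alpha / (2.0 * mag_alpha), -1.0, 1.0)
    return np.arccos(cos_a) / nominal_alpha_rad

# Toy voxel: 50 Hz off-resonance, B1+ at 90% of a nominal 60-degree flip.
te1, te2 = 2.0e-3, 4.5e-3
phi1, phi2 = 0.1, 0.1 + 2 * np.pi * 50 * (te2 - te1)
alpha = np.deg2rad(60)
s1, s2 = np.sin(0.9 * alpha), np.sin(1.8 * alpha)
print(b0_map_hz(phi1, phi2, te1, te2))       # ~50 Hz
print(relative_b1_map(s1, s2, alpha))        # ~0.9
```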
Abstract:
Report for the scientific sojourn carried out at the University of California at Berkeley, from September to December 2007. Environmental niche modelling (ENM) techniques are powerful tools for predicting species' potential distributions. In the last ten years, a plethora of novel methodological approaches and modelling techniques have been developed. For three months I stayed at the University of California, Berkeley, working under the supervision of Dr. David R. Vieites. The aim of our work was to quantify the error committed by these techniques and to test how an increase in sample size affects the resulting predictions. Using the MaxEnt software, we generated predictive distribution maps of the Eurasian quail (Coturnix coturnix) in the Iberian Peninsula from different sample sizes. The quail is a generalist species from a climatic point of view, but a habitat specialist. The resulting distribution maps were compared with the real distribution of the species, obtained from recent bird atlases of Spain and Portugal. Results show that ENM techniques can make substantial errors when predicting the distribution of generalist species. Moreover, an increase in sample size is not necessarily related to better model performance. We conclude that deep knowledge of the species' biology and of the variables affecting its distribution is crucial for optimal modelling; the lack of this knowledge can lead to wrong conclusions.
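As a hedged illustration of the evaluation step (scoring a predicted suitability surface against an atlas-derived distribution), the toy example below compares a synthetic prediction with a presence/absence grid using an AUC. MaxEnt is not run here, and both the metric and the data are stand-ins rather than the report's exact procedure.

```python
# Toy evaluation: agreement between a predicted suitability grid and an
# atlas-derived presence/absence grid, measured by AUC (one common choice).
import numpy as np
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(1)
observed = rng.integers(0, 2, 1000)                       # atlas grid cells
suitability = np.clip(0.4 * observed + rng.uniform(0, 0.8, 1000), 0, 1)

print("AUC vs. atlas:", round(roc_auc_score(observed, suitability), 3))
# Repeating this for models fitted with different numbers of occurrence
# records is one way the sample-size effect could be examined.
```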
Abstract:
The company Eatout wants to create a dashboard from which to monitor restaurant sales. Currently, the sites that have not yet closed the sales day are checked manually, and errors are recorded in a spreadsheet. This project aims to speed up and simplify sales management and to analyze the possible causes of those missed closings. To that end, a .NET application will be created to manage the closings still pending for a given day, recording the reason for each one; the data will then be analyzed with SAP's Business Objects tool to build a dashboard.
Abstract:
Fault tolerance is a research area that has become increasingly important with the growth in computing power of today's supercomputers: greater processing power means more components, and with them a larger number of failures. Most current fault-tolerance strategies are centralized and do not scale when a large number of processes are used, since synchronization among all of them is required to carry out the fault-tolerance tasks. In addition, maintaining the performance of parallel programs is crucial, both in the presence and in the absence of failures. With this in mind, this work focuses on a decentralized fault-tolerant architecture (RADIC, Redundant Array of Distributed and Independent Controllers) that seeks to preserve the initial performance and to guarantee the lowest possible overhead when reconfiguring the system after a failure. The architecture has been implemented in the Open MPI message-passing library, currently one of the most widely used in the scientific community for running parallel programs on message-passing platforms. Initial tests show that the system introduces minimal overhead for carrying out the fault-tolerance tasks. The MPI standard is fail-stop by default, and in implementations that add some level of tolerance the most common strategies are coordinated. In RADIC, when a failure occurs the process is recovered on another node by rolling back to a previous state that was stored beforehand using uncoordinated checkpoints and by re-reading messages from the event log. During recovery, communications with the affected process must be delayed and redirected to the process's new location. Restarting processes on a node that already hosts other processes overloads the execution and degrades performance, so this work proposes using spare nodes to recover failed processes, thus avoiding overhead on nodes that already have work. The work presents a design for managing recovery on spare nodes automatically and in a decentralized way within an Open MPI environment, together with an analysis of the performance impact of this design. Initial results show a significant degradation when several failures occur during execution and no spares are used, whereas using spares restores the initial configuration and maintains performance.
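As a conceptual sketch only (plain Python, not the Open MPI or RADIC code), the example below illustrates the recovery idea described above: uncoordinated checkpointing of local state, logging of received messages, and restart from the last checkpoint with log replay, as would happen when a failed process is restored on a spare node.

```python
# Conceptual illustration of rollback recovery with an uncoordinated
# checkpoint plus message-log replay; all names are hypothetical.
import pickle

class RecoverableProcess:
    def __init__(self):
        self.state = 0                 # application state (a counter here)
        self.message_log = []          # log of messages received since last checkpoint

    def receive(self, msg: int) -> None:
        self.message_log.append(msg)   # log before applying
        self.state += msg

    def checkpoint(self, path: str) -> None:
        with open(path, "wb") as f:    # uncoordinated: no sync with peers
            pickle.dump(self.state, f)
        self.message_log.clear()       # log now only covers post-checkpoint events

    @classmethod
    def recover(cls, path: str, replay_log: list) -> "RecoverableProcess":
        proc = cls()                   # restarted, e.g. on a spare node
        with open(path, "rb") as f:
            proc.state = pickle.load(f)
        for msg in replay_log:         # re-read messages from the event log
            proc.state += msg
        return proc

# Usage: checkpoint, keep working, "fail", then recover and replay the log.
p = RecoverableProcess()
p.receive(3); p.checkpoint("ckpt.bin")
p.receive(5); saved_log = list(p.message_log)   # the log survives the failure
restored = RecoverableProcess.recover("ckpt.bin", saved_log)
print(restored.state)   # 8, the same state as before the failure
```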
Abstract:
RATIONALE AND OBJECTIVES: To determine the optimum spatial resolution when imaging peripheral arteries with magnetic resonance angiography (MRA). MATERIALS AND METHODS: Eight vessel diameters ranging from 1.0 to 8.0 mm were simulated in a vascular phantom. A total of 40 three-dimensional FLASH MRA sequences were acquired with incremental variations of field of view, matrix size, and slice thickness. The accurately known eight diameters were combined pairwise to generate 22 "exact" degrees of stenosis ranging from 42% to 87%. The diameters were then measured in the MRA images by three independent observers and with quantitative angiography (QA) software, and used to compute the degrees of stenosis corresponding to the 22 "exact" ones. The accuracy and reproducibility of vessel diameter measurements and stenosis calculations were assessed for vessel sizes ranging from 6 to 8 mm (iliac artery), 4 to 5 mm (femoro-popliteal arteries), and 1 to 3 mm (infrapopliteal arteries). The maximum pixel dimension and slice thickness needed to obtain a mean error in stenosis evaluation of less than 10% were determined by linear regression analysis. RESULTS: Mean errors on stenosis quantification were 8.8% ± 6.3% for 6- to 8-mm vessels, 15.5% ± 8.2% for 4- to 5-mm vessels, and 18.9% ± 7.5% for 1- to 3-mm vessels. Mean errors on stenosis calculation were 12.3% ± 8.2% for observers and 11.4% ± 15.1% for QA software (P = .0342). To evaluate stenosis with a mean error of less than 10%, the maximum pixel surface, pixel size in the phase direction, and slice thickness should be less than 1.56 mm², 1.34 mm, and 1.70 mm, respectively (voxel size 2.65 mm³), for 6- to 8-mm vessels; 1.31 mm², 1.10 mm, and 1.34 mm (voxel size 1.76 mm³) for 4- to 5-mm vessels; and 1.17 mm², 0.90 mm, and 0.90 mm (voxel size 1.05 mm³) for 1- to 3-mm vessels. CONCLUSION: Higher spatial resolution than currently used should be selected for imaging peripheral vessels.
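For reference, the small hedged example below computes the two quantities the thresholds above refer to: percent diameter stenosis from a reference and a residual lumen diameter, and voxel volume from in-plane pixel dimensions and slice thickness. The numeric values are illustrative, not taken from the phantom protocol.

```python
# Percent diameter stenosis and voxel volume, as used in the resolution criteria.
def percent_stenosis(reference_mm: float, residual_mm: float) -> float:
    """Diameter stenosis in percent: (1 - residual/reference) * 100."""
    return (1.0 - residual_mm / reference_mm) * 100.0

def voxel_volume_mm3(pixel_read_mm: float, pixel_phase_mm: float,
                     slice_thickness_mm: float) -> float:
    return pixel_read_mm * pixel_phase_mm * slice_thickness_mm

# e.g. an 8 mm reference vessel with a 1 mm residual lumen gives ~87.5% stenosis,
# and a 1.16 x 1.34 mm pixel with a 1.70 mm slice gives ~2.64 mm^3, close to the
# 2.65 mm^3 voxel quoted for 6- to 8-mm vessels.
print(percent_stenosis(8.0, 1.0))
print(voxel_volume_mm3(1.16, 1.34, 1.70))
```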
Abstract:
OBJECTIVE: To assess whether formatting the medical order sheet has an effect on the accuracy and safety of antibiotic prescription. DESIGN: Prospective assessment of antibiotic prescription over time, before and after the intervention, in comparison with a control ward. SETTING: The medical and surgical intensive care units (ICUs) of a university hospital. PATIENTS: All patients hospitalized in the medical or surgical ICU between February 1 and April 30, 1997, and July 1 and August 31, 2000, for whom antibiotics were prescribed. INTERVENTION: Formatting of the medical order sheet in the surgical ICU in 1998. MEASUREMENTS AND MAIN RESULTS: Compliance with the American Society of Hospital Pharmacists' criteria for prescription safety was measured. The proportion of safe orders increased in both units, but the increase was 4.6 times greater in the surgical ICU (66% vs. 74% in the medical ICU and 48% vs. 74% in the surgical ICU). Among unsafe orders, the proportion of ambiguous orders decreased by half in the medical ICU (9% vs. 17%) and nearly disappeared in the surgical ICU (1% vs. 30%). The only missing criterion remaining in the surgical ICU was the drug dose unit, which could not be preformatted. The aim of the antibiotic prescription (prophylactic or therapeutic) was indicated in only 51% of the order sheets. CONCLUSIONS: Formatting of the order sheet markedly increased the safety of antibiotic prescription. These findings must be confirmed in other settings and with different drug classes. Formatting the medical order sheet decreases the potential for prescribing errors until full computerized prescription is available.