976 results for quality estimation
Abstract:
The objective of this article was to record reporting characteristics related to study quality in research published in the major specialty dental journals with the highest impact factors (Journal of Endodontics, Journal of Oral and Maxillofacial Surgery, American Journal of Orthodontics and Dentofacial Orthopedics, Pediatric Dentistry, Journal of Clinical Periodontology, and International Journal of Prosthetic Dentistry). The included articles were classified into 3 broad subject categories: (1) cross-sectional (snap-shot), (2) observational, and (3) interventional. Multinomial logistic regression was conducted for effect estimation, using the journal as the response and randomization, sample size calculation, discussion of confounding, multivariate analysis, effect measurement, and confidence intervals as the explanatory variables. The results showed that cross-sectional studies were the dominant design (55%), whereas observational investigations accounted for 13% and interventions/clinical trials for 32%. Reporting on quality characteristics was low for all variables: random allocation (15%), sample size calculation (7%), confounding issues/possible confounders (38%), effect measurements (16%), and multivariate analysis (21%). Eighty-four percent of the published articles reported a statistically significant main finding, and only 13% presented confidence intervals. The Journal of Clinical Periodontology showed the highest probability of including quality characteristics when reporting results among all the dental journals.
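As a rough illustration of the kind of model described above, the sketch below fits a multinomial logistic regression with the journal as the response and the reporting indicators as binary explanatory variables, using statsmodels. The column names and the toy data are assumptions for illustration, not the study's dataset.

```python
# Minimal sketch of a multinomial logistic regression of the kind described above.
# Column names and the synthetic data are illustrative assumptions, not the study data.
import numpy as np
import pandas as pd
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 300
df = pd.DataFrame({
    "journal": rng.integers(0, 6, n),            # 6 journals coded 0..5 (response)
    "randomization": rng.integers(0, 2, n),      # binary reporting indicators
    "sample_size_calc": rng.integers(0, 2, n),
    "confounding_discussed": rng.integers(0, 2, n),
    "multivariate_analysis": rng.integers(0, 2, n),
    "effect_measure": rng.integers(0, 2, n),
    "confidence_intervals": rng.integers(0, 2, n),
})

X = sm.add_constant(df.drop(columns="journal"))
model = sm.MNLogit(df["journal"], X)             # journal as the multinomial response
result = model.fit(disp=False)
print(result.summary())
```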
Abstract:
For virtually all hospitals, utilization rates are a critical managerial indicator of efficiency and are determined in part by turnover time. Turnover time is defined as the time elapsed between surgeries, during which the operating room is cleaned and prepared for the next surgery. Lengthier turnover times result in lower utilization rates, thereby hindering hospitals’ ability to maximize the number of patients that can be attended to. In this thesis, we analyze operating room data from a two-year period provided by Evangelical Community Hospital in Lewisburg, Pennsylvania, to understand the variability of the turnover process. From the recorded data provided, we derive our best estimate of turnover time. Recognizing the importance of properly modeling turnover times in order to improve the accuracy of scheduling, we seek to fit distributions to the set of turnover times. We find that log-normal and log-logistic distributions are well suited to turnover times, although further research must validate this finding. We propose that the choice of distribution depends on the hospital and that, as a result, a hospital must choose whether to use the log-normal or the log-logistic distribution. Next, we use statistical tests to identify variables that may potentially influence turnover time. We find that there does not appear to be a correlation between surgery time and turnover time across doctors. However, there are statistically significant differences between the mean turnover times across doctors. The final component of our research entails analyzing and explaining the benefits of introducing control charts as a quality control mechanism for monitoring turnover times in hospitals. Although widely instituted in other industries, control charts are not widely adopted in healthcare environments, despite their potential benefits. A major component of our work is the development of control charts to monitor the stability of turnover times. These charts can be easily instituted in hospitals to reduce the variability of turnover times. Overall, our analysis uses operations research techniques to analyze turnover times and identify ways to lower both the mean turnover time and the variability in turnover times. We provide valuable insight into a component of the surgery process that has received little attention but can significantly affect utilization rates in hospitals. Most critically, an ability to predict turnover times more accurately and a better understanding of the sources of variability can result in improved scheduling and heightened hospital staff and patient satisfaction. We hope that our findings can apply to many other hospital settings.
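To make the distribution-fitting and control-chart steps concrete, here is a minimal sketch using scipy: it fits log-normal and log-logistic (Fisk) distributions to a synthetic set of turnover times and derives individuals-chart control limits with the standard 2.66 moving-range constant. The data and parameter values are illustrative, not the thesis's hospital data.

```python
# Illustrative sketch (not the thesis code): fit log-normal and log-logistic
# distributions to turnover times and compute individuals (I-chart) control limits.
# The turnover times below are synthetic placeholders.
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
turnover_min = rng.lognormal(mean=3.3, sigma=0.4, size=200)   # synthetic times (minutes)

# Fit candidate distributions (location fixed at 0 so the fits stay on (0, inf)).
ln_shape, ln_loc, ln_scale = stats.lognorm.fit(turnover_min, floc=0)
ll_shape, ll_loc, ll_scale = stats.fisk.fit(turnover_min, floc=0)   # fisk = log-logistic

# Compare fits with the Kolmogorov-Smirnov statistic (lower = closer fit).
ks_ln = stats.kstest(turnover_min, "lognorm", args=(ln_shape, ln_loc, ln_scale)).statistic
ks_ll = stats.kstest(turnover_min, "fisk", args=(ll_shape, ll_loc, ll_scale)).statistic
print(f"KS log-normal: {ks_ln:.3f}  KS log-logistic: {ks_ll:.3f}")

# Individuals control chart limits (I-MR chart, standard constant 2.66).
moving_range = np.abs(np.diff(turnover_min))
center = turnover_min.mean()
ucl = center + 2.66 * moving_range.mean()
lcl = max(center - 2.66 * moving_range.mean(), 0.0)
print(f"I-chart center {center:.1f} min, limits [{lcl:.1f}, {ucl:.1f}] min")
```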
Abstract:
Submicroscopic changes in chromosomal DNA copy number dosage are common and have been implicated in many heritable diseases and cancers. Recent high-throughput technologies have a resolution that permits the detection of segmental changes in DNA copy number that span thousands of basepairs across the genome. Genome-wide association studies (GWAS) may simultaneously screen for copy number-phenotype and SNP-phenotype associations as part of the analytic strategy. However, genome-wide array analyses are particularly susceptible to batch effects, as the logistics of preparing DNA and processing thousands of arrays often involve multiple laboratories and technicians, or changes over calendar time to the reagents and laboratory equipment. Failure to adjust for batch effects can lead to incorrect inference and requires inefficient post-hoc quality control procedures that exclude regions associated with batch. Our work extends previous model-based approaches for copy number estimation by explicitly modeling batch effects and using shrinkage to improve locus-specific estimates of copy number uncertainty. Key features of this approach include the use of diallelic genotype calls from experimental data to estimate batch- and locus-specific parameters of background and signal without requiring training data. We illustrate these ideas using a study of bipolar disease and a study of chromosome 21 trisomy. The former has batch effects that dominate much of the observed variation in quantile-normalized intensities, while the latter illustrates the robustness of our approach to datasets in which as many as 25% of the samples have altered copy number. Locus-specific estimates of copy number can be plotted on the copy-number scale to investigate mosaicism and guide the choice of appropriate downstream approaches for smoothing the copy number as a function of physical position. The software is open source and implemented in the R package CRLMM available at Bioconductor (http://www.bioconductor.org).
Abstract:
This paper is a summary of the main contributions of the PhD thesis published in [1]. The main research contributions of the thesis are driven by the research question of how to design simple, yet efficient and robust, run-time adaptive resource allocation schemes within the communication stack of Wireless Sensor Network (WSN) nodes. The thesis addresses several problem domains, with contributions on different layers of the WSN communication stack. The main contributions can be summarized as follows: First, a novel run-time adaptive MAC protocol is introduced, which stepwise allocates the power-hungry radio interface in an on-demand manner when the encountered traffic load requires it. Second, the thesis outlines a methodology for robust, reliable and accurate software-based energy estimation, calculated at network run-time on the sensor node itself. Third, the thesis evaluates several Forward Error Correction (FEC) strategies to adaptively allocate the correctional power of Error Correcting Codes (ECCs) to cope with temporally and spatially variable bit error rates. Fourth, in the context of TCP-based communications in WSNs, the thesis evaluates distributed caching and local retransmission strategies to overcome the performance-degrading effects of packet corruption and transmission failures when transmitting data over multiple hops. The performance of all developed protocols is evaluated on a self-developed real-world WSN testbed, and they achieve superior performance over selected existing approaches, especially where traffic load and channel conditions are subject to rapid variations over time.
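The energy-estimation contribution can be illustrated with a generic state-based bookkeeping model: accumulate the time spent in each radio/MCU state and multiply by a per-state current draw. The states and current values below are placeholder assumptions, not the method or figures from the thesis.

```python
# Generic sketch of software-based energy bookkeeping for a sensor node:
# accumulate time spent in each radio/MCU state and multiply by a per-state
# current draw. The states and current values are illustrative assumptions,
# not the figures used in the thesis.
SUPPLY_V = 3.0
CURRENT_MA = {"radio_rx": 19.7, "radio_tx": 17.4, "radio_off": 0.02, "mcu_active": 1.8}

class EnergyAccount:
    def __init__(self):
        self.time_in_state_s = {state: 0.0 for state in CURRENT_MA}

    def log(self, state: str, duration_s: float) -> None:
        """Record time spent in a state (would be driven by driver hooks at run time)."""
        self.time_in_state_s[state] += duration_s

    def energy_mj(self) -> float:
        """Estimated energy in millijoules: sum of V * I * t over states."""
        return sum(SUPPLY_V * CURRENT_MA[s] * t for s, t in self.time_in_state_s.items())

acct = EnergyAccount()
acct.log("radio_rx", 0.120)     # e.g. 120 ms listening
acct.log("radio_tx", 0.004)     # 4 ms transmitting
acct.log("mcu_active", 0.200)
print(f"Estimated energy: {acct.energy_mj():.2f} mJ")
```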
Abstract:
Instruments for on-farm determination of colostrum quality such as refractometers and densimeters are increasingly used on dairy farms. The colour of colostrum is also supposed to reflect its quality: a paler or mature milk-like colour is associated with a lower colostrum value in terms of its general composition compared with a more yellowish and darker colour. The objective of this study was to investigate the relationships between colour measurement of colostrum using the CIELAB colour space (CIE L*=from white to black, a*=from red to green, b*=from yellow to blue, chroma value G=visually perceived colourfulness) and its composition. Dairy cow colostrum samples (n=117) obtained at 4·7±1·5 h after parturition were analysed for immunoglobulin G (IgG) by ELISA and for fat, protein and lactose by infrared spectroscopy. For colour measurements, a calibrated spectrophotometer was used. At a cut-off value of 50 mg IgG/ml, colour measurement had a sensitivity of 50·0%, a specificity of 49·5%, and a negative predictive value of 87·9%. Colostral IgG concentration was not correlated with the chroma value G, but it was correlated with relative lightness L*. While milk fat content showed a relationship to the colour parameters L*, a*, b* and G, milk protein content was not correlated with a*, but it was correlated with L*, b*, and G. Lactose concentration in colostrum showed a relationship only with b* and G. In conclusion, parameters of the colour measurement showed clear relationships to colostral IgG, fat, protein and lactose concentrations in dairy cows. Implementation of colour measuring devices in automatic milking systems and milking parlours might be a potential instrument to assess colostrum quality as well as to detect abnormal milk.
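For reference, the sensitivity, specificity and negative predictive value reported above are computed from a 2x2 confusion matrix at the 50 mg IgG/ml cut-off, as in the minimal sketch below; the IgG values and colour-based predictions in it are synthetic placeholders, not the study data.

```python
# Sketch of how sensitivity, specificity and negative predictive value can be
# computed for a colour-based classifier at the 50 mg IgG/ml cut-off. The IgG
# values and colour-based predictions here are synthetic, not the study data.
import numpy as np

rng = np.random.default_rng(2)
igg = rng.gamma(shape=4.0, scale=20.0, size=117)          # synthetic IgG (mg/ml)
predicted_good = rng.random(117) < 0.5                    # placeholder colour-based call

actual_good = igg >= 50                                   # "good quality" colostrum
tp = np.sum(predicted_good & actual_good)
tn = np.sum(~predicted_good & ~actual_good)
fp = np.sum(predicted_good & ~actual_good)
fn = np.sum(~predicted_good & actual_good)

sensitivity = tp / (tp + fn)
specificity = tn / (tn + fp)
npv = tn / (tn + fn)
print(f"sensitivity {sensitivity:.1%}, specificity {specificity:.1%}, NPV {npv:.1%}")
```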
Abstract:
Cramér-Rao lower bounds (CRLB) have become the standard for expressing uncertainties in quantitative MR spectroscopy. If properly interpreted as a lower threshold of the error associated with model fitting, and if the limits of its estimation are respected, the CRLB is certainly a very valuable tool for giving an idea of the minimal uncertainties in magnetic resonance spectroscopy (MRS), although other sources of error may be larger. Unfortunately, it has also become standard practice to use the relative CRLB, expressed as a percentage of the presently estimated area or concentration value, as an unsupervised exclusion criterion for bad-quality spectra. It is shown that such quality filtering with widely used threshold levels of 20% to 50% CRLB readily causes bias in the estimated mean concentrations of cohort data, leading to wrong or missed statistical findings and, if applied rigorously, to the failure of MRS as a clinical instrument for diagnosing diseases characterized by low levels of metabolites. Instead, absolute CRLB in comparison to those of the normal group, or CRLB in relation to normal metabolite levels, may be more useful as quality criteria.
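The bias mechanism can be reproduced with a few lines of simulation: when the relative CRLB is the fitting uncertainty divided by the estimate, small estimates are preferentially rejected by a fixed percentage threshold, so the retained mean is pulled upward. The numbers below are illustrative assumptions, not values from the paper.

```python
# Small simulation illustrating the bias described above: if estimates with a
# relative CRLB above a fixed threshold are discarded, the retained values of a
# low-concentration metabolite are no longer representative. Numbers are
# illustrative assumptions, not from the paper.
import numpy as np

rng = np.random.default_rng(3)
true_conc = 1.0                      # arbitrary units, a "low-level" metabolite
sigma = 0.3                          # fitting uncertainty (absolute CRLB surrogate)

estimates = rng.normal(true_conc, sigma, size=10_000)
estimates = np.clip(estimates, 1e-6, None)        # concentrations are non-negative
relative_crlb = sigma / estimates * 100           # CRLB as % of the estimated value

kept = estimates[relative_crlb <= 20]             # the common "20% CRLB" filter
print(f"mean of all estimates : {estimates.mean():.3f}")
print(f"mean after 20% filter : {kept.mean():.3f}  (kept {kept.size / estimates.size:.0%})")
# The filtered mean is biased upward, because small estimates have large *relative* CRLB.
```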
Abstract:
We present an application- and sample-independent method for the automatic discrimination of noise and signal in optical coherence tomography B-scans. The proposed algorithm models the observed noise probabilistically and allows for a dynamic determination of image noise parameters and the choice of appropriate image rendering parameters. This overcomes observer variability and the need for a priori information about the content of sample images, both of which are challenging to estimate systematically with current systems. As such, our approach has the advantage of automatically determining crucial parameters for evaluating rendered image quality in a systematic and task-independent way. We tested our algorithm on data from four different biological and non-biological samples (index finger, lemon slices, sticky tape, and detector cards) acquired with three different experimental spectral-domain optical coherence tomography (OCT) measurement systems, including a swept-source OCT. The results are compared to parameters determined manually by four experienced OCT users. Overall, our algorithm works reliably regardless of which system and sample are used and, in all cases, estimates noise parameters within the confidence interval of those found by the observers.
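A greatly simplified sketch of the general idea, estimating noise statistics from apparently signal-free regions of a B-scan and deriving display limits from them, is shown below; the synthetic B-scan and the "lowest-intensity rows as background" heuristic are assumptions for illustration and not the authors' probabilistic model.

```python
# Very simplified sketch of estimating noise parameters from an OCT B-scan and
# deriving display/rendering limits; it is not the authors' probabilistic model.
# The B-scan here is synthetic, and the "background = lowest-intensity rows"
# heuristic is an assumption for illustration only.
import numpy as np

rng = np.random.default_rng(4)
bscan_db = rng.normal(55.0, 3.0, size=(512, 1000))                    # synthetic noise floor (dB)
bscan_db[200:260, :] += rng.normal(25.0, 5.0, size=(60, 1000))        # synthetic "sample" band

# Treat the rows with the lowest mean intensity as signal-free background.
row_means = bscan_db.mean(axis=1)
background = bscan_db[np.argsort(row_means)[:100], :]

noise_mean = background.mean()
noise_std = background.std()

# Choose rendering limits relative to the estimated noise statistics.
display_floor = noise_mean + 2.0 * noise_std
display_ceiling = bscan_db.max()
print(f"noise: {noise_mean:.1f} +/- {noise_std:.1f} dB -> display range "
      f"[{display_floor:.1f}, {display_ceiling:.1f}] dB")
```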
Abstract:
BACKGROUND Estimation of glomerular filtration rate (eGFR) using a common formula for both adult and pediatric populations is challenging. Using inulin clearances (iGFRs), this study aims to investigate the existence of a precise age cutoff beyond which the Modification of Diet in Renal Disease (MDRD), Chronic Kidney Disease Epidemiology Collaboration (CKD-EPI), or Cockcroft-Gault (CG) formulas can be applied with acceptable precision. The performance of the new Schwartz formula according to age is also evaluated. METHOD We compared 503 iGFRs for 503 children aged between 33 months and 18 years to eGFRs. To define the most precise age cutoff value for each formula, a circular binary segmentation method analyzing the formulas' bias values according to the children's ages was performed. Bias was defined as the difference between iGFRs and eGFRs. To validate the identified cutoff, the 30% accuracy (the percentage of eGFRs falling within ±30% of the iGFR) was calculated. RESULTS For MDRD, CKD-EPI and CG, the best age cutoffs were ≥14.3, ≥14.2 and ≤10.8 years, respectively. The lowest mean bias and highest accuracy were -17.11 and 64.7% for MDRD, 27.4 and 51% for CKD-EPI, and 8.31 and 77.2% for CG. The Schwartz formula showed the best performance below the age of 10.9 years. CONCLUSION For the MDRD and CKD-EPI formulas, the mean bias values decreased with increasing child age, and these formulas were more accurate beyond age cutoffs of 14.3 and 14.2 years, respectively. For the CG and Schwartz formulas, the lowest mean bias values and the best accuracies were below age cutoffs of 10.8 and 10.9 years, respectively. Nevertheless, the accuracies of the formulas were still below the National Kidney Foundation Kidney Disease Outcomes Quality Initiative target for validation in these age groups and, therefore, none of these formulas can be used to estimate GFR in children and adolescent populations.
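For clarity, the two performance measures used in this study, bias (iGFR - eGFR) and the 30% accuracy, can be computed as in the short sketch below; the iGFR and eGFR values in it are synthetic placeholders, not the study cohort.

```python
# Sketch of the two performance measures used above: bias (iGFR - eGFR) and the
# 30% accuracy (share of eGFRs within +/-30% of the measured inulin clearance).
# The values below are synthetic placeholders, not the study cohort.
import numpy as np

rng = np.random.default_rng(5)
igfr = rng.normal(90, 25, size=503)                     # measured inulin clearances
egfr = igfr + rng.normal(5, 20, size=503)               # some formula's estimates

bias = igfr - egfr                                      # as defined in the study
p30 = np.mean(np.abs(egfr - igfr) <= 0.30 * igfr)       # fraction within 30% of iGFR
print(f"mean bias {bias.mean():.1f} ml/min/1.73m^2, 30% accuracy {p30:.1%}")
```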
Abstract:
It is widely acknowledged in the theoretical and empirical literature that social relationships, comprising structural measures (social networks) and functional measures (perceived social support), have an undeniable effect on health outcomes. However, the actual mechanism of this effect has yet to be clearly understood or explicated. In addition, comorbidity is found to adversely affect social relationships and health-related quality of life (HRQoL, a valued outcome measure in cancer patients and survivors). This cross-sectional study uses selected baseline data (N=3088) from the Women's Healthy Eating and Living (WHEL) study. LISREL 8.72 was used for the latent variable structural equation modeling. Due to the ordinal nature of the data, the Weighted Least Squares (WLS) method of estimation using asymptotic distribution-free covariance matrices was chosen for this analysis. The primary exogenous predictor variables are social networks and comorbidity; perceived social support is the endogenous predictor variable. Three dimensions of HRQoL (physical, mental, and satisfaction with current quality of life) were the outcome variables. This study hypothesizes and tests the mechanism and pathways between comorbidity, social relationships and HRQoL using latent variable structural equation modeling. After testing the measurement models of social networks and perceived social support, a structural model hypothesizing associations between the latent exogenous and endogenous variables was tested. The results of the study after listwise deletion (N=2131) mostly confirmed the hypothesized relationships (TLI, CFI >0.95, RMSEA = 0.05, p=0.15). Comorbidity was adversely associated with all three HRQoL outcomes. Strong ties were negatively associated with perceived social support; social networks had a strong positive association with perceived social support, which served as a mediator between social networks and HRQoL. Mental health quality of life was the most adversely affected by the predictor variables. This study is a preliminary look at the integration of structural and functional measures of social relationships, comorbidity and three HRQoL indicators using LVSEM. Developing stronger social networks and forming supportive relationships is beneficial for health outcomes such as the HRQoL of cancer survivors. Thus, the medical community treating cancer survivors, as well as the survivors' social networks, need to be informed and cognizant of these possible relationships.
Abstract:
HIV/AIDS is a treatable although incurable disease that presents immense challenges to those infected, including physical, social and psychological effects. As of 2009, an estimated 2.4 million people were living with HIV or AIDS in India, 0.3% of the country's population. In India, the disease is difficult not only to treat but also to track, because it is associated with socio-economic factors such as illiteracy, social biases, poor sanitation, malnutrition and social class. Nevertheless, it is important to know the prevalence of HIV/AIDS for several reasons. At the individual level, the quality of life of people living with HIV/AIDS is markedly lower than that of their counterparts without the disease and is associated with particular challenges. At the community level, prevalence data are important for identifying high-risk groups, monitoring prevention efforts, and allocating appropriate resources to target programs for the reduction of HIV transmission.
Abstract:
The optimum quality that can be asymptotically achieved in the estimation of a probability p using inverse binomial sampling is addressed. A general definition of quality is used in terms of the risk associated with a loss function that satisfies certain assumptions. It is shown that the limit superior of the risk for p asymptotically small has a minimum over all (possibly randomized) estimators. This minimum is achieved by certain non-randomized estimators. The model includes commonly used quality criteria as particular cases. Applications to the non-asymptotic regime are discussed considering specific loss functions, for which minimax estimators are derived.
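As a concrete (non-asymptotic) illustration of inverse binomial sampling, the sketch below simulates sampling until r successes and evaluates the classical estimator p_hat = (r-1)/(N-1) under a squared relative-error loss; these particular choices are for illustration and are not necessarily the estimators or loss functions analysed in the paper.

```python
# Illustrative simulation of inverse binomial sampling (observe Bernoulli(p)
# trials until r successes occur). The estimator p_hat = (r-1)/(N-1) and the
# relative-error loss used here are classical choices for illustration; they are
# not necessarily the estimators or loss functions analysed in the paper.
import numpy as np

rng = np.random.default_rng(6)

def inverse_binomial_trials(p: float, r: int, size: int) -> np.ndarray:
    """Number of trials N needed to observe r successes (negative binomial + r)."""
    return rng.negative_binomial(r, p, size=size) + r

p_true, r = 0.01, 10
n_trials = inverse_binomial_trials(p_true, r, size=100_000)
p_hat = (r - 1) / (n_trials - 1)              # Haldane's unbiased estimator of p

relative_error = (p_hat - p_true) / p_true
print(f"mean(p_hat) = {p_hat.mean():.5f} (true {p_true})")
print(f"risk under squared relative-error loss: {np.mean(relative_error**2):.4f}")
```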
Abstract:
The evolution of the television market is led by 3DTV technology, and this tendency may accelerate in the coming years according to expert forecasts. However, 3DTV delivery over broadcast networks is not currently developed enough and acts as a bottleneck for the complete deployment of the technology. Thus, increasing interest is dedicated to stereo 3DTV formats compatible with current HDTV video equipment and infrastructure, as they may greatly encourage 3D acceptance. In this paper, different subsampling schemes for HDTV-compatible transmission of both progressive and interlaced stereo 3DTV are studied and compared. The frequency characteristics and preserved frequency content of each scheme are analyzed, and a simple interpolation filter is specially designed. Finally, the advantages and disadvantages of the different schemes and filters are evaluated through quality testing on several progressive and interlaced video sequences.
Abstract:
In Video over IP services, perceived video quality depends heavily on parameters such as video coding and network Quality of Service. This paper proposes a model for the estimation of perceived video quality in video streaming and broadcasting services that combines the aforementioned parameters with others that depend mainly on the information content of the video sequences. These fitting parameters are derived from the Spatial and Temporal Information content of the sequences. The model does not require a reference to the original video sequence, so it can be used for online, real-time monitoring of perceived video quality in Video over IP services. Furthermore, this paper proposes a measurement workbench designed to acquire both training data for model fitting and test data for model validation. Preliminary results show good correlation between measured and predicted values.
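The Spatial and Temporal Information descriptors mentioned above are commonly computed as in ITU-T P.910: SI is the maximum over time of the spatial standard deviation of the Sobel-filtered luma frame, and TI is the maximum over time of the standard deviation of frame differences. The sketch below follows that definition on synthetic frames; it is a generic illustration rather than the paper's measurement workbench.

```python
# Sketch of the Spatial Information (SI) and Temporal Information (TI) measures
# (as defined in ITU-T P.910) that the model above uses as content descriptors.
# The frames here are synthetic; a real implementation would read luma frames
# from the video under test.
import numpy as np
from scipy import ndimage

def si_ti(frames: np.ndarray) -> tuple[float, float]:
    """frames: array of shape (num_frames, height, width), luma values."""
    si_per_frame = []
    for frame in frames:
        gx = ndimage.sobel(frame, axis=1)
        gy = ndimage.sobel(frame, axis=0)
        si_per_frame.append(np.hypot(gx, gy).std())          # spatial gradient spread
    ti_per_frame = [np.std(frames[i] - frames[i - 1]) for i in range(1, len(frames))]
    return max(si_per_frame), max(ti_per_frame)

frames = np.random.default_rng(7).normal(128, 20, size=(30, 144, 176)).astype(np.float64)
si, ti = si_ti(frames)
print(f"SI = {si:.1f}, TI = {ti:.1f}")
```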
Abstract:
In this paper, we consider a scenario where 3D scenes are modeled through a View+Depth representation. This representation is to be used at the rendering side to generate synthetic views for free-viewpoint video. The encoding of both types of data (view and depth) is carried out using two H.264/AVC encoders. In this scenario we address the reduction of the encoding complexity of the depth data. First, an analysis of the Mode Decision and Motion Estimation processes has been conducted for both view and depth sequences, in order to capture the correlation between them. Taking advantage of this correlation, we propose a fast mode decision and motion estimation algorithm for the depth encoding. Results show that the proposed algorithm reduces the computational burden with a negligible loss in terms of quality of the rendered synthetic views. Quality measurements have been conducted using the Video Quality Metric.
Abstract:
Nowadays, Computational Fluid Dynamics (CFD) solvers are widely used within industry to model fluid flow phenomena. Several fluid flow model equations have been employed in the last decades to simulate and predict the forces acting, for example, on different aircraft configurations. Computational time and accuracy are strongly dependent on the fluid flow model equation and the spatial dimension of the problem considered. While simple models based on perfect flows, like panel methods or potential flow models, can be very fast to solve, they usually suffer from poor accuracy when simulating real flows (transonic, viscous). On the other hand, more complex models such as the full Navier-Stokes equations provide high-fidelity predictions but at a much higher computational cost. Thus, a good compromise between accuracy and computational time has to be found for engineering applications.

A discretisation technique widely used within industry is the so-called Finite Volume approach on unstructured meshes. This technique spatially discretises the flow motion equations onto a set of elements which form a mesh, a discrete representation of the continuous domain. Using this approach, for a given flow model equation, the accuracy and computational time mainly depend on the distribution of the nodes forming the mesh. Therefore, a good compromise between accuracy and computational time might be obtained by carefully defining the mesh. However, defining an optimal mesh for complex flows and geometries requires a very high level of expertise in fluid mechanics and numerical analysis, and in most cases it is impossible to simply guess which regions of the computational domain affect the accuracy the most. Thus, it is desirable to have an automated remeshing tool, which is more flexible with unstructured meshes than with structured ones. However, adaptive methods currently in use still leave an open question: how to drive the adaptation efficiently? Pioneering sensors based on flow features generally suffer from a lack of reliability, so in the last decade more effort has been put into developing numerical error-based sensors, such as adjoint-based adaptation sensors. While very efficient at adapting meshes for a given functional output, the latter method is very expensive, as it requires solving a dual set of equations and computing the sensor on an embedded mesh. Therefore, it would be desirable to develop a more affordable numerical error estimation method.

The current work aims at estimating the truncation error, which arises when discretising a partial differential equation; it consists of the higher-order terms neglected in the construction of the numerical scheme. The truncation error provides very useful information, as it is strongly related to the flow model equation and its discretisation. On the one hand, it is a very reliable measure of the quality of the mesh, and therefore very useful for driving a mesh adaptation procedure. On the other hand, it is strongly linked to the flow model equation, so that a careful estimation actually gives information on how well a given equation is solved, which may be useful in the context of τ-extrapolation or zonal modelling.

The work is organized as follows: Chap. 1 contains a short review of mesh adaptation techniques as well as numerical error prediction. In the first section, Sec. 1.1, the basic refinement strategies are reviewed and the main contributions to structured and unstructured mesh adaptation are presented.
Sec. 1.2 introduces the definitions of the errors encountered when solving Computational Fluid Dynamics problems and reviews the most common approaches to predict them. Chap. 2 is devoted to the mathematical formulation of truncation error estimation in the context of the finite volume methodology, as well as a complete verification procedure. Several features are studied, such as the influence of grid non-uniformities, non-linearity, boundary conditions and non-converged numerical solutions. This verification part has been submitted to and accepted for publication in the Journal of Computational Physics. Chap. 3 presents a mesh adaptation algorithm based on truncation error estimates and compares the results to a feature-based and an adjoint-based sensor (in collaboration with Jorge Ponsín, INTA). Two- and three-dimensional cases relevant for validation in the aeronautical industry are considered. This part has been submitted to and accepted by the AIAA Journal. An extension to the Reynolds-Averaged Navier-Stokes equations is also included, where τ-estimation-based mesh adaptation and τ-extrapolation are applied to viscous wing profiles. The latter has been submitted to the Proceedings of the Institution of Mechanical Engineers, Part G: Journal of Aerospace Engineering. Keywords: mesh adaptation, numerical error prediction, finite volume
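A minimal one-dimensional sketch of the τ-estimation idea, assuming a nested grid pair and a second-order scheme, is given below: the truncation error of the coarse grid is estimated from the coarse-grid residual of the restricted (injected) fine-grid solution. The Poisson model problem stands in for the finite-volume CFD setting of the thesis and is purely illustrative.

```python
# Minimal 1-D sketch of the tau-estimation idea described above: the truncation
# error of a coarse grid is estimated by injecting the (converged) fine-grid
# solution into the coarse-grid operator. The model problem -u'' = f with
# u = sin(pi x) is an illustrative stand-in for the finite-volume CFD setting.
import numpy as np

def poisson_matrix(n, h):
    """Standard 3-point operator for -u'' on n interior nodes, Dirichlet BCs."""
    return (np.diag(2.0 * np.ones(n)) - np.diag(np.ones(n - 1), 1)
            - np.diag(np.ones(n - 1), -1)) / h**2

n_coarse = 32                      # interior nodes on the coarse grid
n_fine = 2 * n_coarse + 1          # nested fine grid, h = H / 2
H, h = 1.0 / (n_coarse + 1), 1.0 / (n_fine + 1)
x_coarse = H * np.arange(1, n_coarse + 1)
x_fine = h * np.arange(1, n_fine + 1)
f = lambda x: np.pi**2 * np.sin(np.pi * x)

# Converged fine-grid solution, then restriction to the coarse nodes (injection).
u_fine = np.linalg.solve(poisson_matrix(n_fine, h), f(x_fine))
u_restricted = u_fine[1::2]

# tau-estimate: coarse-grid residual of the restricted fine solution.
A_coarse = poisson_matrix(n_coarse, H)
tau_estimate = A_coarse @ u_restricted - f(x_coarse)

# Exact truncation error of the coarse scheme, ~ (H^2/12) * u'''' for this problem.
# The injected-solution estimate recovers roughly (1 - (h/H)^2) = 3/4 of it here,
# since it measures the relative truncation error between the two grids.
tau_exact = A_coarse @ np.sin(np.pi * x_coarse) - f(x_coarse)
print(f"max |tau| estimated {np.abs(tau_estimate).max():.3e}, "
      f"exact {np.abs(tau_exact).max():.3e}")
```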