982 results for Branch-cut method


Relevance:

30.00%

Publisher:

Abstract:

Diabetic nephropathy and end-stage renal failure remain a major cause of mortality among patients with diabetes mellitus (DM). In this study, we evaluated the Clinitek-Microalbumin (CM) screening test strip for the detection of microalbuminuria (MA) in a random morning spot urine, in comparison with the quantitative assessment of albuminuria in a timed overnight urine collection (the "gold standard"). One hundred thirty-four children, adolescents, and young adults with insulin-dependent Type 1 DM were studied at 222 outpatient visits. The data of 13 visits were excluded because of urinary tract infection and/or haematuria. In the remaining 209 visits, 165 timed overnight urine collections were obtained (79% sample-per-visit rate). Ten patients (6.1%) presented MA of ≥15 µg/min. In comparison, 200 spot urines could be screened (96% sample-per-visit rate), a significant increase in compliance and screening rate (P<.001, McNemar test). Furthermore, on 156 occasions the gold standard and CM could be directly compared. The sensitivity and specificity of CM in the spot urine (cut-off ≥30 mg albumin/l) were 0.89 [95% confidence interval (CI) 0.56-0.99] and 0.73 (CI 0.66-0.80), respectively. The positive and negative predictive values were 0.17 (CI 0.08-0.30) and 0.99 (CI 0.95-1.00), respectively. With the CM albumin-to-creatinine ratio, the results were poorer than with the albumin concentration alone. Using CM instead of the quantitative assessment of albuminuria is not cost-effective (35 versus 60 US dollars/patient/year). In conclusion, the CM strip used on a random spot urine is reliable and easy to handle for excluding MA, but positive screening results of ≥30 mg albumin/l must be confirmed by analysis of a timed overnight urine collection.
Although screening compliance is improved by analysing random morning spot urine for MA, we cannot recommend CM in a paediatric diabetic outpatient setting because the specificity is far too low.
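The diagnostic indices reported above follow directly from a 2×2 confusion matrix. A minimal sketch in Python; the counts are hypothetical, chosen only to be consistent with the reported values, not the study's raw data:

```python
def diagnostic_indices(tp, fp, fn, tn):
    """Standard screening-test indices from a 2x2 confusion matrix."""
    sensitivity = tp / (tp + fn)   # true-positive rate
    specificity = tn / (tn + fp)   # true-negative rate
    ppv = tp / (tp + fp)           # positive predictive value
    npv = tn / (tn + fn)           # negative predictive value
    return sensitivity, specificity, ppv, npv

# Hypothetical counts (156 paired comparisons in total), chosen only to be
# consistent with the reported indices -- not the study's raw data.
sens, spec, ppv, npv = diagnostic_indices(tp=8, fp=40, fn=1, tn=107)
```

With so few truly positive samples, the PPV stays low even though sensitivity is high, which is exactly why positive strip results need confirmation in the timed overnight collection.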

Relevance:

30.00%

Publisher:

Abstract:

We study a real-world scheduling problem arising in the context of rolling ingots production. First we review the production process and discuss peculiarities that must be observed when scheduling a given set of production orders on the production facilities. We then show how to model this scheduling problem using prescribed time lags between operations, different kinds of resources, and sequence-dependent changeovers. A branch-and-bound solution procedure is presented in the second part. The basic principle is to relax the resource constraints by assuming infinite resource availability. The resulting resource conflicts are then resolved stepwise by introducing precedence relationships among operations competing for the same resources. The algorithm has been implemented as a beam search heuristic that enumerates alternative sets of precedence relationships.
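The relax-then-resolve principle can be illustrated on a much smaller problem than the paper's: one machine with sequence-dependent changeovers, where the lower bound simply relaxes the changeovers. This is a generic branch-and-bound sketch with hypothetical data, not the authors' procedure:

```python
# Hypothetical data: processing times and sequence-dependent changeover times.
proc = {"A": 4, "B": 3, "C": 5}
change = {("A", "B"): 2, ("A", "C"): 1, ("B", "A"): 2,
          ("B", "C"): 3, ("C", "A"): 1, ("C", "B"): 2}

def bnb_sequence(proc, change):
    """Branch-and-bound for one machine with sequence-dependent changeovers.

    The lower bound relaxes the changeover constraints entirely (zero
    changeover time), mirroring the relax-then-resolve principle."""
    best = [float("inf"), None]

    def visit(seq, cost):
        remaining = [j for j in proc if j not in seq]
        # Relaxation-based lower bound: remaining processing, no changeovers.
        if cost + sum(proc[j] for j in remaining) >= best[0]:
            return  # prune: cannot beat the incumbent
        if not remaining:
            best[0], best[1] = cost, seq
            return
        for j in remaining:  # branch: pick the next operation
            extra = proc[j] + (change[(seq[-1], j)] if seq else 0)
            visit(seq + (j,), cost + extra)

    visit((), 0)
    return best

best_cost, best_seq = bnb_sequence(proc, change)
```

The pruning test is where the relaxation earns its keep: any partial sequence whose relaxed completion already matches the incumbent is abandoned without enumeration.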

Relevance:

30.00%

Publisher:

Abstract:

A patient classification system was developed integrating a patient acuity instrument with a computerized nursing distribution method based on a linear programming model. The system was designed for real-time measurement of patient acuity (workload) and allocation of nursing personnel to optimize the utilization of resources.

The acuity instrument was a prototype tool with eight categories of patients defined by patient severity and nursing intensity parameters. From this tool, the demand for nursing care was defined in patient points, with one point equal to one hour of RN time. Validity and reliability of the instrument were determined as follows: (1) content validity by a panel of expert nurses; (2) predictive validity through a paired t-test analysis of preshift and postshift categorization of patients; (3) initial reliability by a one-month pilot of the instrument in a practice setting; and (4) interrater reliability by the Kappa statistic.

The nursing distribution system was a linear programming model using a branch-and-bound technique for obtaining integer solutions. The objective function was to minimize the total number of nursing personnel used by optimally assigning the staff to meet the acuity needs of the units. A penalty weight was used as a coefficient of the objective-function variables to define priorities for allocation of staff.

The demand constraints were requirements to meet the total acuity points needed for each unit and to have a minimum number of RNs on each unit. Supply constraints were: (1) the total availability of each type of staff and the value of that staff member, where value was determined relative to that type of staff's ability to perform the job functions of an RN (e.g., value for eight hours: RN = 8 points, LVN = 6 points); and (2) the number of personnel available for floating between units.

The capability of the model to assign staff quantitatively and qualitatively equal to the manual method was established by a thirty-day comparison. Sensitivity testing demonstrated appropriate adjustment of the optimal solution to changes in penalty coefficients in the objective function and to acuity totals in the demand constraints. Further investigation of the model documented correct adjustment of assignments in response to staff value changes, and cost minimization by the addition of a dollar coefficient to the objective function.
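A toy version of the staffing model can be solved by enumerating integer assignments directly; a production model would use an LP/IP solver with branch and bound, as the abstract describes. All names, values, and penalty weights below are hypothetical:

```python
from itertools import product

# Hypothetical single-unit instance: all numbers are illustrative.
RN_VALUE, LVN_VALUE = 8, 6          # acuity points contributed per 8-h shift
RN_PENALTY, LVN_PENALTY = 1.0, 1.2  # objective-function penalty weights
DEMAND, MIN_RN = 40, 2              # acuity points required; minimum RN count
AVAIL_RN, AVAIL_LVN = 6, 6          # staff available of each type

def optimal_staffing():
    """Enumerate integer (rn, lvn) assignments; minimize penalty-weighted staff."""
    best = None
    for rn, lvn in product(range(AVAIL_RN + 1), range(AVAIL_LVN + 1)):
        if rn < MIN_RN:
            continue  # demand constraint: minimum RNs on the unit
        if rn * RN_VALUE + lvn * LVN_VALUE < DEMAND:
            continue  # demand constraint: meet the unit's acuity points
        cost = rn * RN_PENALTY + lvn * LVN_PENALTY
        if best is None or cost < best[0]:
            best = (cost, rn, lvn)
    return best

cost, rn, lvn = optimal_staffing()
```

The penalty weights play the same role as in the abstract: raising `LVN_PENALTY` steers the optimum toward RN-only coverage even when an LVN mix would satisfy the acuity demand.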

Relevance:

30.00%

Publisher:

Abstract:

This paper addresses the issue of fully automatic segmentation of a hip CT image with the goal of preserving the joint structure for clinical applications in hip disease diagnosis and treatment. For this purpose, we propose a Multi-Atlas Segmentation Constrained Graph (MASCG) method. The MASCG method uses multi-atlas based mesh fusion results to initialize a bone sheetness based multi-label graph cut for an accurate hip CT segmentation, which has the inherent advantage of automatically separating the pelvic region from the bilateral proximal femoral regions. We then introduce a graph cut constrained graph search algorithm to further improve the segmentation accuracy around the bilateral hip joint regions. Taking manual segmentation as the ground truth, we evaluated the present approach on 30 hip CT images (60 hips) with 15-fold cross validation. Compared to manual segmentation, an average surface distance error of 0.30 mm, 0.29 mm, and 0.30 mm was found for the pelvis, the left proximal femur, and the right proximal femur, respectively. A closer look at the bilateral hip joint regions showed an average surface distance error of 0.16 mm, 0.21 mm, and 0.20 mm for the acetabulum, the left femoral head, and the right femoral head, respectively.
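The evaluation metric used here, average surface distance, can be sketched for point-sampled surfaces as follows (a naive O(n·m) illustration with hypothetical points, not the paper's implementation):

```python
import math

def average_surface_distance(surface_a, surface_b):
    """Symmetric average surface distance between two point-sampled surfaces.

    For every point on one surface take the distance to the closest point on
    the other, then average over both directions."""
    def one_way(src, dst):
        return sum(min(math.dist(p, q) for q in dst) for p in src) / len(src)
    return 0.5 * (one_way(surface_a, surface_b) + one_way(surface_b, surface_a))

# Two tiny synthetic "surfaces" lying one unit apart (hypothetical points).
d = average_surface_distance([(0, 0, 0), (1, 0, 0)], [(0, 0, 1), (1, 0, 1)])
```

Averaging both directions matters: one-sided distances can look deceptively small when one surface under-covers the other.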

Relevance:

30.00%

Publisher:

Abstract:

In population studies, most current methods focus on identifying one outcome-related SNP at a time by testing for differences of genotype frequencies between disease and healthy groups or among different population groups. However, testing a great number of SNPs simultaneously poses a multiple-testing problem and will give false-positive results. Although this problem can be dealt with effectively through approaches such as Bonferroni correction, permutation testing, and false discovery rates, patterns of the joint effects of several genes, each with a weak effect, might not be detected. With the availability of high-throughput genotyping technology, searching for multiple scattered SNPs over the whole genome and modeling their joint effect on the target variable has become possible. Exhaustive search of all SNP subsets is computationally infeasible for the millions of SNPs in a genome-wide study. Several effective feature selection methods combined with classification functions have been proposed to search for an optimal SNP subset in large data sets where the number of feature SNPs far exceeds the number of observations.

In this study, we take two steps to achieve this goal. First, we selected 1000 SNPs through an effective filter method; then we performed a feature selection wrapped around a classifier to identify an optimal SNP subset for predicting disease. We also developed a novel classification method, the sequential information bottleneck method, wrapped inside different search algorithms to identify an optimal subset of SNPs for classifying the outcome variable. This new method was compared with classical linear discriminant analysis in terms of classification performance. Finally, we performed a chi-square test to examine the relationship between each SNP and disease from another point of view.

In general, our results show that filtering features using the harmonic mean of sensitivity and specificity (HMSS) through linear discriminant analysis (LDA) is better than using LDA training accuracy or mutual information in our study. Our results also demonstrate that exhaustive search of a small subset (one SNP, two SNPs, or three-SNP subsets based on the best 100 composite 2-SNPs) can find an optimal subset, and that further inclusion of more SNPs through a heuristic algorithm does not always increase the performance of SNP subsets. Although sequential forward floating selection can be applied to prevent the nesting effect of forward selection, it does not always outperform the latter, owing to overfitting from observing more complex subset states.

Our results also indicate that HMSS, as a criterion for evaluating the classification ability of a function, can be used on imbalanced data without modifying the original dataset, unlike classification accuracy. Our four studies suggest that the sequential information bottleneck (sIB), a new unsupervised technique, can be adopted to predict the outcome, and that its ability to detect the target status is superior to that of traditional LDA in this study.

From our results, the best test probability-HMSS for predicting CVD, stroke, CAD, and psoriasis through sIB is 0.59406, 0.641815, 0.645315, and 0.678658, respectively. In terms of group prediction accuracy, the highest test accuracy of sIB for diagnosing a normal status among controls can reach 0.708999, 0.863216, 0.639918, and 0.850275, respectively, in the four studies if the test accuracy among cases is required to be no less than 0.4. On the other hand, the highest test accuracy of sIB for diagnosing disease among cases can reach 0.748644, 0.789916, 0.705701, and 0.749436, respectively, in the four studies if the test accuracy among controls is required to be at least 0.4.

A further genome-wide association study through the chi-square test shows that no significant SNPs are detected at the cut-off level 9.09451E-08 in the Framingham Heart Study of CVD. Study results in WTCCC detect only two significant SNPs associated with CAD. In the genome-wide study of psoriasis, most of the top 20 SNP markers with impressive classification accuracy are also significantly associated with the disease through the chi-square test at the cut-off value 1.11E-07.

Although our classification methods can achieve high accuracy in the study, complete descriptions of those classification results (95% confidence intervals or statistical tests of differences) require more cost-effective methods or a more efficient computing system, neither of which can be accomplished currently in our genome-wide study. We should also note that the purpose of this study is to identify subsets of SNPs with high prediction ability; SNPs with good discriminant power are not necessarily causal markers for the disease.
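The HMSS criterion used throughout is the harmonic mean of sensitivity and specificity; unlike plain accuracy, it cannot be inflated on imbalanced data by always predicting the majority class. A minimal sketch with hypothetical counts:

```python
def hmss(tp, fp, fn, tn):
    """Harmonic mean of sensitivity and specificity (HMSS)."""
    sens = tp / (tp + fn)
    spec = tn / (tn + fp)
    return 2 * sens * spec / (sens + spec) if sens + spec else 0.0

# Imbalanced toy data: 10 cases, 90 controls.
# A classifier that labels everything "control" reaches 90% accuracy...
accuracy = (0 + 90) / 100
# ...but its HMSS is 0, because its sensitivity is 0.
score = hmss(tp=0, fp=0, fn=10, tn=90)
```

As a harmonic mean, HMSS is dragged toward whichever of the two rates is worse, so a degenerate majority-class classifier scores zero regardless of class balance.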

Relevance:

30.00%

Publisher:

Abstract:

The aim of this paper is to develop a probabilistic modeling framework for the segmentation of structures of interest from a collection of atlases. Given a subset of atlases registered to the target image for a particular Region of Interest (ROI), a statistical model of appearance and shape is computed for fusing the labels. Segmentations are obtained by minimizing an energy function associated with the proposed model, using a graph-cut technique. We test different label fusion methods on publicly available MR images of human brains.
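Per-voxel majority voting is the simplest label fusion baseline against which statistical fusion models such as the one proposed here are usually compared. A sketch with hypothetical labels (flattened voxel lists, three atlases):

```python
from collections import Counter

def majority_vote_fusion(atlas_labels):
    """Fuse per-voxel labels from several registered atlases by majority vote.

    atlas_labels: one flattened label list per atlas, all the same length.
    Plain voting ignores appearance and shape; statistical fusion models
    refine this baseline."""
    return [Counter(votes).most_common(1)[0][0] for votes in zip(*atlas_labels)]

# Three hypothetical atlases, four voxels, labels 0/1/2.
fused = majority_vote_fusion([[0, 1, 1, 2],
                              [0, 1, 2, 2],
                              [1, 1, 1, 2]])
```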

Relevance:

30.00%

Publisher:

Abstract:

This project describes the design of a railway tunnel in southern Spain, between the towns of Fuengirola and Marbella in the province of Málaga. The execution of the works included in the project will connect, via a new railway line, the Fuengirola station with the town of Marbella and, in the future, with further towns along the Corredor Ferroviario de la Costa del Sol, which at present has no rail connection. The geometry section defines the tunnel cross-section to be built. The geological and geotechnical study is used as a reference to characterize the materials that will be encountered during excavation of the tunnel. Once this phase is concluded, the supports are determined according to geomechanical classifications, specifying four different sections. The chosen stretch is executed by the cut-and-cover method; the full construction process, as well as the complementary works required because the tunnel passes under areas that are currently trafficked or inhabited, is the subject of this project.

Relevance:

30.00%

Publisher:

Abstract:

"January 1980."

Relevance:

30.00%

Publisher:

Abstract:

Mode of access: Internet.

Relevance:

30.00%

Publisher:

Abstract:

Mode of access: Internet.

Relevance:

30.00%

Publisher:

Abstract:

Mode of access: Internet.

Relevance:

30.00%

Publisher:

Abstract:

Previous research into formulaic language has focussed on specialised groups of people (e.g. L1 acquisition by infants and adult L2 acquisition) with ordinary adult native speakers of English receiving less attention. Additionally, whilst some features of formulaic language have been used as evidence of authorship (e.g. the Unabomber’s use of you can’t eat your cake and have it too) there has been no systematic investigation into this as a potential marker of authorship. This thesis reports the first full-scale study into the use of formulaic sequences by individual authors. The theory of formulaic language hypothesises that formulaic sequences contained in the mental lexicon are shaped by experience combined with what each individual has found to be communicatively effective. Each author’s repertoire of formulaic sequences should therefore differ. To test this assertion, three automated approaches to the identification of formulaic sequences are tested on a specially constructed corpus containing 100 short narratives. The first approach explores a limited subset of formulaic sequences using recurrence across a series of texts as the criterion for identification. The second approach focuses on a word which frequently occurs as part of formulaic sequences and also investigates alternative non-formulaic realisations of the same semantic content. Finally, a reference list approach is used. Whilst claiming authority for any reference list can be difficult, the proposed method utilises internet examples derived from lists prepared by others, a procedure which, it is argued, is akin to asking large groups of judges to reach consensus about what is formulaic. The empirical evidence supports the notion that formulaic sequences have potential as a marker of authorship since in some cases a Questioned Document was correctly attributed. 
Although this marker of authorship is not universally applicable, it does promise to become a viable new tool in the forensic linguist’s tool-kit.
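The first identification approach, recurrence across a series of texts, can be sketched as collecting word n-grams that occur in several different texts. The corpus, n-gram length, and threshold below are hypothetical:

```python
from collections import defaultdict

def recurrent_sequences(texts, n=3, min_texts=2):
    """Word n-grams occurring in at least `min_texts` different texts --
    a crude recurrence criterion for candidate formulaic sequences."""
    seen_in = defaultdict(set)
    for idx, text in enumerate(texts):
        words = text.lower().split()
        for i in range(len(words) - n + 1):
            seen_in[tuple(words[i:i + n])].add(idx)
    return {" ".join(g) for g, ids in seen_in.items() if len(ids) >= min_texts}

# Hypothetical three-text corpus.
corpus = ["at the end of the day it rained",
          "we left at the end of the day",
          "the day was long"]
candidates = recurrent_sequences(corpus, n=3, min_texts=2)
```

Counting distinct texts rather than raw occurrences is the point of the criterion: a sequence repeated many times by one author is evidence about that author, not about the language community.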