Abstract:
Purpose: The measurement of broadband ultrasonic attenuation (BUA) in cancellous bone for the assessment of osteoporosis follows a parabolic-type dependence with bone volume fraction, with minima corresponding to entire bone and entire marrow. Langton has recently proposed that the primary BUA mechanism may be significant phase interference due to variations in propagation transit time through the test sample, as detected over the phase-sensitive surface of the receive ultrasound transducer. This fundamentally simple concept assumes that the propagation of ultrasound through a complex solid:liquid composite sample such as cancellous bone may be described by an array of parallel ‘sonic rays’. The transit time of each ray is defined by the proportion of bone and marrow propagated, being a minimum (tmin) solely through bone and a maximum (tmax) solely through marrow. A transit time spectrum (TTS), ranging from tmin to tmax, may be defined describing the proportion of sonic rays having a particular transit time, effectively describing the lateral inhomogeneity of transit time over the surface of the receive ultrasound transducer. Phase interference may result from the interaction of sonic rays of differing transit times. The aim of this study was to test the hypothesis that phase interference depends on the lateral inhomogeneity of transit time, by comparing experimental measurements and computer simulation predictions of ultrasound propagation through a range of relatively simple solid:liquid models exhibiting a range of lateral inhomogeneities. Methods: A range of test models was manufactured using acrylic and water as surrogates for bone and marrow respectively. The models varied in thickness in one dimension normal to the direction of propagation, hence exhibiting a range of transit time lateral inhomogeneities, from minimal (single transit time) to maximal (wedge; ultimately the limiting case where each sonic ray has a unique transit time). For the experimental component of the study, two unfocused 1 MHz, ¾-inch diameter broadband transducers were used in transmission mode; ultrasound signals were recorded for each of the models. The computer simulation was performed in Matlab, where the transit time and relative amplitude of each sonic ray were calculated. The transit time for each sonic ray was defined as the sum of the transit times through the acrylic and water components. The relative amplitude accounted for the reception area of each sonic ray along with absorption in the acrylic. To replicate phase-sensitive detection, all sonic rays were summed and the output signal plotted in comparison with the experimentally derived output signal. Results: Qualitative and quantitative comparison of the experimental and computer simulation results shows an extremely high degree of agreement, 94.2% to 99.0%, between the two approaches, supporting the concept that propagation of an ultrasound wave, for the models considered, may be approximated by a parallel sonic ray model where the transit time of each ray is defined by the proportion of ‘bone’ and ‘marrow’. Conclusions: This combined experimental and computer simulation study has demonstrated that lateral inhomogeneity of transit time creates significant potential for phase interference when a phase-sensitive ultrasound receive transducer is used, as in most commercial ultrasound bone analysis devices.
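The parallel sonic-ray model described above is compact enough to sketch directly. The following Python sketch (the study used Matlab; this port, and every material and geometry value in it, is an illustrative assumption) computes a transit time for each ray through a wedge-shaped acrylic/water model and sums the time-shifted rays to mimic phase-sensitive reception:

```python
import numpy as np

# Illustrative material and geometry parameters (assumed values)
C_WATER = 1480.0           # speed of sound in water, m/s
C_ACRYLIC = 2750.0         # speed of sound in acrylic, m/s
SAMPLE_THICKNESS = 0.02    # total propagation path, m
N_RAYS = 500               # number of parallel sonic rays
F_CENTRE = 1.0e6           # 1 MHz transducer centre frequency

def transit_time(acrylic_fraction: float) -> float:
    """Transit time of one ray: sum of times through acrylic and water."""
    d_acrylic = acrylic_fraction * SAMPLE_THICKNESS
    d_water = SAMPLE_THICKNESS - d_acrylic
    return d_acrylic / C_ACRYLIC + d_water / C_WATER

# Wedge model: the acrylic fraction varies linearly across the aperture,
# so every ray has a unique transit time (maximal lateral inhomogeneity).
fractions = np.linspace(0.0, 1.0, N_RAYS)
times = np.array([transit_time(f) for f in fractions])

# Phase-sensitive detection: sum equally weighted, time-shifted tone bursts.
t = np.arange(0, 30e-6, 1e-8)  # 30 us record, 10 ns steps
burst = lambda t0: np.sin(2 * np.pi * F_CENTRE * (t - t0)) * \
                   np.exp(-((t - t0) / 2e-6) ** 2) * (t >= t0)
received = sum(burst(t0) for t0 in times) / N_RAYS
print(f"t_min = {times.min()*1e6:.2f} us, t_max = {times.max()*1e6:.2f} us")
```

Narrowing the spread of `fractions` collapses the transit time spectrum toward a single value and removes the phase interference, which is the behaviour the models above were built to span.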
Abstract:
Aims: This paper describes the development of a risk adjustment (RA) model predictive of individual lesion treatment failure in percutaneous coronary interventions (PCI), for use in a quality monitoring and improvement program. Methods and results: Prospectively collected data for 3972 consecutive revascularisation procedures (5601 lesions) performed between January 2003 and September 2011 were studied. Data on procedures to September 2009 (n = 3100) were used to identify factors predictive of lesion treatment failure. Factors identified included lesion risk class (p < 0.001), occlusion type (p < 0.001), patient age (p = 0.001), vessel system (p < 0.04), vessel diameter (p < 0.001), unstable angina (p = 0.003) and presence of major cardiac risk factors (p = 0.01). A Bayesian RA model was built using these factors, with the predictive performance of the model tested on the remaining procedures (area under the receiver operating characteristic curve: 0.765, Hosmer–Lemeshow p value: 0.11). Cumulative sum (CUSUM), exponentially weighted moving average and funnel plots were constructed using the RA model and subjectively evaluated. Conclusion: An RA model was developed and applied to statistical process control (SPC) monitoring of lesion failure in a PCI database. If linked to appropriate quality improvement governance response protocols, SPC using this RA tool might improve quality control and risk management by identifying variation in performance based on a comparison of observed and expected outcomes.
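As a concrete illustration of how such a chart consumes the RA model's output, here is a minimal Python sketch of a risk-adjusted CUSUM in its usual log-likelihood-ratio form; the per-lesion failure probabilities stand in for the paper's Bayesian RA model, and the odds-ratio shift and control limit are illustrative assumptions, not values from the paper:

```python
import math

def risk_adjusted_cusum(outcomes, predicted_probs, odds_ratio=2.0, limit=4.5):
    """Risk-adjusted CUSUM (log-likelihood-ratio form).

    outcomes:        1 if the lesion treatment failed, else 0
    predicted_probs: per-lesion failure probability from the RA model
    odds_ratio:      detectable shift in the odds of failure (assumed)
    limit:           control limit; a crossing signals a shift (assumed)
    """
    s, path, signals = 0.0, [], []
    for i, (y, p) in enumerate(zip(outcomes, predicted_probs)):
        # Log-likelihood-ratio weight for this observation
        denom = 1.0 - p + odds_ratio * p
        w = math.log(odds_ratio / denom) if y else math.log(1.0 / denom)
        s = max(0.0, s + w)
        path.append(s)
        if s > limit:
            signals.append(i)   # flag for governance review
            s = 0.0             # reset after signal
    return path, signals
```

Because each weight is computed from the lesion's own predicted risk, a run of failures in high-risk lesions moves the chart far less than the same run in lesions the model expected to succeed.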
Abstract:
The detection and correction of defects remains among the most time-consuming and expensive aspects of software development. Extensive automated testing and code inspections may mitigate their effect, but some code fragments are necessarily more likely to be faulty than others, and automated identification of fault-prone modules helps to focus testing and inspections, thus limiting wasted effort and potentially improving detection rates. However, software metrics data is often extremely noisy, with enormous imbalances in the size of the positive and negative classes. In this work, we present a new approach to predictive modelling of fault proneness in software modules, introducing a new feature representation to overcome some of these issues. This rank sum representation offers improved, or at worst comparable, performance relative to earlier approaches for standard data sets, and readily allows the user to choose an appropriate trade-off between precision and recall to optimise inspection effort for different testing environments. The method is evaluated using the NASA Metrics Data Program (MDP) data sets, and performance is compared with existing studies based on the Support Vector Machine (SVM) and Naïve Bayes (NB) classifiers, and with our own comprehensive evaluation of these methods.
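The abstract does not spell out how the rank sum representation is constructed, so the Python sketch below shows only one plausible reading, loudly labeled as an assumption: each raw metric is replaced by its rank across modules, the ranks are summed per module, and a movable threshold on the resulting score supplies the precision/recall trade-off mentioned above:

```python
import numpy as np
from scipy.stats import rankdata

def rank_sum_scores(metrics: np.ndarray) -> np.ndarray:
    """Score each module by summing its per-metric ranks (assumed scheme).

    metrics: (n_modules, n_metrics) array of raw software-metric values.
    Returns one score per module; in this sketch a higher score stands
    in for higher fault-proneness. Ranking makes the score robust to
    the heavy noise and scale differences typical of metrics data.
    """
    ranks = np.apply_along_axis(rankdata, 0, metrics)  # rank each column
    return ranks.sum(axis=1)

def classify(scores: np.ndarray, threshold: float) -> np.ndarray:
    """Raising the threshold trades recall for precision."""
    return scores >= threshold  # True = predicted fault-prone
```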
Abstract:
The sum of k mins protocol was proposed by Hopper and Blum as a protocol for secure human identification. The goal of the protocol is to let an unaided human securely authenticate to a remote server. The main ingredient of the protocol is the sum of k mins problem, and the difficulty of solving this problem determines the security of the protocol. In this paper, we show that the sum of k mins problem is NP-complete and W[1]-hard. The latter notion relates to fixed-parameter intractability. We also discuss the use of the sum of k mins protocol in resource-constrained devices.
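For orientation, one round of the protocol can be sketched in a few lines of Python. The structure below (a secret set of index pairs, a digit challenge, and a response formed by summing the minimum of each pair modulo 10) follows the usual description of the Hopper–Blum scheme, but the exact parameters are illustrative assumptions:

```python
import random

def response(challenge, secret_pairs, modulus=10):
    """Sum of k mins: for each secret index pair (i, j), take the smaller
    of the two challenge digits, and return the sum modulo `modulus`."""
    return sum(min(challenge[i], challenge[j]) for i, j in secret_pairs) % modulus

# One example round (all values illustrative)
n, k = 20, 4
secret = [(0, 7), (2, 11), (5, 19), (9, 13)]          # shared human/server secret
challenge = [random.randrange(10) for _ in range(n)]  # server-issued digits
print(challenge, "->", response(challenge, secret))
```

The server, knowing the same secret pairs, recomputes the response and accepts if it matches; an eavesdropper must recover the pairs from observed challenge/response transcripts, which is where the hardness of the sum of k mins problem enters.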
Abstract:
This paper estimates the benefit of a planned WebGIS-based system for providing road administration information. The system would reduce the travel costs incurred by visitors travelling from their business establishments to the road administration section of a city office. The authors conducted individual interviews with visitors to the section at the Ichikawa City Office. The annual total of travel costs was estimated at up to 37 million yen. This paper also proposes formulas that predict the frequency of visits, and the total travel costs, from the spatial distribution of the business establishments, without the need for questionnaires.
Abstract:
Twenty-three non-methane hydrocarbons were captured from the exhaust of a car operating on unleaded petrol (ULP) and 10% ethanol fuels at steady speed on a chassis dynamometer. The compounds were identified and quantified by GC/MS/FID, and their emission concentrations at 60 km/h, 80 km/h and idle speed were evaluated. The most abundant compounds in the exhaust included n-hexane, n-heptane, benzene, toluene, ethyl benzene, m- and p-xylenes, and methylcyclopentane. Because of the large number of compounds involved, no attempt was made to compare the emission concentrations of individual compounds. Rather, the sum of the emission concentrations for the suite of compounds identified was compared between the car powered by ULP and by 10% ethanol fuel. It was evident from the results that the emission concentrations and factors were generally higher with ULP than with 10% ethanol fuel. The total emission concentrations with the ULP fuel were 2.8, 4.2 and 2.6 times the corresponding values for the 10% ethanol fuel at 60 km/h, 80 km/h and idle speed, respectively. The environmental implications of the results are discussed in the paper.
Abstract:
Purpose of this paper: This research examines the effects of inadequate documentation on the cost management and tendering processes in Managing Contractor contracts, using Fixed Lump Sum contracts as a benchmark. Design/methodology/approach: A questionnaire survey was conducted with industry practitioners to solicit their views on documentation quality issues in the construction industry. This was followed by a series of semi-structured interviews to validate the survey findings. Findings and value: The results showed that documentation quality remains a significant issue, contributing to the industry's inefficiency and poor reputation. The level of satisfaction with individual attributes of documentation quality varies. Attributes that do appear to be affected by the choice of procurement method include coordination, buildability, efficiency, completeness and delivery time. Similarly, the use and effectiveness of risk mitigation techniques appears to vary between the methods, based on a number of factors such as documentation completeness, early involvement and fast tracking. Originality/value of paper: This research addresses a gap in the existing body of knowledge, namely the limited study of whether the choice of project procurement system influences documentation quality and the level of its impact. Conclusions: Ultimately, the research concludes that the entire project team, including the client and designers, should carefully consider the individual project's requirements and compare these with the trade-offs associated with documentation quality and the procurement method. While documentation quality is certainly an area to be improved, identifying a project's performance requirements allows a procurement method to be chosen that maximises the likelihood those requirements will be met. This in turn allows the aspects of documentation quality considered most important to the individual project to be managed appropriately.
Labeling white matter tracts in HARDI by fusing multiple tract atlases, with applications to genetics
Abstract:
Accurate identification of white matter structures and segmentation of fibers into tracts is important in neuroimaging and has many potential applications. Even so, it is not trivial, because whole-brain tractography generates hundreds of thousands of streamlines that include many false-positive fibers. We developed and tested an automatic tract labeling algorithm to segment anatomically meaningful tracts from diffusion-weighted images. Our multi-atlas method incorporates information from multiple hand-labeled fiber tract atlases. In validations, we showed that the method outperformed standard ROI-based labeling using a deformable, parcellated atlas. Finally, we show a high-throughput application of the method to genetic population studies, using the sub-voxel diffusion information from fibers in the clustered tracts, based on 105-gradient HARDI scans of 86 young normal twins. The whole workflow shows promise for larger population studies in the future.
Abstract:
Automatic labeling of white matter fibres in diffusion-weighted brain MRI is vital for comparing brain integrity and connectivity across populations, but is challenging. Whole-brain tractography generates a vast set of fibres throughout the brain, but it is hard to cluster them into anatomically meaningful tracts, due to wide individual variations in the trajectory and shape of white matter pathways. We propose a novel automatic tract labeling algorithm that fuses information from tractography and multiple hand-labeled fibre tract atlases. As streamline tractography can generate a large number of false-positive fibres, we developed a top-down approach to extract tracts consistent with known anatomy, based on a distance metric to multiple hand-labeled atlases. Clustering results from different atlases were fused using a multi-stage fusion scheme. Our "label fusion" method reliably extracted the major tracts from 105-gradient HARDI scans of 100 young normal adults.
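To make the two abstracts above concrete, here is a minimal Python sketch of distance-based, multi-atlas fibre labeling. The symmetric mean closest-point distance, the rejection threshold, and the majority-vote fusion are common choices standing in for the papers' actual metric and multi-stage fusion scheme, so treat every detail as an assumption:

```python
import numpy as np

def fibre_distance(f1: np.ndarray, f2: np.ndarray) -> float:
    """Symmetric mean closest-point distance between two streamlines,
    each an (n_points, 3) array of coordinates (a common fibre metric;
    the papers' exact metric may differ)."""
    d = np.linalg.norm(f1[:, None, :] - f2[None, :, :], axis=2)
    return 0.5 * (d.min(axis=1).mean() + d.min(axis=0).mean())

def label_fibre(fibre, atlases, max_dist=10.0):
    """Vote across hand-labeled atlases; fuse the votes by majority.

    atlases:  list of dicts mapping tract name -> list of atlas fibres
    max_dist: rejection threshold (mm, assumed) that discards
              false-positive fibres far from every known tract
    """
    votes = []
    for atlas in atlases:
        best_label, best_d = None, max_dist
        for label, tract_fibres in atlas.items():
            for af in tract_fibres:
                d = fibre_distance(fibre, af)
                if d < best_d:
                    best_label, best_d = label, d
        if best_label is not None:
            votes.append(best_label)
    if not votes:
        return None  # treated as a false-positive fibre
    return max(set(votes), key=votes.count)
```

The top-down character of the method comes from the rejection threshold: a streamline is only kept if it lies close to a tract that exists in the hand-labeled anatomy.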
Abstract:
The efficiency with which a small beam trawl (1 × 0.5 m mouth) sampled postlarvae and juveniles of the tiger prawns Penaeus esculentus and P. semisulcatus at night was estimated in 3 tropical seagrass communities (dominated by Thalassia hemprichii, Syringodium isoetifolium and Enhalus acoroides, respectively) in the shallow waters of the Gulf of Carpentaria in northern Australia. An area of seagrass (40 × 3 m) was enclosed by a net and the beam trawl was repeatedly hand-hauled over the substrate. Net efficiency (q) was calculated using 4 methods: the unweighted Leslie, weighted Leslie, DeLury and maximum-likelihood (ML) methods. Maximum-likelihood is the preferred method for estimating efficiency because it makes the fewest assumptions and is not affected by zero catches. The major difference in net efficiencies was between postlarvae (mean ML q ± 95% confidence limits = 0.66 ± 0.16) and juveniles of both species (mean q for juveniles in water ≤ 1.0 m deep = 0.47 ± 0.05); i.e. the beam trawl was more efficient at capturing postlarvae than juveniles. There was little difference in net efficiency for P. esculentus between seagrass types (T. hemprichii versus S. isoetifolium), even though the biomass and morphologies of seagrass in these communities differed greatly (biomasses were 54 and 204 g m⁻², respectively). The efficiency of the net appeared to be the same for juveniles of the 2 species in shallow water, but was lower for juvenile P. semisulcatus at high tide when the water was deeper (1.6 to 1.9 m) (0.35 ± 0.08). The lower efficiency near the time of high tide is possibly because the prawns are more active at high tide than at low tide, and can also escape above the net. Factors affecting net efficiency and alternative methods of estimating net efficiency are discussed.
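Of the four estimators named above, the unweighted Leslie method is the simplest to illustrate: over repeated hauls of an enclosed area, the catch declines linearly with the catch already removed, and the slope of that line is the net efficiency. A Python sketch, with a made-up haul series rather than data from the study:

```python
import numpy as np

def leslie_efficiency(catches):
    """Unweighted Leslie depletion estimate of net efficiency q.

    Repeated hauls over an enclosed area deplete the population, so
    catch_i = q * (N0 - K_i), where K_i is the cumulative catch taken
    before haul i. Regressing catch on K_i gives slope -q and
    intercept q * N0.
    """
    catches = np.asarray(catches, dtype=float)
    K = np.concatenate(([0.0], np.cumsum(catches)[:-1]))
    slope, intercept = np.polyfit(K, catches, 1)
    q = -slope
    N0 = intercept / q
    return q, N0

# Illustrative haul series (not data from the study)
q, N0 = leslie_efficiency([33, 18, 9, 5, 3])
print(f"q = {q:.2f}, N0 = {N0:.0f}")
```

The regression form also shows why the ML method is preferred in the paper: a zero catch contributes no usable depletion signal to the fitted line, whereas a likelihood treats it as an ordinary observation.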
Abstract:
The Wilson coefficient corresponding to the gluon field strength operator $G_{\mu\nu}G^{\mu\nu}$ is evaluated for the nucleon current correlation function in the presence of a static external electromagnetic field, using a regulator mass Λ to separate the high-momentum part of the Feynman diagrams. The magnetic-moment sum rules are analyzed by two different methods, and the sensitivity of the results to variations in Λ is discussed.
Abstract:
In the modern business environment, meeting due dates and avoiding delay penalties are very important goals that can be accomplished by minimizing total weighted tardiness. We consider a scheduling problem in a system of parallel processors with the objective of minimizing total weighted tardiness. Our aim in the present work is to develop an algorithm for the parallel processor problem that is more efficient than the available heuristics in the literature, and we propose an ant colony optimization (ACO) approach for this problem. An extensive experimentation is conducted to evaluate the performance of the ACO approach on different problem sizes with varied tardiness factors. Our experimentation shows that the proposed ant colony optimization algorithm gives promising results compared to the best of the available heuristics.
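The objective being minimized is Σ_j w_j · max(0, C_j − d_j), summing each job's weighted lateness past its due date. The Python sketch below evaluates that objective for a parallel-machine schedule and pairs it with a weighted-shortest-processing-time dispatch rule as a baseline; the dispatch rule is a generic stand-in for comparison, not the paper's ACO algorithm:

```python
import heapq

def total_weighted_tardiness(schedule, jobs):
    """Objective: sum of w_j * max(0, C_j - d_j) over all jobs.

    schedule: list of job-id lists, one per processor, in run order
    jobs:     dict id -> (processing_time, due_date, weight)
    """
    twt = 0.0
    for machine_jobs in schedule:
        t = 0.0
        for j in machine_jobs:
            p, d, w = jobs[j]
            t += p                       # completion time C_j on this machine
            twt += w * max(0.0, t - d)   # weighted tardiness contribution
    return twt

def wspt_schedule(jobs, n_machines):
    """Baseline dispatch rule (not the paper's ACO): order jobs by
    weighted shortest processing time and assign each to the
    earliest-available processor."""
    order = sorted(jobs, key=lambda j: jobs[j][0] / jobs[j][2])
    loads = [(0.0, m) for m in range(n_machines)]
    heapq.heapify(loads)
    schedule = [[] for _ in range(n_machines)]
    for j in order:
        load, m = heapq.heappop(loads)
        schedule[m].append(j)
        heapq.heappush(loads, (load + jobs[j][0], m))
    return schedule
```

An ACO approach would construct many candidate schedules probabilistically, score each with `total_weighted_tardiness`, and reinforce the job-sequencing choices found in the best ones.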
Abstract:
Marker ordering during linkage map construction is a critical component of QTL mapping research. In recent years, high-throughput genotyping methods have become widely used, and these methods may generate hundreds of markers for a single mapping population. This poses problems for linkage analysis software, because the number of possible marker orders increases exponentially as the number of markers increases. In this paper, we tested the accuracy of linkage analyses on simulated recombinant inbred line data using the commonly used Map Manager QTX (Manly et al. 2001: Mammalian Genome 12, 930-932) software and RECORD (Van Os et al. 2005: Theoretical and Applied Genetics 112, 30-40). Accuracy was measured by calculating two scores: the percentage of correct marker positions, and a novel, weighted rank-based score derived from the sum of the absolute values of true minus observed marker ranks, divided by the total number of markers. The accuracy of maps generated using Map Manager QTX was considerably lower than that of maps generated using RECORD. Differences in linkage maps were often observed when marker ordering was performed several times on the identical dataset. To test the effect of reducing marker numbers on the stability of marker order, we pruned marker datasets, focusing on regions consisting of tightly linked clusters of markers, which included redundant markers. Marker pruning improved the accuracy and stability of linkage maps, because a single unambiguous marker order was produced that was consistent across replications of the analysis. Marker pruning was also applied to a real barley mapping population, and QTL analysis was performed using the different map versions produced by the different programs. While some QTLs were identified with both map versions, there were large differences in the QTL mapping results, including the maximum LOD and R² values at QTL peaks and the map positions, highlighting the importance of marker order for QTL mapping.
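The rank-based score is stated explicitly enough to implement: sum the absolute differences between each marker's true and observed rank, then divide by the number of markers, so 0 indicates a perfect ordering. The weighting used in the paper is not spelled out in the abstract, so this Python sketch implements only the unweighted core of the stated formula:

```python
def rank_order_score(true_order, observed_order):
    """Rank-based map accuracy: sum of |true rank - observed rank|
    over all markers, divided by the total number of markers.
    0 = perfect ordering; larger values = more displacement."""
    true_rank = {m: i for i, m in enumerate(true_order)}
    obs_rank = {m: i for i, m in enumerate(observed_order)}
    n = len(true_order)
    return sum(abs(true_rank[m] - obs_rank[m]) for m in true_order) / n

# Example: one marker displaced by two positions
print(rank_order_score(["m1", "m2", "m3", "m4"],
                       ["m1", "m3", "m4", "m2"]))  # (0+2+1+1)/4 = 1.0
```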
Abstract:
The relationship between major depressive disorder (MDD) and bipolar disorder (BD) remains controversial. Previous research has reported differences and similarities in risk factors for MDD and BD, such as predisposing personality traits. For example, high neuroticism is related to both disorders, whereas openness to experience is specific for BD. This study examined the genetic association between personality and MDD and BD by applying polygenic scores for neuroticism, extraversion, openness to experience, agreeableness and conscientiousness to both disorders. Polygenic scores reflect the weighted sum of multiple single-nucleotide polymorphism alleles associated with the trait for an individual and were based on a meta-analysis of genome-wide association studies for personality traits including 13,835 subjects. Polygenic scores were tested for MDD in the combined Genetic Association Information Network (GAIN-MDD) and MDD2000+ samples (N=8921) and for BD in the combined Systematic Treatment Enhancement Program for Bipolar Disorder and Wellcome Trust Case-Control Consortium samples (N=6329) using logistic regression analyses. At the phenotypic level, personality dimensions were associated with MDD and BD. Polygenic neuroticism scores were significantly positively associated with MDD, whereas polygenic extraversion scores were significantly positively associated with BD. The explained variance of MDD and BD, approximately 0.1%, was highly comparable to the variance explained by the polygenic personality scores in the corresponding personality traits themselves (between 0.1 and 0.4%). This indicates that the proportions of variance explained in mood disorders are at the upper limit of what could have been expected. This study suggests shared genetic risk factors for neuroticism and MDD on the one hand and for extraversion and BD on the other.
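A polygenic score, as defined above, is just a weighted sum over an individual's counted trait-associated alleles, with the weights taken from a discovery meta-analysis. A minimal Python sketch with toy genotypes and weights (illustrative values, not GWAS results):

```python
import numpy as np

def polygenic_score(allele_counts: np.ndarray, effect_sizes: np.ndarray) -> np.ndarray:
    """Polygenic score per individual: weighted sum of scored-allele counts.

    allele_counts: (n_individuals, n_snps) counts of the scored allele (0/1/2)
    effect_sizes:  (n_snps,) per-SNP weights from a discovery GWAS
    """
    return allele_counts @ effect_sizes

# Toy example (illustrative genotypes and weights)
genotypes = np.array([[0, 1, 2],
                      [1, 2, 0]])
weights = np.array([0.02, -0.01, 0.05])
print(polygenic_score(genotypes, weights))  # [0.09 0.  ]
```

These per-individual scores are then entered as predictors in logistic regressions of case/control status, which is how the MDD and BD associations reported above were tested.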
Abstract:
The magnetic moment μ_B of a baryon B with quark content (aab) is written as μ_B = 4e_a(1+δ_B) eħ/(2M_Bc), where e_a is the charge of the quark of flavor type a. The experimental values of δ_B have a simple pattern and have a natural explanation within QCD. Using the ratio method, the QCD sum rules are analyzed and the values of δ_B are computed. We find good agreement with the data (≈10%) for the nucleons and the Σ multiplet, while for the cascade the agreement is not as good. In our analysis we have incorporated additional terms in the operator-product expansion compared with previous authors. We also clarify some points of disagreement between the previous authors. External-field-induced correlations describing the magnetic properties of the vacuum are estimated from the baryon magnetic-moment sum rules themselves, as well as by independent spectral representations, and the results are contrasted.
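As a quick sanity check of this parametrisation (using the standard experimental value of the proton moment, not a number from the paper): for the proton (uud), e_a = e_u = 2/3, so the formula reduces to

```latex
\mu_p = 4 e_u (1+\delta_p)\,\frac{e\hbar}{2M_p c}
      = \tfrac{8}{3}\,(1+\delta_p)\,\mu_N,
\qquad
\mu_p^{\text{exp}} \approx 2.793\,\mu_N
\;\Rightarrow\;
\delta_p \approx \frac{2.793}{8/3} - 1 \approx 0.047,
```

so δ_p is a correction of only a few percent, which is the kind of simple pattern in the δ_B values that the abstract refers to.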