967 results for Operational Calculus
Abstract:
The World Health Organization (WHO) criteria for the diagnosis of osteoporosis are mainly applicable to dual X-ray absorptiometry (DXA) measurements at the spine and hip. There is a growing demand for cheaper devices free of ionizing radiation, such as the promising quantitative ultrasound (QUS). In common with many other countries, QUS measurements are increasingly used in Switzerland without adequate clinical guidelines. The T-score approach developed for DXA cannot be applied to QUS, although well-conducted prospective studies have shown that ultrasound can be a valuable predictor of fracture risk. As a consequence, an expert committee named the Swiss Quality Assurance Project (SQAP), whose main mission is the establishment of quality assurance procedures for DXA and QUS in Switzerland, was mandated by the Swiss Association Against Osteoporosis (ASCO) in 2000 to propose operational clinical recommendations for the use of QUS in the management of osteoporosis for two QUS devices sold in Switzerland. Device-specific weighted "T-scores", based on the risk of osteoporotic hip fracture as well as on the prediction of DXA osteoporosis at the hip according to the WHO definition, were calculated for the Achilles (Lunar, General Electric, Madison, Wis.) and Sahara (Hologic, Waltham, Mass.) ultrasound devices. Several studies (totaling a few thousand subjects) were used to calculate age-adjusted odds ratios (OR) and areas under the receiver operating characteristic curve (AUC) for the prediction of osteoporotic fracture, taking into account a weighting score that depended on the design of each study involved in the calculation. The OR was 2.4 (1.9-3.2) and the AUC 0.72 (0.66-0.77) for the Achilles, and 2.3 (1.7-3.1) and 0.75 (0.68-0.82), respectively, for the Sahara device. To translate risk estimates into thresholds for clinical application, 90% sensitivity was used to define low fracture and low osteoporosis risk, and 80% specificity was used to define subjects at high risk of fracture or of having osteoporosis at the hip. From the combination of the fracture model with the hip DXA osteoporosis model, we found T-score thresholds of -1.2 and -2.5 for the stiffness index (Achilles), identifying the low- and high-risk subjects, respectively. Similarly, we found T-scores of -1.0 and -2.2 for the QUI index (Sahara). A screening strategy combining QUS, DXA, and clinical risk factors to identify women needing treatment was then proposed. The application of this approach will help to minimize the inappropriate use of QUS from which the whole field currently suffers.
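As an illustration of how such device-specific thresholds could be applied in practice, the short Python sketch below classifies a QUS T-score into low-, intermediate-, or high-risk categories. The threshold values are those quoted in the abstract; the function name and triage labels are hypothetical and not part of the original study.

```python
# Illustrative triage based on the device-specific QUS T-score thresholds
# quoted in the abstract (Achilles stiffness: -1.2 / -2.5; Sahara QUI: -1.0 / -2.2).
# Function and category names are hypothetical, not from the original study.

THRESHOLDS = {
    "achilles_stiffness": (-1.2, -2.5),  # (low-risk cutoff, high-risk cutoff)
    "sahara_qui": (-1.0, -2.2),
}

def qus_triage(device: str, t_score: float) -> str:
    """Return an illustrative triage category for a QUS T-score on the given device."""
    low_cut, high_cut = THRESHOLDS[device]
    if t_score > low_cut:
        return "low risk - no further assessment suggested"
    if t_score <= high_cut:
        return "high risk - refer for treatment work-up"
    return "intermediate - confirm with hip DXA"

if __name__ == "__main__":
    print(qus_triage("achilles_stiffness", -1.8))  # intermediate - confirm with hip DXA
```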
Abstract:
The μ-calculus is an extension of modal logic with fixed-point operators. In this work we study the complexity of certain fragments of this logic from two different but closely related points of view: one syntactic (or combinatorial), the other topological. From the syntactic point of view, the properties definable in this formalism are classified according to the combinatorial complexity of the formulas of the logic, that is, according to the number of alternations of fixed-point operators. Comparing two sets of models thus amounts to comparing the syntactic complexity of the associated formulas. From the topological point of view, the properties definable in this logic are compared by means of continuous reductions, or according to their positions in the Borel or projective hierarchies. In the first part of this work we adopt the syntactic point of view in order to study the behaviour of the μ-calculus on restricted classes of models. In particular we show that: (1) on the class of symmetric and transitive models, the μ-calculus is as expressive as modal logic; (2) on the class of transitive models, every property definable by a μ-calculus formula is definable by a formula without alternation of fixed points; (3) on the class of reflexive models, there is for every n a property that can only be defined by a μ-calculus formula with at least n alternations of fixed points; (4) on the class of well-founded and transitive models, the μ-calculus is as expressive as modal logic. The fact that the μ-calculus is as expressive as modal logic on the class of well-founded and transitive models is well known. This result is in fact a consequence of a fixed-point theorem proved independently by De Jongh and Sambin in the mid-1970s. The proof we give of the collapse of the expressive power of the μ-calculus on this class of models is nevertheless independent of that result. We then extend the language of the μ-calculus by allowing fixed-point operators to bind negative occurrences of free variables. By showing that this formalism is as expressive as the modal fragment, we are able to provide a new proof of the uniqueness-of-fixed-points theorem of Bernardi, De Jongh and Sambin, as well as a constructive proof of the existence theorem of De Jongh and Sambin. Concerning transitive models, this time from the topological point of view, we prove that modal logic corresponds to the Borel fragment of the μ-calculus on this class of transition systems. In other words, we verify that every definable property of transitive models which, topologically, is a Borel property is necessarily a modal property, and conversely. This characterization of the modal fragment follows from the fact that we can show that, modulo EF-bisimulation, a set of trees is definable in the temporal logic EF if and only if it is Borel. Since these two properties can be shown to coincide with an effective characterization of definability in the logic EF given by Bojanczyk and Idziaszek [24] for finitely branching trees, we obtain their decidability as a corollary. In a second part, we study the topological complexity of a sub-fragment of the alternation-free fragment of the μ-calculus.
We show that a set of trees is definable by a formula of this fragment with at least n alternations if and only if the property lies at least at the n-th level of the Borel hierarchy. In other words, we verify that for this fragment of the μ-calculus, the topological and combinatorial points of view coincide. Moreover, we describe an effective procedure that computes, for every property definable in this language, its position in the Borel hierarchy, and hence the number of fixed-point alternations needed to define it. We then turn to the classification of sets of trees by continuous reduction, and give an effective description of the Wadge order of the class of sets of trees definable in the formalism under consideration. In particular, the hierarchy we obtain has height (ω^ω)^ω. We complete these results by describing an algorithm that computes the position in this hierarchy of any definable property.
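For readers unfamiliar with the formalism, the following LaTeX snippet sketches the standard textbook syntax of the modal μ-calculus and a classic example of a least-fixed-point formula; it is included for orientation only and is not taken from the thesis.

```latex
% Standard syntax of the modal mu-calculus (textbook presentation, not from the thesis)
\[
  \varphi ::= p \mid \neg p \mid X \mid \varphi \wedge \varphi \mid \varphi \vee \varphi
        \mid \Diamond \varphi \mid \Box \varphi \mid \mu X.\,\varphi \mid \nu X.\,\varphi
\]
% Example: "p holds on some reachable state" is expressed by the least fixed point
\[
  \mu X.\,\bigl(p \vee \Diamond X\bigr),
\]
% an alternation-free formula: it uses only \mu, with no \nu nested through it.
```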
Abstract:
Executive Summary
Abstract:
The study assessed the operational feasibility and acceptability of insecticide-treated mosquito nets (ITNs) in one Primary Health Centre (PHC) in a falciparum malaria endemic district in the state of Orissa, India, where 74% of the people belong to tribal communities and where DDT indoor residual spraying had been withdrawn and ITNs introduced by the National Vector Borne Disease Control Programme. For a population of 63,920, 24,442 ITNs were distributed free of charge through 101 treatment centers during July-August 2002. Interviews of 1,130, 1,012 and 126 respondents showed that net use rates were 80%, 74% and 55% in the cold, rainy and summer seasons, respectively. Since using ITNs, 74.5-76.6% of respondents observed a reduction in mosquito bites and 7.2-32.1% a reduction in malaria incidence; 37% expressed willingness to buy ITNs if they were more affordable. Up to ten months post-treatment, almost 100% mortality of vector mosquitoes was recorded on unwashed nets and on nets washed once or twice. Health workers re-treated the nets at the treatment centers eight months after distribution on a cost-recovery basis. The coverage reported by the PHC was only 4.2%, mainly because of people's unwillingness to pay for re-treatment and to travel from their villages to the treatment centers. When re-treatment was continued in the villages, involving personnel from several departments, coverage improved to about 90%. Interviews of 126 respondents showed that, among those who had their nets re-treated, 81.4% paid cash for the re-treatment and the remainder were reluctant to pay. The majority of those who paid said they did so for fear of losing benefits from other government welfare schemes if they did not. The second re-treatment was therefore carried out free of charge nine months after the first and achieved coverage of 70.4%. The study showed community acceptance of ITNs, as people perceived their benefit. Distribution and re-treatment of nets were thus possible through the PHC system, provided they were done free of charge and personnel from different departments, especially those at village level, were involved.
Abstract:
ABSTRACT This paper provides evidence on the market reaction to corporate investment decisions whose shareholder value is largely attributable to growth options. The exploratory research surveyed pre-operational companies and their operational counterparts in the same economic segments. Its purpose was to investigate whether financial indicators reflecting assets in place and growth assets statistically differentiate the two groups, and then to study the market reaction to changes in fixed assets as a signal of investment decisions. In pre-operational companies, the formation of operating assets stands out and shareholder value depends almost exclusively on asset growth. Accordingly, differentiation tests confirmed that the value of pre-operational companies derives mainly from growth options. The market reaction was stronger for pre-operational companies, which showed negative abnormal stock returns, while operational companies showed positive returns; this may indicate that the quality of the investment is judged on the basis of financial disclosure. Additionally, investors in operational companies appear to wait for the disclosure before adjusting prices. We conclude that the results are consistent with the empirical evidence and that financial market participants should pay special attention to long-term capital-formation investments.
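The abnormal returns referred to above are conventionally estimated with a market-model event study. The Python sketch below illustrates that standard calculation; the return series, window lengths, and parameter values are simulated for illustration and are not data from the paper.

```python
import numpy as np

# Market-model event study: estimate alpha/beta on an estimation window,
# then compute abnormal returns (AR) and the cumulative abnormal return (CAR)
# on the event window. Simulated illustrative data, not from the paper.

rng = np.random.default_rng(0)
market_est = rng.normal(0.0005, 0.01, 120)                 # estimation-window market returns
stock_est = 0.0002 + 1.1 * market_est + rng.normal(0, 0.01, 120)

beta, alpha = np.polyfit(market_est, stock_est, 1)         # market model: R_i = alpha + beta * R_m

market_evt = rng.normal(0.0005, 0.01, 11)                  # event window, e.g. days [-5, +5]
stock_evt = 0.0002 + 1.1 * market_evt + rng.normal(0, 0.01, 11)

ar = stock_evt - (alpha + beta * market_evt)               # abnormal returns
car = ar.sum()                                             # cumulative abnormal return
print(f"alpha={alpha:.5f}, beta={beta:.3f}, CAR={car:.4f}")
```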
Abstract:
Theory predicts that males adapt to sperm competition by increasing their investment in testis mass to transfer larger ejaculates. Experimental and comparative data support this prediction. Nevertheless, the relative importance of sperm competition in testis size evolution remains elusive, because experiments vary only sperm competition whereas comparative approaches confound it with other variables, in particular male mating rate. We addressed the relative importance of sperm competition and male mating rate by taking an experimental evolution approach. We subjected populations of Drosophila melanogaster to sex ratios of 1:1, 4:1, and 10:1 (female:male). Female bias decreased sperm competition but increased male mating rate and sperm depletion. After 28 generations of evolution, males from the 10:1 treatment had larger testes than males from other treatments. Thus, testis size evolved in response to mating rate and sperm depletion, not sperm competition. Furthermore, our experiment demonstrated that drift associated with sex ratio distortion limits adaptation; testis size only evolved in populations in which the effect of sex ratio bias on the effective population size had been compensated by increasing the numerical size. We discuss these results with respect to reproductive evolution, genetic drift in natural and experimental populations, and consequences of natural sex ratio distortion.
Abstract:
We introduce a variation of the proof for weak approximations that is suitable for studying the densities of stochastic processes which are evaluations of the flow generated by a stochastic differential equation at a random variable that may be anticipating. Our main assumption is that the process and the initial random variable have to be smooth in the Malliavin sense. Furthermore, if the inverse of the Malliavin covariance matrix associated with the process under consideration is sufficiently integrable, then approximations for densities and distributions can also be achieved. We apply these ideas to the case of stochastic differential equations with boundary conditions and the composition of two diffusions.
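As background for the density statements above, recall the classical one-dimensional integration-by-parts representation of a density in Malliavin calculus, given here in its textbook form for orientation; it is not claimed to be the exact formulation used in the paper, whose anticipating setting is more general.

```latex
% Classical one-dimensional density representation via Malliavin calculus
% (textbook form; the paper's anticipating setting is more general).
% If F is Malliavin differentiable and its Malliavin covariance
% \|DF\|_H^2 has a sufficiently integrable inverse, then F has a continuous density
\[
  p_F(x) \;=\; \mathbb{E}\!\left[ \mathbf{1}_{\{F > x\}} \,
      \delta\!\left( \frac{DF}{\|DF\|_H^2} \right) \right],
\]
% where D denotes the Malliavin derivative and \delta the Skorokhod integral.
```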
Abstract:
This article is an introduction to Malliavin calculus for practitioners. We treat one specific application: the calculation of Greeks in finance. We also consider the kernel density method for computing Greeks and an extension of the vega index called the local vega index.
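To make the idea concrete, here is a minimal Python sketch of the best-known example of this technique: the Monte Carlo estimator of the Black-Scholes delta obtained from the Malliavin weight W_T/(S_0 σ T). This is the standard textbook illustration, not code from the article, and the parameter values are illustrative.

```python
import numpy as np

# Monte Carlo delta of a European call under Black-Scholes, using the
# Malliavin-calculus weight  W_T / (S0 * sigma * T):
#   Delta = e^{-rT} E[ (S_T - K)^+ * W_T / (S0 * sigma * T) ]
# Standard textbook example; parameters below are illustrative.

S0, K, r, sigma, T = 100.0, 100.0, 0.05, 0.2, 1.0
n = 1_000_000
rng = np.random.default_rng(42)

W_T = np.sqrt(T) * rng.standard_normal(n)                      # Brownian motion at time T
S_T = S0 * np.exp((r - 0.5 * sigma**2) * T + sigma * W_T)      # terminal stock price
payoff = np.maximum(S_T - K, 0.0)

weight = W_T / (S0 * sigma * T)                                # Malliavin weight for delta
delta_malliavin = np.exp(-r * T) * np.mean(payoff * weight)

print(f"Malliavin delta estimate: {delta_malliavin:.4f}")      # ~0.637 for these parameters
```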
Abstract:
The research reported in this series of articles aimed at (1) automating the search of questioned ink specimens in ink reference collections and (2) evaluating the strength of ink evidence in a transparent and balanced manner. These aims require that ink samples are analysed in an accurate and reproducible way and that they are compared in an objective and automated way. The latter requirement is due to the large number of comparisons that are necessary in both scenarios. A research programme was designed to (a) develop a standard methodology for analysing ink samples in a reproducible way, (b) compare ink samples automatically and objectively, and (c) evaluate the proposed methodology in forensic contexts. This report focuses on the last of the three stages of the research programme. The calibration and acquisition process and the mathematical comparison algorithms were described in previous papers [C. Neumann, P. Margot, New perspectives in the use of ink evidence in forensic science-Part I: Development of a quality assurance process for forensic ink analysis by HPTLC, Forensic Sci. Int. 185 (2009) 29-37; C. Neumann, P. Margot, New perspectives in the use of ink evidence in forensic science-Part II: Development and testing of mathematical algorithms for the automatic comparison of ink samples analysed by HPTLC, Forensic Sci. Int. 185 (2009) 38-50]. In this paper, the benefits and challenges of the proposed concepts are tested in two forensic contexts: (1) ink identification and (2) ink evidential value assessment. The results show that different algorithms are better suited to different tasks. This research shows that it is possible to build digital ink libraries using the most commonly used ink analytical technique, i.e. high-performance thin layer chromatography, despite its reputation of lacking reproducibility. More importantly, it is possible to assign evidential value to ink evidence in a transparent way using a probabilistic model. It is therefore possible to move away from the traditional subjective approach, which is entirely based on experts' opinion and which is usually not very informative. While there is room for improvement, this report demonstrates the significant gains obtained over the traditional subjective approach for the search of ink specimens in ink databases and the interpretation of their evidential value.
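The abstract refers to assigning evidential value with a probabilistic model; one common way to do this is a score-based likelihood ratio, sketched below in Python. This is a generic illustration using kernel density estimates over comparison scores and simulated data, not the specific model developed in the paper.

```python
import numpy as np
from scipy.stats import gaussian_kde

# Generic score-based likelihood ratio (LR) for an ink comparison:
#   LR = f(score | same source) / f(score | different source)
# Illustrative only; the paper's probabilistic model may differ.

rng = np.random.default_rng(1)
same_source_scores = rng.normal(0.9, 0.05, 500)    # scores from known same-source pairs
diff_source_scores = rng.normal(0.5, 0.15, 500)    # scores from known different-source pairs

f_same = gaussian_kde(same_source_scores)
f_diff = gaussian_kde(diff_source_scores)

observed_score = 0.85                              # score between questioned and reference ink
lr = f_same(observed_score)[0] / f_diff(observed_score)[0]
print(f"Likelihood ratio: {lr:.1f}")               # LR > 1 supports the same-source proposition
```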
Abstract:
The planning effort for ISP began in 2006 when the IDOC retained the Durrant/PBA team of architects and planners to review the Iowa correctional system. The team conducted two studies in the following two years, the first being the April 2007 Iowa Department of Corrections Systemic Master Plan. Both studies addressed myriad aspects of the correctional system including treatment and re-entry needs and programs, security and training, and staffing.
Abstract:
Excessive daytime sleepiness underpins a large number of reported motor vehicle crashes. Fair and accurate field measures are needed to assess drivers flagged, on the basis of erratic driving behavior, as potentially driving in a sleep-deprived state. The purpose of this study was to evaluate a set of cognitive tests that could assist Motor Vehicle Enforcement Officers on duty in identifying drivers who may be engaged in sleep-impaired driving. Currently no gold-standard test exists to judge sleepiness in the field. Previous research has shown that the Psychomotor Vigilance Task (PVT) is sensitive to sleep deprivation. The first goal of the current study was to evaluate whether computerized tests of attention and memory, briefer than the PVT, would be as sensitive to sleepiness effects. The second goal was to evaluate whether objective and subjective indices of acute and cumulative sleepiness predicted cognitive performance. Findings showed that sleepiness effects were detected in three out of six tasks. Furthermore, the PVT was the only task that showed consistent sleepiness-related slowing of both 'best' (minimum) and 'typical' (median) response times. However, the PVT failed to show significant associations with objective measures of sleep deprivation (number of hours awake). The findings indicate that field sleepiness tests have significant limitations, and they clearly show that it will not be possible to set absolute performance thresholds to identify sleep-impaired drivers based on cognitive performance on any test. Cooperation with industry to adjust work and rest cycles, and incentives to comply with those regulations, will be critical components of a broad policy to prevent sleepy truck drivers from getting on the road.
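For context, PVT performance is usually summarized with a few simple statistics over the recorded reaction times; the Python sketch below computes the 'best' (minimum), 'typical' (median), and lapse-count metrics referred to above. The 500 ms lapse threshold is the conventional value, and the reaction-time data are simulated rather than taken from the study.

```python
import numpy as np

# Conventional PVT summary statistics over a session's reaction times (ms).
# The 500 ms lapse threshold is the customary convention; data are simulated.

rng = np.random.default_rng(7)
reaction_times = rng.lognormal(mean=5.6, sigma=0.25, size=100)   # ~270 ms typical RT

best_rt = reaction_times.min()                    # 'best' response (minimum RT)
typical_rt = np.median(reaction_times)            # 'typical' response (median RT)
lapses = int((reaction_times > 500).sum())        # responses slower than 500 ms

print(f"best RT: {best_rt:.0f} ms, median RT: {typical_rt:.0f} ms, lapses: {lapses}")
```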
Abstract:
Research projects aimed at proposing fingerprint statistical models based on the likelihood ratio framework have shown that low-quality finger impressions left at crime scenes may have significant evidential value. These impressions are currently either not recovered, are considered to be of no value when first analyzed by fingerprint examiners, or lead to inconclusive results when compared with control prints. There are growing concerns within the fingerprint community that recovering and examining these low-quality impressions will significantly increase the workload of fingerprint units and, ultimately, the number of backlogged cases. This study was designed to measure the number of impressions currently not recovered or not considered for examination, and to assess the usefulness of these impressions in terms of the number of additional detections that their examination would yield.
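The likelihood ratio framework mentioned above is usually summarized by the following standard forensic formulas, given here in LaTeX as general background rather than as a result of the study.

```latex
% Standard forensic likelihood ratio and its role in Bayes' theorem
% (general background, not a result of the study).
\[
  \mathrm{LR} \;=\; \frac{\Pr(E \mid H_{\mathrm{same\ source}})}
                         {\Pr(E \mid H_{\mathrm{different\ source}})},
  \qquad
  \underbrace{\frac{\Pr(H_{\mathrm{same}} \mid E)}{\Pr(H_{\mathrm{diff}} \mid E)}}_{\text{posterior odds}}
  \;=\;
  \mathrm{LR} \times
  \underbrace{\frac{\Pr(H_{\mathrm{same}})}{\Pr(H_{\mathrm{diff}})}}_{\text{prior odds}}.
\]
```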