974 results for Mean-variance efficiency
Abstract:
PURPOSE: To determine the lower limit of dose reduction with hybrid and fully iterative reconstruction algorithms in detection of endoleaks and in-stent thrombus of the thoracic aorta with computed tomographic (CT) angiography by applying protocols with different tube energies and automated tube current modulation. MATERIALS AND METHODS: The calcification insert of an anthropomorphic cardiac phantom was replaced with an aortic aneurysm model containing a stent, simulated endoleaks, and an intraluminal thrombus. CT was performed at tube energies of 120, 100, and 80 kVp with incrementally increasing noise indexes (NIs) of 16, 25, 34, 43, 52, 61, and 70 and a 2.5-mm section thickness. NI directly controls radiation exposure; a higher NI allows for greater image noise and decreases radiation. Images were reconstructed with filtered back projection (FBP) and hybrid and fully iterative algorithms. Five radiologists independently analyzed lesion conspicuity to assess sensitivity and specificity. Mean attenuation (in Hounsfield units) and standard deviation were measured in the aorta to calculate the signal-to-noise ratio (SNR). Attenuation and SNR of the different protocols and algorithms were analyzed with analysis of variance or the Welch test, depending on data distribution. RESULTS: Both sensitivity and specificity were 100% for simulated lesions on images with 2.5-mm section thickness and an NI of 25 (3.45 mGy), 34 (1.83 mGy), or 43 (1.16 mGy) at 120 kVp; an NI of 34 (1.98 mGy), 43 (1.23 mGy), or 61 (0.61 mGy) at 100 kVp; and an NI of 43 (1.46 mGy) or 70 (0.54 mGy) at 80 kVp. SNR values showed similar results. With the fully iterative algorithm, mean attenuation of the aorta decreased significantly in reduced-dose protocols in comparison with control protocols at 100 kVp (311 HU at 16 NI vs 290 HU at 70 NI, P ≤ .0011) and 80 kVp (400 HU at 16 NI vs 369 HU at 70 NI, P ≤ .0007). CONCLUSION: Endoleaks and in-stent thrombus of the thoracic aorta were detectable down to 1.46 mGy (80 kVp) with FBP, 1.23 mGy (100 kVp) with the hybrid algorithm, and 0.54 mGy (80 kVp) with the fully iterative algorithm.
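For illustration, the image-metric and protocol comparison described above can be sketched as follows. This is a minimal example, not the study's code: the ROI values, group sizes, and the Levene-based choice between the pooled and Welch tests are assumptions.

```python
# Minimal sketch of the metrics described in the abstract: SNR is taken as mean
# aortic attenuation (HU) divided by its standard deviation, and two protocols
# are compared with a pooled-variance or Welch-corrected t-test depending on
# variance homogeneity. ROI values are illustrative placeholders.
import numpy as np
from scipy import stats

def snr(hu_values):
    """Signal-to-noise ratio: mean attenuation / standard deviation."""
    hu = np.asarray(hu_values, dtype=float)
    return hu.mean() / hu.std(ddof=1)

def compare_protocols(control_hu, reduced_hu, alpha=0.05):
    """Compare mean aortic attenuation between a control and a reduced-dose protocol."""
    _, p_levene = stats.levene(control_hu, reduced_hu)   # variance homogeneity
    equal_var = p_levene >= alpha
    t, p = stats.ttest_ind(control_hu, reduced_hu, equal_var=equal_var)
    return {"equal_var": equal_var, "t": t, "p": p}

# Hypothetical ROI measurements (HU) for a control and a reduced-dose protocol.
rng = np.random.default_rng(0)
control = rng.normal(311, 20, size=30)
reduced = rng.normal(290, 35, size=30)
print(snr(control), snr(reduced))
print(compare_protocols(control, reduced))
```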
Abstract:
Introduction. The management of large burn victims has improved significantly in the last decades. Specifically, autologous cultured keratinocytes (CEA) have overcome the problem of limited donor sites in severely burned patients. Several studies testing CEAs in individual burn centers report mixed results on the general outcomes of burn patients. Methods. Publications from 1989 until 2011 with a minimum of 15 patients per study using CEAs for the management of severe burn injury were retrieved from online databases, including Medline and PubMed, and the archives of the medical library of the CHUV in Lausanne. Results. 18 studies with a total of 977 patients were included in this review. Most of the studies did not specify whether CEAs were grafted alone or in combination with split-thickness skin grafts (STSG), although most patients appear to have received both techniques in the reviewed studies. The mean total body surface area (TBSA) per study ranged from 33% to 78% in patients grafted with CEAs. No common minimum TBSA qualifying a patient for CEA grafting could be identified. The definition of the "take rate" is not standardized, and reported values varied widely from 26% to 73%. Mortality and hospitalization time could not be shown to correlate with CEA use across the studies. As a late complication, some authors described the fragility of CEA-regenerated skin. Conclusion. Since the healing of large burn victims demands a variety of surgical and non-surgical treatment strategies, and the final outcome depends mainly on the burned surface area as well as the general health condition of the patient, no definitive conclusion on the use of CEAs could be drawn from the reviewed studies. From our own experience, we know that selected patients benefit significantly from CEA grafts, although cost efficiency or a reduction in mortality cannot be demonstrated in these particular cases.
Abstract:
The recently developed variational Wigner-Kirkwood approach is extended to the relativistic mean field theory for finite nuclei. A numerical application to the calculation of the surface energy coefficient in semi-infinite nuclear matter is presented. The new method is contrasted with the standard density functional theory and the fully quantal approach.
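For orientation, the surface energy coefficient referred to above is conventionally defined from the semi-infinite density profile as shown below. This is the standard definition used in semi-infinite matter calculations, not a formula quoted from the abstract.

```latex
% Standard definition: \sigma is the surface tension obtained from the
% semi-infinite energy density \mathcal{E}(z) and density profile \rho(z),
% (E/A)_\infty is the energy per nucleon of saturated matter, and r_0 is
% fixed by the saturation density \rho_0.
\sigma = \int_{-\infty}^{\infty}
  \left[\, \mathcal{E}(z) - \left(\tfrac{E}{A}\right)_{\!\infty} \rho(z) \,\right] dz ,
\qquad
a_s = 4\pi r_0^{2}\,\sigma ,
\qquad
\tfrac{4}{3}\pi r_0^{3}\rho_0 = 1 .
```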
Abstract:
Inadequate use can degrade natural resources, particularly soils. In recent years, more attention has been paid to practices aimed at the recovery of degraded soils, e.g., the use of organic fertilizers, liming, and the introduction of species adapted to adverse conditions. The purpose of this study was therefore to investigate the recovery of the physical properties of a Red Latosol (Oxisol) degraded by the construction of a hydroelectric power station. In the study area, a soil layer about 8 m thick had been removed by heavy machinery, leading not only to soil compaction but also to a high degree of degradation. The experiment was arranged in a completely randomized design with nine treatments and four replications. The treatments consisted of: 1- soil mobilization by tilling (to ensure the effect of mechanical mobilization in all treatments) without planting, but with growth of spontaneous vegetation; 2- Black velvet bean (Stizolobium aterrimum Piper & Tracy); 3- Pigeonpea (Cajanus cajan (L.) DC); 4- Liming + black velvet bean; 5- Liming + pigeonpea until 1994, when replaced by jack bean (Canavalia ensiformis); 6- Liming + gypsum + black velvet bean; 7- Liming + gypsum + pigeonpea until 1994, when replaced by jack bean; and two controls as reference: 8- Native Cerrado vegetation and 9- bare soil (no tilling and no planting), left under natural conditions and, in this situation, without spontaneous vegetation. In treatments 1 through 7, the soil was tilled. Treatments were installed in 1992 and left unmanaged for seven years, until brachiaria (Brachiaria decumbens) was planted in all plots in 1999. Seventeen years after implantation, soil macroporosity, microporosity, total porosity, bulk density, and aggregate stability were assessed in the previously described treatments in the 0.00-0.10, 0.10-0.20, and 0.20-0.40 m layers, and soil penetration resistance and soil moisture in the 0.00-0.15 and 0.15-0.30 m layers. The plants were evaluated for brachiaria dry matter and, as of 2006, the spontaneous growth of native tree species in the plots. Results were analyzed by analysis of variance and Tukey's test at 5% for mean comparison. In all treatments except the bare soil (no recovery measures), ongoing recovery of the degraded soil physical properties was observed. Macroporosity, bulk density, and total porosity were good soil quality indicators. The occurrence of spontaneous native species indicated the soil recovery process. The best-adapted species was Machaerium acutifolium Vogel, with the largest number of plants and the most advanced development; the dry matter production of B. decumbens in the recovering soil was similar to that under normal conditions, evidencing soil recovery.
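As a sketch of the statistical analysis mentioned above, the following minimal example applies one-way ANOVA followed by Tukey's test at 5% to one soil property. The data frame, its column names, and the input file are hypothetical, not the study's data.

```python
# Sketch: overall treatment effect by one-way ANOVA, then pairwise mean
# comparison with Tukey's HSD at alpha = 0.05 for a soil physical property.
import pandas as pd
from scipy import stats
from statsmodels.stats.multicomp import pairwise_tukeyhsd

def analyze_property(df: pd.DataFrame, value_col: str, group_col: str = "treatment"):
    groups = [g[value_col].values for _, g in df.groupby(group_col)]
    f, p = stats.f_oneway(*groups)                                 # treatment effect
    tukey = pairwise_tukeyhsd(df[value_col], df[group_col], alpha=0.05)
    return f, p, tukey

# Hypothetical usage with a long-format table of plot measurements:
# df = pd.read_csv("soil_physical_properties.csv")
# f, p, tukey = analyze_property(df, "macroporosity")
# print(f, p); print(tukey.summary())
```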
Abstract:
Soil penetration resistance (PR) is a measure of soil compaction closely related to soil structure and plant growth. However, the variability of PR hampers statistical analyses. This study aimed to evaluate the effect of the variability of soil PR on the efficiency of parametric and nonparametric analyses in identifying significant effects of soil compaction, and to classify the coefficient of variation of PR as low, intermediate, high, or very high. On six dates, the PR of a typical dystrophic Red Ultisol under continuous no-tillage for 16 years was measured. Three tillage and/or traffic conditions were established with the application of: (i) no chiseling or additional traffic, (ii) additional compaction, and (iii) chiseling. On each date, the nineteen PR readings (measured every 1.5 cm to a depth of 28.5 cm) were grouped into layers of different thicknesses. In each layer, the treatment effects were evaluated by analysis of variance (ANOVA) and Kruskal-Wallis tests in a completely randomized design, and the coefficients of variation of all analyses were classified (low, intermediate, high, and very high). The ANOVA performed better in discriminating the compaction effects, but the rejection rate of the null hypothesis decreased from 100 to 80% when the coefficient of variation increased from 15 to 26%. The values of 15 and 26% were the thresholds separating the low/intermediate and the high/very high coefficient-of-variation classes of PR in this Ultisol.
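The parametric/nonparametric comparison described above can be sketched as follows. The PR readings, group sizes, and units are placeholders; only the 15% and 26% thresholds are taken from the abstract.

```python
# Sketch: for one soil layer, test the tillage/traffic effect both
# parametrically (one-way ANOVA) and nonparametrically (Kruskal-Wallis),
# and compute the coefficient of variation of the PR readings. The abstract
# reports 15% and 26% as the class thresholds found for this Ultisol.
import numpy as np
from scipy import stats

def cv_percent(values):
    """Coefficient of variation of penetration resistance, in percent."""
    x = np.asarray(values, dtype=float)
    return 100.0 * x.std(ddof=1) / x.mean()

def layer_tests(no_chisel, compacted, chiseled):
    """Parametric and nonparametric tests of the treatment effect in one layer."""
    _, p_anova = stats.f_oneway(no_chisel, compacted, chiseled)
    _, p_kw = stats.kruskal(no_chisel, compacted, chiseled)
    return {"anova_p": p_anova, "kruskal_wallis_p": p_kw}

# Hypothetical PR readings (MPa) for one layer under the three conditions.
rng = np.random.default_rng(0)
no_chisel = rng.normal(2.0, 0.4, 20)
compacted = rng.normal(2.8, 0.5, 20)
chiseled = rng.normal(1.4, 0.3, 20)
print(layer_tests(no_chisel, compacted, chiseled))
print([round(cv_percent(g), 1) for g in (no_chisel, compacted, chiseled)])
```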
Abstract:
Measuring school efficiency is a challenging task. First, a performance measurement technique has to be selected. Within Data Envelopment Analysis (DEA), one such technique, alternative models have been developed to deal with environmental variables. The majority of these models lead to divergent results. Second, the choice of input and output variables to be included in the efficiency analysis is often dictated by data availability. The choice of variables remains an issue even when data are available. As a result, the choice of technique, model, and variables is probably, in the end, a political judgement. Multi-criteria decision analysis methods can help decision makers select the most suitable model. The number of selection criteria should remain parsimonious and should not be oriented towards the results of the models, in order to avoid opportunistic behaviour. The selection criteria should also be backed by the literature or by an expert group. Once the most suitable model is identified, the principle of permanence of methods should be applied in order to avoid a change of practices over time. Within DEA, the two-stage model developed by Ray (1991) is the most convincing model allowing for an environmental adjustment. In this model, an efficiency analysis is conducted with DEA, followed by an econometric analysis to explain the efficiency scores. An environmental variable of particular interest, tested in this thesis, is whether a school operates on multiple sites. Results show that being located on more than one site has a negative influence on efficiency. A likely way to mitigate this negative influence would be to improve the use of ICT in school management and teaching. The planning of new schools should also consider the advantages of a single site, which allows a critical size in terms of pupils and teachers to be reached. The fact that underprivileged pupils perform worse than privileged pupils has been public knowledge since Coleman et al. (1966). As a result, underprivileged pupils have a negative influence on school efficiency. This is confirmed by this thesis for the first time in Switzerland. Several countries have developed priority education policies to compensate for the negative impact of a disadvantaged socioeconomic status on school performance. These policies have failed. As a result, other actions need to be taken. In order to define these actions, one has to identify the social-class differences which explain why disadvantaged children underperform. Childrearing and literacy practices, health characteristics, housing stability, and economic security influence pupil achievement. Rather than allocating more resources to schools, policymakers should therefore focus on related social policies. For instance, they could define pre-school, family, health, housing, and benefits policies in order to improve the conditions for disadvantaged children.
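A hedged sketch of the two-stage approach attributed above to Ray (1991): stage 1 computes DEA efficiency scores (a standard input-oriented, constant-returns formulation is used here for illustration; the thesis's exact DEA specification may differ), and stage 2 regresses the scores on an environmental variable such as a multi-site indicator. All data, shapes, and variable names are assumptions.

```python
# Stage 1: input-oriented CCR DEA, one small linear program per school (DMU).
# Stage 2: OLS regression of the efficiency scores on an environmental dummy.
import numpy as np
from scipy.optimize import linprog
import statsmodels.api as sm

def dea_ccr_input(X, Y):
    """X: (m inputs x n DMUs), Y: (s outputs x n DMUs). Returns n efficiency scores."""
    m, n = X.shape
    s = Y.shape[0]
    scores = np.empty(n)
    for o in range(n):
        # Decision variables: [theta, lambda_1, ..., lambda_n]
        c = np.r_[1.0, np.zeros(n)]
        # Inputs:  sum_j lambda_j * x_ij - theta * x_io <= 0
        A_in = np.hstack([-X[:, [o]], X])
        # Outputs: -sum_j lambda_j * y_rj <= -y_ro
        A_out = np.hstack([np.zeros((s, 1)), -Y])
        A_ub = np.vstack([A_in, A_out])
        b_ub = np.r_[np.zeros(m), -Y[:, o]]
        res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=[(0, None)] * (n + 1))
        scores[o] = res.x[0]
    return scores

# Hypothetical data: two inputs, one output, and a multi-site indicator.
rng = np.random.default_rng(1)
X = rng.uniform(1, 10, size=(2, 30))          # e.g. teaching and admin costs
Y = rng.uniform(1, 10, size=(1, 30))          # e.g. test-score output
multi_site = rng.integers(0, 2, size=30)      # 1 = school spread over several sites
theta = dea_ccr_input(X, Y)
ols = sm.OLS(theta, sm.add_constant(multi_site.astype(float))).fit()
print(ols.params)   # a negative slope would mirror the multi-site finding above
```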
Abstract:
The current research project is both a process and impact evaluation of community policing in Switzerland's five major urban areas - Basel, Bern, Geneva, Lausanne, and Zurich. Community policing is both a philosophy and an organizational strategy that promotes a renewed partnership between the police and the community to solve problems of crime and disorder. The process evaluation data on police internal reforms were obtained through semi-structured interviews with key administrators from the five police departments as well as from police internal documents and additional public sources. The impact evaluation uses official crime records and census statistics as contextual variables as well as Swiss Crime Survey (SCS) data on fear of crime, perceptions of disorder, and public attitudes towards the police as outcome measures. The SCS is a standing survey instrument that has polled residents of the five urban areas repeatedly since the mid-1980s. The process evaluation produced a "Calendar of Action" to create panel data to measure community policing implementation progress over six evaluative dimensions in intervals of five years between 1990 and 2010. The impact evaluation, carried out ex post facto, uses an observational design that analyzes the impact of the different community policing models between matched comparison areas across the five cities. Using ZIP code districts as proxies for urban neighborhoods, geospatial data mining algorithms serve to develop a neighborhood typology in order to match the comparison areas.
To this end, both unsupervised and supervised algorithms are used to analyze high-dimensional data on crime, the socio-economic and demographic structure, and the built environment in order to classify urban neighborhoods into clusters of similar type. In a first step, self-organizing maps serve as tools to develop a clustering algorithm that reduces the within-cluster variance in the contextual variables and simultaneously maximizes the between-cluster variance in survey responses. The random forests algorithm then serves to assess the appropriateness of the resulting neighborhood typology and to select the key contextual variables in order to build a parsimonious model that makes a minimum of classification errors. Finally, for the impact analysis, propensity score matching methods are used to match the survey respondents of the pretest and posttest samples on age, gender, and their level of education for each neighborhood type identified within each city, before conducting a statistical test of the observed difference in the outcome measures. Moreover, all significant results were subjected to a sensitivity analysis to assess the robustness of these findings in the face of potential bias due to some unobserved covariates. The study finds that over the last fifteen years, all five police departments have undertaken major reforms of their internal organization and operating strategies and forged strategic partnerships in order to implement community policing. The resulting neighborhood typology reduced the within-cluster variance of the contextual variables and accounted for a significant share of the between-cluster variance in the outcome measures prior to treatment, suggesting that geocomputational methods help to balance the observed covariates and hence to reduce threats to the internal validity of an observational design. Finally, the impact analysis revealed that fear of crime dropped significantly over the 2000-2005 period in the neighborhoods in and around the urban centers of Bern and Zurich. These improvements are fairly robust in the face of bias due to some unobserved covariate and covary temporally and spatially with the implementation of community policing. The alternative hypothesis that the observed reductions in fear of crime were at least in part a result of community policing interventions thus appears at least as plausible as the null hypothesis of absolutely no effect, even if the observational design cannot completely rule out selection and regression to the mean as alternative explanations.
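Two of the steps described above can be sketched as follows. This is not the thesis code: the column names, the numeric coding of the covariates, and the 1:1 nearest-neighbour matching rule are assumptions.

```python
# (i) Check a neighborhood typology with a random forest and rank contextual
#     variables by importance (for a parsimonious model).
# (ii) Propensity score matching of pretest and posttest respondents on age,
#      gender and education within one neighborhood type.
import pandas as pd
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.neighbors import NearestNeighbors

def assess_typology(context: pd.DataFrame, cluster_labels: pd.Series):
    """Out-of-bag accuracy and variable importances for predicting the clusters."""
    rf = RandomForestClassifier(n_estimators=500, oob_score=True, random_state=0)
    rf.fit(context, cluster_labels)
    importances = pd.Series(rf.feature_importances_, index=context.columns)
    return rf.oob_score_, importances.sort_values(ascending=False)

def match_pretest_posttest(df: pd.DataFrame):
    """1:1 nearest-neighbour match of posttest to pretest respondents on the
    estimated propensity of belonging to the posttest wave. Covariates are
    assumed to be numerically coded."""
    covars = ["age", "gender", "education"]
    ps = LogisticRegression(max_iter=1000).fit(df[covars], df["posttest"])
    df = df.assign(pscore=ps.predict_proba(df[covars])[:, 1])
    pre, post = df[df["posttest"] == 0], df[df["posttest"] == 1]
    nn = NearestNeighbors(n_neighbors=1).fit(pre[["pscore"]])
    _, idx = nn.kneighbors(post[["pscore"]])
    matched_pre = pre.iloc[idx.ravel()]
    return matched_pre, post   # compare outcome measures between these samples
```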
Abstract:
A retarded backward equation is derived for a non-Markovian process driven by dichotomous noise (the random telegraph signal). The mean first-passage time of this process is obtained exactly. The Gaussian white noise and white shot noise limits are studied, and explicit physical results are evaluated to first approximation.
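For orientation, in the Gaussian white noise limit mentioned above the mean first-passage time obeys the standard backward equation below (with an absorbing boundary). This is the textbook limiting form, not the retarded equation derived in the paper, which is expected to recover it in that limit.

```latex
% Standard backward equation for the mean first-passage time T(x) with drift
% f(x) and Gaussian white noise of intensity D; T vanishes at the absorbing
% boundary x_abs.
f(x)\,\frac{dT}{dx} + D\,\frac{d^{2}T}{dx^{2}} = -1 ,
\qquad T(x_{\mathrm{abs}}) = 0 .
```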
Abstract:
We present analytical calculations of the turn-on-time probability distribution of intensity-modulated lasers under resonant weak optical feedback. Under resonant conditions, the external-cavity round-trip time is taken to be equal to the modulation period. The probability-distribution results for the solitary laser are modified to give reduced values of the mean turn-on time and its variance. Numerical simulations have been carried out, showing good agreement with the analytical results.
Abstract:
Antemortem demonstration of ischemia has proved elusive in head injury because regional CBF reductions may represent hypoperfusion appropriately coupled to hypometabolism. Fifteen patients underwent positron emission tomography within 24 hours of head injury to map cerebral blood flow (CBF), cerebral oxygen metabolism (CMRO2), and oxygen extraction fraction (OEF). We estimated the volume of ischemic brain (IBV) and used the standard deviation of the OEF distribution to estimate the efficiency of coupling between CBF and CMRO2. The IBV in patients was significantly higher than in controls (67 ± 69 vs. 2 ± 3 mL; P < 0.01). The coexistence of relative ischemia and hyperemia in some patients implies mismatching of perfusion to oxygen use. Whereas the saturation of jugular bulb blood (SjO2) correlated with the IBV (r = 0.8, P < 0.01), SjO2 values of 50% were only achieved at an IBV of 170 ± 63 mL (mean ± 95% CI), which equates to 13 ± 5% of the brain. Increases in IBV correlated with a poor Glasgow Outcome Score 6 months after injury (rho = -0.6, P < 0.05). These results suggest significant ischemia within the first day after head injury. The ischemic burden represented by this "traumatic penumbra" is poorly detected by bedside clinical monitors and has significant associations with outcome.
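For reference, the three PET measures above are linked by the standard Fick relation below; this is textbook physiology, not a formula quoted from the paper.

```latex
% C_aO_2 and C_vO_2 are the arterial and (cerebral) venous oxygen contents.
\mathrm{OEF} = \frac{C_aO_2 - C_vO_2}{C_aO_2},
\qquad
\mathrm{CMRO_2} = \mathrm{CBF}\times C_aO_2 \times \mathrm{OEF}.
```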
Roadway Lighting and Safety: Phase II – Monitoring Quality, Durability and Efficiency, November 2011
Abstract:
This Phase II project follows a previous project titled Strategies to Address Nighttime Crashes at Rural, Unsignalized Intersections. Based on the results of the previous study, the Iowa Highway Research Board (IHRB) indicated interest in pursuing further research to address the quality of lighting, rather than just the presence of light, with respect to safety. The research team supplemented the literature review from the previous study, specifically addressing lighting level in terms of measurement, the relationship between light levels and safety, and lamp durability and efficiency. The Center for Transportation Research and Education (CTRE) teamed with a national research leader in roadway lighting, the Virginia Tech Transportation Institute (VTTI), to collect the data. Integral to the data collection effort was the creation of the Roadway Monitoring System (RMS). The RMS allowed the research team to collect lighting data and approach information for each rural intersection identified in the previous phase. After data cleanup, the final data set contained illuminance data for 101 lighted intersections (of the 137 lighted intersections in the first study). Data analysis included a robust statistical analysis based on Bayesian techniques. Average illuminance, average glare, and average uniformity ratio values were used to classify the quality of lighting at the intersections.
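A minimal sketch of the summary metrics named above, computed from a grid of illuminance readings at an intersection. The uniformity ratio is taken here as average-to-minimum illuminance, a common roadway-lighting convention; the field names and glare placeholder are illustrative rather than the RMS definitions.

```python
# Summarize a set of illuminance readings (lux) collected at one intersection.
import numpy as np

def lighting_summary(illuminance_lux, glare_readings=None):
    e = np.asarray(illuminance_lux, dtype=float)
    summary = {
        "avg_illuminance_lux": e.mean(),
        "uniformity_ratio_avg_to_min": e.mean() / e.min(),
    }
    if glare_readings is not None:
        # Placeholder: average of whatever glare measure was recorded.
        summary["avg_glare"] = float(np.mean(glare_readings))
    return summary

print(lighting_summary([12.0, 9.5, 7.2, 14.1, 6.8]))
```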
Abstract:
In the November 2011 report issued by the Governor's Transportation 2020 Citizen Advisory Commission (CAC), the commission recommended that the Iowa Department of Transportation (DOT), at least annually, convene meetings with the cities and counties to review the operation, maintenance, and improvement of Iowa's public roadway system and to identify ways to jointly increase efficiency. In response to this recommendation, Gov. Branstad directed the Iowa DOT to begin this effort immediately, with a target of identifying $50 million in efficiency savings that can be captured from the $1.2 billion of Road Use Tax Funds (RUTF) provided to the Iowa DOT, cities, and counties to administer, maintain, and improve the public roadway system. This would build upon past joint and individual actions that have reduced administrative costs and resulted in increased funding for system improvements. Efficiency actions should be quantified, measured, and reported to the public on a regular basis. Beyond the discussion of identifying funding solutions to our road and bridge needs, it is critical that all jurisdictions that own, maintain, and improve the nation's road and bridge systems demonstrate to the public that these funds are utilized in the most efficient and effective manner. This requires continual innovation in all aspects of transportation planning, design, construction, and maintenance, done in a transparent manner to clearly demonstrate to the public how their funds are being utilized. The Iowa DOT has identified 13 efficiency measures separated into two distinct categories: Program Efficiencies and Partnership Efficiencies. The total value of the efficiency measures is $50 million. Many of the efficiency items will need input, refinement, and partnership from cities, counties, other local jurisdictions, and stakeholder interest groups. The Iowa DOT has begun meetings with many of these groups to help identify potential efficiency measures and strategies for moving forward. These partnerships and discussions will continue through implementation of the efficiency measures. Depending on the measures identified, additional action may be required by the legislature, the Iowa Transportation Commission, and/or other bodies to implement them. In addition, a formal process will be developed to quantify, measure, and report the results of actions taken on a regular basis.