958 results for Test Management
Abstract:
Introduction Vertebral fracture is one of the major osteoporotic fractures and is unfortunately very often undetected. In addition, it is well known that a prevalent vertebral fracture dramatically increases the risk of future additional fracture. Instant Vertebral Assessment (IVA) was introduced in DXA devices a couple of years ago to ease the detection of such fractures when routine DXA is performed. To use this tool correctly, the ISCD provided clinical recommendations on when and how to use it. The aim of our study was to evaluate the ISCD guidelines in clinical routine patients and to see how often IVA may change patient management. Methods During two months (March and April 2010), a medical questionnaire was systematically given to our clinical routine patients to check the validity of the ISCD IVA recommendations in our population. In addition, all women had BMD measurements at the AP spine, femur and 1/3 radius using a Discovery A system (Hologic, Waltham, USA). When appropriate, IVA measurement was performed on the same DXA system and was centrally evaluated by two trained doctors for fracture status according to the semi-quantitative method of Genant. The reading was performed, when possible, between L5 and T4. Results Out of 210 women seen in the consultation, 109 (52%) of them (mean age 68.2 ± 11.5 years) fulfilled the necessary criteria to have an IVA measurement. Out of these 109 women, 43 (39.4%) had osteoporosis at one of the three skeletal sites and 31 (28.4%) had at least one vertebral fracture. 14.7% of women had both osteoporosis and at least one vertebral fracture, classifying them as "severe osteoporosis", while 46.8% had neither osteoporosis nor a vertebral fracture. 24.8% of the women had osteoporosis but no vertebral fracture, while 13.8% of women had a vertebral fracture but no osteoporosis ("clinical osteoporosis"). Conclusions In 52% of our patients, IVA was needed according to ISCD criteria. In half of them the IVA result influenced patient management, either by changing the type of treatment or simply by classifying the patient as having "clinical osteoporosis". IVA appears to be an important tool in clinical routine but is unfortunately not yet used very often in most centers.
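For clarity, the four-way classification used in the results translates into a simple decision rule. The sketch below (Python) is illustrative only: the function and variable names are hypothetical, while the category labels are taken from the abstract.

def classify(osteoporosis: bool, vertebral_fracture: bool) -> str:
    """Classify a patient from densitometric status (BMD) and IVA fracture status.

    Labels follow the abstract: osteoporosis plus vertebral fracture is
    "severe osteoporosis"; a vertebral fracture without densitometric
    osteoporosis is "clinical osteoporosis".
    """
    if osteoporosis and vertebral_fracture:
        return "severe osteoporosis"
    if osteoporosis:
        return "osteoporosis without vertebral fracture"
    if vertebral_fracture:
        return "clinical osteoporosis"
    return "neither osteoporosis nor vertebral fracture"

# Example: no densitometric osteoporosis, but a vertebral fracture detected on IVA
print(classify(osteoporosis=False, vertebral_fracture=True))  # clinical osteoporosis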
Abstract:
Purpose: To set local dose reference levels (DRLs) that allow radiologists to control stochastic and deterministic effects. Methods and materials: Dose indicators for cerebral angiographies and hepatic embolizations were collected over 4 months and analyzed in our hospital. The data obtained with an image amplifier were compared with those obtained with a flat panel detector, using the Mann-Whitney test. Results: For the 40 cerebral angiographies performed, the DRLs for DAP, fluoroscopy time and number of images were 166 Gy.cm2, 19 min and 600, respectively. The maximum DAP was 490 Gy.cm2 (fluoroscopy time: 84 min). No significant difference in fluoroscopy time or DAP between the image amplifier and the flat panel detector was observed (p = 0.88). The number of images was larger for the flat panel detector (p = 0.004). The values obtained were slightly above the currently proposed DRLs: 150 Gy.cm2, 15 min, 400. Concerning the 13 hepatic embolizations, the DRLs for DAP, fluoroscopy time and number of images were 315 Gy.cm2, 25 min and 370, respectively. The maximum DAP delivered was 845 Gy.cm2 (fluoroscopy time of 48 min). No significant difference between the image amplifier and the flat panel detector was observed (p = 0.005). The values obtained were also slightly above the currently proposed DRLs: 300 Gy.cm2, 20 min, 200. Conclusion: These results show that the introduction of the flat panel detector did not lead to an increase in patient dose. A DRL concerning the cumulative dose (which allows deterministic effects to be controlled) should be introduced to give radiologists full control over the risks associated with ionizing radiation. Results of this ongoing study will be presented.
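As a minimal sketch of the statistical comparison described above (Python): the DAP values are invented, the 75th-percentile convention for setting a local DRL is an assumption not stated in the abstract, and scipy's mannwhitneyu is used as the Mann-Whitney test.

import numpy as np
from scipy.stats import mannwhitneyu

# Hypothetical dose-area-product values (Gy.cm2) for the two detector types
dap_image_amplifier = np.array([120.0, 95.0, 180.0, 210.0, 160.0, 140.0])
dap_flat_panel = np.array([130.0, 100.0, 175.0, 205.0, 150.0, 145.0])

# Assumption: the local DRL is taken as the 75th percentile of the pooled distribution
local_drl = np.percentile(np.concatenate([dap_image_amplifier, dap_flat_panel]), 75)

# Two-sided Mann-Whitney U test: is there a difference between detector types?
stat, p_value = mannwhitneyu(dap_image_amplifier, dap_flat_panel, alternative="two-sided")
print(f"local DRL (75th percentile): {local_drl:.0f} Gy.cm2, p = {p_value:.2f}")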
Abstract:
Objective This study assessed pharmacological treatment adherence using the Morisky-Green Test and identified related variables. Method A longitudinal and retrospective study examined 283 patients with hypertension (62.5% women, 73.4 [10.9] years old) who were being monitored by a chronic disease management program for 17 months between 2011 and 2012. Nurses performed all the actions of the program, which consisted of advice via telephone and periodic home visits based on the risk stratification of the patients. Results A significant increase in treatment adherence (25.1% vs. 85.5%) and a decrease in blood pressure were observed (p<0.05). Patients with hypertension and chronic renal failure as well as those treated using angiotensin-converting enzyme inhibitors were the most adherent (p<0.05). Patients with hypertension who received angiotensin receptor blockers were less adherent (p<0.05). Conclusions Strategies such as nurse-performed chronic disease management can increase adherence to anti-hypertensive treatment and therefore contribute to the control of blood pressure, minimizing the morbidity profiles of patients with hypertension.
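For reference, the Morisky-Green test is a four-item yes/no questionnaire; the scoring sketch below (Python) follows its usual use, with full adherence requiring four "no" answers. The exact wording of the items and the cut-off are assumptions, not details given in the abstract.

MORISKY_GREEN_ITEMS = [
    "Do you ever forget to take your medication?",
    "Are you ever careless about taking your medication?",
    "When you feel better, do you sometimes stop taking your medication?",
    "When you feel worse, do you sometimes stop taking your medication?",
]

def is_adherent(yes_answers: list[bool]) -> bool:
    """Classify as adherent only if all four answers are 'no' (False) -- an assumed cut-off."""
    assert len(yes_answers) == len(MORISKY_GREEN_ITEMS)
    return not any(yes_answers)

# Example: a patient who sometimes forgets but never stops medication on purpose
print(is_adherent([True, False, False, False]))  # False -> non-adherent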
Abstract:
The aim was to propose a strategy for finding reasonable compromises between image noise and dose as a function of patient weight. The weighted CT dose index (CTDI(w)) was measured on a multidetector-row CT unit using CTDI test objects of 16, 24 and 32 cm in diameter at 80, 100, 120 and 140 kV. These test objects were then scanned in helical mode using a wide range of tube currents and voltages with a reconstructed slice thickness of 5 mm. For each set of acquisition parameters, image noise was measured, and the Rose model observer was used to test two strategies for proposing a reasonable compromise between dose and low-contrast detection performance: (1) the use of a unique noise level for all test object diameters, and (2) the use of a unique dose efficacy level defined as the noise reduction per unit dose. Published data were used to define four weight classes, and an acquisition protocol was proposed for each class. The protocols have been applied in clinical routine for more than one year. CTDI(vol) values of 6.7, 9.4, 15.9 and 24.5 mGy were proposed for the following weight classes: 2.5-5, 5-15, 15-30 and 30-50 kg, with image noise levels in the range of 10-15 HU. The proposed method allows patient dose and image noise to be controlled in such a way that dose reduction does not impair the detection of low-contrast lesions. The proposed values correspond to high-quality images and can be reduced if only high-contrast organs are assessed.
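A minimal sketch of the resulting weight-based protocol lookup (Python): the CTDIvol values and weight classes are those reported above, while the function name and the handling of class boundaries are assumptions.

# Weight class (kg) -> proposed CTDIvol (mGy), values taken from the abstract
PROTOCOLS = [
    ((2.5, 5.0), 6.7),
    ((5.0, 15.0), 9.4),
    ((15.0, 30.0), 15.9),
    ((30.0, 50.0), 24.5),
]

def ctdi_vol_for_weight(weight_kg: float) -> float:
    """Return the proposed CTDIvol for a patient weight (boundary handling is an assumption)."""
    for (low, high), ctdi in PROTOCOLS:
        if low <= weight_kg <= high:
            return ctdi
    raise ValueError(f"No protocol defined for {weight_kg} kg")

print(ctdi_vol_for_weight(12.0))  # 9.4 mGy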
Abstract:
Revenue management (RM) is a complicated business process that can best be described as control of sales (using prices, restrictions, or capacity), usually using software as a tool to aid decisions. RM software can play a merely informative role, supplying analysts with formatted and summarized data which they use to make control decisions (setting a price or allocating capacity for a price point), or, at the other extreme, play a deeper role, automating the decision process completely. The RM models and algorithms in the academic literature by and large concentrate on the latter, completely automated, level of functionality.

A firm considering using a new RM model or RM system needs to evaluate its performance. Academic papers justify the performance of their models using simulations, where customer booking requests are simulated according to some process and model, and the revenue performance of the algorithm is compared to an alternate set of algorithms. Such simulations, while an accepted part of the academic literature, and indeed providing research insight, often lack credibility with management. Even methodologically, they are usually flawed, as the simulations only test "within-model" performance, and say nothing as to the appropriateness of the model in the first place. Even simulations that test against alternate models or competition are limited by their inherent need to fix some model as the universe for their testing. These problems are exacerbated with RM models that attempt to model customer purchase behavior or competition, as the right models for competitive actions or customer purchases remain somewhat of a mystery, or at least enjoy no consensus on their validity.

How then to validate a model? Putting it another way, we want to show that a particular model or algorithm is the cause of a certain improvement to the RM process compared to the existing process. We take care to emphasize that we want to prove the said model is the cause of performance, and to compare against an (incumbent) process rather than against an alternate model.

In this paper we describe a "live" testing experiment that we conducted at Iberia Airlines on a set of flights. A set of competing algorithms controlled a set of flights during adjacent weeks, and their behavior and results were observed over a relatively long period of time (9 months). In parallel, a group of control flights was managed using the traditional mix of manual and algorithmic control (the incumbent system). Such "sandbox" testing, while common at many large internet search and e-commerce companies, is relatively rare in the revenue management area. Sandbox testing has an undisputable model of customer behavior, but the experimental design and analysis of results is less clear. In this paper we describe the philosophy behind the experiment, the organizational challenges, and the design and setup of the experiment, and outline the analysis of the results. This paper is a complement to a (more technical) related paper that describes the econometrics and statistical analysis of the results.
Abstract:
OBJECTIVE: We sought to describe our experience in the management of complex glotto-subglottic stenosis in the pediatric age group. METHODS: Between 1978 and 2008, 33 children with glotto-subglottic stenosis underwent partial cricotracheal resection, and they form the focus of this study. They were compared with 67 children with isolated subglottic stenosis (no glottic involvement). The outcomes measured were the need for revision open surgical intervention, delayed decannulation (>6 months), and operation-specific and overall decannulation rates. Fisher's exact test was used for comparison of outcomes. RESULTS: Preoperative evaluation showed Myer-Cotton grade III or IV stenosis in 32 (97%) patients and grade II stenosis in 1 patient. All patients with glotto-subglottic stenosis were treated with partial cricotracheal resection and simultaneous repair of the glottic pathology. Bilateral fixed vocal cords were seen in 19 (58%) of 33 patients, bilateral restricted abduction in 7 (21%) of 33 patients, and a unilateral fixed vocal cord in 7 (21%) of 33 patients. Ten patients underwent single-stage partial cricotracheal resection with excision of interarytenoid scar tissue; the endotracheal tube was kept for a mean period of 7 days as a stent. Twenty-three patients underwent extended partial cricotracheal resection with LT-Mold (Bredam S.A., St. Sulpice, Switzerland) or T-tube stenting. The overall decannulation rate was 79% (26 patients), and the operation-specific decannulation rate was 61% (20 patients). CONCLUSIONS: Glotto-subglottic stenosis is a complex laryngeal injury associated with delayed decannulation and decreased overall and operation-specific decannulation rates after partial cricotracheal resection, when compared with subglottic stenosis without glottic involvement.
Abstract:
Although it is commonly accepted that most macroeconomic variables are non-stationary, it is often difficult to identify the source of the non-stationarity. In particular, it is well known that integrated and short-memory models containing trending components that may display sudden changes in their parameters share some statistical properties that make their identification a hard task. The goal of this paper is to extend the classical testing framework for I(1) versus I(0) + breaks by considering a more general class of models under the null hypothesis: non-stationary fractionally integrated (FI) processes. A similar identification problem holds in this broader setting, which is shown to be a relevant issue from both a statistical and an economic perspective. The proposed test is developed in the time domain and is very simple to compute. The asymptotic properties of the new technique are derived, and it is shown by simulation that it is very well behaved in finite samples. To illustrate the usefulness of the proposed technique, an application using inflation data is also provided.
Abstract:
The paper explores the consequences that relying on different behavioral assumptions in training managers may have on their future performance. We argue that training with an emphasis on the standard assumptions used in economics (rationality and self-interest) is good for technical posts but may also lead future managers to rely excessively on rational and explicit safeguarding, crowding out instinctive relational heuristics and signaling a bad human type to potential partners. In contrast, the human assumptions used in management theories, because of their diverse, implicit and even contradictory nature, do not conflict with the innate set of cooperative tools and may provide a good training ground for such tools. We present tentative confirmatory evidence by examining how the weight given to behavioral assumptions in the core courses of the top 100 business schools influences the average salaries of their MBA graduates. Controlling for the self-selected average quality of their students and some other school characteristics, average salaries are seen to be significantly greater for schools whose core MBA courses contain a higher proportion of management courses as opposed to courses based on economics or technical disciplines.
Abstract:
An experimental test of rainfall as a control agent of Glycaspis brimblecombei Moore (Hemiptera, Psyllidae) on seedlings of Eucalyptus camaldulensis Dehn (Myrtaceae). Glycaspis brimblecombei is one of the greatest threats to eucalyptus plantations in Brazil. The effects of rainfall in reducing the abundance of lerps of Glycaspis brimblecombei on experimentally infested seedlings of Eucalyptus camaldulensis were assessed. The number of lerps on the adaxial and abaxial surfaces of every leaf of 60 seedlings was recorded before and after submission to the following treatments: "artificial rain", "leaf wetting" and control. A drastic reduction in lerp abundance per plant was observed after the "leaf wetting" and "artificial rain" treatments (F = 53.630; p < 0.001), whereas lerp abundance remained roughly constant in the control treatment throughout the experiment (F = 1.450; p = 0.232). At the end of the experiment, lerp abundance was significantly lower in both the "artificial rain" and "leaf wetting" treatments than in the control treatment. Two days of rainfall simulation were sufficient to decrease the lerp population by more than 50%, with almost 100% effectiveness after 5 days of the experiment. Our results indicate that lerp solubilization and mechanical removal by water are potential tools for the population regulation of G. brimblecombei on E. camaldulensis seedlings.
Abstract:
This paper proposes a new time-domain test of a process being I(d), 0 < d ≤ 1, under the null, against the alternative of being I(0) with deterministic components subject to structural breaks at known or unknown dates, with the goal of disentangling the existing identification issue between long memory and structural breaks. Denoting by AB(t) the different types of structural breaks in the deterministic components of a time series considered by Perron (1989), the test statistic proposed here is based on the t-ratio (or the infimum of a sequence of t-ratios) of the estimated coefficient on y_{t-1} in an OLS regression of Δ^d y_t on a simple transformation of the above-mentioned deterministic components and y_{t-1}, possibly augmented by a suitable number of lags of Δ^d y_t to account for serial correlation in the error terms. The case where d = 1 coincides with the Perron (1989) or the Zivot and Andrews (1992) approaches if the break date is known or unknown, respectively. The statistic is labelled as the SB-FDF (Structural Break-Fractional Dickey-Fuller) test, since it is based on the same principles as the well-known Dickey-Fuller unit root test. Both its asymptotic behavior and finite sample properties are analyzed, and two empirical applications are provided.
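A minimal numerical sketch (Python) of the kind of regression on which the test statistic is built: the deterministic components are simplified here to a constant and a linear trend rather than the full AB(t) specification, and the fractional-differencing routine, function names and random-walk example are illustrative only.

import numpy as np

def frac_diff(y: np.ndarray, d: float) -> np.ndarray:
    """Fractional difference (1 - L)^d y_t via the binomial expansion of the filter."""
    n = len(y)
    w = np.zeros(n)
    w[0] = 1.0
    for k in range(1, n):
        w[k] = w[k - 1] * (k - 1 - d) / k  # recursive binomial weights
    return np.array([np.dot(w[: t + 1][::-1], y[: t + 1]) for t in range(n)])

def sb_fdf_tratio(y: np.ndarray, d: float) -> float:
    """t-ratio on y_{t-1} in an OLS regression of Δ^d y_t on a constant, a trend and y_{t-1}."""
    dy = frac_diff(y, d)[1:]              # Δ^d y_t for t = 2..n
    ylag = y[:-1]                         # y_{t-1}
    trend = np.arange(1, len(y))          # linear trend (stand-in for the AB(t) terms)
    X = np.column_stack([np.ones_like(ylag), trend, ylag])
    beta, *_ = np.linalg.lstsq(X, dy, rcond=None)
    resid = dy - X @ beta
    sigma2 = resid @ resid / (len(dy) - X.shape[1])
    se = np.sqrt(sigma2 * np.linalg.inv(X.T @ X)[2, 2])
    return beta[2] / se

rng = np.random.default_rng(0)
y = np.cumsum(rng.standard_normal(300))   # a random walk, i.e. an I(1) series
print(sb_fdf_tratio(y, d=1.0))            # with d = 1 this is a Dickey-Fuller-type t-ratio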
Abstract:
Relationships between porosity and hydraulic conductivity tend to be strongly scale- and site-dependent and are thus very difficult to establish. As a result, hydraulic conductivity distributions inferred from geophysically derived porosity models must be calibrated using some measurement of aquifer response. This type of calibration is potentially very valuable as it may allow for transport predictions within the considered hydrological unit at locations where only geophysical measurements are available, thus reducing the number of well tests required and thereby the costs of management and remediation. Here, we explore this concept through a series of numerical experiments. Considering the case of porosity characterization in saturated heterogeneous aquifers using crosshole ground-penetrating radar and borehole porosity log data, we use tracer test measurements to calibrate a relationship between porosity and hydraulic conductivity that allows the best prediction of the observed hydrological behavior. To examine the validity and effectiveness of the obtained relationship, we examine its performance at alternate locations not used in the calibration procedure. Our results indicate that this methodology allows us to obtain remarkably reliable hydrological predictions throughout the considered hydrological unit based on the geophysical data only. This was also found to be the case when significant uncertainty was considered in the underlying relationship between porosity and hydraulic conductivity.
Abstract:
This paper presents thermal modeling for power management of a new three-dimensional (3-D) thinned-die stacking process. Besides the high concentration of power-dissipating sources, which is the direct consequence of the very attractive increase in integration efficiency, this new ultra-compact packaging technology can suffer from the poor thermal conductivity (about 700 times lower than that of silicon) of the benzocyclobutene (BCB) used as both adhesive and planarization layers in each level of the stack. Thermal simulation was conducted using a three-dimensional (3-D) FEM tool to analyze the specific behaviors of such a stacked structure and to optimize the design rules. This study first describes the heat transfer limitation along the vertical path, examining in particular the case of high-dissipating sources over a small area. First results of transient-regime characterization by means of a dedicated test device mounted in a single-level structure are presented. For the design optimization, the thermal draining capabilities of a copper grid or a full copper plate embedded in the intermediate layer of the stacked structure are evaluated as a function of the technological parameters and the physical properties. A benefit is shown for transverse heat extraction under the buffer devices, which dissipate most of the power and are generally localized in the peripheral zone, and for temperature uniformization, through a heat-spreading mechanism, in the localized regions where the attachment of the thin die is altered. Finally, all conclusions of this analysis are used for quantitative projections of the thermal performance of a first demonstrator based on a three-level stacking structure for a space application.
Abstract:
The Proctor test is time-consuming and requires sampling of several kilograms of soil. Proctor test parameters were predicted in Mollisols, Entisols and Vertisols of the Pampean region of Argentina under different management systems. They were estimated from a minimum number of readily available soil properties (soil texture, total organic C) and management (training data set; n = 73). The results were used to generate a soil compaction susceptibility model, which was subsequently validated using a second group of independent data (test data set; n = 24). Soil maximum bulk density was estimated as follows: Maximum bulk density (Mg m-3) = 1.4756 - 0.00599 total organic C (g kg-1) + 0.0000275 sand (g kg-1) + 0.0539 management. Management was equal to 0 for uncropped and untilled soils and 1 for conventionally tilled soils. The established models predicted the Proctor test parameters reasonably well, based on readily available soil properties. Tillage systems induced changes in the maximum bulk density regardless of total organic matter content or soil texture. The lower maximum apparent bulk density values under no-tillage require a revision of the relative compaction thresholds for different no-tillage crops.
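The fitted regression translates directly into a small calculation; the sketch below (Python) uses the coefficients reported above, with an illustrative function name.

def max_bulk_density(total_organic_c_g_kg: float, sand_g_kg: float, conventional_tillage: bool) -> float:
    """Predicted Proctor maximum bulk density (Mg m-3) from the fitted regression.

    management = 0 for uncropped and untilled soils, 1 for conventionally tilled soils.
    """
    management = 1 if conventional_tillage else 0
    return (1.4756
            - 0.00599 * total_organic_c_g_kg
            + 0.0000275 * sand_g_kg
            + 0.0539 * management)

# Example: 20 g/kg total organic C, 400 g/kg sand, conventionally tilled soil
print(round(max_bulk_density(20.0, 400.0, True), 3))  # ~1.421 Mg m-3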
Abstract:
It is well known nowadays that soil variability can influence crop yields. Therefore, to determine specific areas of soil management, we studied the Pearson and spatial correlations of rice grain yield with the organic matter content and pH of an Oxisol (Typic Acrustox) under no-tillage, in the 2009/10 growing season, in Selvíria, State of Mato Grosso do Sul, in the Brazilian Cerrado (longitude 51º24'21'' W, latitude 20º20'56'' S). The upland rice cultivar IAC 202 was used as the test plant. A geostatistical grid was installed for soil and plant data collection, with 120 sampling points in an area of 3.0 ha with a homogeneous slope of 0.055 m m-1. Rice grain yield, organic matter content, pH, potential acidity and aluminum content were analyzed in the 0-0.10 and 0.10-0.20 m soil layers. Spatially, two specific zones of agricultural land management were discriminated, differing in organic matter content and rice grain yield; with fertilization at variable rates in the second zone, a substantial increase in agricultural productivity can be obtained. The organic matter content was confirmed as a good indicator of soil quality when spatially correlated with rice grain yield.
Abstract:
Geographic information systems (GIS) and artificial intelligence (AI) techniques were used to develop an intelligent snow removal asset management system (SRAMS). The system has been evaluated through a case study examining snow removal from the roads in Black Hawk County, Iowa, for which the Iowa Department of Transportation (Iowa DOT) is responsible. The SRAMS comprises an expert system that contains the logical rules and expertise of the Iowa DOT's snow removal experts in Black Hawk County, and a geographic information system to access and manage road data. The system is implemented on a mid-range PC by integrating MapObjects 2.1 (a GIS package), Visual Rule Studio 2.2 (an AI shell), and Visual Basic 6.0 (a programming tool). The system could efficiently be used to generate prioritized snowplowing routes in visual format, to optimize the allocation of assets for plowing, and to track materials (e.g., salt and sand). A test of the system reveals an improvement in snowplowing time of 1.9 percent for moderate snowfall and 9.7 percent for snowstorm conditions over the current manual system.