935 results for System test complexity


Relevance:

30.00%

Publisher:

Abstract:

Writing unit tests for legacy systems is a key maintenance task. When writing tests for object-oriented programs, objects need to be set up and the expected effects of executing the unit under test need to be verified. If developers lack internal knowledge of a system, the task of writing tests is non-trivial. To address this problem, we propose an approach that exposes side effects detected in example runs of the system and uses these side effects to guide the developer when writing tests. We introduce a visualization called Test Blueprint, through which we identify what the required fixture is and what assertions are needed to verify the correct behavior of a unit under test. The dynamic analysis technique that underlies our approach is based both on tracing method executions and on tracking the flow of objects at runtime. To demonstrate the usefulness of our approach we present results from two case studies.
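
To make the idea concrete, a test of the kind the Test Blueprint is intended to guide might look as follows; the class, the recorded side effects, and the assertions are purely illustrative (they are not taken from the paper), but they show the pattern of building a fixture and asserting on each observed state change.

    import unittest

    class Account:
        """Hypothetical legacy class standing in for a unit under test."""
        def __init__(self, balance=0):
            self.balance = balance
            self.history = []

        def deposit(self, amount):
            # Side effects observed in an example run: the balance changes
            # and a new entry is appended to the history.
            self.balance += amount
            self.history.append(("deposit", amount))

    class AccountDepositTest(unittest.TestCase):
        def test_deposit_side_effects(self):
            # Fixture: the objects the example run showed to be required.
            account = Account(balance=100)

            account.deposit(25)  # execute the unit under test

            # One assertion per side effect exposed by the trace.
            self.assertEqual(account.balance, 125)
            self.assertEqual(account.history, [("deposit", 25)])

    if __name__ == "__main__":
        unittest.main()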

Relevance:

30.00%

Publisher:

Abstract:

Virtual machines emulating hardware devices are generally implemented in low-level languages, and in a low-level style, for performance reasons. This trend results in systems that are difficult to understand, difficult to extend, and hard to maintain. As new general techniques for virtual machines arise, it becomes harder to incorporate or test these techniques because of early design and optimization decisions. In this paper we show how such decisions can be postponed to later phases by separating virtual machine implementation issues from the high-level machine-specific model. We construct compact models of whole-system VMs in a high-level language, which exclude all low-level implementation details. We use the pluggable translation toolchain PyPy to translate those models to executables. During the translation process, the toolchain reintroduces the VM implementation and optimization details for specific target platforms. As a case study we implement an executable model of a hardware gaming device. We show that our approach to VM building increases understandability, maintainability and extendability while preserving performance.
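
As a rough sketch of what a compact, implementation-agnostic model of an emulated device might look like (the instruction set, register names, and class layout below are invented for illustration and are not the paper's code), the model captures only the machine semantics; details such as optimized dispatch or native code generation would be reintroduced later by a translation toolchain.

    class ToyCPU(object):
        """High-level model of a toy CPU: plain Python objects,
        no memory layout tricks, no hand-optimized dispatch."""

        def __init__(self, program):
            self.program = program   # list of (opcode, operand) pairs
            self.accumulator = 0
            self.pc = 0              # program counter
            self.halted = False

        def step(self):
            opcode, operand = self.program[self.pc]
            self.pc += 1
            if opcode == "LOAD":
                self.accumulator = operand
            elif opcode == "ADD":
                self.accumulator += operand
            elif opcode == "HALT":
                self.halted = True
            else:
                raise ValueError("unknown opcode: " + opcode)

        def run(self):
            while not self.halted:
                self.step()
            return self.accumulator

    if __name__ == "__main__":
        # 1 + 2 = 3 in the toy instruction set.
        print(ToyCPU([("LOAD", 1), ("ADD", 2), ("HALT", 0)]).run())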

Relevance:

30.00%

Publisher:

Abstract:

Virtual machines (VMs) emulating hardware devices are generally implemented in low-level languages for performance reasons. This results in unmaintainable systems that are difficult to understand. In this paper we report on our experience using the PyPy toolchain to improve the portability and reduce the complexity of whole-system VM implementations. As a case study we implement a VM prototype for a Nintendo Game Boy, called PyGirl, in which the high-level model is separated from low-level VM implementation issues. We shed light on the process of refactoring from a low-level VM implementation in Java to a high-level model in RPython. We show that our whole-system VM written with PyPy is significantly less complex than standard implementations, without substantial loss in performance.
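
A small sketch of the separation described here, using an invented fragment of an 8-bit CPU model (the opcodes and registers are only loosely modelled on the Game Boy and are not PyGirl's actual code): the instruction semantics are expressed as ordinary methods and a plain dispatch table, while the single low-level concern, 8-bit wrap-around, is isolated in one helper that a translation toolchain could later specialize.

    MASK_8BIT = 0xFF  # the one low-level detail kept in the model

    class EightBitCPU(object):
        def __init__(self):
            self.a = 0  # accumulator
            self.b = 0  # general-purpose register

        def _wrap(self, value):
            # 8-bit wrap-around arithmetic, isolated in one place.
            return value & MASK_8BIT

        def inc_a(self):       # e.g. opcode 0x3C
            self.a = self._wrap(self.a + 1)

        def add_a_b(self):     # e.g. opcode 0x80
            self.a = self._wrap(self.a + self.b)

        def execute(self, opcode):
            # The high-level model: a plain mapping from opcode to method.
            {0x3C: self.inc_a, 0x80: self.add_a_b}[opcode]()

    if __name__ == "__main__":
        cpu = EightBitCPU()
        cpu.a, cpu.b = 0xFF, 0x02
        cpu.execute(0x80)              # ADD wraps around past 0xFF
        print("0x%02X" % cpu.a)        # 0x01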

Relevance:

30.00%

Publisher:

Abstract:

The Chair of Transportation and Warehousing at the University of Dortmund, together with its industrial partner, has developed and implemented a decentralized control system based on embedded technology and Internet standards. This innovative, highly flexible system uses autonomous software modules to control the flow of unit loads in real time. The system is integrated into the Chair's test facility, which consists of a wide range of conveying and sorting equipment. It is built for proof-of-concept purposes and will be used for further research in the fields of decentralized automation and embedded controls. This presentation describes the implementation of this decentralized control system.
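
As a purely hypothetical illustration of what an autonomous module controlling a single divert point might decide locally (the message format and routing rule below are invented and do not describe the implemented system), each module could route a unit load using only knowledge of its own outlets:

    class DivertModule(object):
        """Autonomous controller for one conveyor divert point; it knows
        only its local outlets and their current queue lengths."""

        def __init__(self, name, outlets):
            self.name = name
            self.outlets = outlets  # outlet name -> current queue length

        def route(self, unit_load):
            # Local rule: prefer outlets on the load's route, break ties
            # by the shortest local queue.
            candidates = [o for o in unit_load["route"] if o in self.outlets]
            target = min(candidates or self.outlets, key=lambda o: self.outlets[o])
            self.outlets[target] += 1
            return target

    if __name__ == "__main__":
        module = DivertModule("D1", {"sorter_A": 3, "sorter_B": 1})
        print(module.route({"id": 42, "route": ["sorter_A", "sorter_B"]}))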

Relevance:

30.00%

Publisher:

Abstract:

The three-step test is central to the regulation of copyright limitations at the international level. Delineating the room for exemptions with abstract criteria, the three-step test is by far the most important and comprehensive basis for the introduction of national use privileges. It is an essential, flexible element in the international limitation infrastructure that allows national law makers to satisfy domestic social, cultural, and economic needs. Given the universal field of application that follows from the test's open-ended wording, the provision creates much more breathing space than the more specific exceptions recognized in international copyright law. EC copyright legislation, however, fails to take advantage of the flexibility inherent in the three-step test. Instead of using the international provision as a means to open up the closed EC catalogue of permissible exceptions, offer sufficient breathing space for social, cultural, and economic needs, and enable EC copyright law to keep pace with the rapid development of the Internet, the Copyright Directive 2001/29/EC encourages the application of the three-step test to further restrict statutory exceptions that are often defined narrowly in national legislation anyway. In the current online environment, however, enhanced flexibility in the field of copyright limitations is indispensable. From a social and cultural perspective, the web 2.0 promotes and enhances freedom of expression and information with its advanced search engine services, interactive platforms, and various forms of user-generated content. From an economic perspective, it creates a parallel universe of traditional content providers relying on copyright protection, and emerging Internet industries whose further development depends on robust copyright limitations. In particular, the newcomers in the online market – social networking sites, video forums, and virtual worlds – promise a remarkable potential for economic growth that has already attracted the attention of the OECD. Against this background, the time is ripe to debate the introduction of an EC fair use doctrine on the basis of the three-step test. Otherwise, EC copyright law is likely to frustrate important opportunities for cultural, social, and economic development. To lay the groundwork for the debate, the differences between the continental European and the Anglo-American approach to copyright limitations (section 1), and the specific merits of these two distinct approaches (section 2), will be discussed first. An analysis of current problems that have arisen under the present dysfunctional EC system (section 3) will then serve as a starting point for proposing an EC fair use doctrine based on the three-step test (section 4). Drawing conclusions, the international dimension of this fair use proposal will be considered (section 5).

Relevance:

30.00%

Publisher:

Abstract:

In recent years, additive manufacturing processes have developed into effective tools for the rapid development of products of almost arbitrary complexity. At the same time, they are expected to guarantee the reproducibility of parts as well as properties close or equal to those of series production. The variety and scope of applications, together with the large number of different additive manufacturing processes, call for adequate quality monitoring and quality control systems. One approach to the quality assessment of additive manufacturing processes is the introduction of a system of key figures (metrics). To this end, requirement profiles and quality characteristics for additively manufactured parts must first be defined; these are represented by test specimen geometries and classified by means of individual key figures. Within the scope of the investigations carried out, quality assessment based on test specimen geometries was qualified using the laser sintering process as an example. By influencing the process parameters, i.e. by deliberately introducing disturbance variables that, individually or in combination, can lead to impermissible quality fluctuations, the quality of the product can be assessed. The definition of individual key figures, which enables control and monitoring as well as the prediction of potential defects, offers essential possibilities for quality assessment. Combining them into an overall system of key figures is intended, on the one hand, to evaluate the process on the basis of the defined requirement profiles and, on the other hand, to derive a direct relationship between the selected disturbance variables and process variables so that part quality can be predicted in advance.
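
A minimal sketch of how individual key figures might be combined into such an overall indicator (the metric names, target values, tolerances, and weights below are invented for illustration and are not the ones defined in the study):

    def key_figure(measured, target, tolerance):
        """Single key figure: 1.0 at the target value, 0.0 at or beyond
        the permissible deviation."""
        deviation = abs(measured - target) / tolerance
        return max(0.0, 1.0 - deviation)

    def overall_index(figures, weights):
        """Weighted aggregation of individual key figures into one index."""
        total_weight = sum(weights.values())
        return sum(figures[name] * weights[name] for name in figures) / total_weight

    # Illustrative laser-sintered part: dimensional accuracy and relative density.
    figures = {
        "dimension": key_figure(measured=50.12, target=50.0, tolerance=0.3),
        "density":   key_figure(measured=0.96,  target=1.0,  tolerance=0.1),
    }
    weights = {"dimension": 2.0, "density": 1.0}
    print(round(overall_index(figures, weights), 3))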

Relevance:

30.00%

Publisher:

Abstract:

BACKGROUND Little is known about the vasomotor function of human coronary collateral vessels. The purpose of this study was to examine collateral flow under a strong sympathetic stimulus (cold pressor test, CPT). METHODS In 30 patients (62 +/- 12 years) with coronary artery disease, two subsequent coronary artery occlusions were performed, with the CPT applied at random during one of them. Two minutes before and during the 1-minute occlusion, the patient's hand was immersed in ice water. For the calculation of a perfusion pressure-independent collateral flow index (CFI), the aortic (Pao), the central venous (CVP) and the coronary wedge pressure (Poccl) were measured: CFI = (Poccl - CVP)/(Pao - CVP). RESULTS CPT led to an increase in Pao from 98 +/- 14 to 105 +/- 15 mm Hg (p = 0.002). Without and with CPT, CFI increased during occlusion from 14% +/- 10% to 16% +/- 10% (p = 0.03) and from 17% +/- 9% to 19% +/- 9% (p = 0.006), respectively, relative to normal flow. During CPT, CFI was significantly higher at the beginning as well as at the end of the occlusion compared to identical instants without CPT. CFI at the end of the control occlusion did not differ significantly from the CFI at the beginning of the occlusion with CPT. CONCLUSIONS During balloon occlusion, collateral flow increased due to collateral recruitment, independent of external sympathetic stimulation. Sympathetic stimulation using CPT additionally augmented collateral flow. The collateral-flow-increasing effect of CPT is comparable to the recruitment effect of the occlusion itself. This may reflect coronary collateral vasodilation mediated by the sympathetic nervous system.
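
Since the collateral flow index is a simple pressure ratio, a small worked example may help; the pressure values below are illustrative only and are not patient data from the study.

    def collateral_flow_index(p_occl, p_ao, cvp):
        """CFI = (Poccl - CVP) / (Pao - CVP): collateral flow relative to
        normal flow, independent of perfusion pressure."""
        return (p_occl - cvp) / (p_ao - cvp)

    # Illustrative pressures in mm Hg: wedge 20, aortic 100, central venous 5.
    print(round(collateral_flow_index(p_occl=20, p_ao=100, cvp=5), 2))  # 0.16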

Relevance:

30.00%

Publisher:

Abstract:

BACKGROUND AND STUDY AIMS Colorectal cancer (CRC) incidence ranks third among all cancers in Switzerland. Screening the general population could decrease CRC incidence and mortality. The aim of this study was to analyze the use of the fecal occult blood test (FOBT) and lower gastrointestinal endoscopy in a representative sample of the Swiss population aged ≥ 50 years. METHODS Data were analyzed from the 2007 Swiss Health Interview Survey and the prevalence estimates and 95 % confidence intervals were calculated based on all instances of lower gastrointestinal endoscopy and FOBT use, as well as on their use for CRC screening. Uni- and multivariate logistic regression analyses were performed to describe the association between screening use and sociodemographic characteristics, indicators of healthcare system use, and lifestyle factors. RESULTS In 2007, approximately 36 % of the surveyed people who were aged ≥ 50 years had previously undergone FOBT and approximately 30 % had previously undergone lower gastrointestinal endoscopy. CRC screening use was 7.7 % for FOBT (within the past year) and 6.4 % for lower gastrointestinal endoscopy (within the past 5 years). CRC screening by either method was 13 %. The major determinants of participation in CRC screening were found to be sex (male), physician visits during the past year (one or more), type of health insurance (private), and previous screening for other cancer types. CONCLUSIONS The results of the 2007 Swiss Health Interview Survey indicate rather low levels of FOBT and lower gastrointestinal endoscopy use. Furthermore, the results suggest disparities in the use of CRC screening.

Relevance:

30.00%

Publisher:

Abstract:

In a previous paper, we presented a proposed expansion of the National Guideline Clearinghouse (NGC) classification [1]. We performed a preliminary evaluation of the classification based on 100 guidelines randomly selected from the NGC collection. We found that 89 of the 100 guidelines could be assigned to a single guideline category. To test inter-observer agreement, twenty guidelines were also categorized by a second investigator. Agreement was found to be 40-90% depending on the axis, which compares favorably with agreement among MeSH indexers (30-60%) [2]. We conclude that categorization is feasible. Further research is needed to clarify axes with poor inter-observer agreement.
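
Because inter-observer agreement is commonly summarized with the Kappa statistic, a minimal sketch of that computation for two raters is shown below; the guideline category labels and counts are invented for illustration only.

    from collections import Counter

    def cohens_kappa(rater_a, rater_b):
        """Cohen's kappa: chance-corrected agreement between two raters."""
        n = len(rater_a)
        observed = sum(a == b for a, b in zip(rater_a, rater_b)) / n
        freq_a, freq_b = Counter(rater_a), Counter(rater_b)
        expected = sum(freq_a[c] * freq_b[c] for c in freq_a) / (n * n)
        return (observed - expected) / (1 - expected)

    # Two investigators categorizing the same ten guidelines (labels invented).
    first  = ["prevention", "therapy", "therapy", "diagnosis", "therapy",
              "prevention", "diagnosis", "therapy", "therapy", "prevention"]
    second = ["prevention", "therapy", "diagnosis", "diagnosis", "therapy",
              "prevention", "diagnosis", "therapy", "prevention", "prevention"]
    print(round(cohens_kappa(first, second), 2))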

Relevance:

30.00%

Publisher:

Abstract:

OBJECTIVES: To detect the influence of blood contamination (BC) on the bond strength (BS) of a self-etching bonding system (SES) to enamel and dentine. METHODS: 25 human molars were longitudinally sectioned on the mesio-distal axis in order to obtain 50 specimens, which were embedded in acrylic resin. At first, the specimens were ground to expose a flat surface of enamel, and a bond strength test was performed. Afterwards, the samples were ground again in order to obtain a flat surface of dentine. Ten groups (total: n=100) were assigned according to substrate (enamel and dentine), the step in the bonding sequence at which contamination occurred (before the acidic primer or after the bonding resin), and contamination treatment (drying, or rinsing and drying). Fresh human blood was introduced either before or after SES application (Clearfil SE Bond) and treated with air drying, or by rinsing and drying following application. Composite resin (Filtek Z-250, 3M ESPE) was applied as inverted, truncated cured cones that were debonded in tension. RESULTS: The mean tensile BS values (MPa) for enamel/dentine were 19.4/23.0 and 17.1/10.0 for the rinse-and-dry treatment (contamination before and after SES, respectively), while the values for the dry treatment were 16.2/23.3 and 0.0/0.0 (contamination before and after SES, respectively). CONCLUSIONS: It was determined that blood contamination impaired adhesion to enamel and dentine when it occurred after light curing of the bonding resin. Among the tested contamination treatments, the rinse-and-dry treatment produced the highest bond strength with BC after SES application, but it was not sufficient to recover the BS observed in the contamination-free group.

Relevance:

30.00%

Publisher:

Abstract:

Heat shock protein 70 (Hsp70) plays a central role in protein homeostasis and quality control in conjunction with other chaperone machines, including Hsp90. The Hsp110 chaperone Sse1 promotes Hsp90 activity in yeast, and functions as a nucleotide exchange factor (NEF) for cytosolic Hsp70, but the precise roles Sse1 plays in client maturation through the Hsp70-Hsp90 chaperone system are not fully understood. We find that upon pharmacological inhibition of Hsp90, a model protein kinase, Ste11DeltaN, is rapidly degraded, whereas heterologously expressed glucocorticoid receptor (GR) remains stable. Hsp70 binding and nucleotide exchange by Sse1 was required for GR maturation and signaling through endogenous Ste11, as well as to promote Ste11DeltaN degradation. Overexpression of another functional NEF partially compensated for loss of Sse1, whereas the paralog Sse2 fully restored GR maturation and Ste11DeltaN degradation. Sse1 was required for ubiquitinylation of Ste11DeltaN upon Hsp90 inhibition, providing a mechanistic explanation for its role in substrate degradation. Sse1/2 copurified with Hsp70 and other proteins comprising the "early-stage" Hsp90 complex, and was absent from "late-stage" Hsp90 complexes characterized by the presence of Sba1/p23. These findings support a model in which Hsp110 chaperones contribute significantly to the decision made by Hsp70 to fold or degrade a client protein.

Relevance:

30.00%

Publisher:

Abstract:

BACKGROUND: Follow-up of abnormal outpatient laboratory test results is a major patient safety concern. Electronic medical records can potentially address this concern through automated notification. We examined whether automated notifications of abnormal laboratory results (alerts) in an integrated electronic medical record resulted in timely follow-up actions. METHODS: We studied 4 alerts: hemoglobin A1c ≥15%, positive hepatitis C antibody, prostate-specific antigen ≥15 ng/mL, and thyroid-stimulating hormone ≥15 mIU/L. An alert tracking system determined whether the alert was acknowledged (ie, provider clicked on and opened the message) within 2 weeks of transmission; acknowledged alerts were considered read. Within 30 days of result transmission, record review and provider contact determined follow-up actions (eg, patient contact, treatment). Multivariable logistic regression models analyzed predictors for lack of timely follow-up. RESULTS: Between May and December 2008, 78,158 tests (hemoglobin A1c, hepatitis C antibody, thyroid-stimulating hormone, and prostate-specific antigen) were performed, of which 1163 (1.48%) were transmitted as alerts; 10.2% of these (119/1163) were unacknowledged. Timely follow-up was lacking in 79 (6.8%), and was statistically not different for acknowledged and unacknowledged alerts (6.4% vs 10.1%; P = .13). Of 1163 alerts, 202 (17.4%) arose from unnecessarily ordered (redundant) tests. Alerts for a new versus known diagnosis were more likely to lack timely follow-up (odds ratio 7.35; 95% confidence interval, 4.16-12.97), whereas alerts related to redundant tests were less likely to lack timely follow-up (odds ratio 0.24; 95% confidence interval, 0.07-0.84). CONCLUSIONS: Safety concerns related to timely patient follow-up remain despite automated notification of non-life-threatening abnormal laboratory results in the outpatient setting.
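
A schematic sketch of the threshold rule and acknowledgment window described above (the thresholds are those listed in the abstract; the data structures and classification logic are invented for illustration and do not describe the studied EMR):

    from datetime import datetime, timedelta

    # Alert-triggering results studied in the paper.
    ALERT_RULES = {
        "hemoglobin_a1c": lambda v: v >= 15.0,       # %
        "psa":            lambda v: v >= 15.0,       # ng/mL
        "tsh":            lambda v: v >= 15.0,       # mIU/L
        "hepatitis_c_ab": lambda v: v == "positive",
    }

    ACK_WINDOW = timedelta(days=14)        # acknowledged within 2 weeks = "read"
    FOLLOW_UP_WINDOW = timedelta(days=30)  # follow-up expected within 30 days

    def needs_alert(test_name, value):
        rule = ALERT_RULES.get(test_name)
        return bool(rule and rule(value))

    def alert_status(transmitted, acknowledged=None, followed_up=None):
        """Classify one alert along the two outcomes analyzed in the study."""
        unacknowledged = acknowledged is None or acknowledged - transmitted > ACK_WINDOW
        lacks_follow_up = followed_up is None or followed_up - transmitted > FOLLOW_UP_WINDOW
        return {"unacknowledged": unacknowledged, "lacks_timely_follow_up": lacks_follow_up}

    if __name__ == "__main__":
        sent = datetime(2008, 6, 1)
        print(needs_alert("hemoglobin_a1c", 15.8))                        # True
        print(alert_status(sent, acknowledged=sent + timedelta(days=3)))  # follow-up still missing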

Relevance:

30.00%

Publisher:

Abstract:

BACKGROUND: Given the fragmentation of outpatient care, timely follow-up of abnormal diagnostic imaging results remains a challenge. We hypothesized that an electronic medical record (EMR) that facilitates the transmission and availability of critical imaging results through either automated notification (alerting) or direct access to the primary report would eliminate this problem. METHODS: We studied critical imaging alert notifications in the outpatient setting of a tertiary care Department of Veterans Affairs facility from November 2007 to June 2008. Tracking software determined whether the alert was acknowledged (ie, health care practitioner/provider [HCP] opened the message for viewing) within 2 weeks of transmission; acknowledged alerts were considered read. We reviewed medical records and contacted HCPs to determine timely follow-up actions (eg, ordering a follow-up test or consultation) within 4 weeks of transmission. Multivariable logistic regression models accounting for clustering effect by HCPs analyzed predictors for 2 outcomes: lack of acknowledgment and lack of timely follow-up. RESULTS: Of 123 638 studies (including radiographs, computed tomographic scans, ultrasonograms, magnetic resonance images, and mammograms), 1196 images (0.97%) generated alerts; 217 (18.1%) of these were unacknowledged. Alerts had a higher risk of being unacknowledged when the ordering HCPs were trainees (odds ratio [OR], 5.58; 95% confidence interval [CI], 2.86-10.89) and when dual-alert (>1 HCP alerted) as opposed to single-alert communication was used (OR, 2.02; 95% CI, 1.22-3.36). Timely follow-up was lacking in 92 (7.7% of all alerts) and was similar for acknowledged and unacknowledged alerts (7.3% vs 9.7%; P = .22). Risk for lack of timely follow-up was higher with dual-alert communication (OR, 1.99; 95% CI, 1.06-3.48) but lower when additional verbal communication was used by the radiologist (OR, 0.12; 95% CI, 0.04-0.38). Nearly all abnormal results lacking timely follow-up at 4 weeks were eventually found to have measurable clinical impact in terms of further diagnostic testing or treatment. CONCLUSIONS: Critical imaging results may not receive timely follow-up actions even when HCPs receive and read results in an advanced, integrated electronic medical record system. A multidisciplinary approach is needed to improve patient safety in this area.

Relevance:

30.00%

Publisher:

Abstract:

A patient classification system was developed integrating a patient acuity instrument with a computerized nursing distribution method based on a linear programming model. The system was designed for real-time measurement of patient acuity (workload) and allocation of nursing personnel to optimize the utilization of resources.

The acuity instrument was a prototype tool with eight categories of patients defined by patient severity and nursing intensity parameters. From this tool, the demand for nursing care was defined in patient points, with one point equal to one hour of RN time. Validity and reliability of the instrument were determined as follows: (1) content validity by a panel of expert nurses; (2) predictive validity through a paired t-test analysis of preshift and postshift categorization of patients; (3) initial reliability by a one-month pilot of the instrument in a practice setting; and (4) interrater reliability by the Kappa statistic.

The nursing distribution system was a linear programming model using a branch-and-bound technique for obtaining integer solutions. The objective function was to minimize the total number of nursing personnel used by optimally assigning the staff to meet the acuity needs of the units.

A penalty weight was used as a coefficient of the objective function variables to define priorities for the allocation of staff. The demand constraints were requirements to meet the total acuity points needed for each unit and to have a minimum number of RNs on each unit. Supply constraints were: (1) the total availability of each type of staff and the value of that staff member (value was determined relative to that type of staff's ability to perform the job function of an RN, i.e., eight hours of RN time = 8 points, LVN = 6 points); and (2) the number of personnel available for floating between units.

The capability of the model to assign staff quantitatively and qualitatively equal to the manual method was established by a thirty-day comparison. Sensitivity testing demonstrated appropriate adjustment of the optimal solution to changes in penalty coefficients in the objective function and to acuity totals in the demand constraints.

Further investigation of the model documented correct adjustment of assignments in response to staff value changes, and cost minimization through the addition of a dollar coefficient to the objective function.
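
To make the structure of such a model concrete, here is a compact sketch of an integer program of the same general shape, written with the open-source PuLP modelling library; the unit names, acuity demands, staff values, and availabilities are invented, and the penalty weights, floating staff, and dollar coefficients of the original model are omitted.

    from pulp import LpProblem, LpMinimize, LpVariable, lpSum, value

    units = ["ICU", "MedSurg"]
    staff_value = {"RN": 8, "LVN": 6}           # points delivered per 8-hour shift
    acuity_demand = {"ICU": 40, "MedSurg": 28}  # required acuity points per unit
    available = {"RN": 6, "LVN": 4}             # staff available per type
    min_rn = {"ICU": 2, "MedSurg": 1}           # minimum RNs required per unit

    prob = LpProblem("nurse_allocation", LpMinimize)

    # Integer decision variables: number of each staff type assigned to each unit.
    assign = {(s, u): LpVariable("n_%s_%s" % (s, u), lowBound=0, cat="Integer")
              for s in staff_value for u in units}

    # Objective: minimize total personnel used (penalty weights omitted here).
    prob += lpSum(assign.values())

    for u in units:
        # Demand constraints: meet each unit's acuity points and RN minimum.
        prob += lpSum(staff_value[s] * assign[(s, u)] for s in staff_value) >= acuity_demand[u]
        prob += assign[("RN", u)] >= min_rn[u]

    for s in staff_value:
        # Supply constraints: do not exceed the available staff of each type.
        prob += lpSum(assign[(s, u)] for u in units) <= available[s]

    prob.solve()
    for key in sorted(assign):
        print(key, int(value(assign[key])))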

Relevance:

30.00%

Publisher:

Abstract:

The previously described Nc5-specific PCR test for the diagnosis of Neospora caninum infections was used to develop a quantitative PCR assay which allows the determination of infection intensities within different experimental and diagnostic sample groups. The quantitative PCR was performed by using a dual fluorescent hybridization probe system and the LightCycler Instrument for online detection of amplified DNA. This assay was successfully applied for demonstrating the parasite proliferation kinetics in organotypic slice cultures of rat brain which were infected in vitro with N. caninum tachyzoites. This PCR-based method of parasite quantitation with organotypic brain tissue samples can be regarded as a novel ex vivo approach for exploring different aspects of cerebral N. caninum infection.