910 results for test case generation
Abstract:
Purpose: To evaluate retinal nerve fiber layer measurements with time-domain (TD) and spectral-domain (SD) optical coherence tomography (OCT), and to test the diagnostic ability of both technologies in glaucomatous patients with asymmetric visual hemifield loss. Methods: 36 patients with primary open-angle glaucoma with visual field loss in one hemifield (affected) and no loss in the other (non-affected), and 36 age-matched healthy controls, had the study eye imaged with Stratus-OCT (Carl Zeiss Meditec Inc., Dublin, California, USA) and 3D OCT-1000 (Topcon, Tokyo, Japan). Peripapillary retinal nerve fiber layer measurements and normative classification were recorded. Total deviation values were averaged in each hemifield (hemifield mean deviation) for each subject. Visual field and retinal nerve fiber layer "asymmetry indexes" were calculated as the ratio between affected versus non-affected hemifields and corresponding hemiretinas. Results: Retinal nerve fiber layer measurements in non-affected hemifields (mean [SD] 87.0 [17.1] µm and 84.3 [20.2] µm, for TD and SD-OCT, respectively) were thinner than in controls (119.0 [12.2] µm and 117.0 [17.7] µm, P<0.001). The optical coherence tomography normative database classified 42% and 67% of hemiretinas corresponding to non-affected hemifields as abnormal in TD and SD-OCT, respectively (P=0.01). Retinal nerve fiber layer measurements were consistently thicker with TD than with SD-OCT. The retinal nerve fiber layer thickness asymmetry index was similar in TD (0.76 [0.17]) and SD-OCT (0.79 [0.12]) and significantly greater than the visual field asymmetry index (0.36 [0.20], P<0.001). Conclusions: Normal hemifields of glaucoma patients had a thinner retinal nerve fiber layer than healthy eyes, as measured by TD and SD-OCT. Retinal nerve fiber layer measurements were thicker with TD than SD-OCT. SD-OCT detected abnormal retinal nerve fiber layer thickness more often than TD-OCT.
Abstract:
A long-standing problem when testing from a deterministic finite state machine is to guarantee full fault coverage even if the faults introduce extra states in the implementations. It is well known that such tests should include the sequences in a traversal set which contains all input sequences of length defined by the number of extra states. This paper suggests the SPY method, which helps reduce the length of tests by distributing sequences of the traversal set and reducing test branching. It is also demonstrated that an additional assumption about the implementation under test relaxes the requirement of the complete traversal set. The results of the experimental comparison of the proposed method with an existing method indicate that the resulting reduction can reach 40%. Experimental results suggest that the additional assumption about the implementation can help in further reducing the test suite length. Copyright (C) 2011 John Wiley & Sons, Ltd.
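The traversal set mentioned above, i.e. all input sequences whose length is bounded by the number of extra states assumed in the implementation, can be illustrated with a minimal sketch (this is only the classical traversal-set notion, not the SPY method itself; the alphabet and bound below are hypothetical examples):

```python
from itertools import product

def traversal_set(alphabet, max_len):
    """All non-empty input sequences of length <= max_len.

    In m-complete test generation the bound max_len is determined by
    the number of extra states the implementation is assumed to have,
    which is why the traversal set grows exponentially with m.
    """
    return [seq
            for length in range(1, max_len + 1)
            for seq in product(alphabet, repeat=length)]

# Binary input alphabet, bound 2: 2 sequences of length 1 plus 4 of length 2.
seqs = traversal_set(["a", "b"], 2)
print(len(seqs))  # 6
```

The exponential size of this set is precisely what makes the distribution and branching-reduction strategy described in the abstract worthwhile.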
Abstract:
Objective: To evaluate serum concentrations of CA-125 and soluble CD-23 and to correlate them with clinical symptoms, the localization and stage of pelvic endometriosis, and the histological classification of the disease. Methods: Blood samples were collected from 44 women with endometriosis and 58 without endometriosis, during the first three days (1st sample) and during the 7th, 8th and 9th days (2nd sample) of the menstrual cycle. Measurements of CA-125 and soluble CD-23 were performed by ELISA. The Mann-Whitney U test was used for age, pain evaluations (visual analog scale) and biomarker concentrations. Results: Serum levels of CA-125 were higher in endometriosis patients than in the control group during both periods of the menstrual cycle evaluated in the study. This marker was also elevated in women with chronic pelvic pain, deep dyspareunia (2nd sample), dysmenorrhea (both samples) and painful defecation during the menstrual flow (2nd sample). CA-125 concentration was higher in advanced stages of the disease in both samples, and also in women with ovarian endometrioma. Concerning CD-23, no statistically significant differences were observed between groups. Conclusion: The concentrations of CA-125 were higher in patients with endometriosis than in patients without the disease. No significant differences were observed in soluble CD-23 levels between groups.
Abstract:
Background: Accurate malaria diagnosis is mandatory for the treatment and management of severe cases. Moreover, individuals with asymptomatic malaria are not usually screened by health care facilities, which further complicates disease control efforts. The present study compared the performance of a malaria rapid diagnostic test (RDT), the thick blood smear method and nested PCR for the diagnosis of symptomatic malaria in the Brazilian Amazon. In addition, an innovative computational approach was tested for the diagnosis of asymptomatic malaria. Methods: The study was divided into two parts. In the first part, passive case detection was performed in 311 individuals with malaria-related symptoms from a recently urbanized community in the Brazilian Amazon. A cross-sectional investigation compared the diagnostic performance of the Optimal-IT RDT, nested PCR and light microscopy. The second part of the study involved active case detection of asymptomatic malaria in 380 individuals from riverine communities in Rondônia, Brazil. The performance of microscopy, nested PCR and an expert computational system based on artificial neural networks (MalDANN) using epidemiological data was compared. Results: Nested PCR served as the gold standard for the diagnosis of both symptomatic and asymptomatic malaria because it detected the greatest number of cases and had the highest specificity. Surprisingly, the RDT was superior to microscopy in the diagnosis of cases with low parasitaemia. Nevertheless, the RDT could not discriminate the Plasmodium species in 12 cases of mixed infection (Plasmodium vivax + Plasmodium falciparum). Moreover, microscopy performed poorly in the detection of asymptomatic cases (61.25% of diagnoses correct). The MalDANN system using epidemiological data alone performed worse than light microscopy (56% of diagnoses correct).
However, when information on plasma levels of interleukin-10 and interferon-gamma was included as input, MalDANN performance increased markedly (80% of diagnoses correct). Conclusions: An RDT for malaria diagnosis may find promising use in the Brazilian Amazon as part of a rational diagnostic approach. Despite the low performance of the MalDANN test using epidemiological data alone, an approach based on neural networks may be feasible where simpler methods for discriminating individuals below and above threshold cytokine levels are available.
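The gain reported when cytokine levels are added can be pictured with a minimal single-neuron (logistic) classifier. Everything below is a hypothetical illustration of the kind of model a system such as MalDANN implements: the weights, bias, cutoff and input scales are invented for the sketch and are not the study's parameters.

```python
import math

def logistic_score(il10, ifn_gamma, weights=(0.8, 0.6), bias=-2.0):
    """Hypothetical single-neuron score: sigmoid of a weighted sum of
    plasma IL-10 and IFN-gamma levels (arbitrary units, made-up weights)."""
    z = weights[0] * il10 + weights[1] * ifn_gamma + bias
    return 1.0 / (1.0 + math.exp(-z))

def classify(il10, ifn_gamma, cutoff=0.5):
    """Flag a sample as a likely asymptomatic infection above the cutoff."""
    return logistic_score(il10, ifn_gamma) >= cutoff

print(classify(4.0, 1.0))  # True: elevated cytokines push the score up
print(classify(0.5, 0.5))  # False: low cytokine levels stay below cutoff
```

A real network would learn such weights from labelled field data; the point of the sketch is only that adding discriminative inputs (here, the two cytokines) is what moves a classifier of this shape above chance.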
Abstract:
Background: In areas with limited infrastructure for microscopy diagnosis, rapid diagnostic tests (RDTs) have been demonstrated to be effective. Methods: The cost-effectiveness of the OptiMal® RDT and thick smear microscopy was estimated and compared. Data were collected in remote areas of 12 municipalities in the Brazilian Amazon. Data sources included the National Malaria Control Programme of the Ministry of Health, the National Healthcare System reimbursement table, hospitalization records, primary data collected from the municipalities, and the scientific literature. The perspective was that of the Brazilian public health system, the analytical horizon ran from the onset of fever until the diagnostic result was provided to the patient, and the temporal reference was the year 2006. The results were expressed as costs per adequately diagnosed case in 2006 U.S. dollars. Sensitivity analysis was performed on key model parameters. Results: In the base-case scenario, assuming 92% and 95% sensitivity of thick smear microscopy for Plasmodium falciparum and Plasmodium vivax, respectively, and 100% specificity for both species, thick smear microscopy is more costly and more effective, with an incremental cost estimated at US$549.9 per adequately diagnosed case. In the sensitivity analysis, when the sensitivity and specificity of microscopy for P. vivax were 0.90 and 0.98, respectively, and its sensitivity for P. falciparum was 0.83, the RDT was more cost-effective than microscopy. Conclusion: Microscopy is more cost-effective than OptiMal® in these remote areas if its high accuracy is maintained in the field. The decision to use rapid tests for malaria diagnosis in these areas depends on the accuracy of microscopy currently achieved in the field.
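The "incremental cost per adequately diagnosed case" used above is a standard incremental cost-effectiveness ratio: the extra cost of the more effective strategy divided by its extra effect. A minimal sketch, with illustrative numbers rather than the study's data:

```python
def icer(cost_new, cost_old, effect_new, effect_old):
    """Incremental cost-effectiveness ratio: extra cost incurred per
    extra unit of effect (here, per adequately diagnosed case) when
    switching from the old strategy to the new one."""
    return (cost_new - cost_old) / (effect_new - effect_old)

# Illustrative values: microscopy costs more but diagnoses more cases.
extra_cost_per_case = icer(cost_new=12000.0, cost_old=9000.0,
                           effect_new=105.0, effect_old=100.0)
print(round(extra_cost_per_case, 1))  # 600.0 dollars per extra case
```

Whether such a ratio is acceptable is a policy decision; the sensitivity analysis in the abstract shows how the ranking flips once microscopy's field accuracy degrades.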
Abstract:
Judo competitions are divided into weight classes. However, most athletes reduce their body weight in the few days before competition in order to gain a competitive advantage over lighter opponents. To achieve rapid weight reduction, athletes use a number of aggressive nutritional strategies, and many of them place their health at high risk. A similar problem has been observed in collegiate wrestling, where three wrestlers died in 1997 as a result of rapid weight loss regimens. After these deaths, the National Collegiate Athletic Association implemented a successful weight management program which was proven to improve weight management behavior. No similar program has ever been discussed by judo federations, even though judo competitors display a comparably inappropriate pattern of weight control. In view of this, the basis for a weight control program is provided in this manuscript, as follows: competition should begin within 1 hour after weigh-in, at the latest; each athlete is allowed to be weighed in only once; rapid weight loss as well as artificial rehydration (i.e., saline infusion) methods are prohibited during the entire competition day; athletes should pass a hydration test for their weigh-in to be validated; an individual minimum competitive weight (male athletes competing at no less than 7% and females at no less than 12% body fat) should be determined at the beginning of each season; and athletes are not allowed to compete in any weight class that requires weight reductions greater than 1.5% of body weight per week. In parallel, educational programs should aim to increase athletes', coaches' and parents' awareness of the risks of aggressive nutritional strategies as well as healthier ways to properly manage body weight.
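The two quantitative rules above (a body-fat floor of 7%/12% and a maximum loss rate of 1.5% of body weight per week) can be combined into a simple eligibility check. This is a hypothetical sketch, not part of the proposed program: the function, its weekly-compounding assumption and the example inputs are all invented for illustration.

```python
def allowed_competition_weight(current_weight, body_fat_frac, sex, weeks_to_event):
    """Lowest competition weight permitted under the two proposed rules:
    - minimum competitive weight at >= 7% body fat (men) / 12% (women);
    - no more than 1.5% of body weight lost per week (compounded here,
      an assumption made for the sketch).
    Returns the binding (higher) of the two lower bounds, in kg.
    """
    fat_floor = 0.07 if sex == "M" else 0.12
    lean_mass = current_weight * (1.0 - body_fat_frac)
    min_weight_by_fat = lean_mass / (1.0 - fat_floor)
    min_weight_by_rate = current_weight * (1.0 - 0.015) ** weeks_to_event
    return max(min_weight_by_fat, min_weight_by_rate)

# Hypothetical male athlete: 80 kg at 12% body fat, 4 weeks out.
# The body-fat floor (~75.7 kg) binds before the 1.5%/week rate does.
print(round(allowed_competition_weight(80.0, 0.12, "M", 4), 1))  # 75.7
```

Close to the event the rate limit dominates; far from it, the body-fat floor does, which is why the proposal determines the minimum competitive weight at the start of each season.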
Abstract:
Background: To determine the possible genotoxic effect of exposure to the smoke generated by biomass burning on workers involved in manual sugar cane harvesting. Methods: The frequency of micronuclei in exfoliated buccal cells and peripheral blood lymphocytes was determined in sugar cane workers in the Barretos region of Brazil during the harvest season, and compared to that of a control population comprised of administrative employees of Barretos Cancer Hospital. Results: The frequency of micronuclei was higher in the sugar cane workers. The mean frequency in blood lymphocytes (micronuclei/1000 cells) was 8.22 in the test group versus 1.27 in the control group. The same effect was observed in exfoliated buccal cells (22.75 and 9.70 micronuclei/1000 cells for sugar cane workers and controls, respectively). Conclusion: Exposure to emissions produced by the burning of sugar cane during harvesting induces genomic instability in workers, indicating the necessity of adopting more advanced techniques of harvesting sugar cane to preserve human health.
Abstract:
Grade: Matrícula de Honor (highest distinction)
Abstract:
Oceanic eddy generation by tall deep-water islands is a common phenomenon. It is recognized that these eddies may have a significant impact on the marine system and related biogeochemical fluxes. Hence, it is important to establish the conditions favourable to their generation. With this objective, we present an observational study of eddy generation mechanisms at tall deep-water islands, using the island of Gran Canaria as a case study. Observations show that the main generation mechanism is topographic forcing, which leads to eddy generation when the incident oceanic flow is sufficiently intense. Wind shear in the island wake may act only as an additional eddy-generation trigger when the impinging oceanic flow is not sufficiently intense. For the island of Gran Canaria we have observed a mean of ten cyclonic eddies generated per year. Eddies are generated more frequently in summer, coinciding with intense Trade winds and the Canary Current.
Abstract:
The quality of temperature and humidity retrievals from the infrared SEVIRI sensors on the geostationary Meteosat Second Generation (MSG) satellites is assessed by means of a one-dimensional variational (1D-Var) algorithm. The study is performed with the aim of improving the spatial and temporal resolution of the observations available to feed analysis systems designed for high-resolution regional-scale numerical weather prediction (NWP) models. The non-hydrostatic forecast model COSMO (COnsortium for Small scale MOdelling), in the ARPA-SIM operational configuration, is used to provide background fields. Only clear-sky observations over sea are processed. An optimised 1D-Var set-up comprising the two water vapour channels and the three window channels is selected. It maximises the reduction of errors in the model backgrounds while ensuring ease of operational implementation through accurate bias correction procedures and correct radiative transfer simulations. The 1D-Var retrieval quality is first quantified in relative terms, using statistics to estimate the reduction in the background model errors. Additionally, the absolute retrieval accuracy is assessed by comparing the analysis with independent radiosonde and satellite observations. The inclusion of satellite data brings a substantial reduction in the warm and dry biases present in the forecast model. Moreover, it is shown that the retrieval profiles generated by the 1D-Var are well correlated with the radiosonde measurements. Subsequently, the 1D-Var technique is applied to two three-dimensional case studies: a false-alarm case that occurred in Friuli-Venezia-Giulia on 8 July 2004 and a heavy-precipitation case that occurred in the Emilia-Romagna region between 9 and 12 April 2005. The impact of satellite data for these two events is evaluated in terms of increments in the column-integrated water vapour and saturation water vapour, in the 2-metre temperature and specific humidity, and in the surface temperature.
To improve the 1D-Var technique, a method to calculate flow-dependent model error covariance matrices is also assessed. The approach employs members of an ensemble forecast system generated by perturbing the physical parameterisation schemes inside the model. The improved set-up, applied to the case of 8 July 2004, shows a substantially neutral impact.
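For reference, a 1D-Var retrieval of this kind minimises the standard variational cost function, which balances the departure from the model background against the departure from the observed radiances. The equation below uses the conventional data-assimilation notation and is supplied here as background, not quoted from the thesis:

```latex
J(\mathbf{x}) = \frac{1}{2}\,(\mathbf{x}-\mathbf{x}_b)^{\mathsf{T}}\,\mathbf{B}^{-1}\,(\mathbf{x}-\mathbf{x}_b)
              + \frac{1}{2}\,\bigl(\mathbf{y}-H(\mathbf{x})\bigr)^{\mathsf{T}}\,\mathbf{R}^{-1}\,\bigl(\mathbf{y}-H(\mathbf{x})\bigr)
```

Here x is the temperature/humidity profile being retrieved, x_b the COSMO background profile, B the background-error covariance matrix (the flow-dependent estimate described above replaces a static B), y the observed SEVIRI brightness temperatures, H the radiative transfer (observation) operator, and R the observation-error covariance matrix.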
Abstract:
Since the first underground nuclear explosion, carried out in 1958, the analysis of seismic signals generated by these sources has allowed seismologists to refine the travel times of seismic waves through the Earth and to verify the accuracy of location algorithms (the ground truth for these sources was often known). Long international negotiations have been devoted to limiting the proliferation and testing of nuclear weapons. In particular, the Comprehensive Nuclear-Test-Ban Treaty (CTBT) was opened for signature in 1996; although it has been signed by 178 States, it has not yet entered into force. The Treaty underlines the fundamental role of seismological observations in verifying its compliance, by detecting and locating seismic events and identifying the nature of their sources. A precise definition of the hypocentral parameters represents the first step in discriminating whether a given seismic event is natural or not. In case a specific event is deemed suspicious by the majority of the States Parties, the Treaty contains provisions for conducting an on-site inspection (OSI) in the area surrounding the epicenter of the event, located through the International Monitoring System (IMS) of the CTBT Organization. An OSI is supposed to include the use of passive seismic techniques in the area of the suspected clandestine underground nuclear test. In fact, high-quality seismological systems are thought to be capable of detecting and locating the very weak aftershocks triggered by underground nuclear explosions in the first days or weeks following the test. This PhD thesis deals with the development of two different seismic location techniques. The first, known as the double-difference joint hypocenter determination (DDJHD) technique, is aimed at locating closely spaced events at a global scale. The locations obtained by this method are characterized by high relative accuracy, although the absolute location of the whole cluster remains uncertain.
We eliminate this problem by introducing a priori information: the known location of a selected event. The second technique concerns reliable estimates of the back azimuth and apparent velocity of seismic waves from local events of very low magnitude recorded by a tripartite array at a very local scale. For both of the above-mentioned techniques, we have used cross-correlation among digital waveforms in order to minimize the errors linked to incorrect phase picking. The cross-correlation method relies on the similarity between the waveforms of a pair of events at the same station, at the global scale, and on the similarity between the waveforms of the same event at two different sensors of the tripartite array, at the local scale. After preliminary tests of the reliability of our location techniques based on simulations, we applied both methodologies to real seismic events. The DDJHD technique was applied to a seismic sequence that occurred in the Turkey-Iran border region, using data recorded by the IMS. Initially, the algorithm was applied to the differences among the original arrival times of the P phases, so cross-correlation was not used. We found that the considerable geometrical spreading noticeable in the standard locations (namely the locations produced by the analysts of the International Data Center (IDC) of the CTBT Organization, taken as our reference) was substantially reduced by the application of our technique. This is what we expected, since the methodology was applied to a sequence of events for which a real closeness among the hypocenters can be supposed, as they belong to the same seismic structure. Our results point out the main advantage of this methodology: the systematic errors affecting the arrival times have been removed, or at least reduced.
The introduction of cross-correlation did not bring evident improvements to our results: the two sets of locations (without and with the application of the cross-correlation technique) are very similar to each other. This suggests that the use of cross-correlation did not substantially improve the precision of the manual pickings. Probably the pickings reported by the IDC are good enough to make the random picking error less important than the systematic error on travel times. As a further explanation for the limited benefit of cross-correlation, it should be remarked that the events included in our data set generally do not have a good signal-to-noise ratio (SNR): the selected sequence is composed of weak events (magnitude 4 or smaller), and the signals are strongly attenuated because of the large distance between the stations and the hypocentral area. At the local scale, in addition to cross-correlation, we performed a signal interpolation in order to improve the time resolution. The algorithm so developed was applied to the data collected during an experiment carried out in Israel between 1998 and 1999. The results pointed out the following relevant conclusions: a) it is necessary to correlate waveform segments corresponding to the same seismic phases; b) it is not essential to select the exact first arrivals; and c) relevant information can also be obtained from the maximum-amplitude wavelet of the waveforms (particularly under bad SNR conditions). Another remarkable point of our procedure is that its application does not demand a long time to process the data, so the user can immediately check the results. During a field survey, this feature makes possible a quasi real-time check, allowing the immediate optimization of the array geometry if so suggested by the results at an early stage.
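The cross-correlation alignment used at both scales reduces to one idea: the lag that maximises the correlation between two digitised waveforms gives their relative time shift, independently of where the analyst picked the onset. A minimal pure-Python illustration of that idea (not the thesis code; real implementations interpolate for sub-sample resolution, as the local-scale work above does):

```python
def best_lag(x, y, max_lag):
    """Sample lag maximising the cross-correlation between traces x and y.

    A positive result means y is a delayed copy of x; the search is
    restricted to |lag| <= max_lag, as a picking window would be.
    """
    def corr(lag):
        # Zero-lag-normalisation is omitted: for a fixed pair of traces
        # the argmax over lags is unchanged.
        return sum(x[i] * y[i + lag]
                   for i in range(len(x))
                   if 0 <= i + lag < len(y))
    return max(range(-max_lag, max_lag + 1), key=corr)

# A pulse recorded on trace x arrives three samples later on trace y:
x = [0, 0, 1, 2, 1, 0, 0, 0, 0, 0]
y = [0, 0, 0, 0, 0, 1, 2, 1, 0, 0]
print(best_lag(x, y, 5))  # 3
```

Applied to the same phase of two nearby events at one station (global scale) or to the same event at two array sensors (local scale), this lag replaces the difference of two hand-picked onset times, removing the common picking error.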
Abstract:
Process algebraic architectural description languages provide a formal means for modeling software systems and assessing their properties. In order to bridge the gap between system modeling and system implementation, in this thesis an approach is proposed for automatically generating multithreaded object-oriented code from process algebraic architectural descriptions, in a way that preserves, under certain assumptions, the properties proved at the architectural level. The approach is divided into three phases, which are illustrated by means of a running example based on an audio processing system. First, we develop an architecture-driven technique for thread coordination management, which is completely automated through a suitable package. Second, we address the translation of the algebraically-specified behavior of the individual software units into thread templates, which will have to be filled in by the software developer according to certain guidelines. Third, we discuss performance issues related to the suitability of synthesizing monitors rather than threads from software unit descriptions that satisfy specific constraints. In addition to the running example, we present two case studies, about a video animation repainting system and the implementation of a leader election algorithm, in order to summarize the whole approach. The outcome of this thesis is the implementation of the proposed approach in a translator called PADL2Java and its integration in the architecture-centric verification tool TwoTowers.
Abstract:
The meccano method is a novel and promising mesh generation method for simultaneously creating adaptive tetrahedral meshes and volume parametrizations of a complex solid. We highlight the fact that the method requires minimum user intervention and has a low computational cost. The method builds a 3-D triangulation of the solid as a deformation of an appropriate tetrahedral mesh of the meccano. The new mesh generator combines an automatic parametrization of surface triangulations, a local refinement algorithm for 3-D nested triangulations and a simultaneous untangling and smoothing procedure. At present, the procedure is fully automatic for a genus-zero solid. In this case, the meccano can be a single cube. The efficiency of the proposed technique is shown with several applications...
Abstract:
Aim of the research: to develop a prototype of a homogeneous high-throughput screening (HTS) assay for the identification of novel integrin antagonists for the treatment of ocular allergy, and to better understand the mechanisms of the integrin-mediated antiallergic action of levocabastine. Results: This thesis provides evidence that, in a scintillation proximity assay (SPA), levocabastine (IC50=406 mM), but not the first-generation antihistamine chlorpheniramine, displaces [125I]fibronectin (FN) binding to human α4β1 integrin. This result is supported by flow cytometry analysis, where levocabastine antagonizes the binding of a primary antibody to integrin α4 expressed in Jurkat E6.1 cells. Levocabastine, but not chlorpheniramine, binds to α4β1 integrin and prevents eosinophil adhesion to VCAM-1, FN or human umbilical vein endothelial cells (HUVEC) cultured in vitro. Similarly, levocabastine affects αLβ2/ICAM-1-mediated adhesion of Jurkat E6.1 cells. Analyzing the supernatant of TNF-α-treated (24 h) eosinophilic cells (EoL-1), we report that levocabastine reduces the TNF-α-induced release of the cytokines IL-12p40, IL-8 and VEGF. Finally, in a model of allergic conjunctivitis, levocabastine eye drops (0.05%) reduced the clinical signs of the early- and late-phase reactions and the conjunctival expression of α4β1 integrin by reducing infiltrated eosinophils. Conclusions: SPA is a highly efficient and robust binding assay, amenable to automation, for screening novel integrin antagonists in an HTS setting. We propose that blockade of integrin-mediated cell adhesion might be a target of the anti-allergic action of levocabastine and may play a role in preventing eosinophil adhesion and infiltration in allergic conjunctivitis.
Abstract:
The work undertaken in this PhD thesis is aimed at the development and testing of an innovative methodology for the assessment of the vulnerability of coastal areas to catastrophic marine inundation (tsunami). Different approaches are used at different spatial scales and are applied to three different study areas: 1. the entire western coast of Thailand; 2. two selected coastal suburbs of Sydney, Australia; 3. the Aeolian Islands, in the South Tyrrhenian Sea, Italy. I have discussed each of these case studies in at least one scientific paper: one paper about the Thailand case study (Dall’Osso et al., in review-b), three papers about the Sydney applications (Dall’Osso et al., 2009a; Dall’Osso et al., 2009b; Dall’Osso and Dominey-Howes, in review) and one last paper about the work at the Aeolian Islands (Dall’Osso et al., in review-a). These publications represent the core of the present PhD thesis. The main topics dealt with are outlined and discussed in a general introduction, while the overall conclusions are outlined in the last section.