183 results for Visual search method
in the Biblioteca Digital da Produção Intelectual da Universidade de São Paulo (BDPI/USP)
Abstract:
In the present study, we compared two methods for collecting ixodid ticks on the verges of animal trails in a primary Amazon forest area in northern Brazil. (i) Dragging: this method was based on passing a 1-m² white flannel over the vegetation and checking the flannel for caught ticks every 5-10 m. (ii) Visual search: this method consisted of looking for questing ticks on the tips of leaves of the vegetation bordering animal trails in the forest. A total of 103 adult ticks belonging to 4 Amblyomma species were collected by the visual search method on 5 collecting dates, whereas only 44 adult ticks belonging to 3 Amblyomma species were collected by dragging on 5 other collecting dates. These values were statistically different (Mann-Whitney test, P = 0.0472). On the other hand, dragging was more efficient for subadult ticks, since no larva or nymph was collected by visual search, whereas 18 nymphs and 7 larvae were collected by dragging. The visual search method proved suitable for collecting adult ticks in the Amazon forest; however, field studies should include a second method, such as dragging, in order to maximize the collection of subadult ticks. Indeed, these two methods can be performed by a single investigator at the same time, while he/she walks along an animal trail in the forest. (C) 2010 Elsevier GmbH. All rights reserved.
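The per-date comparison reported above is the kind of analysis a Mann-Whitney U test performs. The sketch below is illustrative only (not the authors' code) and uses hypothetical per-date counts:

    # Comparing per-date adult tick counts from two collection methods with a
    # Mann-Whitney U test; the counts below are hypothetical placeholders.
    from scipy.stats import mannwhitneyu

    visual_search_counts = [28, 17, 22, 19, 17]   # adults per collecting date (hypothetical)
    dragging_counts      = [6, 11, 9, 10, 8]      # adults per collecting date (hypothetical)

    stat, p_value = mannwhitneyu(visual_search_counts, dragging_counts,
                                 alternative="two-sided")
    print(f"U = {stat:.1f}, p = {p_value:.4f}")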
Abstract:
The aim was to investigate inter-tester and intra-tester reliability, and the parallel reliability between a visual assessment method and a pachymeter-based method, for locating the mid-point of the patella when determining medial/lateral patella orientation. Fifteen asymptomatic subjects were assessed, and the mid-point of the patella was determined by both methods on two separate occasions two weeks apart. Inter-tester reliability was assessed by ANOVA and by the intraclass correlation coefficient (ICC); intra-tester reliability by a paired t-test and ICC; and parallel reliability by Pearson's correlation and ICC, for the measurements at the first and second evaluations. There was acceptable inter-tester agreement (p = 0.490) and reliability for visual inspection (ICC = 0.747) and for the pachymeter (ICC = 0.716) at the second evaluation. Inter-tester reliability at the first evaluation was unacceptable (visual ICC = 0.604; pachymeter ICC = 0.612). Although measurements at the first and second evaluations were statistically similar for all testers, intra-tester reliability was unacceptable for both methods: visual (examiner 1 ICC = 0.175; examiner 2 ICC = 0.189; examiner 3 ICC = 0.155) and pachymeter (examiner 1 ICC = 0.214; examiner 2 ICC = 0.246; examiner 3 ICC = 0.069). Parallel reliability showed a strong correlation at the first evaluation (r = 0.828; p < 0.001) and at the second (r = 0.756; p < 0.001), and reliability was between acceptable and very good (ICC = 0.748-0.813). Both the visual and pachymeter methods provide reliable and similar measures of medial/lateral patella orientation and are reliable between different examiners, but the results of the two assessments taken two weeks apart showed unacceptable reliability. (C) 2009 Elsevier B.V. All rights reserved.
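For readers unfamiliar with the coefficient used above, the following is a minimal sketch of a two-way random-effects, absolute-agreement, single-rater ICC(2,1) in the Shrout and Fleiss formulation; the measurement matrix is a hypothetical placeholder (rows = subjects, columns = examiners), not the study's data:

    import numpy as np

    X = np.array([[10.2, 10.6,  9.9],
                  [12.1, 11.8, 12.4],
                  [ 9.5,  9.9,  9.7],
                  [11.0, 10.7, 11.3]])   # hypothetical mid-point locations (mm)
    n, k = X.shape

    grand   = X.mean()
    ss_rows = k * ((X.mean(axis=1) - grand) ** 2).sum()   # between-subjects
    ss_cols = n * ((X.mean(axis=0) - grand) ** 2).sum()   # between-examiners
    ss_err  = ((X - grand) ** 2).sum() - ss_rows - ss_cols

    msr = ss_rows / (n - 1)
    msc = ss_cols / (k - 1)
    mse = ss_err / ((n - 1) * (k - 1))

    icc_2_1 = (msr - mse) / (msr + (k - 1) * mse + k * (msc - mse) / n)
    print(f"ICC(2,1) = {icc_2_1:.3f}")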
Abstract:
We propose a robust, low-complexity scheme to estimate and track the carrier frequency of signals received under low signal-to-noise ratio (SNR) conditions in highly nonstationary channels. Such scenarios arise in planetary exploration missions subject to high dynamics, such as the Mars Exploration Rover missions. The method comprises a bank of adaptive linear predictors (ALP) supervised by a convex combiner that dynamically aggregates the individual predictors. The adaptive combination is able to outperform the best individual estimator in the set, which leads to a universal scheme for frequency estimation and tracking. A simple technique for bias compensation considerably improves the ALP performance. It is also shown that retrieving the frequency content by a fast Fourier transform (FFT) search, instead of only inspecting the angle of a particular root of the prediction-error filter, enhances performance, particularly at very low SNR levels. Simple techniques that enforce frequency continuity further improve the overall performance. In summary, we illustrate by extensive simulations that adaptive linear prediction methods render a robust and competitive frequency-tracking technique.
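As a rough illustration of the combination idea (assumptions of ours, not the paper's implementation), the sketch below blends two normalized-LMS linear predictors with different step sizes through a convex combiner whose mixing weight adapts from the combined prediction error; the test signal is a synthetic noisy sinusoid with drifting frequency:

    import numpy as np

    rng = np.random.default_rng(0)
    N, M = 4000, 8                                   # samples, predictor order
    f = 0.05 + 0.02 * np.linspace(0, 1, N)           # slowly drifting frequency (cycles/sample)
    x = np.cos(2 * np.pi * np.cumsum(f)) + 0.5 * rng.standard_normal(N)

    w1 = np.zeros(M); w2 = np.zeros(M)               # fast and slow predictors
    mu1, mu2, mu_a = 0.05, 0.005, 0.5
    a = 0.0                                          # combiner state, lambda = sigmoid(a)

    for n in range(M, N):
        u = x[n - M:n][::-1]                         # past samples, most recent first
        y1, y2 = w1 @ u, w2 @ u
        lam = 1.0 / (1.0 + np.exp(-a))
        y = lam * y1 + (1 - lam) * y2                # convex combination of predictions
        e1, e2, e = x[n] - y1, x[n] - y2, x[n] - y
        norm = u @ u + 1e-6
        w1 += mu1 * e1 * u / norm                    # normalized LMS updates
        w2 += mu2 * e2 * u / norm
        a += mu_a * e * (y1 - y2) * lam * (1 - lam)  # gradient step on the combiner

In the scheme described in the abstract, the instantaneous frequency would then be read off the prediction-error filter (e.g. via an FFT-based search over its response); that step is omitted here.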
Abstract:
A lot-sizing and scheduling problem prevalent in small market-driven foundries is studied. There are two related decision levels: (1) the furnace scheduling of metal alloy production, and (2) moulding machine planning, which specifies the type and size of production lots. A mixed integer programming (MIP) formulation of the problem is proposed, but it is impractical to solve in reasonable computing time for all but small instances. As a result, a faster relax-and-fix (RF) approach is developed, which can also be used on a rolling-horizon basis where only immediate-term schedules are implemented. As well as a MIP method to solve the basic RF approach, three variants of a local search method are developed and tested using instances based on the literature. Finally, foundry-based tests with a real order book resulted in a very substantial reduction of delivery delays and finished inventory, better use of capacity, and much faster schedule definition compared with the foundry's own practice. (c) 2006 Elsevier Ltd. All rights reserved.
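The relax-and-fix idea can be summarized by the schematic loop below. This is a generic sketch, not the authors' formulation: solve_lot_sizing is a hypothetical helper that builds and solves the MIP with only the periods in the current window kept integer, the remaining integrality constraints relaxed, and earlier decisions fixed.

    def relax_and_fix(num_periods, solve_lot_sizing, window=2):
        fixed = {}                                    # period -> fixed integer decisions
        for start in range(0, num_periods, window):
            integer_periods = range(start, min(start + window, num_periods))
            solution = solve_lot_sizing(integer_periods=integer_periods,
                                        fixed_decisions=fixed)
            for t in integer_periods:                 # freeze this window's decisions
                fixed[t] = solution[t]
        return fixed                                  # schedule built window by window

On a rolling horizon, only the earliest fixed window would actually be implemented before the procedure is rerun with updated data.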
Abstract:
Conventional procedures for modeling the viscoelastic properties of polymers rely on determining the polymer's discrete relaxation spectrum from experimentally obtained data. In the past decades, several analytical regression techniques have been proposed to determine an explicit equation that describes the measured spectra. Taking a different approach, the procedure introduced here constitutes a simulation-based computational optimization technique built on a non-deterministic search method from the field of evolutionary computation. Rather than comparing numerical results, the purpose of this paper is to highlight some subtle differences between the two strategies and to focus on which properties of the exploited technique emerge as new possibilities for the field. To illustrate this, the test cases show how the employed technique can outperform conventional approaches in terms of fitting quality. Moreover, in some instances it produces equivalent results with far fewer fitting parameters, which is convenient for computational simulation applications. The problem formulation and the rationale of the highlighted method are discussed herein and constitute the main intended contribution. (C) 2009 Wiley Periodicals, Inc. J Appl Polym Sci 113: 122-135, 2009
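In the spirit of the non-deterministic search described above, the following sketch fits a two-mode discrete relaxation spectrum (Prony series), G(t) = sum_i g_i * exp(-t / tau_i), to relaxation data with an evolutionary optimizer. The data, the two-mode model, and the parameter bounds are illustrative assumptions only:

    import numpy as np
    from scipy.optimize import differential_evolution

    t = np.logspace(-2, 3, 60)                                    # time points (s)
    g_true, tau_true = np.array([2.0e5, 5.0e4]), np.array([0.5, 50.0])
    G_data = (g_true[:, None] * np.exp(-t / tau_true[:, None])).sum(axis=0)
    G_data *= 1 + 0.02 * np.random.default_rng(1).standard_normal(t.size)  # 2% noise

    def misfit(params):
        g1, g2, tau1, tau2 = params
        G_model = g1 * np.exp(-t / tau1) + g2 * np.exp(-t / tau2)
        return np.sum((np.log(G_model) - np.log(G_data)) ** 2)    # log-space residual

    bounds = [(1e3, 1e6), (1e3, 1e6), (1e-2, 1e1), (1e0, 1e3)]    # g1, g2, tau1, tau2
    result = differential_evolution(misfit, bounds, seed=1)
    print(result.x)                                               # recovered parameters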
Abstract:
We have developed a new procedure to search for carbon-enhanced metal-poor (CEMP) stars from the Hamburg/ESO (HES) prism-survey plates. This method employs an extended line index for the CH G band, which we demonstrate to have superior performance when compared to the narrower G-band index formerly employed to estimate G-band strengths for these spectra. Although CEMP stars have been found previously among candidate metal-poor stars selected from the HES, the selection on metallicity undersamples the population of intermediate-metallicity CEMP stars (-2.5 <= [Fe/H] <= -1.0); such stars are of importance for constraining the onset of the s-process in metal-deficient asymptotic giant branch stars (thought to be associated with the origin of carbon for roughly 80% of CEMP stars). The new candidates also include substantial numbers of warmer carbon-enhanced stars, which were missed in previous HES searches for carbon stars due to selection criteria that emphasized cooler stars. A first subsample, biased toward brighter stars (B < 15.5), has been extracted from the scanned HES plates. After visual inspection (to eliminate spectra compromised by plate defects, overlapping spectra, etc., and to carry out rough spectral classifications), a list of 669 previously unidentified candidate CEMP stars was compiled. Follow-up spectroscopy for a pilot sample of 132 candidates was obtained with the Goodman spectrograph on the SOAR 4.1 m telescope. Our results show that most of the observed stars lie in the targeted metallicity range, and possess prominent carbon absorption features at 4300 angstrom. The success rate for the identification of new CEMP stars is 43% (13 out of 30) for [Fe/H] < -2.0. For stars with [Fe/H] < -2.5, the ratio increases to 80% (four out of five objects), including one star with [Fe/H] < -3.0.
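A line-strength index of the kind mentioned above compares the mean flux in a band centred on the CH G band (around 4300 Å) with the mean flux in neighbouring pseudo-continuum bands. The band edges below are placeholders for illustration; the actual limits of the extended index are defined in the paper, not here:

    import numpy as np

    def gband_index(wavelength, flux,
                    line_band=(4280.0, 4320.0),                       # placeholder limits (A)
                    cont_bands=((4210.0, 4270.0), (4330.0, 4390.0))):
        in_line = (wavelength >= line_band[0]) & (wavelength <= line_band[1])
        cont = np.zeros_like(wavelength, dtype=bool)
        for lo, hi in cont_bands:
            cont |= (wavelength >= lo) & (wavelength <= hi)
        # magnitude-like index: positive when the G band absorbs flux
        return -2.5 * np.log10(flux[in_line].mean() / flux[cont].mean())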
Abstract:
We measured the effects of epilepsy on visual contrast sensitivity to linear, vertical sine-wave gratings. Sixteen female adults, aged 21 to 50 years, comprised the sample in this study, including eight adults with generalized tonic-clonic seizure-type epilepsy and eight age-matched controls without epilepsy. Contrast threshold was measured using a temporal two-alternative forced-choice binocular psychophysical method at a distance of 150 cm from the stimuli, with a mean luminance of 40.1 cd/m². A one-way analysis of variance (ANOVA) applied to the linear contrast thresholds showed significant differences between groups (F[3,188] = 14.829; p < .05). Adults with epilepsy had higher contrast thresholds (by factors of 1.45, 1.04, and 1.18 at spatial frequencies of 0.25, 2.0, and 8.0 cycles per degree of visual angle, respectively). The Tukey Honestly Significant Difference post hoc test showed significant differences (p < .05) for all of the tested spatial frequencies. The largest difference between groups was at the lowest spatial frequency. Therefore, epilepsy may cause more damage to the neural pathways that process low spatial frequencies. However, epilepsy probably alters both the magnocellular visual pathway, which processes low spatial frequencies, and the parvocellular visual pathway, which processes high spatial frequencies. The experimental group had lower visual contrast sensitivity at all tested spatial frequencies.
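The group comparison described above is a standard one-way ANOVA on contrast thresholds. The sketch below is purely illustrative, with hypothetical threshold values rather than the study's data:

    from scipy.stats import f_oneway

    control_thresholds  = [0.010, 0.012, 0.011, 0.009, 0.013, 0.010, 0.011, 0.012]
    epilepsy_thresholds = [0.015, 0.017, 0.014, 0.016, 0.018, 0.015, 0.016, 0.017]

    F, p = f_oneway(control_thresholds, epilepsy_thresholds)
    print(f"F = {F:.2f}, p = {p:.4f}")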
Abstract:
Searches for field horizontal-branch (FHB) stars in the halo of the Galaxy have in the past been carried out by several techniques, such as objective-prism surveys and visual or infrared photometric surveys. By choosing adequate color criteria, it is possible to improve the efficiency of identifying bona fide FHB stars among other objects that exhibit similar characteristics, such as main-sequence A-type stars, blue stragglers, and subdwarfs. In this work, we report the results of a spectroscopic survey carried out near the south Galactic pole intended to validate FHB stars originally selected from the HK objective-prism survey of Beers and colleagues, based on near-infrared color indices. A comparison of the stellar spectra obtained in this survey with theoretical stellar atmosphere models allows us to determine T_eff, log g, and [Fe/H] for 13 stars in the sample. Stellar temperatures were calculated from the measured (B-V)_0 color when this measurement was available (16 stars). The color index criteria adopted in this work are shown to correctly classify 30% of the sample as FHB stars and 25% as non-FHB (main-sequence stars and subdwarfs), whereas 40% could not be distinguished between FHB and main-sequence stars. We compare the efficacy of different color criteria in the literature intended to select FHB stars, and discuss the use of the Mg II 4481 line to estimate the metallicity.
Abstract:
The relatively large number of nearby radio-quiet, thermally emitting isolated neutron stars (INSs) discovered in the ROSAT All-Sky Survey, dubbed the "Magnificent Seven", suggests that they belong to a formerly neglected major component of the overall INS population. So far, attempts to discover similar INSs beyond the solar vicinity have failed to confirm any reliable candidate. The good positional accuracy and soft X-ray sensitivity of the EPIC cameras onboard the XMM-Newton satellite allow us to search efficiently for new thermally emitting INSs. We used the 2XMMp catalogue to select sources with no catalogued candidate counterparts and with X-ray spectra similar to those of the Magnificent Seven, but seen at greater distances and thus undergoing higher interstellar absorption. Identifications in more than 170 astronomical catalogues and visual screening allowed us to select fewer than 30 good INS candidates. In order to rule out alternative identifications, we obtained deep ESO-VLT and SOAR optical imaging for the X-ray brightest candidates. We report here on the optical follow-up results of our search and discuss the possible nature of 8 of our candidates. A high X-ray-to-optical flux ratio together with a stable flux and a soft X-ray spectrum make the brightest source of our sample, 2XMM J104608.7-594306, a newly discovered thermally emitting INS. The X-ray source 2XMM J010642.3+005032 has no evident optical counterpart and should be further investigated. The remaining X-ray sources are most probably identified with cataclysmic variables and active galactic nuclei, as inferred from the colours and flux ratios of their likely optical counterparts. Beyond the finding of new thermally emitting INSs, our study aims at constraining the space density of this Galactic population at great distances and at determining whether their apparently high density is a local anomaly or not.
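The X-ray-to-optical flux ratio used to flag such candidates can be computed as in the sketch below. The constant follows one commonly quoted convention (Maccacaro et al. 1988, V band) and should be treated as an assumption here; thermally emitting INSs show log(f_X/f_opt) of several, whereas AGN and cataclysmic variables lie far lower. The numbers in the example are hypothetical:

    import math

    def log_fx_fv(fx_cgs, v_mag):
        # fx_cgs: X-ray flux in erg/cm^2/s; v_mag: optical V magnitude
        return math.log10(fx_cgs) + v_mag / 2.5 + 5.37

    # e.g. a faint optical counterpart (V ~ 26) with f_X ~ 1e-13 erg/cm^2/s
    print(log_fx_fv(1e-13, 26.0))   # roughly 2.8 with these hypothetical values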
Abstract:
This article presents an extensive investigation carried out in two technology-based companies of the São Carlos technological pole in Brazil. Based on this multiple case study and a literature review, a method applying agile project management (APM) principles, hereafter called IVPM2, was developed. After the method was implemented, a qualitative evaluation was carried out through document analysis and a questionnaire. This article shows that applying this method at the companies under investigation evidenced the benefits of combining simple, iterative, visual, and agile techniques for planning and controlling innovative product projects with traditional project management best practices, such as standardization.
Abstract:
Inverse analysis is currently an important subject of study in several fields of science and engineering. The identification of physical and geometric parameters from experimental measurements is required in many applications. In this work, a boundary element formulation to identify boundary and interface values, as well as material properties, is proposed. In particular, the proposed formulation is dedicated to identifying material parameters when a cohesive crack model is assumed for 2D problems. A computer code is developed and implemented using the BEM multi-region technique and regularisation methods to perform the inverse analysis. Several examples are shown to demonstrate the efficiency of the proposed model. (C) 2010 Elsevier Ltd. All rights reserved.
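The regularised identification step that such inverse analyses rely on can be sketched as below. This is a schematic stand-in, not the paper's BEM code: A is a hypothetical sensitivity matrix relating unknown parameters p to boundary measurements d (quantities a BEM forward model would provide), and zeroth-order Tikhonov regularisation stabilises the ill-posed least-squares problem:

    import numpy as np

    def tikhonov_solve(A, d, alpha):
        # minimise ||A p - d||^2 + alpha^2 ||p||^2
        n = A.shape[1]
        return np.linalg.solve(A.T @ A + alpha**2 * np.eye(n), A.T @ d)

    rng = np.random.default_rng(0)
    A = rng.standard_normal((40, 6))                    # hypothetical sensitivities
    p_true = np.array([1.0, 0.5, -0.3, 0.8, 0.0, 0.2])
    d = A @ p_true + 0.01 * rng.standard_normal(40)     # noisy "measurements"
    print(tikhonov_solve(A, d, alpha=0.1))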
Abstract:
High-density polyethylene resins have increasingly been used in the production of pipes for pressurized water and gas distribution systems and are expected to remain in service for several years, but they may fail prematurely by creep fracture. The usual standard methods for ranking resins in terms of their resistance to fracture are expensive and impractical for quality control purposes, justifying the search for alternative methods. The essential work of fracture (EWF) method provides a relatively simple procedure to characterize the fracture behavior of ductile polymers, such as polyethylene resins. In the present work, six resins were analyzed using the EWF methodology. The results show that the plastic work dissipation factor, βw_p, is the most reliable parameter for evaluating performance. Attention must be given to specimen preparation, which might otherwise result in excessive dispersion in the results, especially for the essential work of fracture w_e.
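For context, the EWF data reduction regresses the specific total work of fracture w_f measured on notched specimens against the ligament length l, following w_f = w_e + βw_p·l: the intercept gives the essential work w_e and the slope the plastic term βw_p. The sketch below uses hypothetical data purely to show the fit:

    import numpy as np

    ligament_mm  = np.array([5.0, 7.5, 10.0, 12.5, 15.0])       # ligament length l
    wf_kJ_per_m2 = np.array([48.0, 61.0, 72.0, 86.0, 97.0])     # w_f (hypothetical)

    slope, intercept = np.polyfit(ligament_mm, wf_kJ_per_m2, 1)
    print(f"w_e ~ {intercept:.1f} kJ/m^2, beta*w_p ~ {slope:.2f} kJ/m^2 per mm of ligament")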
Abstract:
In this work, a wide-ranging analysis of local search multiuser detection (LS-MUD) for direct-sequence code-division multiple-access (DS/CDMA) systems under multipath channels is carried out, considering the performance-complexity trade-off. The robustness of the LS-MUD to variations in loading, E_b/N_0, near-far effect, number of Rake receiver fingers, and errors in the channel coefficient estimates is verified. A comparative analysis of the bit error rate (BER) versus complexity trade-off is carried out for LS, genetic algorithm (GA), and particle swarm optimization (PSO) detectors. Based on the deterministic behavior of the LS algorithm, simplifications of the cost function calculation are also proposed, yielding more efficient algorithms (simplified and combined LS-MUD versions) and creating new perspectives for MUD implementation. The computational complexity is expressed in terms of the number of operations required to converge. Our conclusions point out that the simplified LS (s-LS) method is always more efficient, independent of the system conditions, achieving better performance with lower complexity than the other heuristic detectors. In addition, the deterministic strategy and the absence of input parameters make the s-LS algorithm the most appropriate for the MUD problem. (C) 2008 Elsevier GmbH. All rights reserved.
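As an illustrative stand-in for the LS-MUD discussed above (not the paper's exact algorithm), the sketch below implements a 1-opt local search for synchronous CDMA: starting from the sign of the matched-filter output y, it flips one bit at a time whenever the flip increases the log-likelihood metric Omega(b) = 2 b^T y - b^T R b, with R the signature cross-correlation matrix:

    import numpy as np

    def local_search_mud(y, R, max_sweeps=20):
        b = np.sign(y); b[b == 0] = 1                 # conventional-detector start
        cost = 2 * b @ y - b @ R @ b
        for _ in range(max_sweeps):
            improved = False
            for k in range(len(b)):
                b[k] = -b[k]                          # tentative single-bit flip
                new_cost = 2 * b @ y - b @ R @ b
                if new_cost > cost:
                    cost, improved = new_cost, True   # keep the flip
                else:
                    b[k] = -b[k]                      # undo the flip
            if not improved:
                break
        return b

The simplified variants mentioned in the abstract would compute the cost change of each flip incrementally rather than re-evaluating the full metric, which is where the complexity savings come from.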
Abstract:
Common sense tells us that the future is an essential element in any strategy. In addition, there is a good deal of literature on scenario planning, an important tool for considering the future in strategy making. However, in many organizations there is serious resistance to the development of scenarios, and they are not broadly implemented by companies. Yet even organizations that do not rely heavily on the development of scenarios do, in fact, construct visions to guide their strategies. It might then be asked: what happens when this vision is not consistent with the future? To address this problem, the present article proposes a method for checking the content and consistency of an organization's vision of the future, no matter how it was conceived. The proposed method is grounded in theoretical concepts from the field of futures studies, which are described in this article. This study was motivated by the search for new ways of improving and using scenario techniques as a method for making strategic decisions. The method was then tested on a company in the field of information technology in order to check its operational feasibility. The test showed that the proposed method is indeed operationally feasible and was capable of analyzing the vision of the company being studied, indicating both its shortcomings and points of inconsistency. (C) 2007 Elsevier Ltd. All rights reserved.
Abstract:
Image reconstruction using the EIT (Electrical Impedance Tomography) technique is a nonlinear, ill-posed inverse problem that demands a powerful direct or iterative method. A typical approach to solving the problem is to minimize an error functional using an iterative method. In this case, an initial solution close enough to the global minimum is mandatory to ensure convergence to the correct minimum within an appropriate time interval. The aim of this paper is to present a new, simple, and low-cost technique (quadrant-searching) to reduce the search space and consequently to obtain an initial solution for the inverse problem of EIT. This technique calculates the error functional for four different contrast distributions, placing a large prospective inclusion in each of the four quadrants of the domain. By comparing the four values of the error functional, it is possible to draw conclusions about the internal electrical contrast. For this purpose, we initially performed tests to assess the accuracy of the BEM (Boundary Element Method) when applied to the direct problem of EIT and to verify the behavior of the error functional surface in the search space. Finally, numerical tests were performed to verify the new technique.
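A schematic sketch of the quadrant-searching idea follows: evaluate the error functional for four trial conductivity distributions, each placing a large inclusion in one quadrant, and keep the quadrant with the smallest misfit as the starting guess for the iterative solver. Here forward_solve stands in for the BEM forward model and measured for the boundary voltage data; both are hypothetical placeholders:

    import numpy as np

    def quadrant_search(measured, forward_solve, background=1.0, inclusion=2.0):
        best_quadrant, best_error = None, np.inf
        for quadrant in ("Q1", "Q2", "Q3", "Q4"):
            trial = {"background": background,
                     "inclusion": inclusion,
                     "quadrant": quadrant}            # large trial inclusion in one quadrant
            predicted = forward_solve(trial)          # BEM solution of the direct problem
            error = np.sum((predicted - measured) ** 2)
            if error < best_error:
                best_quadrant, best_error = quadrant, error
        return best_quadrant, best_error              # initial guess for the iterative method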