Abstract:
The standard one-machine scheduling problem consists in scheduling a set of jobs on one machine, which can handle only one job at a time, minimizing the maximum lateness. Each job is available for processing at its release date, requires a known processing time and, after processing finishes, is delivered after a certain time. There can also exist precedence constraints between pairs of jobs, requiring that the first job be completed before the second job can start. An extension of this problem consists in assigning a time interval between the processing of the jobs associated with the precedence constraints, known as finish-start time-lags. In the presence of these constraints, the problem is NP-hard even if preemption is allowed. In this work, we consider a special case of the one-machine preemptive scheduling problem with time-lags, where the time-lags have a chain form, and propose a polynomial algorithm to solve it. The algorithm consists of a polynomial number of calls to the preemptive version of the Longest Tail Heuristic. One application of the method is to obtain lower bounds for NP-hard one-machine and job-shop scheduling problems. We present some computational results of this application, followed by some conclusions.
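The preemptive longest-tail rule mentioned above can be sketched as follows — a minimal illustration of the classical preemptive Schrage rule for one machine with release dates and delivery times (tails), without the time-lag extension that is the paper's actual contribution. Function and variable names are my own.

```python
import heapq

def preemptive_longest_tail(jobs):
    """Preemptive longest-tail rule for one-machine scheduling.

    jobs: list of (release, processing, tail) triples.
    Returns the objective max_j (C_j + q_j), i.e. completion plus tail.
    """
    order = sorted(range(len(jobs)), key=lambda j: jobs[j][0])
    remaining = [p for (_, p, _) in jobs]
    ready = []                       # max-heap on tail: (-tail, job)
    t, i, done, best, n = 0, 0, 0, 0, len(jobs)
    while done < n:
        # Admit every job released by the current time.
        while i < n and jobs[order[i]][0] <= t:
            j = order[i]
            heapq.heappush(ready, (-jobs[j][2], j))
            i += 1
        if not ready:
            t = jobs[order[i]][0]    # machine idles until next release
            continue
        _, j = ready[0]
        # Run the largest-tail job until it finishes or the next
        # release arrives (which may preempt it with a larger tail).
        next_release = jobs[order[i]][0] if i < n else float("inf")
        run = min(remaining[j], next_release - t)
        t += run
        remaining[j] -= run
        if remaining[j] == 0:
            heapq.heappop(ready)
            done += 1
            best = max(best, t + jobs[j][2])
    return best
```

In the two-job example below, job 1 (larger tail) preempts job 0 at its release date, which is exactly what makes the preemptive rule optimal for this relaxation.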
Abstract:
In this paper we propose a Pyramidal Classification Algorithm which, together with an appropriate aggregation index, produces an indexed pseudo-hierarchy (in the strict sense) without inversions or crossings. The computer implementation of the algorithm makes it possible to carry out simulation tests by Monte Carlo methods in order to study the efficiency and sensitivity of the pyramidal methods of the Maximum, the Minimum and UPGMA. The results shown in this paper may help to choose between the three classification methods proposed, in order to obtain the classification that best fits the original structure of the population, provided we have a priori information concerning this structure.
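The three aggregation indices compared above correspond to single, complete and average linkage. As an illustrative sketch (plain agglomerative clustering, not the pyramidal variant the paper introduces), the three indices differ only in how inter-cluster distance is computed:

```python
def agglomerate(dist, linkage="upgma"):
    """Agglomerative clustering with the three aggregation indices
    compared in the abstract: 'min' (single linkage / Minimum),
    'max' (complete linkage / Maximum) and 'upgma' (average linkage).

    dist: symmetric distance matrix (list of lists).
    Returns the merge history as (cluster_a, cluster_b, height) triples.
    """
    clusters = {i: [i] for i in range(len(dist))}
    merges, next_id = [], len(dist)

    def d(a, b):
        pairs = [dist[i][j] for i in clusters[a] for j in clusters[b]]
        if linkage == "min":
            return min(pairs)
        if linkage == "max":
            return max(pairs)
        return sum(pairs) / len(pairs)     # UPGMA: unweighted average

    while len(clusters) > 1:
        keys = list(clusters)
        h, a, b = min((d(a, b), a, b)
                      for ai, a in enumerate(keys) for b in keys[ai + 1:])
        clusters[next_id] = clusters.pop(a) + clusters.pop(b)
        merges.append((a, b, h))
        next_id += 1
    return merges
```

A proper hierarchy merges at non-decreasing heights; the pyramidal construction additionally allows overlapping clusters while excluding inversions and crossings.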
Abstract:
We present a simple randomized procedure for the prediction of a binary sequence. The algorithm uses ideas from recent developments of the theory of the prediction of individual sequences. We show that if the sequence is a realization of a stationary and ergodic random process, then the average number of mistakes converges, almost surely, to that of the optimum, given by the Bayes predictor.
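A standard building block in this line of work is randomized prediction with exponentially weighted experts. The sketch below illustrates that generic idea only — it is not the paper's specific procedure, and the expert interface and learning rate `eta` are my own choices.

```python
import math
import random

def predict_sequence(bits, experts, eta=0.5, seed=0):
    """Randomized exponentially-weighted prediction of a binary sequence.

    experts: functions mapping the history (tuple of past bits) to a
    prediction in {0, 1}. Returns the number of mistakes made.
    """
    rng = random.Random(seed)
    weights = [1.0] * len(experts)
    history, mistakes = [], 0
    for bit in bits:
        preds = [e(tuple(history)) for e in experts]
        # Follow one expert at random, with probability
        # proportional to its current weight.
        r = rng.random() * sum(weights)
        acc, choice = 0.0, 0
        for k, w in enumerate(weights):
            acc += w
            if r <= acc:
                choice = k
                break
        if preds[choice] != bit:
            mistakes += 1
        # Exponentially penalize the experts that erred.
        weights = [w * math.exp(-eta * (p != bit))
                   for w, p in zip(weights, preds)]
        history.append(bit)
    return mistakes
```

With a single always-correct expert the procedure never errs; with competing experts, the randomization is what yields the almost-sure average-mistake guarantees the abstract refers to.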
Abstract:
This paper compares two well-known scan matching algorithms: MbICP and pIC. As a result of this study, we propose the MSISpIC, a probabilistic scan matching algorithm for the localization of an Autonomous Underwater Vehicle (AUV). The technique uses range scans gathered with a Mechanical Scanning Imaging Sonar (MSIS) and the robot displacement estimated through dead-reckoning with the help of a Doppler Velocity Log (DVL) and a Motion Reference Unit (MRU). The proposed method is an extension of the pIC algorithm. Its major contributions consist in: 1) using an EKF to estimate the local path traveled by the robot while grabbing the scan, as well as its uncertainty, and 2) proposing a method to group all the data grabbed along the path described by the robot into a single scan with a convenient uncertainty model. The algorithm has been tested on an AUV guided along a 600 m path within a marina environment, with satisfactory results.
Abstract:
Nominal unification is an extension of first-order unification where terms can contain binders and unification is performed modulo α-equivalence. Here we prove that the existence of nominal unifiers can be decided in quadratic time. First, we give a linear-time reduction of nominal unification problems to a sequence of freshness constraints and equalities between atoms, modulo a permutation, using ideas of Paterson and Wegman for first-order unification. Second, we prove that solvability of these reduced problems can be checked in quadratic time. Finally, we point out how, using ideas of Brown and Tarjan for unbalanced merging, we could solve these reduced problems more efficiently.
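For context, the base problem being extended is classical first-order unification. The sketch below is the textbook substitution-based version with an occurs check — not the nominal extension, and not the linear-time Paterson–Wegman reduction the abstract builds on; the term encoding is my own.

```python
def unify(t1, t2, subst=None):
    """Naive first-order unification.

    Terms: variables are strings starting with '?', compound terms
    are tuples (functor, arg, ...), constants are plain strings.
    Returns a substitution dict, or None if no unifier exists.
    """
    if subst is None:
        subst = {}
    t1, t2 = resolve(t1, subst), resolve(t2, subst)
    if t1 == t2:
        return subst
    if isinstance(t1, str) and t1.startswith("?"):
        return bind(t1, t2, subst)
    if isinstance(t2, str) and t2.startswith("?"):
        return bind(t2, t1, subst)
    if isinstance(t1, tuple) and isinstance(t2, tuple) and len(t1) == len(t2):
        for a, b in zip(t1, t2):
            subst = unify(a, b, subst)
            if subst is None:
                return None
        return subst
    return None

def resolve(t, subst):
    # Chase variable bindings to the representative term.
    while isinstance(t, str) and t in subst:
        t = subst[t]
    return t

def occurs(v, t, subst):
    t = resolve(t, subst)
    return t == v or (isinstance(t, tuple)
                      and any(occurs(v, a, subst) for a in t))

def bind(v, t, subst):
    if occurs(v, t, subst):
        return None                 # occurs check: no unifier
    s = dict(subst)
    s[v] = t
    return s
```

Nominal unification adds atom permutations and freshness constraints on top of this machinery so that unification respects α-equivalence of binders.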
Abstract:
Summary Background: We previously derived a clinical prognostic algorithm to identify patients with pulmonary embolism (PE) who are at low risk of short-term mortality and could be safely discharged early or treated entirely in an outpatient setting. Objectives: To externally validate the clinical prognostic algorithm in an independent patient sample. Methods: We validated the algorithm in 983 consecutive patients prospectively diagnosed with PE at an emergency department of a university hospital. Patients with none of the algorithm's 10 prognostic variables (age >= 70 years, cancer, heart failure, chronic lung disease, chronic renal disease, cerebrovascular disease, pulse >= 110/min, systolic blood pressure < 100 mm Hg, oxygen saturation < 90%, and altered mental status) at baseline were defined as low-risk. We compared 30-day overall mortality among low-risk patients based on the algorithm between the validation and the original derivation sample. We also assessed the rate of PE-related and bleeding-related mortality among low-risk patients. Results: Overall, the algorithm classified 16.3% of patients with PE as low-risk. Mortality at 30 days was 1.9% among low-risk patients and did not differ between the validation and the original derivation sample. Among low-risk patients, only 0.6% died from definite or possible PE, and 0% died from bleeding. Conclusions: This study validates an easy-to-use clinical prognostic algorithm for PE that accurately identifies patients with PE who are at low risk of short-term mortality. Low-risk patients based on our algorithm are potential candidates for less costly outpatient treatment.
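Because the rule is simply "low-risk if none of the 10 variables is present", it translates directly into a predicate. The field names below are illustrative; the thresholds are the ones stated in the abstract.

```python
def is_low_risk(patient):
    """Classify a PE patient as low-risk when none of the ten
    prognostic variables of the algorithm is present at baseline.

    patient: dict with illustrative keys; booleans for comorbidities,
    numbers for vital signs (pulse in /min, BP in mm Hg, SaO2 in %).
    """
    high_risk = (
        patient["age"] >= 70
        or patient["cancer"]
        or patient["heart_failure"]
        or patient["chronic_lung_disease"]
        or patient["chronic_renal_disease"]
        or patient["cerebrovascular_disease"]
        or patient["pulse"] >= 110
        or patient["systolic_bp"] < 100
        or patient["oxygen_saturation"] < 90
        or patient["altered_mental_status"]
    )
    return not high_risk
```

Any single positive variable removes the patient from the low-risk group, which is what makes the rule easy to apply at the bedside.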
Abstract:
The development and tests of an iterative reconstruction algorithm for emission tomography based on Bayesian statistical concepts are described. The algorithm uses the entropy of the generated image as a prior distribution, can be accelerated by the choice of an exponent, and converges uniformly to feasible images by the choice of one adjustable parameter. A feasible image has been defined as one that is consistent with the initial data (i.e. it is an image that, if truly a source of radiation in a patient, could have generated the initial data by the Poisson process that governs radioactive disintegration). The fundamental ideas of Bayesian reconstruction are discussed, along with the use of an entropy prior with an adjustable contrast parameter, the use of likelihood with data increment parameters as conditional probability, and the development of the new fast maximum a posteriori with entropy (FMAPE) algorithm by the successive substitution method. It is shown that in the maximum likelihood estimator (MLE) and FMAPE algorithms, the only correct choice of initial image for the iterative procedure in the absence of a priori knowledge about the image configuration is a uniform field.
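To make the iterative setting concrete, here is a minimal MLEM (maximum-likelihood expectation-maximization) update — the plain ML iteration, not the FMAPE algorithm with its entropy prior — started from the uniform image the abstract identifies as the only correct initialization absent prior knowledge:

```python
def mlem(system, counts, n_iter=50):
    """One-dimensional MLEM reconstruction sketch.

    system: system matrix as a list of rows (one row per detector bin,
    one column per pixel). counts: measured counts per bin.
    """
    n_pix = len(system[0])
    image = [1.0] * n_pix                     # uniform starting image
    sens = [sum(row[j] for row in system) for j in range(n_pix)]
    for _ in range(n_iter):
        # Forward-project the current image estimate.
        proj = [sum(a * x for a, x in zip(row, image)) for row in system]
        # Back-project the measured/expected ratio.
        ratio = [c / p if p > 0 else 0.0 for c, p in zip(counts, proj)]
        back = [sum(row[j] * r for row, r in zip(system, ratio))
                for j in range(n_pix)]
        # Multiplicative update, normalized by pixel sensitivity.
        image = [x * b / s if s > 0 else 0.0
                 for x, b, s in zip(image, back, sens)]
    return image
```

The FMAPE variant modifies this fixed-point iteration with the entropy prior and its adjustable contrast parameter, solved by successive substitution.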
Abstract:
The possible association between the microquasar LS 5039 and the EGRET source 3EG J1824-1514 suggests that microquasars could also be sources of high-energy gamma-rays. In this paper, we explore, with a detailed numerical model, whether this system can produce the emission detected by EGRET (>100 MeV) through inverse Compton (IC) scattering. Our numerical approach considers a population of relativistic electrons entrained in a cylindrical inhomogeneous jet, interacting with both the radiation and the magnetic fields, taking into account the Thomson and Klein-Nishina regimes of interaction. The computed spectrum reproduces the observed spectral characteristics at very high energy.
Abstract:
Galactic microquasars are certainly one of the most recent additions to the field of high energy astrophysics and have attracted increasing interest over the last decade. However, the high energy part of the spectrum of microquasars is the most poorly known, mainly due to the lack of sensitive instrumentation in the past. Microquasars are now primary targets for all of the observatories working in the X-ray and gamma-ray domains. They also appear as the possible counterparts for some of the unidentified sources of high-energy gamma-rays detected by the EGRET experiment on board the COMPTON-GRO satellite. This paper provides a general review of the main observational results obtained up to now, as well as a summary of the current scenarios for the production of high-energy gamma-rays.
Abstract:
The possible associations between the microquasars LS 5039 and LS I +61 303 and the EGRET sources 3EG J1824-1514 and 3EG J0241+6103 suggest that microquasars could also be sources of high-energy gamma-rays. In this work, we present a detailed numerical inverse Compton (IC) model, based on a microquasar scenario, that reproduces the high-energy gamma-ray spectra and variability observed by EGRET for the mentioned sources. Our model considers a population of relativistic electrons entrained in a cylindrical inhomogeneous jet that interact through IC scattering with both the radiation and the magnetic fields.
Abstract:
We report millimetre-wave continuum observations of the X-ray binaries Cygnus X-3, SS 433, LSI+61 303, Cygnus X-1 and GRS 1915+105. The observations were carried out with the IRAM 30 m antenna at 250 GHz (1.25 mm) from 1998 March 14 to March 20. These millimetre measurements are complemented with centimetre observations from the Ryle Telescope at 15 GHz (2.0 cm) and from the Green Bank Interferometer at 2.25 and 8.3 GHz (13 and 3.6 cm). Both Cygnus X-3 and SS 433 underwent moderate flaring events during our observations, whose main spectral evolution properties are described and interpreted. A significant spectral steepening was observed in both sources during the flare decay, which is likely caused by adiabatic expansion, inverse Compton and synchrotron losses. Finally, we also report 250 GHz upper limits for three additional undetected X-ray binary stars: LSI+65 010, LSI+61 235 and X Per.
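The spectral steepening described above is conventionally quantified by the two-point spectral index alpha, with flux density S_nu proportional to nu**alpha. A minimal helper (standard formula; the sample values below are illustrative, not the paper's measurements):

```python
import math

def spectral_index(nu1, s1, nu2, s2):
    """Two-point spectral index alpha, with S_nu proportional to
    nu**alpha, from flux densities s1, s2 at frequencies nu1, nu2."""
    return math.log(s1 / s2) / math.log(nu1 / nu2)
```

A flat or inverted spectrum (alpha >= 0) between 15 and 250 GHz steepens to negative alpha during the flare decay as the emitting plasma expands and the electrons lose energy.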
Abstract:
The MAGIC collaboration has searched for high-energy gamma-ray emission of some of the most promising pulsar candidates above an energy threshold of 50 GeV, an energy not reachable up to now by other ground-based instruments. Neither pulsed nor steady gamma-ray emission has been observed at energies of 100 GeV from the classical radio pulsars PSR J0205+6449 and PSR J2229+6114 (and their nebulae 3C58 and Boomerang, respectively) and the millisecond pulsar PSR J0218+4232. Here, we present the flux upper limits for these sources and discuss their implications in the context of current model predictions.
Abstract:
We present optical spectroscopy of MWC 656 and MWC 148, the proposed optical counterparts of the gamma-ray sources AGL J2241+4454 and HESS J0632+057, respectively. The main parameters of the Halpha emission line (EW, FWHM and centroid velocity) in these stars are modulated on the proposed orbital periods of 60.37 and 321 days, respectively. These modulations are likely produced by the resonant interaction of the Be discs with compact stars in eccentric orbits. We also present radial velocity curves of the optical stars folded on the above periods and obtain the first orbital elements of the two gamma-ray sources, thus confirming their binary nature. Our orbital solutions support eccentricities of e~0.4 and e=0.83+-0.08 for MWC 656 and MWC 148, respectively. Further, our orbital elements imply that the X-ray outbursts in HESS J0632+057/MWC 148 are delayed ~0.3 orbital phases after periastron passage, similarly to the case of LS I +61 303. In addition, the optical photometric light curve maxima in AGL J2241+4454/MWC 656 occur ~0.25 phases past periastron, similar to what is seen in LS I +61 303. We also find that the orbital eccentricity is correlated with orbital period for the known gamma-ray binaries. This is explained by the fact that small stellar separations are required for the efficient triggering of VHE radiation. Another correlation, between the EW of Halpha and orbital period, is also observed, similarly to the case of Be/X-ray binaries. These correlations are useful to provide estimates of the key orbital parameters Porb and e from the Halpha line in future Be gamma-ray binary candidates.
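Folding a radial velocity curve on an orbital period, as done above, amounts to reducing each observation time to an orbital phase. A minimal helper (standard phase-folding arithmetic; the epoch convention is an assumption for illustration):

```python
def orbital_phase(t, t0, period):
    """Orbital phase in [0, 1) of time t, given a reference epoch t0
    (e.g. periastron passage) and the orbital period, in the same
    time units (here, days)."""
    return ((t - t0) / period) % 1.0
```

With periastron as phase zero, an X-ray outburst delayed by ~0.3 orbital phases, as reported for HESS J0632+057/MWC 148, simply lands at phase ~0.3 on the folded curve.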