Abstract:
This paper presents a single-precision floating-point arithmetic unit supporting multiplication, addition, fused multiply-add, reciprocal, square-root and inverse square-root, with high performance and low resource usage. The design uses a piecewise 2nd-order polynomial approximation to implement the reciprocal, square-root and inverse square-root operations. The unit can be configured with any combination of these operations and computes any of them with a throughput of one operation per cycle. The unit's floating-point multiplier is also used to evaluate the polynomial approximation and to implement the fused multiply-add operation. We have compared our implementation with other state-of-the-art proposals, including the Xilinx Core-Gen operators, and conclude that the approach achieves high performance/area efficiency. © 2014 Technical University of Munich (TUM).
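The polynomial-approximation path can be sketched in software. The following Python fragment approximates 1/x over the normalised mantissa range [1, 2) with a piecewise 2nd-order polynomial; the segment count, the least-squares coefficient fit and the Horner evaluation are illustrative assumptions, since the paper's actual table sizes and (typically minimax) coefficients are not reproduced here.

```python
import numpy as np

# Sketch of a piecewise 2nd-order polynomial approximation of 1/x on [1, 2),
# the mantissa range of a normalised float. Segment count and coefficient
# fitting are illustrative assumptions, not the paper's actual design.
SEGMENTS = 64
edges = np.linspace(1.0, 2.0, SEGMENTS + 1)

# Fit c2*t^2 + c1*t + c0 per segment by least squares (hardware designs
# typically store minimax coefficients in a small ROM instead).
coeffs = []
for lo, hi in zip(edges[:-1], edges[1:]):
    x = np.linspace(lo, hi, 32)
    coeffs.append(np.polyfit(x - lo, 1.0 / x, 2))

def approx_recip(x: float) -> float:
    """Approximate 1/x for x in [1, 2) via table lookup + quadratic eval."""
    i = min(int((x - 1.0) * SEGMENTS), SEGMENTS - 1)  # segment index
    c2, c1, c0 = coeffs[i]
    t = x - edges[i]
    return (c2 * t + c1) * t + c0  # Horner form: two mults, two adds

x = 1.37
print(approx_recip(x), 1.0 / x)  # agree to roughly 1e-7 with 64 segments
```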
Abstract:
The urgent need to mitigate traffic problems such as accidents, road hazards, pollution and traffic jams has strongly driven the development of vehicular communications. DSRC (Dedicated Short Range Communications) is the technology of choice for vehicular communications, enabling real-time information exchange among vehicles, V2V (Vehicle-to-Vehicle), and between vehicles and infrastructure, V2I (Vehicle-to-Infrastructure). This paper presents a receiving antenna for a single-lane DSRC control unit. The antenna is a non-uniform array of five microstrip patches. The obtained beamwidth, bandwidth and circular-polarization quality, among other characteristics, are compatible with the DSRC standards, making this antenna suitable for the application. © 2014 IEEE.
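To illustrate how a non-uniform five-element array shapes the receive pattern, here is a small Python sketch of an array-factor calculation. The 5.8 GHz carrier, element weights and spacings are assumed values for demonstration only; the paper's actual patch geometry and feed network are not reproduced.

```python
import numpy as np

# Illustrative array-factor calculation for a five-element non-uniform
# linear array. Weights and spacings below are assumptions, not the
# paper's design.
c = 3e8
f = 5.8e9                      # carrier assumed in the 5.8 GHz DSRC band
lam = c / f
weights = np.array([0.5, 0.8, 1.0, 0.8, 0.5])              # assumed taper
positions = np.array([0.0, 0.45, 0.95, 1.45, 1.9]) * lam   # assumed spacing

theta = np.radians(np.linspace(-90, 90, 721))
k = 2 * np.pi / lam
# AF(theta) = sum_n w_n * exp(j * k * d_n * sin(theta))
af = np.abs(weights @ np.exp(1j * k * np.outer(positions, np.sin(theta))))
af_db = 20 * np.log10(af / af.max())

# Rough -3 dB beamwidth estimate from the normalised pattern
above = theta[af_db >= -3.0]
print(f"approx. 3 dB beamwidth: {np.degrees(above.max() - above.min()):.1f} deg")
```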
Abstract:
This paper presents a computational tool (PHEx), developed in Excel VBA, for solving sizing and rating design problems involving Chevron-type plate heat exchangers (PHE) with a 1-pass/1-pass configuration. The rating procedure used in the program is outlined, and a case study is presented to show how the program can be used to carry out sensitivity analyses of several dimensional parameters of the PHE and to observe their effect on transferred heat and pressure drop.
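The rating step can be illustrated with a generic effectiveness-NTU calculation for a 1-pass/1-pass counterflow arrangement. This Python sketch takes the overall heat transfer coefficient U and area A as assumed inputs; PHEx's own Chevron-plate correlations for film coefficients and pressure drop are not reproduced here.

```python
import math

# Minimal rating-style calculation for a counterflow plate heat exchanger
# using the effectiveness-NTU method. U and A are assumed inputs.
def rate_counterflow(U, A, m_hot, cp_hot, m_cold, cp_cold, T_hot_in, T_cold_in):
    C_hot, C_cold = m_hot * cp_hot, m_cold * cp_cold
    C_min, C_max = min(C_hot, C_cold), max(C_hot, C_cold)
    Cr = C_min / C_max
    NTU = U * A / C_min
    if abs(Cr - 1.0) < 1e-9:
        eff = NTU / (1.0 + NTU)
    else:
        eff = (1 - math.exp(-NTU * (1 - Cr))) / (1 - Cr * math.exp(-NTU * (1 - Cr)))
    Q = eff * C_min * (T_hot_in - T_cold_in)      # transferred heat, W
    return Q, T_hot_in - Q / C_hot, T_cold_in + Q / C_cold

Q, T_hot_out, T_cold_out = rate_counterflow(
    U=3500.0, A=2.0, m_hot=1.2, cp_hot=4180.0,
    m_cold=1.0, cp_cold=4180.0, T_hot_in=80.0, T_cold_in=20.0)
print(f"Q = {Q/1000:.1f} kW, outlets: {T_hot_out:.1f} °C / {T_cold_out:.1f} °C")
```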
Abstract:
A new method is proposed to control delayed transitions towards extinction in single-population, discrete-time theoretical models undergoing saddle-node bifurcations. The control method takes advantage of the delaying properties of the saddle remnant arising after the bifurcation, and allows populations to be sustained indefinitely. Our method, which is shown to work for deterministic and stochastic systems, could be applied generally to avoid transitions tied to one-dimensional maps after saddle-node bifurcations.
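The mechanism can be illustrated with the saddle-node normal-form map, used here as a stand-in for the population models in the paper. The reset rule in the sketch is an illustrative control exploiting the slow passage through the saddle remnant (the "ghost"), not the authors' exact scheme.

```python
import numpy as np

# 1-D map just past a saddle-node bifurcation: x -> x + x**2 + mu.
# For mu > 0 the fixed points have vanished, but a slow "ghost" region
# near x = 0 delays the transition.
mu = 1e-4
def step(x):
    return x + x**2 + mu

# Uncontrolled orbit: lingers near the ghost, then escapes (extinction).
x, n = -0.5, 0
while x < 1.0:
    x, n = step(x), n + 1
print(f"uncontrolled escape after {n} steps (~pi/sqrt(mu) = {np.pi/np.sqrt(mu):.0f})")

# Controlled orbit: when the state leaves the slow bottleneck, reset it to
# the bottleneck entrance, sustaining the population indefinitely.
x = -0.5
for n in range(100_000):
    x = step(x)
    if x > 0.1:          # left the saddle remnant
        x = -0.1         # reinjection into the bottleneck (control action)
print(f"controlled state after 100000 steps: x = {x:.3f}")
```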
Abstract:
Density-dependent effects, both positive and negative, can have an important impact on the population dynamics of species by modifying their per-capita population growth rates. An important class of such density-dependent factors is the so-called Allee effects, widely studied in theoretical and field population biology. In this study, we analyze two discrete single-population models with overcompensating density dependence and Allee effects due to predator saturation and mating limitation, using symbolic dynamics theory. We focus on the scenarios of persistence and bistability, in which the species dynamics can be chaotic. For the chaotic regimes, we compute the topological entropy as well as the Lyapunov exponent under key ecological parameters and different initial conditions. We also provide codimension-two bifurcation diagrams for both systems, computing the periods of the orbits and characterizing the period-ordering routes toward the boundary crisis responsible for species extinction via transient chaos. Our results show that the topological entropy increases as we approach the parametric regions involving transient chaos, being maximal when the full shift RL^∞ occurs and the system enters the essential extinction regime. Finally, we characterize analytically, using a complex-variable approach, and numerically the inverse square-root scaling law arising in the vicinity of the saddle-node bifurcation responsible for the extinction scenario in the two studied models. The results are discussed in the context of species fragility under differential Allee effects. (C) 2011 Elsevier Ltd. All rights reserved.
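A Lyapunov exponent computation of the kind used to characterise the chaotic regimes can be sketched as follows. The map combines Ricker-type overcompensation with a mating-limitation Allee factor and uses assumed parameters; it is an illustrative stand-in, not necessarily the exact models analysed in the paper.

```python
import numpy as np

# Lyapunov exponent for a discrete single-population map with Ricker
# overcompensation and a mating-limitation Allee factor x/(x + theta).
# Model form and parameters are assumed for illustration.
r, theta = 2.7, 0.02

def f(x):
    return x * np.exp(r * (1.0 - x)) * x / (x + theta)

def lyapunov(x0, n_transient=1_000, n_sample=100_000, h=1e-8):
    x = x0
    for _ in range(n_transient):                 # discard the transient
        x = f(x)
    acc = 0.0
    for _ in range(n_sample):
        if x < 1e-12:                            # orbit collapsed: extinction
            return float("-inf")
        dfdx = (f(x + h) - f(x - h)) / (2 * h)   # central-difference derivative
        acc += np.log(abs(dfdx))
        x = f(x)
    return acc / n_sample

lam = lyapunov(0.5)
print(f"Lyapunov exponent ~ {lam:.3f} ({'chaotic' if lam > 0 else 'non-chaotic'})")
```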
Abstract:
We show that a light charged Higgs boson signal via tau(+/-)nu decay can be established at the Large Hadron Collider (LHC) also in the case of single top production. This process complements searches for the same signal from charged Higgs bosons emerging from t t̄ production. The accessible models include the Minimal Supersymmetric Standard Model (MSSM) as well as a variety of 2-Higgs Doublet Models (2HDMs). High energies and luminosities are, however, required, thereby restricting interest in this mode to the case of the LHC running at 14 TeV in its design configuration.
Abstract:
We present a generator for single top-quark production via flavour-changing neutral currents. The MEtop event generator allows for next-to-leading-order direct top production (pp -> t) and leading-order production of several other single top processes. A few packages with definite sets of dimension-six operators are available. We discuss how to improve the bounds on the effective operators and how well new physics can be probed with each set of independent dimension-six operators.
Abstract:
Brain dopamine transporter imaging by Single Photon Emission Computed Tomography (SPECT) with 123I-FP-CIT (DaTScan™) has become an important tool in the diagnosis and evaluation of parkinsonian syndromes. This diagnostic method allows the visualization of a portion of the striatum, where the healthy pattern resembles two symmetric commas, and thereby the evaluation of the presynaptic dopaminergic system, in which dopamine is released into the synaptic cleft and dopamine transporters are responsible for its reuptake into the nigrostriatal nerve terminals, where it is stored or degraded. In daily practice, the assessment of DaTScan™ studies commonly relies on visual inspection alone. However, this process is complex and subjective, as it depends on the observer's experience and is associated with high intra- and inter-observer variability. Studies have shown that semiquantification can improve the diagnosis of parkinsonian syndromes. Semiquantification requires image-segmentation analysis methods based on regions of interest (ROI): ROIs are drawn over specific (striatum) and nonspecific (background) uptake areas, and specific binding ratios are then calculated. The low adherence to semiquantification in the diagnosis of parkinsonian syndromes is related not only to the time it takes, but also to the need for a database of reference values adapted to the population concerned and to each department's examination protocol. Studies have concluded that this process increases the reproducibility of semiquantification. The aim of this investigation was to create and validate a database of healthy controls for dopamine transporter imaging with DaTScan™, named DBRV. The created database has been adapted to the Nuclear Medicine Department's protocol and to the population of the Infanta Cristina Hospital in Badajoz, Spain.
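The semiquantification step reduces to computing specific binding ratios from ROI mean counts, SBR = (striatal - background) / background. The ROI values in this Python sketch are made-up numbers standing in for counts extracted from the SPECT slices.

```python
# Illustrative semiquantification: specific binding ratios (SBR) from ROI
# mean counts. The values below are invented stand-ins for counts measured
# in segmented SPECT slices.
roi_means = {
    "right_striatum": 142.0,   # assumed mean counts per voxel
    "left_striatum": 138.5,
    "background": 55.2,        # nonspecific uptake region
}

bkg = roi_means["background"]
for side in ("right_striatum", "left_striatum"):
    sbr = (roi_means[side] - bkg) / bkg
    print(f"{side}: SBR = {sbr:.2f}")

# A patient's SBR can then be compared against a healthy-control database
# (mean +/- SD for the matching age group) to flag abnormal uptake.
```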
Abstract:
Introduction: University students are frequently exposed to events that can cause stress and anxiety, producing elevated cardiovascular responses. Repeated exposure to academic stress has implications for students' success and well-being and may contribute to the development of long-term health problems. Objective: To identify stress levels and coping strategies in university students and to assess the impact of the stress experience on heart rate variability (HRV). Methods: 17 university students, aged 19-23 years, completed the University Students Stress Inventory, the Depression Anxiety Stress Scales and the Ways of Coping Questionnaire. Two 24 h Holter recordings were performed on academic activity days, one of which included an exam situation. Results: Students tend to present moderate stress levels and prefer problem-focused coping strategies to manage stress. Exam situations are perceived as significant stressors. Although we found no significant differences in HRV (SDNN) between days with and without an exam, we registered a lower SDNN score and a variation in heart rate (HR) related to the exam situation (maximum HR peak 10 minutes before the exam, and full HR recovery 20 minutes after the exam), reflecting sympathetic activation due to stress. Conclusions: These results suggest that academic events, especially exam situations, are a source of stress in university students, with implications at the cardiovascular level, underscoring the importance of interventions that help these students improve their coping skills and optimize stress management, in order to improve academic achievement and promote well-being and quality of life.
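SDNN, the HRV index reported here, is simply the standard deviation of normal-to-normal intervals over the recording. A minimal computation, with an illustrative RR-interval series standing in for the Holter data after artefact rejection:

```python
import numpy as np

# SDNN = standard deviation of normal-to-normal (NN) intervals, here from
# RR intervals in milliseconds. The series below is illustrative; in
# practice it comes from the 24 h Holter recording after ectopic-beat
# and artefact rejection.
rr_ms = np.array([812, 798, 805, 776, 790, 801, 765, 780, 795, 810])

sdnn = np.std(rr_ms, ddof=1)          # sample standard deviation, ms
mean_hr = 60_000.0 / rr_ms.mean()     # mean heart rate, beats per minute
print(f"SDNN = {sdnn:.1f} ms, mean HR = {mean_hr:.1f} bpm")
```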
Abstract:
Beam-like structures are among the most common components in real engineering structures, and single-side damage is often encountered. In this study, single-side damage in a free-free beam is analysed numerically with three different finite element models, namely solid, shell and beam models, to demonstrate their performance in simulating a real structure. As in the experiment, damage is introduced into one side of the beam, and natural frequencies are extracted from the simulations and compared with experimental and analytical results. Mode shapes are also analysed with the modal assurance criterion. The simulations show that all three models perform well in extracting natural frequencies: in the intact state, the solid model performs better than the shell model, which in turn performs better than the beam model. For damaged states, the natural frequencies obtained from the solid model are more sensitive to damage severity than those from the shell model, while the shell and beam models perform similarly in distinguishing damage. The main contribution of this paper is a comparison of the three finite element models against experimental data as well as analytical solutions; the finite element results show relatively good agreement.
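The modal assurance criterion used to compare mode shapes is a normalised squared inner product between shape vectors. A minimal Python sketch with stand-in data (random mode shapes in place of the FE and experimental ones):

```python
import numpy as np

# Modal assurance criterion (MAC) between two sets of mode shapes. The
# matrices here are random stand-ins; columns would normally hold shapes
# from the solid/shell/beam models and from the modal test.
def mac(phi_a: np.ndarray, phi_b: np.ndarray) -> np.ndarray:
    """MAC(i, j) = |phi_a_i^T phi_b_j|^2 / (|phi_a_i|^2 * |phi_b_j|^2)."""
    num = np.abs(phi_a.T @ phi_b) ** 2
    den = np.outer(np.sum(phi_a**2, axis=0), np.sum(phi_b**2, axis=0))
    return num / den

rng = np.random.default_rng(0)
phi_fe = rng.normal(size=(50, 4))                    # 4 FE modes at 50 DOFs (assumed)
phi_exp = phi_fe + 0.05 * rng.normal(size=(50, 4))   # noisy "measured" shapes
print(np.round(mac(phi_fe, phi_exp), 3))             # near-identity matrix expected
```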
Abstract:
Hyperspectral remote sensing exploits the electromagnetic scattering patterns of different materials at specific wavelengths [2, 3]. Hyperspectral sensors have been developed to sample the scattered portion of the electromagnetic spectrum extending from the visible region through the near-infrared and mid-infrared, in hundreds of narrow contiguous bands [4, 5]. The number and variety of potential civilian and military applications of hyperspectral remote sensing is enormous [6, 7].

Very often, the resolution cell corresponding to a single pixel in an image contains several substances (endmembers) [4]. In this situation, the scattered energy is a mixture of the endmember spectra. A challenging task underlying many hyperspectral imagery applications is then decomposing a mixed pixel into a collection of reflectance spectra, called endmember signatures, and the corresponding abundance fractions [8-10]. Depending on the mixing scale at each pixel, the observed mixture is either linear or nonlinear [11, 12]. The linear mixing model holds approximately when the mixing scale is macroscopic [13] and there is negligible interaction among distinct endmembers [3, 14]. If, however, the mixing scale is microscopic (intimate mixtures) [15, 16] and the incident solar radiation is scattered by the scene through multiple bounces involving several endmembers [17], the linear model is no longer accurate.

Linear spectral unmixing has been intensively researched in recent years [9, 10, 12, 18-21]. It considers that a mixed pixel is a linear combination of endmember signatures weighted by the corresponding abundance fractions. Under this model, and assuming that the number of substances and their reflectance spectra are known, hyperspectral unmixing is a linear problem for which many solutions have been proposed (e.g., maximum likelihood estimation [8], spectral signature matching [22], spectral angle mapper [23], subspace projection methods [24, 25], and constrained least squares [26]).
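As a concrete illustration of this supervised setting, the sketch below unmixes one pixel with known endmember signatures under the usual nonnegativity and sum-to-one constraints, folding the sum-to-one constraint into a nonnegative least-squares solve. The synthetic data and the weight delta are illustrative choices, not part of the referenced methods [26].

```python
import numpy as np
from scipy.optimize import nnls

# Constrained least-squares unmixing with known endmembers: solve x ~ M a
# subject to a >= 0 and sum(a) = 1. The sum-to-one constraint is enforced
# by augmenting the system with a weighted row of ones (a common device in
# fully constrained unmixing). M and x below are synthetic.
rng = np.random.default_rng(1)
L, p = 200, 3                                  # bands, endmembers
M = rng.uniform(0.0, 1.0, (L, p))              # endmember signatures (columns)
a_true = np.array([0.6, 0.3, 0.1])
x = M @ a_true + 0.001 * rng.normal(size=L)    # mixed pixel + noise

delta = 100.0                                  # weight enforcing sum-to-one
M_aug = np.vstack([M, delta * np.ones((1, p))])
x_aug = np.append(x, delta)
a_hat, _ = nnls(M_aug, x_aug)                  # nonnegativity handled by NNLS
print(np.round(a_hat, 3), "vs true", a_true)
```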
In most cases, however, the number of substances and their reflectances are not known, and hyperspectral unmixing then falls into the class of blind source separation problems [27]. Independent component analysis (ICA) has recently been proposed as a tool to blindly unmix hyperspectral data [28-31]. ICA is based on the assumption of mutually independent sources (abundance fractions), which is not the case for hyperspectral data, since the sum of the abundance fractions is constant, implying statistical dependence among them. This dependence compromises the applicability of ICA to hyperspectral images, as shown in Refs. [21, 32]. In fact, ICA finds the endmember signatures by multiplying the spectral vectors by an unmixing matrix that minimizes the mutual information among the sources. If the sources are independent, ICA provides the correct unmixing, since the minimum of the mutual information is obtained only when the sources are independent. This is no longer true for dependent abundance fractions; nevertheless, some endmembers may be approximately unmixed. These aspects are addressed in Ref. [33].

Under the linear mixing model, the observations from a scene lie in a simplex whose vertices correspond to the endmembers. Several approaches [34-36] have exploited this geometric feature of hyperspectral mixtures [35]. The minimum volume transform (MVT) algorithm [36] determines the simplex of minimum volume containing the data. The method presented in Ref. [37] is also of MVT type but, by introducing the notion of bundles, it takes into account the endmember variability usually present in hyperspectral mixtures. MVT-type approaches are computationally complex: usually, these algorithms first find the convex hull defined by the observed data and then fit a minimum-volume simplex to it. For example, the gift-wrapping algorithm [38] computes the convex hull of n data points in a d-dimensional space with a computational complexity of O(n^(⌊d/2⌋+1)), where ⌊x⌋ denotes the largest integer less than or equal to x and n is the number of samples. The complexity of the method presented in Ref. [37] is even higher, since the temperature of the simulated annealing algorithm used must follow a logarithmic law [39] to assure convergence (in probability) to the desired solution.

Aiming at a lower computational complexity, algorithms such as the pixel purity index (PPI) [35] and N-FINDR [40] still find the minimum-volume simplex containing the data cloud, but they assume the presence of at least one pure pixel of each endmember in the data. This is a strong requisite that may not hold in some data sets; in any case, these algorithms find the set of most pure pixels in the data. The PPI algorithm uses the minimum noise fraction (MNF) [41] as a preprocessing step to reduce dimensionality and to improve the signal-to-noise ratio (SNR). The algorithm then projects every spectral vector onto skewers (a large number of random vectors) [35, 42, 43]. The points corresponding to the extremes, for each skewer direction, are stored, and a cumulative account records the number of times each pixel (i.e., a given spectral vector) is found to be an extreme. The pixels with the highest scores are the purest ones. The N-FINDR algorithm [40] is based on the fact that, in p spectral dimensions, the p-volume defined by a simplex formed by the purest pixels is larger than that defined by any other combination of pixels. This algorithm finds the set of pixels defining the largest volume by inflating a simplex inside the data.

ORASIS [44, 45] is a hyperspectral framework developed by the U.S. Naval Research Laboratory consisting of several algorithms organized in six modules: exemplar selector, adaptive learner, demixer, knowledge base or spectral library, and spatial postprocessor. The first step consists in flat-fielding the spectra. Next, the exemplar selection module is used to select spectral vectors that best represent the smaller convex cone containing the data; the other pixels are rejected when their spectral angle distance (SAD) to an exemplar is less than a given threshold. The procedure finds the basis for a subspace of lower dimension using a modified Gram-Schmidt orthogonalization. The selected vectors are then projected onto this subspace, and a simplex is found by an MVT process. ORASIS is oriented to real-time target detection from uncrewed air vehicles using hyperspectral data [46].

In this chapter we develop a new algorithm to unmix linear mixtures of endmember spectra. First, the algorithm determines the number of endmembers and the signal subspace using a newly developed concept [47, 48]. Second, it extracts the most pure pixels present in the data. Unlike other methods, this algorithm is completely automatic and unsupervised. To estimate the number of endmembers and the signal subspace in hyperspectral linear mixtures, the proposed scheme begins by estimating the signal and noise correlation matrices.
The noise correlation matrix is estimated based on multiple regression theory. The signal subspace is then identified by selecting the set of signal eigenvalues that best represents the data in the least-squares sense [48, 49]; we note, however, that VCA works both with projected and with unprojected data. The extraction of the endmembers exploits two facts: (1) the endmembers are the vertices of a simplex, and (2) the affine transformation of a simplex is also a simplex. Like the PPI and N-FINDR algorithms, VCA also assumes the presence of pure pixels in the data. The algorithm iteratively projects the data onto a direction orthogonal to the subspace spanned by the endmembers already determined; the new endmember signature corresponds to the extreme of this projection. The algorithm iterates until all endmembers are exhausted (a schematic sketch of this projection loop is given at the end of this section). VCA performs much better than PPI and better than or comparably to N-FINDR, yet it has a computational complexity between one and two orders of magnitude lower than N-FINDR.

The chapter is structured as follows. Section 19.2 describes the fundamentals of the proposed method. Sections 19.3 and 19.4 evaluate the proposed algorithm using simulated and real data, respectively. Section 19.5 presents some concluding remarks.
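The projection loop described above can be sketched schematically, under the pure-pixel assumption and on synthetic data. This simplified illustration omits VCA's SNR-dependent projection and signal-subspace identification, so it conveys the idea rather than the full algorithm.

```python
import numpy as np

# Simplified endmember extraction by iterative orthogonal projection: pick
# a direction orthogonal to the endmembers found so far and take the pixel
# with the extreme projection as the next endmember.
def extract_endmembers(X, p, seed=0):
    rng = np.random.default_rng(seed)
    L = X.shape[0]                       # number of bands
    E = np.zeros((L, p))
    for k in range(p):
        w = rng.normal(size=L)           # random direction
        if k > 0:                        # project out the found endmembers
            Q, _ = np.linalg.qr(E[:, :k])
            w -= Q @ (Q.T @ w)
        w /= np.linalg.norm(w)
        idx = np.argmax(np.abs(w @ X))   # extreme of the projection
        E[:, k] = X[:, idx]
    return E

# Synthetic scene: 3 endmembers, 2000 pixels inside their simplex,
# with pure pixels included so the extremes are the true vertices.
rng = np.random.default_rng(2)
M = rng.uniform(size=(50, 3))                    # true endmember signatures
A = rng.dirichlet(np.ones(3), size=2000).T       # abundances (sum to one)
X = M @ A
X[:, :3] = M                                     # ensure pure pixels exist
E = extract_endmembers(X, 3)
# each extracted endmember should match one true column (correlation ~ 1)
print(np.round(np.corrcoef(E.T, M.T)[:3, 3:], 2))
```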