910 results for Random Regret Minimization
Abstract:
Researchers conducted investigations to demonstrate the advantages of the random distributed feedback fiber laser. Random lasers offer advantages such as a simple technology that does not require a precise microcavity, and a low production cost. The properties of their output radiation are distinctive compared with those of conventional lasers, exhibiting complex features in the spatial, spectral, and time domains. The researchers demonstrated a new type of one-dimensional laser with random distributed feedback based on Rayleigh scattering (RS), which is present in any transparent glass medium due to natural inhomogeneities of the refractive index. The cylindrical fiber waveguide geometry provides transverse confinement, while the cavity is open in the longitudinal direction and does not include any regular point-action reflectors.
Abstract:
We demonstrate lasing based on random distributed feedback due to Raman-amplified Rayleigh backscattering in different types of cavities, with and without conventional point-action reflectors. Quasistationary generation of a narrowband spectrum is achieved despite the random nature of the feedback. The generated spectrum is localized at the reflection or gain spectral maxima in schemes with and without point reflectors, respectively. The length limit for a conventional cavity and the minimal pump power required for lasing based purely on random distributed feedback are determined. © 2010 The American Physical Society.
Abstract:
The risk measure CVaR has become increasingly popular in financial analysis in recent years. In this paper we apply CVaR to portfolio optimization. The problem is formulated as a two-stage stochastic programming model, and the SRA algorithm, a recently developed heuristic, is applied to minimize CVaR.
Abstract:
The CVaR risk measure is becoming increasingly important in assessing portfolio risk. Minimizing the CVaR of the whole portfolio can be formulated as a two-stage stochastic programming problem, and the SRA algorithm is a recently developed heuristic for solving such stochastic programming problems. In this paper we apply the SRA algorithm to minimize the CVaR risk measure of a portfolio.
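Neither abstract spells out the SRA heuristic, but the CVaR objective they minimize is commonly handled through the Rockafellar-Uryasev linear-programming reformulation over return scenarios. The sketch below is a minimal stand-in under that assumption, not the papers' algorithm; the scenario data, asset count, and confidence level are invented for illustration.

```python
# Minimal CVaR portfolio minimization via the Rockafellar-Uryasev LP
# (a stand-in for the SRA heuristic described in the abstracts).
import numpy as np
from scipy.optimize import linprog

rng = np.random.default_rng(0)
S, n = 500, 4                          # scenarios and assets (illustrative)
R = rng.normal(0.001, 0.02, (S, n))    # simulated scenario returns
beta = 0.95                            # CVaR confidence level

# Decision vector z = [w (n weights), t (VaR proxy), u (S excess losses)].
c = np.concatenate([np.zeros(n), [1.0], np.full(S, 1.0 / ((1 - beta) * S))])

# u_s >= -R_s.w - t  rewritten as  -R_s.w - t - u_s <= 0.
A_ub = np.hstack([-R, -np.ones((S, 1)), -np.eye(S)])
b_ub = np.zeros(S)

# Fully invested portfolio: the weights sum to one.
A_eq = np.concatenate([np.ones(n), [0.0], np.zeros(S)]).reshape(1, -1)
b_eq = np.array([1.0])

bounds = [(0, None)] * n + [(None, None)] + [(0, None)] * S  # long-only, t free
res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq, bounds=bounds)
print("weights:", np.round(res.x[:n], 3), " CVaR:", round(res.fun, 5))
```

The LP grows linearly in the number of scenarios, which is exactly the regime where sampling-based heuristics such as SRA become attractive.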
Abstract:
A job shop with one batch processing machine and several discrete machines is analyzed. Given a set of jobs, their process routes, processing requirements, and sizes, the objective is to schedule the jobs such that the makespan is minimized. The batch processing machine can process a batch of jobs as long as the machine capacity is not violated, and the processing time of a batch equals the longest processing time among the jobs in it. The problem under study can be represented as Jm:batch:Cmax. If no batches were formed, it would reduce to the classical job shop scheduling problem (i.e., Jm::Cmax), which is known to be NP-hard. This research extends the scheduling literature by combining Jm::Cmax with batch processing. The primary contributions are the mathematical formulation, a new network representation, and several solution approaches. The problem is observed widely in metal working and other industries but has received little attention due to its complexity. A novel network representation of the problem using disjunctive and conjunctive arcs, and a mathematical formulation, are proposed to minimize the makespan. In addition, several algorithms, including batch-forming heuristics, dispatching rules, a Modified Shifting Bottleneck heuristic, Tabu Search (TS), and Simulated Annealing (SA), were developed and implemented. An experimental study was conducted to evaluate the proposed heuristics, and the results were compared to those from a commercial solver (CPLEX). TS and SA, combined with MWKR-FF as the initial solution, gave the best solutions among all the proposed heuristics. Their results were close to those of CPLEX, and for some larger instances, with more than 225 total operations, they were competitive in terms of solution quality and runtime. For some larger problem instances, CPLEX was unable to report a feasible solution even after running for several hours. Between SA and TS, the experimental study indicated that SA produced a better average Cmax across all instances. The solution approaches proposed will help practitioners schedule a job shop (with both discrete and batch processing machines) more efficiently; they are easy to implement and require short run times to solve large problem instances.
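To make the batching rule concrete, here is a hypothetical first-fit (FF) batch-forming sketch; the abstract names FF only as part of the MWKR-FF initial solution, so the exact rule and the job data below are assumptions. It shows the two properties stated above: batches respect the capacity limit, and a batch's processing time is the longest processing time among its jobs.

```python
# Hypothetical first-fit batch forming for the batch processing machine.
from dataclasses import dataclass

@dataclass
class Job:
    name: str
    size: int      # capacity consumed on the batch machine
    p_time: int    # processing time on the batch machine

def first_fit_batches(jobs, capacity):
    batches = []   # each batch: {"jobs": [...], "used": capacity consumed}
    for job in jobs:
        for batch in batches:          # place job in the first batch that fits
            if batch["used"] + job.size <= capacity:
                batch["jobs"].append(job)
                batch["used"] += job.size
                break
        else:                          # no existing batch fits: open a new one
            batches.append({"jobs": [job], "used": job.size})
    # A batch's processing time is the longest processing time of its jobs.
    return [(b["jobs"], max(j.p_time for j in b["jobs"])) for b in batches]

jobs = [Job("J1", 3, 5), Job("J2", 2, 7), Job("J3", 4, 2), Job("J4", 1, 6)]
for members, p in first_fit_batches(jobs, capacity=5):
    print([j.name for j in members], "batch time:", p)
```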
Abstract:
Cooperative communication has gained much interest due to its ability to exploit the broadcast nature of the wireless medium to mitigate multipath fading. There has been a considerable amount of research on how cooperative transmission can improve network performance, mostly focusing on physical layer issues. Over the past few years, researchers have started to consider cooperative transmission in routing, and there has been growing interest in designing and evaluating cooperative routing protocols. Most existing cooperative routing algorithms are designed to reduce energy consumption; packet collision minimization using cooperative routing, however, has not yet been addressed. This dissertation presents an optimization framework to minimize collision probability using cooperative routing in wireless sensor networks. More specifically, we develop a mathematical model and formulate the problem as a large-scale Mixed Integer Non-Linear Programming problem. We also propose a solution based on the branch and bound algorithm augmented with search space reduction (branch and bound space reduction). The proposed strategy builds the optimal routes from each source to the sink node by providing the best set of hops in each route, the best set of relays, and the optimal power allocation for the cooperative transmission links. To reduce the computational complexity, we propose two near-optimal cooperative routing algorithms. In the first, we decouple the optimal power allocation scheme from optimal route selection; the problem is then formulated as an Integer Non-Linear Programming problem and solved using a branch and bound space reduction method. In the second, the cooperative routing problem is solved by decoupling the transmission power and the relay node selection from the route selection. After solving the routing problems, the power allocation is applied to the selected route. Simulation results show that the algorithms can significantly reduce the collision probability compared with existing cooperative routing schemes.
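The MINLP and its branch-and-bound machinery are too large to reproduce from an abstract, but the route-selection sub-problem can be sketched under a simplifying assumption the dissertation may not make: if per-link collision probabilities are independent, minimizing the end-to-end collision probability of a route is a shortest-path problem under additive weights -log(1 - p). The topology and probabilities below are invented.

```python
# Route selection minimizing end-to-end collision probability, assuming
# independent per-link collision probabilities (a sketch, not the MINLP).
import heapq
import math

def min_collision_route(links, source, sink):
    """links: dict mapping (u, v) -> link collision probability in [0, 1)."""
    graph = {}
    for (u, v), prob in links.items():
        graph.setdefault(u, []).append((v, -math.log(1.0 - prob)))
    dist, prev = {source: 0.0}, {}
    heap = [(0.0, source)]
    while heap:                                   # Dijkstra on -log(1 - p)
        d, u = heapq.heappop(heap)
        if d > dist.get(u, math.inf):
            continue                              # stale heap entry
        for v, w in graph.get(u, []):
            if d + w < dist.get(v, math.inf):
                dist[v], prev[v] = d + w, u
                heapq.heappush(heap, (d + w, v))
    path, node = [sink], sink
    while node != source:                         # walk predecessors back
        node = prev[node]
        path.append(node)
    return path[::-1], 1.0 - math.exp(-dist[sink])

links = {("s", "a"): 0.10, ("a", "t"): 0.05, ("s", "b"): 0.02, ("b", "t"): 0.08}
route, p_collision = min_collision_route(links, "s", "t")
print(route, round(p_collision, 4))               # ['s', 'b', 't'] 0.0984
```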
Abstract:
We analyze the far-field intensity distribution of binary phase gratings whose strips present a certain randomness in their height. A statistical analysis based on the mutual coherence function is performed in the plane just after the grating. The mutual coherence function is then propagated to the far field and the intensity distribution is obtained. In general, the intensity of the diffraction orders decreases in comparison with that of the ideal grating. Several important limiting cases, such as low- and high-randomness perturbed gratings, are analyzed. In the high-randomness limit, the phase grating is equivalent to an amplitude grating plus a “halo.” Although these structures are not purely periodic, they behave approximately as a diffraction grating.
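The decrease of the order intensities can be checked numerically by giving each strip of a binary pi-phase grating an independent random phase (height) error and taking an FFT as the far field. This is a single-realization sketch under assumed error statistics, not the paper's mutual-coherence-function analysis.

```python
# Far field of a binary pi-phase grating with random strip-height errors.
import numpy as np

N, period = 4096, 64                      # samples, samples per period
x = np.arange(N)
binary = ((x // (period // 2)) % 2).astype(float)   # alternating 0/1 strips

rng = np.random.default_rng(1)
sigma = 0.3                               # std of strip phase error (radians)
strip_id = x // (period // 2)             # one independent error per strip
phase_err = rng.normal(0.0, sigma, strip_id.max() + 1)[strip_id]

field = np.exp(1j * (np.pi * binary + phase_err))   # perturbed grating
orders = np.abs(np.fft.fft(field)) ** 2 / N**2      # diffraction-order power
# The strongest order falls below the ideal grating's (2/pi)^2 ~ 0.405.
print("strongest order:", round(orders.max(), 4))
```

Increasing sigma pushes power out of the discrete orders into a diffuse background, matching the amplitude-grating-plus-halo picture of the high-randomness limit.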
Abstract:
It was recently shown [Phys. Rev. Lett. 110, 227201 (2013)] that the critical behavior of the random-field Ising model in three dimensions is ruled by a single universality class. This conclusion was reached only after a proper taming of the large scaling corrections of the model by applying a combined approach of various techniques coming from the zero- and positive-temperature toolboxes of statistical physics. In the present contribution we provide a detailed description of this combined scheme, explaining in detail the zero-temperature numerical scheme and developing the generalized fluctuation-dissipation formula that allowed us to compute connected and disconnected correlation functions of the model. We discuss the error evolution of our method and illustrate the infinite-size extrapolation of several observables within phenomenological renormalization. We present an extension of the quotients method that allows us to obtain estimates of the critical exponent α of the specific heat of the model via the scaling of the bond energy, and we discuss the self-averaging properties of the system and the algorithmic aspects of the maximum-flow algorithm used.
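The zero-temperature scheme mentioned here rests on the standard mapping of RFIM ground states to a minimum cut of an auxiliary network: random fields become source/sink edge capacities and ferromagnetic bonds become finite-capacity edges between neighbours. The sketch below shows that mapping on a small 2D lattice with networkx; the lattice size, Gaussian field distribution, and J = 1 couplings are illustrative assumptions (the paper works on 3D lattices with a dedicated max-flow code).

```python
# RFIM ground state via minimum cut on a small 2D lattice (illustrative).
import networkx as nx
import numpy as np

L = 8
rng = np.random.default_rng(2)
h = rng.normal(0.0, 1.0, (L, L))        # Gaussian random fields, J = 1 bonds

G = nx.DiGraph()
for i in range(L):
    for j in range(L):
        v = (i, j)
        # Field term: a positive field pulls the spin to +1 (source side).
        if h[i, j] > 0:
            G.add_edge("s", v, capacity=h[i, j])
        else:
            G.add_edge(v, "t", capacity=-h[i, j])
        # Ferromagnetic bonds to right/down neighbours (open boundaries).
        for di, dj in ((0, 1), (1, 0)):
            if i + di < L and j + dj < L:
                u = (i + di, j + dj)
                G.add_edge(v, u, capacity=1.0)
                G.add_edge(u, v, capacity=1.0)

cut_value, (src_side, _) = nx.minimum_cut(G, "s", "t")
spins = {v: (1 if v in src_side else -1) for v in G if v not in ("s", "t")}
print("ground-state magnetization:", sum(spins.values()) / L**2)
```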
Abstract:
Acknowledgements The authors thank the crews, fishers, and scientists who conducted the various surveys from which data were obtained. This work was supported by the Government of South Georgia and the South Sandwich Islands. Additional logistical support was provided by The South Atlantic Environmental Research Institute, with thanks to Paul Brickle. PF receives funding from the MASTS pooling initiative (The Marine Alliance for Science and Technology for Scotland), and their support is gratefully acknowledged. MASTS is funded by the Scottish Funding Council (grant reference HR09011) and contributing institutions. SF is funded by the Natural Environment Research Council, and data were provided from the British Antarctic Survey Ecosystems Long-term Monitoring and Surveys programme as part of the BAS Polar Science for Planet Earth Programme. The authors also thank the anonymous referees for their helpful suggestions on an earlier version of this manuscript.
Abstract:
Acknowledgements This study was made possible by partial financial support from the following Brazilian government agencies: CNPq, CAPES, and FAPESP (2011/19296-1 and 2015/07311-7). We also wish to thank the Newton Fund and COFAP.
Abstract:
In this work, we obtain analytical expressions for the near- and far-field diffraction of random Ronchi diffraction gratings whose slits are randomly displaced around their periodic positions. We show theoretically that randomness in the slit positions produces a decrease in contrast, and even the disappearance of the self-images at high randomness levels, in the near field. In the far field, it cancels high-order harmonics, leaving only a few central diffraction orders. Numerical simulations by means of the Rayleigh–Sommerfeld diffraction formula are performed to corroborate the analytical results. These results are of interest for industrial and technological applications where manufacturing errors need to be considered.
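The near-field contrast loss can be reproduced with an angular-spectrum propagator, a common numerical stand-in for the Rayleigh–Sommerfeld integral at these scales. The wavelength, period, and displacement statistics below are invented; the point is only the qualitative washing-out of the self-image at the Talbot distance.

```python
# Talbot self-image of a Ronchi grating with randomly displaced slits.
import numpy as np

wl, p = 0.5e-6, 50e-6                     # wavelength and period (assumed)
N = 8192
dx = p / 64                               # 64 samples per period
x = (np.arange(N) - N // 2) * dx
z_t = 2 * p**2 / wl                       # Talbot self-imaging distance

rng = np.random.default_rng(3)
t = np.zeros(N)
for m in range(-60, 61):                  # slits displaced around periodic sites
    c = m * p + rng.normal(0.0, 0.1 * p)  # 10%-of-period random displacement
    t[np.abs(x - c) < p / 4] = 1.0        # open slit of width p/2

fx = np.fft.fftfreq(N, dx)                # angular-spectrum propagation
kz = 2 * np.pi * np.sqrt(np.maximum(0.0, wl**-2 - fx**2))
u = np.fft.ifft(np.fft.fft(t) * np.exp(1j * kz * z_t))

mid = slice(N // 2 - 8 * 64, N // 2 + 8 * 64)       # central periods only
I = np.abs(u[mid]) ** 2
print("contrast:", round((I.max() - I.min()) / (I.max() + I.min()), 3))
```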
Abstract:
We demonstrate a fibre laser with a mirrorless cavity that operates via Rayleigh scattering amplified through the Raman effect. The properties of such a random distributed feedback laser differ from those of both traditional random lasers and conventional fibre lasers. © 2010 IEEE.
Abstract:
Patient awareness of and concern regarding the potential health risks from ionizing radiation have peaked recently (Coakley et al., 2011), following widespread press and media coverage of the projected cancer risks from the increasing use of computed tomography (CT) (Berrington et al., 2007). The typically young and educated patient with inflammatory bowel disease (IBD) may be particularly conscious of his or her exposure to ionizing radiation as a result of diagnostic imaging. Cumulative effective doses (CEDs) in patients with IBD have been reported as high and rising, primarily due to the more widespread and repeated use of CT (Desmond et al., 2008). Radiologists, technologists, and referring physicians have a responsibility, first, to counsel their patients accurately regarding the actual risks of ionizing radiation exposure; second, to limit the use of imaging modalities that involve ionizing radiation to clinical situations where they are likely to change management; and third, to ensure that a diagnostic-quality imaging examination is acquired with the lowest possible radiation exposure. In this paper, we summarize the available evidence on radiation exposure and risk, report advances in low-dose CT technology, and examine the role of alternative imaging modalities, such as ultrasonography and magnetic resonance imaging, which avoid radiation exposure.
Abstract:
This dissertation explores the complex interactions between organizational structure and the environment. In Chapter 1, I investigate the effect of financial development on the formation of European corporate groups. Since cross-country regressions are hard to interpret in a causal sense, we exploit exogenous industry measures to investigate a specific channel through which financial development may affect group affiliation: internal capital markets. Using a comprehensive firm-level dataset on European corporate groups in 15 countries, we find that countries with less developed financial markets have a higher percentage of group affiliates in more capital-intensive industries. This relationship is more pronounced for young and small firms and for affiliates of large and diversified groups. Our findings are consistent with the view that internal capital markets may, under some conditions, be more efficient than prevailing external markets, and that this may drive group affiliation even in developed economies. In Chapter 2, I bridge current streams of innovation research to explore the interplay between R&D, external knowledge, and organizational structure: three elements of a firm's innovation strategy which we argue should logically be studied together. Using within-firm patent assignment patterns, we develop a novel measure of structure for a large sample of American firms. We find that centralized firms invest more in research and patent more per R&D dollar than decentralized firms. Both types access technology via mergers and acquisitions, but their acquisitions differ in terms of frequency, size, and integration. Consistent with our framework, their sources of value creation differ: while centralized firms derive more value from internal R&D, decentralized firms rely more on external knowledge. We discuss how these findings should stimulate more integrative work on theories of innovation. In Chapter 3, I use novel data on 1,265 newly public firms to show that innovative firms exposed to environments with lower M&A activity just after their initial public offering (IPO) adapt by engaging in fewer technological acquisitions and more internal research. However, this adaptive response becomes inertial shortly after the IPO and persists well into maturity. This study advances our understanding of how the environment shapes heterogeneity and capabilities through its impact on firm structure. I discuss how my results can help bridge inertial versus adaptive perspectives in the study of organizations, by documenting an instance where the two interact.