936 results for Hold-up problem


Relevance: 30.00%

Abstract:

A new method of frequency shifting for a diode laser is realized. Using a sample-and-hold circuit, the error signal is held during the frequency shift. This prevents the servo circuit from restraining the shift, or even causing loss of lock, when a step voltage is applied to the piezoelectric transducer (PZT) to shift the laser frequency.
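
As a rough illustration of why holding the error signal helps, here is a minimal simulation with assumed, purely illustrative parameters (not the circuit of the paper):

```python
# An integrating servo locks a laser to a frequency discriminator. A commanded
# PZT step shifts the lock point, but the PZT responds with a first-order lag,
# so a large transient error appears; holding (freezing) the error during the
# shift keeps the servo from reacting to the transient, which could otherwise
# restrain the shift or throw the loop out of lock. All values are assumed.
dt, n, ki, tau = 1e-4, 8000, 400.0, 0.02  # time step [s], samples, gain, PZT lag [s]

def run(use_hold):
    pzt = servo = held = max_kick = 0.0
    for i in range(n):
        target = 1.0 if i >= 3000 else 0.0        # commanded shift (new lock point)
        pzt += (target - pzt) * dt / tau           # PZT lags the step command
        error = (pzt + servo) - target             # discriminator output
        hold = use_hold and 3000 <= i < 3000 + int(5 * tau / dt)
        if not hold:
            held = error                           # sample phase: track the error
        servo -= ki * held * dt                    # integrate the (held) error
        max_kick = max(max_kick, abs(servo))
    return (pzt + servo) - target, max_kick

for use_hold in (False, True):
    offset, kick = run(use_hold)
    print(f"hold={use_hold}: final offset {offset:+.4f}, max servo excursion {kick:.3f}")
```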

Relevance: 30.00%

Abstract:

The present work deals with the problem of the interaction of electromagnetic radiation with a statistical distribution of non-magnetic dielectric particles immersed in an infinite, homogeneous, isotropic, non-magnetic medium. The wavelength of the incident radiation can be less than, equal to or greater than the linear dimension of a particle. The distance between any two particles is several wavelengths. A single particle in the absence of the others is assumed to scatter like a Rayleigh-Gans particle, i.e. interaction between the volume elements (self-interaction) is neglected. The interaction of the particles is taken into account (multiple scattering), and conditions are set up for the case of a lossless medium which guarantee that the multiple-scattering contribution is more important than the self-interaction one. These conditions relate the wavelength λ and the linear dimensions of a particle a and of the region occupied by the particles D. It is found that for constant λ/a, D is proportional to λ, and that |Δχ|, where Δχ is the difference in dielectric susceptibility between particle and medium, has to lie within a certain range.

The total scattered field is obtained as a series whose terms represent the corresponding multiple-scattering orders, the first term being the single-scattering term. The ensemble average of the total scattered intensity is then obtained as a series which involves no terms due to products between terms of different orders. Thus the waves corresponding to different orders are independent and their Stokes parameters add.
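
Schematically (with notation introduced here for illustration, not taken from the paper), the structure just described is:

```latex
% Total scattered field as a series over scattering orders; the ensemble-
% averaged intensity contains no cross terms between orders, so the Stokes
% parameters of the individual orders add.
E_s = \sum_{n \ge 1} E^{(n)}, \qquad
\langle I \rangle = \sum_{n \ge 1} \langle |E^{(n)}|^{2} \rangle, \qquad
S = \sum_{n \ge 1} S^{(n)}
```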

The second- and third-order intensity terms are explicitly computed, and the method used suggests a general approach for computing any order. It is found that in general the first-order scattering intensity pattern (or phase function) peaks in the forward direction Θ = 0. The second order tends to smooth out the pattern, giving a maximum in the Θ = π/2 direction and minima in the Θ = 0 and Θ = π directions. This ceases to be true if ka (where k = 2π/λ) becomes large (> 20): for large ka the forward direction is further enhanced. Similar features are expected from the higher orders, even though the critical value of ka may increase with the order.

The first-order polarization of the scattered wave is determined. The ensemble average of the Stokes parameters of the scattered wave is explicitly computed for the second order, and a similar method can be applied for any order. It is found that the polarization of the scattered wave depends on the polarization of the incident wave. If the latter is elliptically polarized, the first-order scattered wave is elliptically polarized, except in the Θ = π/2 direction, where it is linearly polarized. If the incident wave is circularly polarized, the first-order scattered wave is elliptically polarized except in the directions Θ = π/2 (linearly polarized) and Θ = 0, π (circularly polarized). The handedness of the Θ = 0 wave is the same as that of the incident wave, whereas the handedness of the Θ = π wave is opposite. If the incident wave is linearly polarized, the first-order scattered wave is also linearly polarized. The second order makes the total scattered wave elliptically polarized for any Θ, no matter what the incident polarization is; however, the handedness of the total scattered wave is not altered by the second order. Higher orders have effects similar to those of the second order.

If the medium is lossy, the general approach employed for the lossless case is still valid; only the algebra increases in complexity. It is found that the results of the lossless case are insensitive to first order in k_im D, where k_im is the imaginary part of the wave vector k and D is a characteristic linear dimension of the region occupied by the particles. Thus moderately extended regions and small losses make (k_im D)^2 ≪ 1, and the lossy character of the medium does not alter the results of the lossless case. In general the presence of losses tends to reduce the forward scattering.

Relevance: 30.00%

Abstract:

Many British rivers hold stocks of salmon (Salmo salar L.) and sea trout (Salmo trutta L.), and during most of the year some of the adult fish migrate upstream to the headwaters where, with the advent of winter, they will eventually spawn. For a variety of reasons, including the generation of power for milling, improving navigation and measuring water flow, man has put obstacles in the way of migratory fish which have added to those already provided by nature in the shape of rapids and waterfalls. While both salmon and sea trout, particularly the former, are capable of spectacular leaps, the movement of fish over man-made and natural obstacles can be helped, or even made possible, by the judicious use of fish passes. These are designed to give the fish an easier route over or round an obstacle, either by allowing it to overcome the water head difference in a series of stages ('pool and traverse' fish pass) or by reducing the water velocity in a sloping channel (Denil fish pass). Salmon and sea trout make their spawning runs under different flow conditions, salmon preferring much higher water flows than sea trout. Hence the design of fish passes requires an understanding of the swimming ability of fish (speed and endurance) and of the effect of water temperature on this ability. The unique features of each site must also be appreciated, so that the pass can be positioned with its entrance readily located by the fish. As well as salmon and sea trout, rivers often hold stocks of coarse fish and eels. Coarse fish migrations are generally local in character, and although some obstructions such as weirs may allow downstream passage only, they do not cause a significant problem. Eels, like salmon and sea trout, travel both up and down river during the course of their life histories. However, the climbing power of elvers is legendary and it is not normally necessary to offer them help, while adult silver eels migrate at times of high water flow when downstream movement is comparatively easy: for these reasons neither coarse fish nor eels are considered further. The provision of fish passes is, in many instances, mandatory under the Salmon and Freshwater Fisheries Act 1975. This report is intended for those involved in the planning, siting, construction and operation of fish passes and is written to clarify the hydraulic problems for the biologist and the biological problems for the engineer. It is also intended to explain the criteria by which the design of an individual pass is assessed for Ministerial Approval.
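
As a back-of-the-envelope illustration of the 'pool and traverse' principle, the total head difference is divided into stages small enough for a fish to negotiate (both figures below are assumptions for illustration, not design guidance from the report):

```python
# A pool-and-traverse pass divides the total head at an obstruction into
# stages. Both numbers are assumed for illustration only.
import math

total_head_m = 2.4        # head difference across the obstruction (assumed)
drop_per_pool_m = 0.45    # acceptable drop per stage (assumed)

n_pools = math.ceil(total_head_m / drop_per_pool_m)
print(f"{n_pools} pools, about {total_head_m / n_pools:.2f} m drop each")
```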

Relevance: 30.00%

Abstract:

Simulated annealing is a popular method for approaching the solution of a global optimization problem. Existing results on its performance apply to discrete combinatorial optimization where the optimization variables can assume only a finite set of possible values. We introduce a new general formulation of simulated annealing which allows one to guarantee finite-time performance in the optimization of functions of continuous variables. The results hold universally for any optimization problem on a bounded domain and establish a connection between simulated annealing and up-to-date theory of convergence of Markov chain Monte Carlo methods on continuous domains. This work is inspired by the concept of finite-time learning with known accuracy and confidence developed in statistical learning theory.
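
For intuition, a generic simulated-annealing loop on a bounded one-dimensional domain looks as follows (an illustrative textbook sketch; the paper's specific formulation and its finite-time guarantees are not reproduced here):

```python
# Generic simulated annealing on a bounded continuous domain [lo, hi].
import math, random

def anneal(f, lo, hi, n_iters=10000, t0=1.0, cooling=0.999, step=0.1):
    x = random.uniform(lo, hi)                  # start uniformly in the domain
    fx, t = f(x), t0
    best_x, best_f = x, fx
    for _ in range(n_iters):
        y = min(hi, max(lo, x + random.gauss(0.0, step)))  # bounded proposal
        fy = f(y)
        # Metropolis rule: always accept improvements, occasionally accept
        # worse points with probability exp(-(fy - fx) / t)
        if fy <= fx or random.random() < math.exp(-(fy - fx) / t):
            x, fx = y, fy
            if fx < best_f:
                best_x, best_f = x, fx
        t *= cooling                            # geometric cooling schedule
    return best_x, best_f

# Example: minimize a multimodal function on [-10, 10]
print(anneal(lambda x: x * x + 10 * math.sin(3 * x), -10.0, 10.0))
```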

Relevance: 30.00%

Abstract:

When used correctly, Statistical Energy Analysis (SEA) can provide good predictions of high frequency vibration levels in built-up structures. Unfortunately, the assumptions that underlie SEA break down as the frequency of excitation is reduced, and the method does not yield accurate predictions at "medium" frequencies (and neither does the Finite Element Method, which is limited to low frequencies). A basic problem is that parts of the system have a short wavelength of deformation and meet the requirements of SEA, while other parts of the system do not - this is often referred to as the "mid-frequency" problem, and there is a broad class of mid-frequency vibration problems that are of great concern to industry. In this paper, a coupled deterministic-statistical approach referred to as the Hybrid Method (Shorter & Langley, 2004) is briefly described, and some results that demonstrate how the method overcomes the aforementioned difficulties are presented.
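
For context, standard SEA rests on a steady-state power balance between subsystems (textbook form, with notation assumed here rather than taken from the paper): the input power to subsystem i equals the power it dissipates plus the net power flowing to coupled subsystems,

```latex
P_{i,\mathrm{in}} = \omega\,\eta_i E_i
  + \sum_{j \neq i} \omega\,\eta_{ij}\, n_i
    \left( \frac{E_i}{n_i} - \frac{E_j}{n_j} \right)
```

where the E are subsystem energies, the η loss and coupling loss factors, and the n modal densities. It is the statistical assumptions behind this balance (many short-wavelength modes, weak coupling) that fail at mid frequencies.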

Relevance: 30.00%

Abstract:

Characterization of the Platinum Group Elements (PGE) has been applied in the earth, space and environmental sciences. All these applications, however, rest on a basic prerequisite: that the concentration or ratio of the PGE in the objects of study can be determined accurately and precisely. In fact, development in these related studies poses a great challenge to the analytical chemistry of the PGE, because their content in (non-mineralized) geological samples is often extremely low, ranging from ppt (10^-12 g/g) to ppb (10^-9 g/g), and their distribution is highly heterogeneous, usually concentrated in single particles or phases. The accurate determination of these elements therefore remains a problem in analytical chemistry, and it obstructs research on the geochemistry of the PGE. A great effort has been made in the scientific community toward reliable determination of very low amounts of PGE, focused on reducing the level of background in the reagents used and on overcoming the probable heterogeneity of PGE in samples. Undoubtedly, the fire-assay method is one of the best ways of addressing the heterogeneity, as a large sample weight (10-50 g) can be held. This work is mainly aimed at developing methodology for the separation, preconcentration and determination of ultra-trace PGE in rock and peat samples, which is then applied to study the PGE in the Kudi ophiolite suite, West Kunlun, and the Tunguska explosion of 1908. The achievements of the study are summarized as follows: 1. A PGE lab is established in the Laboratory of Lithosphere Tectonic Evolution, IGG, CAS. 2. A modified method for the determination of PGE in geological samples using NiS fire assay with inductively coupled plasma-mass spectrometry (ICP-MS) is set up. The technical improvements are as follows: (1) investigating the level of background in the reagents used, finding that the contents of Au, Pt and Pd are 30, 0.6 and 0.6 ng/g, respectively, in carbonyl nickel powder and 0.35, 7.5 and 6.4 ng, respectively, in the other flux, while the contents of Ru, Rh and Os in all reagents used are very low (below or near the detection limits of ICP-MS); (2) measuring the recoveries of PGE using different collectors (Ni+S) and finding that 1.5 g of carbonyl nickel is effective for recovering the PGE from 15 g samples (recoveries of more than 90%), reducing the inherent blank value due to reagent impurities; (3) dissolving the nickel button directly in a Teflon bomb and using Te precipitation, thereby reducing the loss of PGE during the preconcentration process and improving the recoveries of PGE (above 60% for Os and 93.6-106.3% for the other PGE, using 2 g of carbonyl nickel); (4) simplifying the procedure for analyzing osmium; (5) method detection limits of 8.6, 4.8, 43, 2.4 and 82 pg/g for Ru, Rh, Pd, Ir and Pt, respectively, for a 15 g sample size. 3. An analytical method is set up to determine the content of ultra-trace PGE in peat samples, with method detection limits of 0.06, 0.1, 0.001, 0.001 and 0.002 ng/mL for Ru, Rh, Pd, Ir and Pt, respectively. 4. Using this analytical method, distinct anomalies of Pd and Os are found for the first time in peat sampled near the Tunguska explosion site. 5. Applying the method to the study of the origin of the Tunguska explosion leads to the following conclusions: (1) the excess elements likely resulted from the Tunguska Cosmic Body (TCB) explosion of 1908; (2) the Tunguska explosive body was composed of material (solid components) similar to C1 chondrite and was most probably a cometary object, weighing more than 10^7 tons with a radius of more than 126 m. 6. The analytical method for ultra-trace PGE in rock samples is successfully used to study the characteristics of the PGE in the Kudi ophiolite suite, leading to the following conclusions: (1) the difference in mantle-normalized PGE patterns between the dunite, harzburgite and lherzolite in Kudi indicates that they are residues of multi-stage partial melting of the mantle, and their depletion in Ir at a similar degree probably indicates the existence of an upper mantle depleted in Ir; (2) with the evolution of the magma produced by partial melting of the mantle, strong differentiation is shown between IPGE and PPGE, and the differentiation becomes increasingly distinct from pyroxenite to basalt; (3) the magma forming the Kudi ophiolite probably underwent an S-saturation process.
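
The abstract quotes method detection limits without stating how they are defined; one common convention estimates a detection limit from replicate procedural blanks, for example (purely illustrative values and convention, not the authors' definition):

```python
# Illustrative sketch: a common convention takes the method detection limit
# (MDL) as 3x the standard deviation of replicate procedural blanks.
# The blank values below are made up.
import statistics

blank_runs_pg = [12.1, 9.8, 11.4, 10.6, 9.2, 11.9, 10.3]  # hypothetical blanks [pg]
mdl_pg = 3 * statistics.stdev(blank_runs_pg)
sample_mass_g = 15.0
print(f"MDL ~ {mdl_pg / sample_mass_g:.2f} pg/g for a {sample_mass_g:.0f} g sample")
```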

Relevance: 30.00%

Abstract:

Transfer of learning is one of the major concepts in educational psychology. As cognitive psychology has developed, many researchers have found that transfer plays an important part in problem solving and that awareness of the similarity of related problems is important in transfer, so interest in researching transfer has grown. In the literature, however, researchers do not hold identical conclusions about the influence of awareness of related problems on problem-solving transfer. This dissertation is written on the basis of extensive preparatory work, including a review of the literature on problem-solving transfer, a comparison of recent research results, and experimental studies. The author takes middle school students as subjects, uses geometry as material, and adopts a factorial design in the experiments. The influence of awareness of related problems on problem-solving transfer is examined along three dimensions: the degree of difficulty of the transfer problems, the level of awareness of related problems, and the characteristics of the subjects themselves. Five conclusions are drawn from the experimental research: (1) during geometry problem solving, the level of awareness of related problems is one of the major factors influencing the effect of problem-solving transfer; (2) transfer problems that are either too difficult or too easy hinder the influence of awareness of related problems on transfer, and the degree of difficulty of the transfer problems interacts with the level of awareness of related problems in affecting transfer; (3) during geometry problem-solving transfer, the level of awareness of related problems interacts with the degree of student achievement: compared with lower-achieving students, the influence of the level of awareness is greater for higher-achieving students; (4) there is a positive correlation between the geometry achievement and the reasoning ability of middle school students: students with higher reasoning ability have higher geometry achievement, and when the level of awareness is raised, the transfer achievement of both groups rises significantly; (5) there is a positive correlation between the geometry achievement and the cognitive style of middle school students: students with a field-independent cognitive style have higher geometry achievement, and when the level of awareness is raised, the transfer achievement of both groups rises significantly. At the end of the dissertation, the researcher offers two proposals concerning geometry teaching on the basis of the research findings.
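
For readers unfamiliar with factorial designs, the kind of interaction test reported above (difficulty x awareness level) can be illustrated with a generic two-factor ANOVA (hypothetical data; this is not the dissertation's dataset or analysis code):

```python
# Generic two-factor ANOVA illustrating a "difficulty x awareness" interaction
# test in a factorial design. All data are made up.
import pandas as pd
import statsmodels.api as sm
from statsmodels.formula.api import ols

df = pd.DataFrame({
    "difficulty": ["easy"] * 4 + ["hard"] * 4,
    "awareness":  ["low", "low", "high", "high"] * 2,
    "transfer":   [55, 58, 70, 73, 40, 42, 66, 69],   # hypothetical scores
})
model = ols("transfer ~ C(difficulty) * C(awareness)", data=df).fit()
print(sm.stats.anova_lm(model, typ=2))   # main effects and the interaction term
```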

Relevance: 30.00%

Abstract:

Thurston, L. (2004). James Joyce and the Problem of Psychoanalysis. Cambridge: Cambridge University Press.

Relevance: 30.00%

Abstract:

In many networked applications, independent caching agents cooperate by servicing each other's miss streams, without revealing the operational details of the caching mechanisms they employ. Inference of such details could be instrumental for many other processes. For example, it could be used for optimized forwarding (or routing) of one's own miss stream (or content) to available proxy caches, or for making cache-aware resource management decisions. In this paper, we introduce the Cache Inference Problem (CIP) as that of inferring the characteristics of a caching agent, given the miss stream of that agent. While CIP is unsolvable in its most general form, there are special cases of practical importance in which it is solvable, including when the request stream follows an Independent Reference Model (IRM) with a generalized power-law (GPL) demand distribution. To that end, we design two basic "litmus" tests that are able to detect LFU and LRU replacement policies, the effective size of the cache and of the object universe, and the skewness of the GPL demand for objects. Using extensive experiments under synthetic as well as real traces, we show that our methods infer such characteristics accurately and quite efficiently, and that they remain robust even when the IRM/GPL assumptions do not hold and even when the underlying replacement policies are not "pure" LFU or LRU. We exemplify the value of our inference framework by considering example applications.
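
To make the setting concrete, here is a minimal simulation of the scenario described above: IRM requests with a power-law demand hitting an LRU cache, with only the miss stream observable (all parameters are made up; this is not the paper's inference code):

```python
import random
from itertools import accumulate
from collections import OrderedDict, Counter

N, C, ALPHA, REQS = 1000, 100, 0.8, 100_000   # universe, cache size, GPL skew, requests
cum = list(accumulate((i + 1) ** -ALPHA for i in range(N)))  # power-law popularity

cache, misses = OrderedDict(), []
for _ in range(REQS):
    obj = random.choices(range(N), cum_weights=cum)[0]  # IRM: i.i.d. draws
    if obj in cache:
        cache.move_to_end(obj)              # LRU hit: refresh recency
    else:
        misses.append(obj)                  # the observable miss stream
        cache[obj] = True
        if len(cache) > C:
            cache.popitem(last=False)       # evict least recently used

# An external observer sees only `misses` and must infer the policy, cache
# size, universe size and demand skew from it.
print(f"{len(misses)} misses; most frequently missed: {Counter(misses).most_common(3)}")
```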

Relevance: 30.00%

Abstract:

We consider a problem of scheduling jobs on m parallel machines. The machines are dedicated, i.e., for each job the processing machine is known in advance. We mainly concentrate on the model in which at any time there is one unit of an additional resource. Any job may be assigned the resource, and this reduces its processing time. A job that is given the resource uses it at each time of its processing. No two jobs are allowed to use the resource simultaneously. The objective is to minimize the makespan. We prove that the two-machine problem is NP-hard in the ordinary sense, describe a pseudopolynomial dynamic programming algorithm and convert it into an FPTAS. For the problem with an arbitrary number of machines we present an algorithm with a worst-case ratio close to 3/2, and close to 3 if a job can be given several units of the resource. For the problem with a fixed number of machines we give a PTAS. Virtually all the algorithms rely on a certain variant of the linear knapsack problem (maximization, minimization, multiple-choice, bicriteria). © 2008 Wiley Periodicals, Inc. Naval Research Logistics, 2008
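
Since the abstract notes that virtually all of its algorithms rely on a variant of the linear knapsack problem, a minimal 0/1 knapsack dynamic program is sketched below for reference (the generic textbook version, not the paper's specific variant):

```python
# Classic 0/1 knapsack DP over capacities; O(n * capacity) time.
def knapsack(values, weights, capacity):
    dp = [0] * (capacity + 1)          # dp[c] = best value with total weight <= c
    for v, w in zip(values, weights):
        for c in range(capacity, w - 1, -1):   # downward: each item used once
            dp[c] = max(dp[c], dp[c - w] + v)
    return dp[capacity]

print(knapsack(values=[6, 10, 12], weights=[1, 2, 3], capacity=5))  # -> 22
```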

Relevance: 30.00%

Abstract:

Following recognition of effects in the 1980s, tributyltin (TBT) has been monitored at sites in the English Channel to evaluate the prognosis for biota – spanning the introduction of restrictions on TBT use on small boats and the recent phase-out on the global fleet. We describe how persistence and impact of TBT in clams Scrobicularia plana has changed during this period in Southampton Water and Poole Harbour. TBT contamination (and loss) in water, sediment and clams reflects the abundance and type of vessel activity: half-times in sediment (up to 8y in Poole, 33y in Southampton) are longest near commercial shipping. Recovery of clam populations – slowest in TBT-contaminated deposits – provides a useful biological measure of legislative efficacy in estuaries. On rocky shores, recovery from imposex in Nucella lapillus is evident at many sites but, near ports, is prolonged by shipping impacts, including sediment legacy, for example, in the Fal.
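
Taking the quoted half-times at face value, first-order decay (an assumption of this illustration, not a claim of the paper) gives the remaining fraction as (1/2)^(t / half-time):

```python
# Fraction of TBT remaining after t years under first-order decay, using the
# sediment half-times quoted above (up to ~8 y in Poole Harbour, ~33 y near
# commercial shipping in Southampton Water).
for site, half_time_y in [("Poole Harbour", 8.0), ("Southampton Water", 33.0)]:
    for t in (10, 30, 50):
        remaining = 0.5 ** (t / half_time_y)
        print(f"{site}: {remaining:.1%} remaining after {t} y")
```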

Relevance: 30.00%

Abstract:

A standard problem within universities is that of teaching space allocation, which can be thought of as the assignment of rooms and times to various teaching activities. The focus is usually on courses that are expected to fit into one room. However, it can also happen that a course needs to be broken up, or ‘split’, into multiple sections. A lecture might be too large to fit into any one room. Another common example is that of seminars or tutorials: although hundreds of students may be enrolled on a course, it is often subdivided into particular types and sizes of events dependent on the pedagogic requirements of that particular course. Typically, decisions as to how to split courses need to be made within the context of limited space. Institutions do not have an unlimited number of teaching rooms and need to use those they do have effectively. The efficiency of space usage is usually measured by the overall ‘utilisation’, which is basically the fraction of the available seat-hours that are actually used. A multi-objective optimisation problem naturally arises, with a trade-off between satisfying preferences on splitting, a desire to increase utilisation, and the satisfaction of other constraints such as those based on event location and timetabling conflicts. In this paper, we explore such trade-offs. The explorations are based on a local search method that attempts to optimise space utilisation by means of a ‘dynamic splitting’ strategy. The local moves are designed to improve utilisation and satisfy the other constraints, but are also allowed to split, and un-split, courses so as to simultaneously meet the splitting objectives.
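
A minimal sketch of the utilisation measure described above, with made-up room and event data (splitting a large course into sections that fit smaller rooms changes the terms of this ratio, which is the trade-off the paper explores):

```python
# Utilisation = used seat-hours / available seat-hours (all data are made up).
rooms = {"A": 120, "B": 60}      # room -> seats
hours_open = 40                  # bookable teaching hours per week (assumed)

# (room, hours, students) for each scheduled event or course section
events = [("A", 10, 100), ("A", 5, 80), ("B", 20, 55), ("B", 10, 30)]

used = sum(hours * students for _, hours, students in events)
available = sum(seats * hours_open for seats in rooms.values())
print(f"utilisation = {used / available:.1%}")   # -> 38.9% for these numbers
```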

Relevance: 30.00%

Abstract:

ANPO (A Non-predefined Outcome) is an art-making methodology that employs structuralist theories of language (Saussure, Lacan, Foucault), combined with Hegel's dialectic and Lefebvre's theory of the production of space, to generate spaces of dialogue and conversation between community members and different stakeholders. These theories of language are used to find artistic ways of representing a topic that community members have previously chosen. The topic is approached in a way that allows a visual, aural, performative and gustative form. To achieve this, the methodology is split into four main steps: Step 1 ‘This is not a chair’, Step 2 ‘The topic’, Step 3 ‘Vis-à-vis-à-vis’ and Step 4 ‘Dialectical representation’, where the defined topic is used to generate artistic representations. Step 1 is a warm-up exercise informed by the René Magritte painting ‘This is not a Pipe’; it aims to help participants see an object not simply as an object but as a consequence of social implications. In Step 2, participants choose a topic and vote for it; the artist/facilitator does not predetermine the topic, as the participants are the ones who propose and choose it. Step 3 is analysed in this publication, and finally, in Step 4, the broken-down topic is represented and analysed in different ways.

Relevance: 30.00%

Abstract:

The Perils of Moviegoing in America is a film history that examines the various physical and (perceived) moral dangers facing audiences during the first fifty years of film exhibition.

Chapter 1: “Conflagration”
As early as 1897, a major fire broke out at a film exhibition in San Francisco, with flames burning the projectionist and nearby audience members. From that point until the widespread adoption of safety stock in 1950, fires were a very common movie-going experience. Hundreds of audience members lost their lives in literally thousands of theatre fires, ranging from early nickelodeons to the movie palaces of the thirties and forties.

Chapter 2: “Thieves Among Us”
Bandits robbed movie theatres on hundreds of occasions from the early days of film exhibition through the end of the Great Depression. They held up ticket booths, and they dynamited theatre safes. They also shot theatre managers, ushers, and audience members, as a great many of the robberies occurred while movies were playing on the screens inside.

Chapter 3: “Bombs Away”
Bombings at movie theatres became common in small towns and large cities, occurring on literally hundreds of occasions from 1914 to the start of World War II. Some were incendiary bombs, and some were stench bombs; both could be fatal, whether due to explosions or to the trampling of panicked moviegoers.

Chapter 4: “It’s Catching”
Widespread movie-going in the early 20th century provoked an outcry from numerous doctors and optometrists who believed that viewing films could do irreparable harm to the vision of audience members. Medical publications (including the Journal of the American Medical Association) published major studies on this perceived problem, which then filtered into popular-audience magazines and newspapers.

Chapter 5: “The Devil’s Apothecary Shops”
Sitting in the dark with complete strangers proved worrisome for many early filmgoers, who had good reason to be concerned. Darkness meant that prostitutes could easily work in the balconies of some movie theatres, as could “mashers” who molested female patrons (and sometimes children) after the lights were dimmed. That was all in addition to the various murderers who used the cover of darkness to commit their crimes at movie theatres.

Chapter 6: “Blue Sundays”
Blue laws were those regulations that prohibited businesses from operating on Sundays. Most communities across the US had such legislation on their books, which by the nickelodeon era were at odds with the thousands of filmgoers who went to the movies every Sunday. Theatre managers were often arrested, making newspaper headlines over and over again. Police sometimes even arrested entire film audiences as accomplices in the Blue Law violations.

Chapter 7: “Something for Nothing”
In an effort to bolster ticket sales, many movie theatres in the 1910s began to hold lotteries in which lucky audience members won cash prizes; by the time of the Great Depression, lotteries like “Bank Night” became a common aspect of the theatre-going enterprise. However, reception studies have generally overlooked the intense (and sometimes coordinated) efforts by police, politicians, and preachers to end this practice, which they viewed as illegal and immoral gambling.

Relevance: 30.00%

Abstract:

Electing a leader is a fundamental task in distributed computing. In its implicit version, only the leader must know that it is the elected leader. This paper focuses on studying the message and time complexity of randomized implicit leader election in synchronous distributed networks. Surprisingly, the most "obvious" complexity bounds have not been proven for randomized algorithms. The "obvious" lower bounds of Ω(m) messages (m is the number of edges in the network) and Ω(D) time (D is the network diameter) are non-trivial to show for randomized (Monte Carlo) algorithms. (Recent results showing that even Ω(n) (n is the number of nodes in the network) is not a lower bound on the messages in complete networks make the above bounds somewhat less obvious.) To the best of our knowledge, these basic lower bounds have not been established even for deterministic algorithms (except for the limited case of comparison algorithms, where it was also required that some nodes may not wake up spontaneously and that D and n were not known).

We establish these fundamental lower bounds in this paper for the general case, even for randomized Monte Carlo algorithms. Our lower bounds are universal in the sense that they hold for all universal algorithms (such algorithms should work for all graphs), apply to every D, m, and n, and hold even if D, m, and n are known, all the nodes wake up simultaneously, and the algorithms can make any use of the nodes' identities. To show that these bounds are tight, we present an O(m) message algorithm. An O(D) time algorithm is known. A slight adaptation of our lower bound technique gives rise to an Ω(m) message lower bound for randomized broadcast algorithms.
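
For intuition only, here is a naive synchronous election by flooding random ranks. It is not the paper's message-optimal algorithm (it can use up to O(mD) messages), but it shows the implicit flavour: only the winner learns that it is the leader.

```python
import random

def naive_leader_election(adj, diameter):
    # Each node draws a random rank (ties broken by node id) and, in each
    # synchronous round, forwards the best rank seen so far to its neighbours.
    ranks = {v: (random.getrandbits(64), v) for v in adj}
    best = dict(ranks)
    for _ in range(diameter):
        outgoing = dict(best)                    # snapshot: synchronous round
        for v, neighbours in adj.items():
            for u in neighbours:
                if outgoing[u] > best[v]:
                    best[v] = outgoing[u]
    # Implicit election: a node elects itself iff its own rank is still best.
    return [v for v in adj if best[v] == ranks[v]]

ring = {i: [(i - 1) % 8, (i + 1) % 8] for i in range(8)}   # 8-cycle, D = 4
print("leader:", naive_leader_election(ring, diameter=4))
```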

An interesting fundamental problem is whether both upper bounds (messages and time) can be achieved simultaneously in the randomized setting for all graphs. (The answer is known to be negative in the deterministic setting.) We partially answer this question by presenting a randomized algorithm that matches both complexities in some cases. This already separates (for some cases) randomized algorithms from deterministic ones. As first steps towards the general case, we present several universal leader election algorithms with bounds that trade off messages versus time. We view our results as a step towards understanding the complexity of universal leader election in distributed networks.