869 results for Hyperbolic Boundary-Value Problem
Abstract:
The paper considers the open shop scheduling problem to minimize the makespan, provided that one of the machines has to process the jobs according to a given sequence. We show that in the preemptive case the problem is polynomially solvable for an arbitrary number of machines. If preemption is not allowed, the problem is NP-hard in the strong sense if the number of machines is variable, and is NP-hard in the ordinary sense in the case of two machines. For the latter case we give a heuristic algorithm that runs in linear time and produces a schedule with a makespan that is at most 5/4 times the optimal value. We also show that the two-machine problem in the nonpreemptive case is solvable in pseudopolynomial time by a dynamic programming algorithm, and that the algorithm can be converted into a fully polynomial approximation scheme. © 1998 John Wiley & Sons, Inc. Naval Research Logistics 45: 705–731, 1998
Abstract:
This paper considers the problem of processing n jobs in a two-machine non-preemptive open shop to minimize the makespan, i.e., the maximum completion time. One of the machines is assumed to be non-bottleneck. It is shown that, unlike its flow shop counterpart, the problem is NP-hard in the ordinary sense. On the other hand, the problem is shown to be solvable by a dynamic programming algorithm that requires pseudopolynomial time. The latter algorithm can be converted into a fully polynomial approximation scheme. An O(n log n) approximation algorithm is also designed which finds a schedule with makespan at most 5/4 times the optimal value, and this bound is tight.
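The two abstracts above both concern two-machine open shop makespan minimization. As background only (this is the classical lower bound from standard scheduling theory, not either paper's algorithm, and the function name is ours), a minimal Python sketch of the bound given by the maximum of the two machine loads and the largest job:

```python
def open_shop_lower_bound(jobs):
    """Classical two-machine open shop makespan lower bound.

    jobs: list of (a_j, b_j) processing times on machines 1 and 2.
    No schedule can finish before either machine has processed its
    whole load, nor before the longest job has been processed on both.
    """
    load1 = sum(a for a, _ in jobs)           # total work on machine 1
    load2 = sum(b for _, b in jobs)           # total work on machine 2
    longest_job = max(a + b for a, b in jobs) # both operations of one job
    return max(load1, load2, longest_job)

jobs = [(3, 2), (1, 4), (2, 2)]
print(open_shop_lower_bound(jobs))  # 8 (machine 2 load dominates)
```

Approximation guarantees such as the 5/4 bound above are stated relative to the optimal makespan, which this quantity bounds from below.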
Abstract:
In this paper, we consider the problem of providing flexibility to solutions of two-machine shop scheduling problems. We use the concept of group-scheduling to characterize a whole set of schedules so as to provide more choice to the decision-maker at any decision point. A group-schedule is a sequence of groups of permutable operations defined on each machine where each group is such that any permutation of the operations inside the group leads to a feasible schedule. Flexibility of a solution and its makespan are often conflicting, thus we search for a compromise between a low number of groups and a small value of makespan. We resolve the complexity status of the relevant problems for the two-machine flow shop, job shop and open shop. A number of approximation algorithms are developed and their worst-case performance is analyzed. For the flow shop, an effective heuristic algorithm is proposed and the results of computational experiments are reported.
Abstract:
This paper presents a simple approach to the so-called frame problem based on some ordinary set operations, which does not require non-monotonic reasoning. Following the notion of the situation calculus, we shall represent a state of the world as a set of fluents, where a fluent is simply a Boolean-valued property whose truth-value depends on time. High-level causal laws are characterised in terms of relationships between actions and the involved world states. An effect completion axiom is imposed on each causal law, which guarantees that all the fluents that can be affected by the performance of the corresponding action are always totally governed. It is shown that, compared with other techniques, such a set-operation-based approach provides a simpler and more effective treatment of the frame problem.
Abstract:
The solution process for diffusion problems usually involves the time development separately from the space solution. A finite difference algorithm in time requires a sequential time development in which all previous values must be determined prior to the current value. The Stehfest Laplace transform algorithm, however, allows time solutions without the knowledge of prior values. It is of interest to be able to develop a time-domain decomposition suitable for implementation in a parallel environment. One such possibility is to use the Laplace transform to develop coarse-grained solutions which act as the initial values for a set of fine-grained solutions. The independence of the Laplace transform solutions means that we do indeed have a time-domain decomposition process. Any suitable time solver can be used for the fine-grained solution. To illustrate the technique we shall use an Euler solver in time together with the dual reciprocity boundary element method for the space solution.
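The Stehfest algorithm named above is a standard numerical inverse Laplace transform; each time point is evaluated independently, which is exactly the property that enables the coarse-grained time-domain decomposition described. A minimal sketch of the standard Stehfest formula (the function name and the choice N = 12 are ours, not the paper's):

```python
import math

def stehfest_invert(F, t, N=12):
    """Stehfest numerical inversion of a Laplace transform F(s) at time t.

    f(t) ~ (ln 2 / t) * sum_{k=1..N} V_k * F(k ln 2 / t),  N even.
    """
    h = N // 2
    ln2 = math.log(2.0)
    total = 0.0
    for k in range(1, N + 1):
        # Stehfest weight V_k (alternating-sign combinatorial sum)
        Vk = 0.0
        for j in range((k + 1) // 2, min(k, h) + 1):
            Vk += (j ** h * math.factorial(2 * j)
                   / (math.factorial(h - j) * math.factorial(j)
                      * math.factorial(j - 1) * math.factorial(k - j)
                      * math.factorial(2 * j - k)))
        total += (-1) ** (k + h) * Vk * F(k * ln2 / t)
    return ln2 / t * total

# Check against a known transform pair: F(s) = 1/(s+1)  <->  f(t) = exp(-t)
approx = stehfest_invert(lambda s: 1.0 / (s + 1.0), t=1.0)
```

Note the method assumes a smooth, non-oscillatory f(t); since each t is independent, a coarse grid of such evaluations can seed fine-grained solvers in parallel.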
Abstract:
The Interact System Model (ISM) developed by Fisher and Hawes (1971) for the analysis of face-to-face communication during small-group problem solving activities was used to study online communication. This tool proved to be of value in the analysis, but the conversation patterns reported by Fisher (1980) did not fully appear in the online environment. Participants displayed a habit of "being too polite" and not fully voicing their disagreements with ideas posed by others. Thus progress towards task completion was slow and incomplete.
Abstract:
This paper starts off by asking whether a strictly political approach may be deduced from Martin Heidegger's ontological analyses of modernity. His interpretation of the Greek phenomenon of the polis is discussed, along with the distinction established therein between this form of community and the modern state, founded according to Heidegger on the metaphysical essence of modernity. To clarify this question, we consider the proclamation of values observed by Heidegger in the different forms of state organization arising in the age of the technical consummation of metaphysics. In this connection, his vision of nihilism is studied and a hypothesis is finally offered as to the form of state that would be consistent with a renunciation of the values required, in his view, by the manifestation of the entity in modernity as a wholly producible object.
Abstract:
This paper describes the development of a novel metaheuristic that combines an electromagnetic-like mechanism (EM) and the great deluge algorithm (GD) for the university course timetabling problem. This well-known timetabling problem assigns lectures to specific numbers of timeslots and rooms, maximizing the overall quality of the timetable while taking various constraints into account. EM is a population-based stochastic global optimization algorithm inspired by electromagnetism, simulating attraction and repulsion of sample points as they move toward optimality. GD is a local search procedure that allows worse solutions to be accepted based on some given upper boundary or 'level'. In this paper, the dynamic force calculated from the attraction-repulsion mechanism is used as a decreasing rate to update the 'level' within the search process. The proposed method has been applied to a range of benchmark university course timetabling test problems from the literature. Moreover, the viability of the method has been tested by comparing its results with other reported results from the literature, demonstrating that the method is able to produce solutions that improve on those currently published. We believe this is due to the combination of both approaches and the ability of the resultant algorithm to drive all solutions toward convergence at every stage of the search.
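The great deluge acceptance rule described above is simple to state in code: accept any candidate whose cost falls below a steadily lowered "level", or that improves on the current solution. Below is a minimal, hypothetical sketch of plain GD on a toy one-dimensional objective (function names and parameters are ours; the paper's hybrid replaces the fixed decay rate with the dynamic EM attraction-repulsion force):

```python
import random

def great_deluge(cost, neighbour, init, level, rate, iters, seed=0):
    """Minimise cost() with the great deluge rule.

    A candidate is accepted if its cost is below the falling 'level'
    or no worse than the current solution; 'level' drops by 'rate'
    each iteration, progressively tightening acceptance.
    """
    rng = random.Random(seed)
    cur = init
    best = cur
    for _ in range(iters):
        cand = neighbour(cur, rng)
        if cost(cand) <= level or cost(cand) <= cost(cur):
            cur = cand
            if cost(cur) < cost(best):
                best = cur
        level -= rate  # in the paper's hybrid, this decay comes from the EM force
    return best

# Toy example: minimise (x - 3)^2 starting from x = 0.
f = lambda x: (x - 3.0) ** 2
step = lambda x, rng: x + rng.uniform(-0.5, 0.5)
x = great_deluge(f, step, init=0.0, level=9.0, rate=0.01, iters=2000)
```

Once the level drops below the current cost, the rule degenerates to greedy hill climbing, which is why the decay schedule (here a constant, in the paper the EM force) controls how long worse solutions remain acceptable.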
Abstract:
The diffusion-controlled response and recovery behaviour of a naked optical film sensor (i.e., with no protective membrane) with a hyperbolic-type response [i.e., S0/S = 1 + Kc, where S is the measured value of the absorbance or luminescence intensity of one form of the sensor dye in the presence of the analyte, S0 is the observed value of S in the absence of analyte and K is a constant] to changes in analyte concentration, c, in a system under test is approximated using a simple model, and described more accurately using a numerical model; in both models it is assumed that the system under test represents an infinite reservoir. Each model predicts the variations in the response and recovery times of such an optical sensor, as a function of the final external analyte concentration, the film thickness (I) and the analyte diffusion coefficient (D). From an observed signal versus time profile for a naked optical film sensor it is shown how values for K and D/I² can be extracted using the numerical model. Both models provide a qualitative description of the often cited asymmetric nature of the response and recovery for hyperbolic-type response naked optical film sensors. It is envisaged that the models will help in the interpretation of the response and recovery behaviour exhibited by many naked optical film sensors and might be especially apposite when the analyte is a gas.
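The steady-state hyperbolic relation quoted above, S0/S = 1 + Kc, can be inverted directly to recover the analyte concentration from a measured signal; a minimal sketch (variable and function names are ours, not the paper's):

```python
def signal(S0, K, c):
    """Steady-state hyperbolic response: S0/S = 1 + K*c  =>  S = S0/(1 + K*c)."""
    return S0 / (1.0 + K * c)

def concentration(S0, K, S):
    """Invert the response to recover analyte concentration c from signal S."""
    return (S0 / S - 1.0) / K

S0, K = 100.0, 0.5          # illustrative values
S = signal(S0, K, c=4.0)    # 100 / (1 + 0.5*4) = 33.33...
c_back = concentration(S0, K, S)
```

The transient response and recovery times, by contrast, require the diffusion models described in the abstract; this inversion only applies once the film has equilibrated with the reservoir.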
Abstract:
Biodiversity may be seen as a scientific measure of the complexity of a biological system, implying an information basis. Complexity cannot be directly valued, so economists have tried to define the services it provides, though often just valuing the services of 'key' species. Here we provide a new definition of biodiversity as a measure of functional information, arguing that complexity embodies meaningful information as Gregory Bateson defined it. We argue that functional information content (FIC) is the potentially valuable component of total (algorithmic) information content (AIC), as it alone determines biological fitness and supports ecosystem services. Inspired by recent extensions to the Noah's Ark problem, we show how FIC/AIC can be calculated to measure the degree of substitutability within an ecological community. Establishing substitutability is an essential foundation for valuation. From it, we derive a way to rank whole communities by Indirect Use Value, through quantifying the relation between system complexity and the production rate of ecosystem services. Understanding biodiversity as information evidently serves as a practical interface between economics and ecological science. © 2012 Elsevier B.V.
Abstract:
In this paper we seek to show how marketing activities inscribe value on business model innovation, representative of an act, or sequence of socially interconnecting acts. Theoretically we ask two interlinked questions: (1) how can value inscriptions contribute to business model innovations? (2) how can marketing activities support the inscription of value on business model innovations? Semi-structured in-depth interviews were conducted with thirty-seven members across four industrial projects commercializing disruptive digital innovations. Various individuals from a diverse range of firms are shown to cast relevant components of their agency and knowledge on business model innovations through negotiation as an ongoing social process. Value inscription is mutually constituted from the marketing activities, interactions and negotiations of multiple project members across firms and functions to counter destabilizing forces and tensions arising from the commercialization of disruptive digital innovations. This contributes to recent conceptual thinking in the industrial marketing literature, which views business models as situated within dynamic business networks and a context-led evolutionary process. A contribution is also made to debate in the marketing literature around marketing's boundary-spanning role, with marketing activities shown to span and navigate across functions and firms in supporting value inscriptions on business model innovations.
Abstract:
Background: Although research has shown that significant burden and adverse psychological impact are associated with caring for a child with brain injury, limited knowledge exists concerning the qualitative experience and impact of this burden.
Objective: To provide an account of the experiences of mothers who care for a childhood survivor of brain injury.
Research design: Postal survey.
Methods and procedures: A self-report questionnaire was sent to a consecutive sample of mothers (n=86) of children (aged 8-28) with acquired brain injury, registered with a UK children's brain injury charity. Five essay-style questions enabled mothers to reflect on and describe at length their caring experiences, with particular emphasis placed on the perceived impact on emotional well-being.
Main outcomes and results: Thematic analysis identified five key themes: Perpetually Anxious, The Guilty Carer, The Labour of Caring, A Self-Conscious Apologist and Perpetually Grieving. Collectively, these themes highlight two core processes shaping mothers' caring experiences and concomitant mental well-being. First, the collective and enduring nature of caregiver burden over time. Second, the crucial role played by socio-cultural values in perpetuating caregiver burden.
Conclusions: Societal norms, particularly those relating to the nature and outcome of brain injury and motherhood, serve to marginalise mothers and increase feelings of isolation. Findings suggest the value of peer support programs as an effective means of providing appropriate social support.
Abstract:
In linear cascade wind tunnel tests, a high level of pitchwise periodicity is desirable to reproduce the azimuthal periodicity in the stage of an axial compressor or turbine. Transonic tests in a cascade wind tunnel with open jet boundaries have been shown to suffer from spurious waves, reflected at the jet boundary, that compromise the flow periodicity in pitch. This problem can be tackled by placing at this boundary a slotted tailboard with a specific wall void ratio s and pitch angle a. The optimal value of the s-a pair depends on the test section geometry and on the tunnel running conditions. An inviscid two-dimensional numerical method has been developed to predict transonic linear cascade flows, with and without a tailboard, and quantify the nonperiodicity in the discharge. This method includes a new computational boundary condition to model the effects of the tailboard slots on the cascade interior flow. This method has been applied to a six-blade turbine nozzle cascade, transonically tested at the University of Leicester. The numerical results identified a specific slotted tailboard geometry, able to minimize the spurious reflected waves and regain some pitchwise flow periodicity. The wind tunnel open jet test section was redesigned accordingly. Pressure measurements at the cascade outlet and synchronous spark schlieren visualization of the test section, with and without the optimized slotted tailboard, have confirmed the gain in pitchwise periodicity predicted by the numerical model. Copyright © 2006 by ASME.
Abstract:
This study uses a discrete choice experiment (DCE) to elicit willingness-to-pay estimates for changes in the water quality of three rivers. Like many regions, the metropolitan region of Berlin-Brandenburg struggles to achieve the objectives of the Water Framework Directive by 2015. A major problem is the high load of nutrients. As the region spans two states (Länder) and the river sections run throughout the whole region, we account for the spatial context in two ways. Firstly, we incorporate the distance between each respondent and all river stretches in all MNL and RPL models, and, secondly, we consider whether respondents reside in the state of Berlin or Brandenburg. The compensating variation (CV) calculated for various scenarios shows that overall people would significantly benefit from improved water quality. The CV measures, however, also reveal that not considering the spatial context would result in severely biased welfare measures. While the distance-decay effect lowers CV, state residency is connected to the frequency of status quo choices, and not accounting for residency would underestimate possible welfare gains in one state. Another finding is that the extent of the market varies with respect to attributes (river stretches) and attribute levels (water quality levels).
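In logit-based DCE studies such as this, compensating variation for a quality change is commonly computed with the logsum formula CV = (ln Σ_j e^{V_j,after} − ln Σ_j e^{V_j,before}) / λ, where λ is the marginal utility of income. A hypothetical sketch with made-up utilities (not the paper's estimates; function names and numbers are ours):

```python
import math

def logsum(utilities):
    """Expected maximum utility of a logit choice set, ln(sum_j exp(V_j))."""
    m = max(utilities)  # shift by the max to stabilise the exponentials
    return m + math.log(sum(math.exp(v - m) for v in utilities))

def compensating_variation(V_before, V_after, marginal_utility_income):
    """Logit logsum CV: change in expected maximum utility, in money terms."""
    return (logsum(V_after) - logsum(V_before)) / marginal_utility_income

# Hypothetical: three alternatives; improved water quality raises two utilities.
V0 = [0.2, 0.5, 0.1]
V1 = [0.2, 1.0, 0.6]
cv = compensating_variation(V0, V1, marginal_utility_income=0.05)  # positive gain
```

Distance-decay or residency effects of the kind estimated above would enter through the systematic utilities V_j themselves, shifting the resulting CV per respondent.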
Abstract:
We have excited mid-infrared surface plasmons in two YBCO thin films of contrasting properties using attenuated total reflection of light and found that the imaginary part of the dielectric function decreases linearly with reduction in temperature. This result is in contrast with the commonly reported conclusion of infrared normal reflectance studies. If sustained, it may clarify the problem of understanding the normal state properties of YBCO and the other cuprates. The dielectric function of the films, epsilon = epsilon(1) + i epsilon(2), was determined between room temperature and 80 K: epsilon(1) was found to be only slightly temperature dependent but somewhat sample dependent, probably as a result of surface and grain boundary contamination. The imaginary part, epsilon(2) (and the real part of the conductivity, sigma(1)), decreased linearly with reduction in temperature in both films. Results obtained were: for film 1, epsilon(1) = -14.05 - 0.0024T and epsilon(2) = 4.11 + 0.086T; and for film 2, epsilon(1) = -24.09 + 0.0013T and epsilon(2) = 7.66 + 0.067T, where T is the temperature in Kelvin. An understanding of the results is offered in terms of temperature-dependent intrinsic intragrain inelastic scattering and temperature-independent contributions: elastic and inelastic grain boundary scattering and optical interband (or localised charge) absorption. The relative contribution of each is estimated. A key conclusion is that the interband (or localised charge) absorption is only ~10%. Most importantly, the intrinsic scattering rate, 1/tau, decreases linearly with fall in temperature, T, in a regime where current theory predicts dependence on frequency, omega, to dominate. The coupling constant, lambda, between the charge carriers and the thermal excitations has a value of 1.7, some fivefold greater than the far-infrared value.
These results imply a need to restate the phenomenology of the normal state of high temperature superconductors and seek a corresponding theoretical understanding.