962 results for Discrete boundary value problems
Abstract:
We propose and investigate an application of the method of fundamental solutions (MFS) to the radially symmetric and axisymmetric backward heat conduction problem (BHCP) in a solid or hollow cylinder. In the BHCP, the initial temperature is to be determined from the temperature measurements at a later time. This is an inverse and ill-posed problem, and we employ and generalize the MFS regularization approach [B.T. Johansson and D. Lesnic, A method of fundamental solutions for transient heat conduction, Eng. Anal. Boundary Elements 32 (2008), pp. 697–703] for the time-dependent heat equation to obtain a stable and accurate numerical approximation with small computational cost.
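A one-dimensional analogue of the MFS construction can make the abstract concrete: expand the solution in heat-equation fundamental solutions with sources placed outside the space-time domain, fit the time-T data by Tikhonov-regularized least squares, and evaluate the expansion at t = 0. The geometry, source placement, and regularization parameter below are illustrative assumptions, not the setup of the cited paper.

```python
import numpy as np

def heat_kernel(x, t):
    # Fundamental solution of the 1D heat equation (valid for t > 0)
    return np.exp(-x**2 / (4.0 * t)) / np.sqrt(4.0 * np.pi * t)

# Hypothetical BHCP setup: recover u(x, 0) from data u(x, T)
T = 0.1
xs = np.linspace(0.0, 1.0, 30)        # collocation points in the domain
src_x = np.linspace(-0.5, 1.5, 30)    # source locations outside the domain
src_t = -0.05                         # sources placed before t = 0

# Collocation matrix mapping source strengths to measurements at time T
A = heat_kernel(xs[:, None] - src_x[None, :], T - src_t)

# Synthetic data: initial condition u0(x) = sin(pi x) diffused to time T
u_T = np.sin(np.pi * xs) * np.exp(-np.pi**2 * T)

# Tikhonov-regularized normal equations (regularization chosen ad hoc)
lam = 1e-8
coef = np.linalg.solve(A.T @ A + lam * np.eye(A.shape[1]), A.T @ u_T)

# Evaluate the MFS approximation of the initial temperature at t = 0
B = heat_kernel(xs[:, None] - src_x[None, :], 0.0 - src_t)
u0_approx = B @ coef
```

Because the problem is ill-posed, the quality of `u0_approx` depends strongly on the regularization parameter; in practice it would be chosen by a discrepancy principle or L-curve criterion.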
Abstract:
Purpose: The purpose of this paper is to investigate the possibilities and problems for collaboration in the area of corporate social responsibility (CSR) and sustainability. The paper explores the nature and concept of collaboration and its forms, and critically evaluates the potential contribution a collaborative approach between agencies might offer to these agendas. Design/methodology/approach: The paper explores different forms of research on collaboration, together with a UK Government report on collaboration, to evaluate how the issue is addressed in theory and practice. Findings: Sustainable development creates extensive challenges for a wide range of agencies, including governments, non-governmental organizations, businesses and civil society. It is unlikely, however, that solutions will be found in any one quarter. Collaboration between agencies in some form would seem a logical step in supporting measures towards a more responsible and environmentally sustainable global economy. Originality/value: The paper offers new insights into developing a research and praxis agenda for collaborative possibilities towards the advancement of CSR and sustainability. © Emerald Group Publishing Limited.
Abstract:
In this paper shortest path games are considered. The transportation of a good in a network has both costs and benefits. The problem is to divide the profit of the transportation among the players. Fragnelli et al. (2000) introduce the class of shortest path games, which coincides with the class of monotone games. They also give a characterization of the Shapley value on this class of games. In this paper we consider four further characterizations of the Shapley value (the axiomatizations of Shapley (1953), Young (1985), Chun (1989), and van den Brink (2001)), and conclude that all the mentioned axiomatizations are valid for shortest path games. Fragnelli et al. (2000)'s axioms are based on the graph behind the problem; in this paper we do not consider graph-specific axioms, we take TU axioms only. That is, we consider all shortest path problems and take the view of an abstract decision maker who focuses on the abstract problem rather than on concrete situations.
Abstract:
In this paper cost sharing problems are considered. We focus on problems given by rooted trees, which we call cost-tree problems, and on the induced transferable utility cooperative games, called irrigation games. A formal notion of irrigation games is introduced, and a characterization of the class of these games is provided. The well-known class of airport games (Littlechild and Thompson, 1977) is a subclass of irrigation games. The Shapley value (Shapley, 1953) is probably the most popular solution concept for transferable utility cooperative games. Dubey (1982) and Moulin and Shenker (1992) show, respectively, that Shapley (1953)'s and Young (1985)'s axiomatizations of the Shapley value are valid on the class of airport games. In this paper we show that Dubey (1982)'s and Moulin and Shenker (1992)'s results can be proved by applying Shapley (1953)'s and Young (1985)'s proofs, that is, those results are direct consequences of Shapley (1953)'s and Young (1985)'s results. Furthermore, we extend Dubey (1982)'s and Moulin and Shenker (1992)'s results to the class of irrigation games, that is, we provide two characterizations of the Shapley value for cost sharing problems given by rooted trees. We also note that for irrigation games the Shapley value is always stable, that is, it is always in the core (Gillies, 1959).
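For intuition, the Shapley value of a small airport game (the subclass mentioned above) can be computed by brute force over all player orderings; for airport games it coincides with the classic sequential sharing of each runway segment among the players that use it. The three-player cost vector below is a made-up example.

```python
from itertools import permutations
from fractions import Fraction

def shapley(players, v):
    # Brute-force Shapley value: average marginal contribution over all orders
    players = list(players)
    phi = {p: Fraction(0) for p in players}
    for order in permutations(players):
        coalition = frozenset()
        for p in order:
            phi[p] += Fraction(v(coalition | {p}) - v(coalition))
            coalition = coalition | {p}
    fact = 1
    for k in range(1, len(players) + 1):
        fact *= k
    return {p: phi[p] / fact for p in players}

# Hypothetical airport game: player i needs a runway of cost costs[i];
# a coalition's cost is the largest need among its members.
costs = [3, 5, 8]

def v(S):
    return max((costs[i] for i in S), default=0)

phi = shapley(range(len(costs)), v)
# Sequential sharing gives 3/3 = 1 to player 0, 1 + 2/2 = 2 to player 1,
# and 1 + 1 + 3 = 5 to player 2; the brute-force values agree.
```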
Abstract:
In 2010, a household survey was carried out in Hungary among 1037 respondents to study consumer preferences and willingness to pay for health care services. In this paper, we use the data from the discrete choice experiments included in the survey to elicit the preferences of health care consumers regarding the choice of health care providers. Regression analysis is used to estimate the effect of the improvement of service attributes (quality, access, and price) on patients' choice, as well as the differences among socio-demographic groups. We also estimate the marginal willingness to pay for improvements in attribute levels by calculating marginal rates of substitution. The results show that respondents from a village or the capital, with low education and bad health status, are more driven by changes in the price attribute when choosing between health care providers. Respondents value the good skills and reputation of the physician and the attitude of the personnel most, followed by modern equipment and maintenance of the office/hospital. Access attributes (travelling and waiting time) are less important. The method of discrete choice experiments is useful for revealing patients' preferences, and might support the development of an evidence-based and sustainable health policy on patient payments.
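The marginal willingness to pay mentioned above is the marginal rate of substitution between an attribute and price, i.e. the negative ratio of their choice-model coefficients. The coefficient values below are purely hypothetical numbers, not estimates from the survey.

```python
# Hypothetical coefficients from a conditional logit choice model
beta = {
    "physician_skill": 0.92,
    "personnel_attitude": 0.61,
    "modern_equipment": 0.45,
    "waiting_time": -0.20,   # per extra hour of waiting
    "price": -0.0015,        # per currency unit
}

# Marginal willingness to pay for each attribute: -beta_attr / beta_price
wtp = {k: -v / beta["price"] for k, v in beta.items() if k != "price"}
```

A positive WTP means respondents would pay for an improvement in that attribute; the negative value for waiting time reflects that longer waits reduce utility.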
Abstract:
The present article assesses agency theory related problems contributing to the fall of shopping centers. The negative effects of the financial and economic downturn that started in 2008 were accentuated in emerging markets like Romania. Several shopping centers were closed or sold through bankruptcy proceedings or forced execution. These failed shopping centers, 10 in number, were selected in order to assess the agency theory problems contributing to their failure; qualitative multiple case studies are used as the research method. Results suggest that in all of the cases the risk-averse behavior of the External Investor-Principal led to risk sharing problems and subsequently to the fall of the shopping centers. In some of the cases Moral Hazard (lack of the Developer-Agent's know-how and experience) as well as Adverse Selection problems could be identified. The novelty of the topic for the shopping center industry and the empirical evidence confer significant academic and practical value on the present article.
Abstract:
In this paper we address the simplest of questions: how should prices be determined for random future payoffs? The treatment is somewhat abstract, but beyond a few well-known theorems of functional analysis no deeper mathematical machinery is needed. The question of the paper is how the expected present value rule can be justified, that is, why the current price of every future payoff should be the discounted expected value of that payoff. The only twist is that we know nothing about the probability measure behind the expectation; we only know that the most cited concept of mathematical finance, the mystical Q measure, exists. The main motivation for writing the paper was an attempt to eliminate the notion of admissible portfolios from the theory of pricing derivative products. As is well known, the theory of derivative pricing is built on the concept of hedging. (...) ____ In the article the author discusses some problems concerning the existence of the martingale measure. In continuous-time models one should restrict the set of self-financing portfolios and introduce the concept of admissible portfolios. But to define the admissible portfolios one should either define them under the martingale measure or turn the set of admissible portfolios into a cone, which makes the interpretation of the pricing formula difficult.
Abstract:
Access to healthcare is a major problem, in which patients are deprived of timely admission to care. Poor access has resulted in significant but avoidable healthcare costs, poor quality of healthcare, and deterioration in general public health. Advanced Access is a simple and direct approach to appointment scheduling in which the majority of a clinic's appointment slots are kept open in order to provide access for immediate or same-day healthcare needs, thereby alleviating the problem of poor access to healthcare. This research formulates a non-linear discrete stochastic mathematical model of the Advanced Access appointment scheduling policy. The model objective is to maximize the expected profit of the clinic subject to constraints on the minimum access to healthcare provided. Patient behavior is characterized with probabilities for no-shows, balking, and related patient choices. Structural properties of the model are analyzed to determine whether Advanced Access patient scheduling is feasible. To solve the complex combinatorial optimization problem, a heuristic that combines a greedy construction algorithm with a neighborhood improvement search was developed. The model and the heuristic were used to evaluate the Advanced Access patient appointment policy against existing policies. The trade-off between profit and access to healthcare is established, and an analysis of input parameters was performed. The trade-off curve is a characteristic curve and was observed to be concave. This implies that there exists an access level at which the clinic can be operated at optimal profit. The results also show that, in many scenarios, clinics can improve access without any decrease in profit by switching from an existing scheduling policy to the Advanced Access policy.
Further, the success of the Advanced Access policy in providing improved access and/or profit depends on the expected value of demand, the variation in demand, and the ratio of demand for same-day and advanced appointments. The contributions of the dissertation are a model of Advanced Access patient scheduling, a heuristic to solve the model, and the use of the model to understand the scheduling policy trade-offs which healthcare clinic managers must make.
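A minimal sketch of the profit side of such a model, assuming a single day, a fixed capacity, independent no-shows, and made-up revenue and overflow-cost figures (none of which are taken from the dissertation):

```python
from math import comb

def expected_profit(n_booked, p_show, revenue_per_visit, overflow_cost, capacity):
    # Expected one-day profit when n_booked patients are scheduled against a
    # fixed capacity; patients show up independently with probability p_show.
    profit = 0.0
    for k in range(n_booked + 1):
        p = comb(n_booked, k) * p_show**k * (1.0 - p_show)**(n_booked - k)
        served = min(k, capacity)            # visits the clinic can serve
        overflow = max(k - capacity, 0)      # arrivals beyond capacity
        profit += p * (served * revenue_per_visit - overflow * overflow_cost)
    return profit

# Search the discrete booking levels for the profit-maximizing one
best = max(range(40), key=lambda n: expected_profit(n, 0.8, 100.0, 150.0, 20))
```

With an 80% show rate, the optimal booking level exceeds the physical capacity, illustrating why overbooking interacts with the access constraints discussed above.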
Abstract:
Recent advances in airborne Light Detection and Ranging (LIDAR) technology allow rapid and inexpensive measurements of topography over large areas. Airborne LIDAR systems usually return a 3-dimensional cloud of point measurements from reflective objects scanned by the laser beneath the flight path. This technology is becoming a primary method for extracting information about different kinds of geometrical objects, such as high-resolution digital terrain models (DTMs), buildings, and trees. In the past decade, LIDAR has attracted increasing interest from researchers in the fields of remote sensing and GIS. Compared to traditional data sources, such as aerial photography and satellite images, LIDAR measurements are not influenced by sun shadow and relief displacement. However, voluminous data pose a new challenge for automated extraction of geometrical information from LIDAR measurements, because many raster image processing techniques cannot be directly applied to irregularly spaced LIDAR points. In this dissertation, a framework is proposed to automatically extract information about different kinds of geometrical objects, such as terrain and buildings, from LIDAR data. These are essential to numerous applications such as flood modeling, landslide prediction, and hurricane animation. The framework consists of several intuitive algorithms. First, a progressive morphological filter was developed to detect non-ground LIDAR measurements. By gradually increasing the window size and elevation difference threshold of the filter, the measurements of vehicles, vegetation, and buildings are removed, while ground data are preserved. Then, building measurements are identified from the non-ground measurements using a region growing algorithm based on a plane-fitting technique.
Raw footprints for segmented building measurements are derived by connecting boundary points and are further simplified and adjusted by several proposed operations to remove noise caused by irregularly spaced LIDAR measurements. To reconstruct 3D building models, the raw 2D topology of each building is first extracted and then further adjusted. Since the adjusting operations for simple building models do not work well on 2D topology, a 2D snake algorithm is proposed to adjust the topology. The 2D snake algorithm consists of newly defined energy functions for topology adjustment and a linear algorithm to find the minimal energy value of 2D snake problems. Data sets from urbanized areas including large institutional, commercial, and small residential buildings were employed to test the proposed framework. The results demonstrate that the proposed framework achieves very good performance.
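A one-dimensional sketch of the progressive morphological filter described above: a grey-scale opening (erosion followed by dilation) is applied with growing windows, and points rising above the opened surface by more than the elevation threshold are flagged as non-ground. The terrain profile and parameters below are toy values, not the dissertation's data.

```python
import numpy as np

def progressive_morphological_filter(z, windows, thresholds):
    # 1-D sketch of the progressive morphological ground filter: apply a
    # grey-scale opening with growing half-window w; points rising above the
    # opened surface by more than threshold dh are marked non-ground.
    n = len(z)
    ground = np.ones(n, dtype=bool)
    surface = z.astype(float).copy()
    for w, dh in zip(windows, thresholds):
        eroded = np.array([surface[max(0, i - w): i + w + 1].min() for i in range(n)])
        opened = np.array([eroded[max(0, i - w): i + w + 1].max() for i in range(n)])
        ground &= (surface - opened) <= dh
        surface = opened
    return ground

# Flat terrain at elevation 0 with a 5 m 'building' five points wide
z = np.zeros(40)
z[15:20] = 5.0
mask = progressive_morphological_filter(z, windows=[1, 3], thresholds=[0.5, 1.0])
```

The small first window preserves the building (it is wider than the window), while the larger second window removes it; ground points survive every pass.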
Abstract:
This research aims at a study of the hybrid flow shop problem, which has parallel batch-processing machines in one stage and discrete-processing machines in other stages to process jobs of arbitrary sizes. The objective is to minimize the makespan for a set of jobs. The problem is denoted as FF: batch1, sj : Cmax. The problem is formulated as a mixed-integer linear program. The commercial solver AMPL/CPLEX is used to solve problem instances to optimality. Experimental results show that AMPL/CPLEX requires considerable time to find the optimal solution for even a small problem, i.e., a 6-job instance requires 2 hours on average. A bottleneck-first-decomposition (BFD) heuristic is proposed in this study to overcome the computational time problem encountered when using the commercial solver. The proposed BFD heuristic is inspired by the shifting bottleneck heuristic. It decomposes the entire problem into three sub-problems and schedules the sub-problems one by one. The proposed BFD heuristic consists of four major steps: formulating sub-problems, prioritizing sub-problems, solving sub-problems, and re-scheduling. For solving the sub-problems, two heuristic algorithms are proposed: one for scheduling a hybrid flow shop with discrete processing machines, and the other for scheduling parallel batching machines (single stage). Both consider job arrival and delivery times. An experimental design is conducted to evaluate the effectiveness of the proposed BFD heuristic, which is further evaluated against a set of common heuristics, including a randomized greedy heuristic and five dispatching rules. The results show that the proposed BFD heuristic outperforms all of these algorithms. To evaluate the quality of the heuristic solution, a procedure is developed to calculate a lower bound on the makespan for the problem under study. The lower bound obtained is tighter than other bounds developed for related problems in the literature.
A meta-search approach based on the Genetic Algorithm concept is developed to evaluate the significance of further improving the solution obtained from the proposed BFD heuristic. The experiment indicates that it reduces the makespan by 1.93% on average within negligible time when the problem size is less than 50 jobs.
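One plausible building block for the parallel-batching sub-problem is first-fit-decreasing batching of jobs with arbitrary sizes against a machine capacity; this is a generic sketch of that idea, not the authors' exact sub-problem heuristic.

```python
def ffd_batches(job_sizes, machine_capacity):
    # First-fit-decreasing batching: sort jobs by size (largest first) and
    # place each into the first batch with enough remaining capacity,
    # opening a new batch when none fits.
    batches = []
    for size in sorted(job_sizes, reverse=True):
        for batch in batches:
            if sum(batch) + size <= machine_capacity:
                batch.append(size)
                break
        else:
            batches.append([size])
    return batches

# Six jobs of arbitrary sizes on a batch machine of capacity 10
batches = ffd_batches([4, 8, 1, 4, 2, 1], machine_capacity=10)
```

Fewer batches generally mean a smaller makespan on a batch-processing machine, since each batch occupies the machine for one processing run regardless of how full it is.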
Abstract:
In the discussion - Ethics, Value Systems And The Professionalization Of Hoteliers by K. Michael Haywood, Associate Professor, School of Hotel and Food Administration, University of Guelph, Haywood initially presents: “Hoteliers and executives in other service industries should realize that the foundation of success in their businesses is based upon personal and corporate value systems and steady commitment to excellence. The author illustrates how ethical issues and manager morality are linked to, and shaped by, the values of executives and the organization, and how improved professionalism can only be achieved through the adoption of a value system that rewards contributions rather than the mere attainment of results.” The bottom line of this discussion is: how does the hotel industry reconcile its behavior with public perception? “The time has come for hoteliers to examine their own standards of ethics, value systems, and professionalism,” Haywood says. And it is ethics that are at the center of this issue; Haywood holds that component in an estimable position. “Hoteliers must become value-driven,” advises Haywood. “They must be committed to excellence both in actualizing their best potentialities and in excelling in all they do. In other words, the professionalization of the hotelier can be achieved through a high degree of self-control, internalized values, codes of ethics, and related socialization processes,” he expands. “Serious ethical issues exist for hoteliers as well as for many business people and professionals in positions of responsibility,” Haywood notes in defining some inter-industry problems.
“The acceptance of kickbacks and gifts from suppliers, the hiding of income from taxation authorities, the lack of interest in installing and maintaining proper safety and security systems, and the raiding of competitors' staffs are common practices,” he offers, with the reasoning that if these problems can occur within ranks, then there is going to be a negative backlash in the public/client arena as well. Haywood divides the key principles of his thesis statement - ethics, value systems, and professionalism – into specific elements, and then continues to broaden the scope of each element. Promotion, product/service, and pricing are additional key components in Haywood’s discussion, and he addresses each with verve and vitality. Haywood references the four character types - craftsmen, jungle fighters, company men, and gamesmen – via a citation to Michael Maccoby, in the portion of the discussion dedicated to morality and success. Haywood closes with a series of questions derived from Lawrence Miller's American Spirit, Visions of a New Corporate Culture, each question designed to focus, shape, and organize management's attention to the values that Miller sets forth in his piece.
Abstract:
Service supply chain (SSC) management has attracted more and more attention from academia and industry. Although extensive product-based supply chain management models and methods exist, they are not applicable to the SSC owing to the differences between services and products. Besides, the existing supply chain management models and methods share some common deficiencies. For these reasons, this paper develops a novel value-oriented model for the management of SSC using the modeling methods of E3-value and Use Case Maps (UCMs). This model can not only resolve the problems of applicability and effectiveness in the existing supply chain management models and methods, but also answer the questions of why the management model takes this form and how to quantify the potential profitability of the supply chain. Meanwhile, the service business processes of the SSC system can be established using its logic procedure. In addition, the model can also determine the value and benefit distribution of the entire service value chain and optimize the operations management performance of the service supply chain.
Abstract:
The unprecedented and relentless growth in the electronics industry is feeding the demand for integrated circuits (ICs) with increasing functionality and performance at minimum cost and power consumption. As predicted by Moore's law, ICs are being aggressively scaled to meet this demand. While the continuous scaling of process technology is reducing gate delays, the performance of ICs is being increasingly dominated by interconnect delays. In an effort to improve submicrometer interconnect performance, to increase packing density, and to reduce chip area and power consumption, the semiconductor industry is focusing on three-dimensional (3D) integration. However, volume production and commercial exploitation of 3D integration are not feasible yet due to significant technical hurdles.
At the present time, interposer-based 2.5D integration is emerging as a precursor to stacked 3D integration. All the dies and the interposer in a 2.5D IC must be adequately tested for product qualification. However, since the structure of 2.5D ICs is different from the traditional 2D ICs, new challenges have emerged: (1) pre-bond interposer testing, (2) lack of test access, (3) limited ability for at-speed testing, (4) high density I/O ports and interconnects, (5) reduced number of test pins, and (6) high power consumption. This research targets the above challenges and effective solutions have been developed to test both dies and the interposer.
The dissertation first introduces the basic concepts of 3D ICs and 2.5D ICs. Prior work on testing of 2.5D ICs is studied. An efficient method is presented to locate defects in a passive interposer before stacking. The proposed test architecture uses e-fuses that can be programmed to connect or disconnect functional paths inside the interposer. The concept of a die footprint is utilized for interconnect testing, and the overall assembly and test flow is described. Moreover, the concept of weighted critical area is defined and utilized to reduce test time. In order to fully determine the location of each e-fuse and the order of functional interconnects in a test path, we also present a test-path design algorithm. The proposed algorithm can generate all test paths for interconnect testing.
In order to test for opens, shorts, and interconnect delay defects in the interposer, a test architecture is proposed that is fully compatible with the IEEE 1149.1 standard and relies on an enhancement of the standard test access port (TAP) controller. To reduce test cost, a test-path design and scheduling technique is also presented that minimizes a composite cost function based on test time and the design-for-test (DfT) overhead in terms of additional through silicon vias (TSVs) and micro-bumps needed for test access. The locations of the dies on the interposer are taken into consideration in order to determine the order of dies in a test path.
To address the scenario of high density of I/O ports and interconnects, an efficient built-in self-test (BIST) technique is presented that targets the dies and the interposer interconnects. The proposed BIST architecture can be enabled by the standard TAP controller in the IEEE 1149.1 standard. The area overhead introduced by this BIST architecture is negligible; it includes two simple BIST controllers, a linear-feedback-shift-register (LFSR), a multiple-input-signature-register (MISR), and some extensions to the boundary-scan cells in the dies on the interposer. With these extensions, all boundary-scan cells can be used for self-configuration and self-diagnosis during interconnect testing. To reduce the overall test cost, a test scheduling and optimization technique under power constraints is described.
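The LFSR and MISR mentioned above can be sketched in a few lines; the 4-bit width and feedback polynomial below are illustrative choices, not the dissertation's design.

```python
def lfsr_step(state, taps, width):
    # Fibonacci LFSR: the feedback bit is the XOR of the tapped state bits,
    # shifted into the low end of the register.
    fb = 0
    for t in taps:
        fb ^= (state >> t) & 1
    return ((state << 1) | fb) & ((1 << width) - 1)

def misr_step(state, data_in, taps, width):
    # MISR: an LFSR step with the parallel response data XORed into the
    # new state, compacting test responses into a signature.
    return lfsr_step(state, taps, width) ^ data_in

# 4-bit maximal-length LFSR (primitive polynomial x^4 + x^3 + 1,
# taps at bits 3 and 2): cycles through all 15 nonzero states.
state, seen = 0b0001, set()
for _ in range(15):
    seen.add(state)
    state = lfsr_step(state, taps=(3, 2), width=4)
```

In a BIST architecture like the one described, the LFSR generates pseudo-random patterns and the MISR compacts the interconnect responses, so only the final signature needs comparison against a fault-free reference.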
In order to accomplish testing with a small number of test pins, the dissertation presents two efficient ExTest scheduling strategies that implement interconnect testing between tiles inside a system-on-chip (SoC) die on the interposer while satisfying the practical constraint that the number of required test pins cannot exceed the number of available pins at the chip level. The tiles in the SoC are divided into groups based on the manner in which they are interconnected. In order to minimize the test time, two optimization solutions are introduced. The first solution minimizes the number of input test pins, and the second minimizes the number of output test pins. In addition, two subgroup configuration methods are further proposed to generate subgroups inside each test group.
Finally, the dissertation presents a programmable method for shift-clock stagger assignment to reduce power supply noise during SoC die testing in 2.5D ICs. An SoC die in a 2.5D IC is typically composed of several blocks, and two neighboring blocks that share the same power rails should not be toggled at the same time during shift. Therefore, the proposed programmable method does not assign the same stagger value to neighboring blocks. The positions of all blocks are first analyzed, and the shared boundary length between blocks is then calculated. Based on the position relationships between the blocks, a mathematical model is presented to derive optimal results for small-to-medium-sized problems. For larger designs, a heuristic algorithm is proposed and evaluated.
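The constraint that neighboring blocks must not share a stagger value is essentially a graph-coloring condition; a greedy sketch over a hypothetical floorplan (not the dissertation's model or heuristic) looks like this:

```python
def assign_staggers(blocks, adjacency):
    # Greedy stagger assignment: each block receives the smallest stagger
    # value not already used by a neighbor sharing its power rails.
    stagger = {}
    for b in blocks:
        used = {stagger[n] for n in adjacency.get(b, ()) if n in stagger}
        s = 0
        while s in used:
            s += 1
        stagger[b] = s
    return stagger

# Hypothetical SoC floorplan: edges mark blocks that share a power rail
adjacency = {"A": ["B", "C"], "B": ["A", "C"], "C": ["A", "B", "D"], "D": ["C"]}
stagger = assign_staggers(["A", "B", "C", "D"], adjacency)
```

A weighted variant could prioritize block pairs by shared boundary length, as the abstract suggests, but greedy coloring already guarantees no two neighbors toggle on the same stagger.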
In summary, the dissertation targets important design and optimization problems related to testing of interposer-based 2.5D ICs. The proposed research has led to theoretical insights, experiment results, and a set of test and design-for-test methods to make testing effective and feasible from a cost perspective.
Abstract:
Bayesian nonparametric models, such as the Gaussian process and the Dirichlet process, have been extensively applied to target kinematics modeling in various applications, including environmental monitoring, traffic planning, endangered species tracking, dynamic scene analysis, autonomous robot navigation, and human motion modeling. As shown by these successful applications, Bayesian nonparametric models are able to adjust their complexity adaptively from data as necessary, and are resistant to overfitting and underfitting. However, most existing works assume that the sensor measurements used to learn the Bayesian nonparametric target kinematics models are obtained a priori, or that the target kinematics can be measured by the sensor at any given time throughout the task. Little work has been done on controlling a sensor with a bounded field of view to obtain measurements of mobile targets that are most informative for reducing the uncertainty of the Bayesian nonparametric models. To present a systematic sensor planning approach to learning Bayesian nonparametric models, the Gaussian process target kinematics model is introduced first; it is capable of describing time-invariant spatial phenomena, such as ocean currents, temperature distributions, and wind velocity fields. The Dirichlet process-Gaussian process target kinematics model is subsequently discussed for modeling mixtures of mobile targets, such as pedestrian motion patterns.
Novel information theoretic functions are developed for these Bayesian nonparametric target kinematics models to represent the expected utility of measurements as a function of sensor control inputs and random environmental variables. A Gaussian process expected Kullback-Leibler (KL) divergence is developed as the expectation of the KL divergence between the current (prior) and posterior Gaussian process target kinematics models with respect to the future measurements. This approach is then extended to develop a new information value function that can be used to estimate target kinematics described by a Dirichlet process-Gaussian process mixture model. A theorem is proposed showing that the novel information theoretic functions are bounded. Based on this theorem, efficient estimators of the new information theoretic functions are designed, which are proved to be unbiased, with the variance of the resulting approximation error decreasing linearly as the number of samples increases. The computational complexity of optimizing the novel information theoretic functions under sensor dynamics constraints is studied and proved to be NP-hard. A cumulative lower bound is then proposed to reduce the computational complexity to polynomial time.
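The KL divergence between Gaussian distributions, which underlies the expected-KL measure described above, has a closed form; the sketch below is the generic multivariate-Gaussian formula, not the dissertation's GP-specific estimator.

```python
import numpy as np

def gaussian_kl(mu0, S0, mu1, S1):
    # Closed-form KL( N(mu0, S0) || N(mu1, S1) ) for k-dimensional Gaussians:
    # 0.5 * ( tr(S1^-1 S0) + (mu1-mu0)^T S1^-1 (mu1-mu0) - k + ln det S1/det S0 )
    k = len(mu0)
    S1_inv = np.linalg.inv(S1)
    d = mu1 - mu0
    return 0.5 * (np.trace(S1_inv @ S0) + d @ S1_inv @ d - k
                  + np.log(np.linalg.det(S1) / np.linalg.det(S0)))

# Identical distributions have zero divergence
kl_same = gaussian_kl(np.zeros(2), np.eye(2), np.zeros(2), np.eye(2))
```

Taking the expectation of this quantity over future measurements, with the posterior covariance induced by a candidate sensing action, gives an expected-KL information measure of the kind the abstract describes.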
Three sensor planning algorithms are developed according to the assumptions on the target kinematics and the sensor dynamics. For problems where the control space of the sensor is discrete, a greedy algorithm is proposed. The efficiency of the greedy algorithm is demonstrated by a numerical experiment with ocean current data obtained from moored buoys. A sweep line algorithm is developed for applications where the sensor control space is continuous and unconstrained. Synthetic simulations as well as physical experiments with ground robots and a surveillance camera are conducted to evaluate the performance of the sweep line algorithm. Moreover, a lexicographic algorithm is designed based on the cumulative lower bound of the novel information theoretic functions for the scenario where the sensor dynamics are constrained. Numerical experiments with real data collected from indoor pedestrians by a commercial pan-tilt camera are performed to examine the lexicographic algorithm. Results from both the numerical simulations and the physical experiments show that the three sensor planning algorithms proposed in this dissertation based on the novel information theoretic functions are superior at learning the target kinematics with little or no prior knowledge.
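For the discrete-control-space case, a greedy planner can be sketched as myopic selection of the control input with the largest information gain at each step; the gain function and viewpoint names below are stand-ins, not the dissertation's estimators.

```python
def greedy_plan(candidates, horizon, info_gain):
    # Greedy planner over a discrete sensor control space: at each step,
    # pick the as-yet-unused control input with the largest myopic gain.
    plan, chosen = [], set()
    for _ in range(horizon):
        best = max((c for c in candidates if c not in chosen), key=info_gain)
        plan.append(best)
        chosen.add(best)
    return plan

# Toy gain: prefer viewpoints with higher a-priori target uncertainty
uncertainty = {"north": 0.9, "east": 0.4, "south": 0.7, "west": 0.2}
plan = greedy_plan(list(uncertainty), horizon=2, info_gain=uncertainty.get)
```

In the actual framework the gain would be an estimate of the expected-KL information measure under the current Bayesian nonparametric model, re-evaluated after each measurement.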