943 results for Branch and bounds
Abstract:
Purpose - This study investigates the relationship marketing (RM) strategy of a retail bank and examines whether - after its implementation - customer relationships were strengthened through perceived improvements in the banking relationship and consequent loyalty towards the bank. Design/methodology/approach - A survey was conducted on two profitability segments, of which the more profitable segment had been directly exposed to a customer-oriented RM strategy, whereas the less profitable segment had been subjected to more sales-oriented marketing communications. Findings - No significant differences were found between the segments in customers' evaluations of the service relationship or their loyalty toward the bank. Furthermore, regression analysis revealed that relationship satisfaction was less important as a determinant of loyalty in the more profitable segment. Research limitations/implications - This study was conducted as a case study of one specific branch of a bank group in Finland, which limits the external validity of its results. It was not possible to ascertain whether, or to what extent, customers of the more profitable segment had received the intended RM treatment. Other limitations are also discussed. Practical implications - Customer orientation is desirable within retail banking, and more studies are needed on the differential drivers of loyalty across customer profitability segments. By identifying the aspects of a banking relationship that are more highly valued among more profitable customers than among less profitable ones, bank managers would be able to devise appropriate strategies for different segments more effectively. Originality/value - The study contributes to the RM literature and the marketing of financial services by providing empirical evidence of the effects of RM activities on customer relationship perceptions in different profitability segments.
Abstract:
For p × n complex orthogonal designs in k variables, where p is the number of channel uses and n is the number of transmit antennas, the maximal rate of the design approaches one half asymptotically as n increases. But for such maximal-rate codes, the decoding delay p increases exponentially. To control the delay, if we impose the restriction p = n, i.e., consider only the square designs, then the rate decreases exponentially as n increases. This necessitates the study of the maximal rate of designs with restrictions of the form p = n+1, p = n+2, p = n+3, etc. In this paper, we study the maximal rate of complex orthogonal designs under the restrictions p = n+1 and p = n+2, and derive upper and lower bounds on the maximal rate in both cases. Also, for the case p = n+1, we show that if the orthogonal design admits only the variables, their negatives, multiples of these by √−1, and zeros as entries of the matrix (other complex linear combinations are not allowed), then the maximal rate always equals the lower bound.
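As a concrete reference point for the square case, the well-known 2 × 2 Alamouti scheme (p = n = 2, k = 2 variables) is a complex orthogonal design of rate 1; the notation G_2 below is illustrative, not the paper's:

```latex
G_2 =
\begin{pmatrix}
 z_1 & z_2 \\
 -z_2^{*} & z_1^{*}
\end{pmatrix},
\qquad
G_2^{\mathsf{H}} G_2 = \left( |z_1|^2 + |z_2|^2 \right) I_2 ,
```

so two complex symbols are conveyed in two channel uses (rate k/p = 1). For larger n, square designs lose rate exponentially, which is what motivates the p = n+1 and p = n+2 relaxations studied here.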
Abstract:
Utilization bounds for Earliest Deadline First (EDF) and Rate Monotonic (RM) scheduling are known and well understood for uniprocessor systems. In this paper, we derive limits on similar bounds for the multiprocessor case, when the individual processors need not be identical. Tasks are partitioned among the processors, and RM scheduling is assumed to be the policy used on individual processors. A minimum limit on the bounds for a 'greedy' class of algorithms is given and proved, since the actual value of the bound depends on the algorithm that allocates the tasks. We also derive the utilization bound of an algorithm which allocates tasks in decreasing order of utilization factors. Knowledge of such bounds allows us to carry out very fast schedulability tests, although we are constrained by the fact that the tests are sufficient but not necessary to ensure schedulability.
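The kind of fast sufficient test such bounds enable can be sketched directly; the snippet below is an illustrative stand-in, not the paper's algorithm: tasks are allocated to processors in decreasing order of utilization under a first-fit policy, and a task is admitted to a processor only while the classic Liu-Layland RM bound n(2^(1/n) − 1) still holds. All names here are assumptions for illustration.

```python
def ll_bound(n):
    # Liu-Layland sufficient utilization bound for n tasks under RM
    return n * (2 ** (1.0 / n) - 1)

def allocate_decreasing(utils, m):
    """First-fit partitioning of task utilizations onto m processors,
    taking tasks in decreasing order of utilization. Sufficient but
    not necessary: returning None does not prove unschedulability."""
    procs = [[] for _ in range(m)]
    for u in sorted(utils, reverse=True):
        for tasks in procs:
            if sum(tasks) + u <= ll_bound(len(tasks) + 1):
                tasks.append(u)
                break
        else:
            return None  # no processor can accept this task
    return procs
```

Because the per-processor test is only sufficient, a failed allocation leaves open whether a smarter partition (or an exact response-time analysis) would succeed, which is exactly the limitation the abstract notes.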
Abstract:
Processor architects face the challenging task of evaluating a large design space consisting of several interacting parameters and optimizations. In order to assist architects in making crucial design decisions, we build linear regression models that relate processor performance to micro-architectural parameters, using simulation-based experiments. We obtain good approximate models through an iterative process in which Akaike's information criterion is used to extract a good linear model from a small set of simulations, and limited further simulation is guided by the model using D-optimal experimental designs. The iterative process is repeated until the desired error bounds are achieved. We used this procedure to establish the relationship of the CPI performance response to 26 key micro-architectural parameters using a detailed cycle-by-cycle superscalar processor simulator. The resulting models provide a significance ordering on all micro-architectural parameters and their interactions, and explain the performance variations of micro-architectural techniques.
Abstract:
Self-similarity, a concept taken from mathematics, is gradually becoming a keyword in musicology. Although a polysemic term, self-similarity often refers to the multi-scalar feature repetition in a set of relationships, and it is commonly valued as an indication of musical coherence and consistency. This investigation provides a theory of musical meaning formation in the context of intersemiosis, that is, the translation of meaning from one cognitive domain to another (e.g. from mathematics to music, or to speech or graphic forms). From this perspective, the degree of coherence of a musical system relies on a synecdochic intersemiosis: a system of related signs within other comparable and correlated systems. This research analyzes the modalities of such correlations, exploring their general and particular traits, and their operational bounds. Looking forward in this direction, the notion of analogy is used as a rich concept through its two definitions quoted in the Classical literature: proportion and paradigm, enormously valuable in establishing criteria of measurement, likeness and affinity. Using quantitative and qualitative methods, evidence is presented to justify a parallel study of different modalities of musical self-similarity. For this purpose, original arguments by Benoît B. Mandelbrot are revised, alongside a systematic critique of the literature on the subject. Furthermore, connecting Charles S. Peirce's synechism with Mandelbrot's fractality is one of the main developments of the present study. This study provides elements for explaining Bolognesi's (1983) conjecture, which states that the most primitive, intuitive and basic musical device is self-reference, extending its functions and operations to self-similar surfaces.
In this sense, this research suggests that, with various modalities of self-similarity, synecdochic intersemiosis acts as a system of systems in coordination with greater or lesser development of structural consistency, and with greater or lesser contextual dependence.
Abstract:
General relativity makes very specific predictions for the gravitational waveforms from inspiralling compact binaries obtained using the post-Newtonian (PN) approximation. We investigate the extent to which the measurement of the PN coefficients, possible with second-generation gravitational-wave detectors such as the Advanced Laser Interferometer Gravitational-Wave Observatory (LIGO) and third-generation detectors such as the Einstein Telescope (ET), could be used to test post-Newtonian theory and to put bounds on a subclass of parametrized post-Einsteinian theories which differ from general relativity in a parametrized sense. We demonstrate this possibility by employing the best inspiralling waveform model for nonspinning compact binaries, which is 3.5PN accurate in phase and 3PN in amplitude. Within the class of theories considered, Advanced LIGO can test the theory at 1.5PN and thus the leading tail term. Future observations of stellar-mass black hole binaries by ET can test the consistency between the various PN coefficients in the gravitational-wave phasing over the mass range of 11-44 M⊙. The choice of the lower frequency cutoff is important for testing post-Newtonian theory using the ET. The bias in the test arising from the assumption of nonspinning binaries is indicated.
Abstract:
A distributed system is a collection of networked autonomous processing units which must work in a cooperative manner. Currently, large-scale distributed systems, such as various telecommunication and computer networks, are abundant and used in a multitude of tasks. The field of distributed computing studies what can be computed efficiently in such systems. Distributed systems are usually modelled as graphs where nodes represent the processors and edges denote communication links between processors. This thesis concentrates on the computational complexity of the distributed graph colouring problem. The objective of the graph colouring problem is to assign a colour to each node in such a way that no two nodes connected by an edge share the same colour. In particular, it is often desirable to use only a small number of colours. This task is a fundamental symmetry-breaking primitive in various distributed algorithms. A graph that has been coloured in this manner using at most k different colours is said to be k-coloured. This work examines the synchronous message-passing model of distributed computation: every node runs the same algorithm, and the system operates in discrete synchronous communication rounds. During each round, a node can communicate with its neighbours and perform local computation. In this model, the time complexity of a problem is the number of synchronous communication rounds required to solve the problem. It is known that 3-colouring any k-coloured directed cycle requires at least ½(log* k - 3) communication rounds and is possible in ½(log* k + 7) communication rounds for all k ≥ 3. This work shows that for any k ≥ 3, colouring a k-coloured directed cycle with at most three colours is possible in ½(log* k + 3) rounds. In contrast, it is also shown that for some values of k, colouring a directed cycle with at most three colours requires at least ½(log* k + 1) communication rounds. 
Furthermore, in the case of directed rooted trees, reducing a k-colouring into a 3-colouring requires at least log* k + 1 rounds for some k and is possible in log* k + 3 rounds for all k ≥ 3. The new positive and negative results are derived using computational methods, as the existence of distributed colouring algorithms corresponds to the colourability of so-called neighbourhood graphs. The colourability of these graphs is analysed using Boolean satisfiability (SAT) solvers. Finally, this thesis shows that similar methods are applicable in capturing the existence of distributed algorithms for other graph problems, such as the maximal matching problem.
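The log* function governing these bounds, together with one round of the classic Cole-Vishkin colour-reduction technique (a standard method in this setting, shown here as background rather than as the thesis's own construction), can be sketched as follows:

```python
def log_star(k):
    """Iterated base-2 logarithm (integer floor variant): how many times
    log2 must be applied to k before the value drops to at most 1."""
    count = 0
    while k > 1:
        k = k.bit_length() - 1  # floor(log2 k) for integer k >= 1
        count += 1
    return count

def cole_vishkin_step(own, succ):
    """One colour-reduction round on a directed cycle. Each node knows its
    own colour and its successor's (both in [0, k), own != succ since the
    input colouring is proper) and outputs a colour in O(log k) that is
    again proper: neighbours pick different bit positions or different bits."""
    diff = own ^ succ
    i = (diff & -diff).bit_length() - 1  # lowest bit where the colours differ
    return 2 * i + ((own >> i) & 1)
```

Iterating the reduction step O(log* k) times is what brings a k-colouring down to a constant number of colours, matching the ½(log* k) + O(1) round complexities quoted above up to constants.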
Abstract:
Utilization of the aryl-beta-glucosides salicin or arbutin in most wild-type strains of E. coli is achieved by a single-step mutational activation of the bgl operon. Shigella sonnei, a branch of the diverse E. coli strain tree, requires two sequential mutational steps for achieving salicin utilization, as the bglB gene, encoding the phospho-beta-glucosidase B, harbors an inactivating insertion. We show that in a natural isolate of S. sonnei, transcriptional activation of the gene SSO1595, encoding a phospho-beta-glucosidase, enables salicin utilization with the permease function being provided by the activated bgl operon. SSO1595 is absent in most commensal strains of E. coli, but is present in extra-intestinal pathogens as bgcA, a component of the bgc operon that enables beta-glucoside utilization at low temperature. Salicin utilization in an E. coli bglB laboratory strain also requires a two-step activation process, leading to expression of BglF, the PTS-associated permease encoded by the bgl operon, and AscB, the phospho-beta-glucosidase B encoded by the silent asc operon. BglF function is needed since AscF is unable to transport beta-glucosides, as it lacks the IIA domain involved in phospho-relay. Activation of the asc operon in the Sal(+) mutant is by a promoter-up mutation, and the activated operon is subject to induction. The pathway to achieve salicin utilization is therefore diverse in these two evolutionarily related organisms; however, both show cooperation between two silent genetic systems to achieve a new metabolic capability under selection.
Abstract:
Various Tb theorems play a key role in modern harmonic analysis. They provide characterizations for the boundedness of Calderón-Zygmund type singular integral operators. The general philosophy is that to conclude the boundedness of an operator T on some function space, one needs only to test it on some suitable function b. The main object of this dissertation is to prove very general Tb theorems. The dissertation consists of four research articles and an introductory part. The framework is general with respect to the domain (a metric space), the measure (an upper doubling measure) and the range (a UMD Banach space). Moreover, the testing conditions used are weak. In the first article a (global) Tb theorem on non-homogeneous metric spaces is proved. One of the main technical components is the construction of a randomization procedure for the metric dyadic cubes. The difficulty lies in the fact that metric spaces do not, in general, have a translation group. Also, the measures considered are more general than in the existing literature. This generality is genuinely important for some applications, including the result of Volberg and Wick concerning the characterization of measures for which the analytic Besov-Sobolev space embeds continuously into the space of square integrable functions. In the second article a vector-valued extension of the main result of the first article is considered. This theorem is a new contribution to the vector-valued literature, since previously such general domains and measures were not allowed. The third article deals with local Tb theorems both in the homogeneous and non-homogeneous situations. A modified version of the general non-homogeneous proof technique of Nazarov, Treil and Volberg is extended to cover the case of upper doubling measures. This technique is also used in the homogeneous setting to prove local Tb theorems with weak testing conditions introduced by Auscher, Hofmann, Muscalu, Tao and Thiele.
This gives a completely new and direct proof of such results utilizing the full force of non-homogeneous analysis. The final article has to do with sharp weighted theory for maximal truncations of Calderón-Zygmund operators. This includes a reduction to certain Sawyer-type testing conditions, which are in the spirit of Tb theorems and thus of the dissertation. The article extends the sharp bounds previously known only for untruncated operators, and also proves sharp weak type results, which are new even for untruncated operators. New techniques are introduced to overcome the difficulties introduced by the non-linearity of maximal truncations.
Abstract:
The equatorial Indian Ocean is warmer in the east, has a deeper thermocline and mixed layer, and supports a more convective atmosphere than in the west. During certain years, the eastern Indian Ocean becomes unusually cold, anomalous winds blow from east to west along the equator and southeastward off the coast of Sumatra, the thermocline and mixed layer lift up, and atmospheric convection is suppressed. At the same time, the western Indian Ocean becomes warmer and enhances atmospheric convection. This coupled ocean-atmosphere phenomenon, in which convection, winds, sea surface temperature (SST) and the thermocline take part actively, is known as the Indian Ocean Dipole (IOD). Propagation of baroclinic Kelvin and Rossby waves excited by anomalous winds plays an important role in the development of the SST anomalies associated with the IOD. Since the mean thermocline in the Indian Ocean is deep compared to the Pacific, it was believed for a long time that the Indian Ocean is passive and merely responds to atmospheric forcing. The discovery of the IOD and the studies that followed demonstrate that the Indian Ocean can sustain its own intrinsic coupled ocean-atmosphere processes. About 50% of the IOD events in the past 100 years have co-occurred with the El Nino Southern Oscillation (ENSO), and the other half have occurred independently. Coupled models have been able to reproduce IOD events, and process experiments by such models - switching ENSO on and off - support the hypothesis, based on observations, that IOD events develop either in the presence or absence of ENSO. There is a general consensus among different coupled models, as well as analyses of data, that IOD events co-occurring with ENSO are forced by a zonal shift of the descending branch of the Walker cell over to the eastern Indian Ocean. Processes that initiate the IOD in the absence of ENSO are not clear, although several studies suggest that anomalies of the Hadley circulation are the most probable forcing function.
The impact of the IOD is felt in the vicinity of the Indian Ocean as well as in remote regions. During IOD events, the biological productivity of the eastern Indian Ocean increases, and this in turn leads to the death of corals over a large area. Moreover, the IOD affects rainfall over the maritime continent, the Indian subcontinent, Australia and eastern Africa. The maritime continent and Australia suffer from deficit rainfall, whereas India and east Africa receive an excess. Despite the successful hindcast of the 2006 IOD by a coupled model, forecasting IOD events and their implications for rainfall variability remains a major challenge, as does understanding the reasons behind the increase in the frequency of IOD events in recent decades.
Abstract:
There is a huge knowledge gap in our understanding of many terrestrial carbon cycle processes. In this paper, we investigate the bounds on terrestrial carbon uptake over India that arise solely due to CO2-fertilization. For this purpose, we use a terrestrial carbon cycle model and consider two extreme scenarios: in one case, unlimited CO2-fertilization is allowed for the terrestrial vegetation with the CO2 concentration level at 735 ppm, and in the other, CO2-fertilization is capped at year-1975 levels. Our simulations show that, under equilibrium conditions, modeled carbon stocks in natural potential vegetation increase by 17 Gt-C with unlimited fertilization for CO2 levels and climate change corresponding to the end of the 21st century, but they decline by 5.5 Gt-C if fertilization is limited to 1975 levels of CO2 concentration. The carbon stock changes are dominated by forests. The area covered by natural potential forests increases by about 36% in the unlimited-fertilization case but decreases by 15% in the fertilization-capped case. Thus, the assumption regarding CO2-fertilization has the potential to alter the sign of terrestrial carbon uptake over India. Our model simulations also imply that the maximum potential terrestrial sequestration over India, under equilibrium conditions and the best-case scenario of unlimited CO2-fertilization, is only 18% of the 21st-century SRES A2 scenario emissions from India. The limited uptake potential of the natural potential vegetation suggests that reduction of CO2 emissions and afforestation programs should be top priorities.
Abstract:
Let G be a simple, undirected, finite graph with vertex set V(G) and edge set E(G). A k-dimensional box is a Cartesian product of closed intervals [a(1), b(1)] x [a(2), b(2)] x ... x [a(k), b(k)]. The boxicity of G, box(G), is the minimum integer k such that G can be represented as the intersection graph of k-dimensional boxes, i.e. each vertex is mapped to a k-dimensional box and two vertices are adjacent in G if and only if their corresponding boxes intersect. Let P = (S, P) be a poset, where S is the ground set and P is a reflexive, anti-symmetric and transitive binary relation on S. The dimension of P, dim(P), is the minimum integer t such that P can be expressed as the intersection of t total orders. Let G(P) be the underlying comparability graph of P. It is a well-known fact that posets with the same underlying comparability graph have the same dimension. The first result of this paper links the dimension of a poset to the boxicity of its underlying comparability graph. In particular, we show that for any poset P, box(G(P))/(chi(G(P)) - 1) <= dim(P) <= 2 box(G(P)), where chi(G(P)) is the chromatic number of G(P) and chi(G(P)) is not equal to 1. The second result of the paper relates the boxicity of a graph G with a natural partial order associated with its extended double cover, denoted G(c), whose vertex set consists of two copies A and B of the vertex set of G. Let P(c) be the natural height-2 poset associated with G(c), obtained by making A the set of minimal elements and B the set of maximal elements. We show that box(G)/2 <= dim(P(c)) <= 2 box(G) + 4. These results have some immediate and significant consequences. The upper bound dim(P) <= 2 box(G(P)) allows us to derive hitherto unknown upper bounds for poset dimension. In the other direction, using the already known bounds for partial order dimension, we get the following: (1) The boxicity of any graph with maximum degree Delta is O(Delta log^2 Delta), which is an improvement over the best known upper bound of Delta^2 + 2. (2) There exist graphs with boxicity Omega(Delta log Delta).
This disproves a conjecture that the boxicity of a graph is O(Delta). (3) There exists no polynomial-time algorithm to approximate the boxicity of a bipartite graph on n vertices within a factor of O(n^(0.5-epsilon)) for any epsilon > 0, unless NP = ZPP.
Abstract:
Employing multiple base stations is an attractive approach to enhancing the lifetime of wireless sensor networks. In this paper, we address the fundamental question concerning the limits on network lifetime in sensor networks when multiple base stations are deployed as data sinks. Specifically, we derive upper bounds on the network lifetime when multiple base stations are employed, and obtain optimum locations of the base stations (BSs) that maximize these lifetime bounds. For the case of two BSs, we jointly optimize the BS locations by maximizing the lifetime bound using a genetic-algorithm-based optimization. Joint optimization for a larger number of BSs is complex; hence, for the case of three BSs, we optimize the third BS location using the previously obtained optimum locations of the first two. We also provide simulation results that validate the lifetime bounds and the optimum locations of the BSs.
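The flavour of the genetic-algorithm placement step can be sketched with a toy objective: here a hypothetical stand-in for the paper's lifetime bound, namely minimising the squared distance from the worst-placed sensor to its nearest BS. The fitness function, operators, and parameters below are all illustrative assumptions, not the authors' formulation.

```python
import random

def lifetime_proxy(sensors, bss):
    # Hypothetical stand-in objective: the node draining fastest is assumed
    # to be the one farthest from its nearest BS, so minimise that distance.
    worst = max(min((sx - bx) ** 2 + (sy - by) ** 2 for bx, by in bss)
                for sx, sy in sensors)
    return -worst  # higher is better

def ga_place(sensors, n_bs=2, pop=30, gens=60, seed=1):
    """Toy genetic algorithm placing n_bs base stations in the unit square."""
    rng = random.Random(seed)
    popn = [[(rng.random(), rng.random()) for _ in range(n_bs)]
            for _ in range(pop)]
    for _ in range(gens):
        popn.sort(key=lambda ind: lifetime_proxy(sensors, ind), reverse=True)
        elite = popn[:pop // 2]          # keep the fitter half
        children = []
        for _ in range(pop - len(elite)):
            a, b = rng.sample(elite, 2)  # uniform crossover per BS slot
            child = [a[i] if rng.random() < 0.5 else b[i] for i in range(n_bs)]
            if rng.random() < 0.3:       # mutation: jitter one BS location
                i = rng.randrange(n_bs)
                child[i] = (child[i][0] + rng.gauss(0, 0.1),
                            child[i][1] + rng.gauss(0, 0.1))
            children.append(child)
        popn = elite + children
    return max(popn, key=lambda ind: lifetime_proxy(sensors, ind))
```

With two sensor clusters in opposite corners, the search tends to place one BS near each cluster, which is the behaviour the joint two-BS optimization in the paper exploits.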
Abstract:
Flexible cantilever pipes conveying fluids at high velocity are analysed for their dynamic response and stability behaviour. The Young's modulus and mass per unit length of the pipe material have a stochastic distribution. The stochastic fields that model the fluctuations of Young's modulus and mass density are characterized through their respective means, variances, and autocorrelation functions or the equivalent power spectral density functions. The stochastic non-self-adjoint partial differential equation is solved for the moments of characteristic values, by treating the point fluctuations as stochastic perturbations. The second-order statistics of vibration frequencies and mode shapes are obtained. The critical flow velocity is first evaluated using the averaged eigenvalue equation. Through the eigenvalue equation, the statistics of the vibration frequencies are transformed to yield the statistics of the critical flow velocity. Expressions for the bounds of the eigenvalues are obtained, which in turn yield the corresponding bounds for critical flow velocities.