Abstract:
The foundation construction process is a key determinant of success in construction engineering. Among the many deep excavation methods, the diaphragm wall method is used more frequently in Taiwan than anywhere else in the world. Traditionally, the sequencing of diaphragm wall unit construction activities is established phase by phase using heuristics. However, this approach creates conflicts between the final phase of the engineering construction and the unit construction, and adversely affects the planned construction time. To avoid this situation, this study applies management science to diaphragm wall unit construction, formulating the sequencing task as a multi-objective combinatorial optimization problem. Because the mathematical model is multi-objective and combinatorially explosive (the problem is NP-complete), a 2-type Self-Learning Neural Network (SLNN) is proposed to solve the sequencing problem for N = 12, 24 and 36 diaphragm wall units. To assess the reliability of the results, the SLNN is compared with a random search method. The tests show that the SLNN is superior to random search in both solution quality and solving efficiency.
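The abstract gives no algorithmic detail for either method; purely as an illustration of the random-search baseline that the SLNN is benchmarked against, a minimal sketch follows. The names (random_search, cost_fn, n_iters) are hypothetical, and the scalarised multi-objective cost is assumed to be supplied by the caller.

import random

def random_search(n_units, cost_fn, n_iters=10_000, seed=0):
    """Random-search baseline: sample permutations of the N diaphragm
    wall units and keep the best sequence found (illustrative sketch)."""
    rng = random.Random(seed)
    best_seq, best_cost = None, float("inf")
    for _ in range(n_iters):
        seq = list(range(n_units))
        rng.shuffle(seq)
        cost = cost_fn(seq)  # assumed scalarised multi-objective cost
        if cost < best_cost:
            best_seq, best_cost = seq, cost
    return best_seq, best_cost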
Abstract:
The problem of complexity is particularly relevant to the field of control engineering, since many engineering problems are inherently complex. The inherent complexity is such that straightforward computational problem solutions often produce very poor results. Although parallel processing can alleviate the problem to some extent, it is artificial neural networks (in various forms) which have recently proved particularly effective, even in dealing with the causes of the problem itself. This paper presents an overview of the current neural network research being undertaken. Such research aims to solve the complex problems found in many areas of science and engineering today.
Abstract:
The problem considered is that of a manipulator operating in a noisy workspace and required to move from an initial fixed position P0 to a final position Pf. However, Pf is corrupted by noise, giving rise to P̂f, which may be obtained by sensors. The use of learning automata is proposed to tackle this problem. An automaton is placed at each joint of the manipulator, and at each instant the joint moves according to the action chosen by its automaton (forward, backward, stationary). Rewarding or penalising all of the automata simultaneously avoids the inverse kinematics computations that would be necessary if the distance of each joint from the final position had to be calculated. Three variable-structure learning algorithms are used: the discretized linear reward-penalty (DLR-P), the linear reward-penalty (LR-P) and a nonlinear scheme. Each algorithm is tested separately with two (forward, backward) and three (forward, backward, stationary) actions.
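The abstract does not reproduce the update rules; for orientation, the standard textbook form of the linear reward-penalty (LR-P) update for an r-action automaton is sketched below. The learning rates a and b are assumptions; the paper's exact parameters and variants are not given in the abstract.

def lrp_update(p, i, rewarded, a=0.05, b=0.05):
    """One standard linear reward-penalty (LR-P) step on the action
    probability vector p after the automaton chose action i.
    (Textbook form, not necessarily the paper's exact scheme.)"""
    r = len(p)
    if rewarded:
        q = [(1 - a) * pj for pj in p]       # shrink all probabilities...
        q[i] = p[i] + a * (1 - p[i])         # ...and boost the rewarded action
    else:
        q = [b / (r - 1) + (1 - b) * pj for pj in p]  # redistribute to others
        q[i] = (1 - b) * p[i]                # penalise the chosen action
    return q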
Abstract:
The authors consider the problem of a robot manipulator operating in a noisy workspace. The manipulator is required to move from an initial position P(i) to a final position P(f). P(i) is assumed to be completely defined, whereas P(f) is obtained by a sensing operation and is assumed to be fixed but unknown. The authors' approach to this problem involves the use of three learning algorithms: the discretized linear reward-penalty (DLR-P) automaton, the linear reward-penalty (LR-P) automaton and a nonlinear reinforcement scheme. An automaton is placed at each joint of the robot and, acting as a decision maker, plans the trajectory based on noisy measurements of P(f).
Abstract:
Kinetic constants for SO₄²⁻ transport by upper and lower rat ileum in vitro have been determined by computer fitting of rate vs. concentration data obtained using the everted sac technique. MoO₄²⁻ inhibition of this transport is competitive, and kinetic constants for the inhibition were determined in the same way. Transport is also inhibited by the anions WO₄²⁻, S₂O₃²⁻ and SeO₄²⁻. These anions have no effect on the transport of L-valine. Low SO₄²⁻ transport rates were observed in sacs from animals fed a high-molybdenum diet. The significance of the results with respect to the problem of molybdate toxicity in animals is discussed and related to the known protective effect of SO₄²⁻.
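For reference, if the carrier-mediated SO₄²⁻ transport follows Michaelis–Menten kinetics (the form implied, though not stated, by the fitting of rate vs. concentration data), competitive inhibition by MoO₄²⁻ takes the standard form

v = \frac{V_{\max}\,[\mathrm{SO_4^{2-}}]}{K_m\left(1 + [\mathrm{MoO_4^{2-}}]/K_i\right) + [\mathrm{SO_4^{2-}}]},

in which the inhibitor raises the apparent K_m but leaves V_{\max} unchanged; fitting curves at several inhibitor concentrations yields K_m, V_{\max} and the inhibition constant K_i.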
Abstract:
Co-combustion performance trials of meat and bone meal (MBM) and peat were conducted using a bubbling fluidized bed (BFB) reactor. The trials studied the effects of co-combusting MBM and peat on flue gas emissions, bed fluidization, the tendency of ash to agglomerate in the bed, and the composition and quality of the ash. MBM was mixed with peat at six levels between 15% and 100%. Emissions were predominantly below regulatory limits. CO concentrations in the flue gas exceeded the 100 mg/m³ limit only upon combustion of pure MBM. SO₂ emissions were found to be over the limit of 50 mg/m³, while in all trials NOₓ emissions were below the limit of 300 mg/m³. The HCl content of the flue gases varied around the limit of 30 mg/m³. VOC emissions, however, were within their limits. Bed agglomeration was avoided when the bed temperature was about 850 °C and only 20% MBM was co-combusted. This study indicates that a pilot-scale BFB reactor can, under optimum conditions, be operated within emission limits when MBM is used as a co-fuel with peat, providing a basis for further scale-up development work in industrial-scale BFB applications.
Abstract:
Although the role of the academic head of department (HoD) has always been important to university management and performance, the increasing significance given to bureaucracy, academic performance and productivity, and government accountability has greatly elevated the importance of this position. Previous research and anecdotal evidence suggest that as academics move into HoD roles, usually with little or no training, they struggle to manage key aspects of their role adequately. It is this problem, and its manifestations, that forms the research focus of this study. Based on the research question "What are the career trajectories of academics who become HoDs in a selected post-1992 university?", the study aimed to achieve greater understanding of why academics become HoDs, what it is like being a HoD, and how the experience influences their future career plans. The study adopts an interpretive approach, in line with social constructivism. Edited topical life history interviews were undertaken with 17 male and female HoDs, from a range of disciplines, in a post-1992 UK university. These data were analysed using coding, categorisation and theme formation techniques, and profiles of each of the respondents were developed. The findings suggest that academics who become HoDs not only need the capacity to assume a range of personal and professional identities, but also need to adopt and switch between them regularly. Whether individuals can successfully balance and manage these multiple identities, or whether they experience major conflicts and difficulties within or between them, greatly affects their experience of being a HoD and may influence their subsequent career decisions. It is claimed that the focus, approach and analytical framework, based on the interrelationships between the concepts of socialisation, identity and career trajectory, provide a distinct and original contribution to knowledge in this area. Although the results of this study cannot be generalised, the findings may help other individuals and institutions move towards a firmer understanding of the academic who becomes a HoD, in relation to theory, practice and future research.
Abstract:
The Cold War in the late 1940s blunted attempts by the Truman administration to extend the scope of government in areas such as health care and civil rights. In California, the combined weakness of the Democratic Party in electoral politics and the importance of fellow travelers and communists in state liberal politics made the problem of how to advance the left at a time of heightened Cold War tensions particularly acute. Yet by the early 1960s a new generation of liberal politicians had gained political power in the Golden State and was constructing a greatly expanded welfare system as a way of cementing their hold on power. In this article I argue that the New Politics of the 1970s, shaped nationally by Vietnam and by the social upheavals of the 1960s over questions of race, gender, sexuality, and economic rights, possessed particular power in California because many activists drew on the longer-term experiences of a liberal politics receptive to earlier anti-Cold War struggles. A desire to use political involvement as a form of social networking had given California a strong Popular Front, and in some respects the power of new liberalism was an offspring of those earlier battles.
Abstract:
This paper proposes a practical approach to enhancing Quality of Service (QoS) routing by providing alternative or repair paths in the event of a breakage of a working path. The proposed scheme guarantees that every Protected Node (PN) is connected to a multi-repair path, such that no further failure or breakage of single or double repair paths can cause any simultaneous loss of connectivity between an ingress node and an egress node. Links to be protected in an MPLS network are predefined, and a Label Switched Path (LSP) request involves the establishment of a working path. The use of multi-protection paths permits the formation of numerous protection paths, allowing greater flexibility. Our analysis examined several methods, including single, double and multi-repair routes and the prioritization of signals along the protected paths, with the aims of improving QoS and throughput and reducing protection path placement cost, delay, congestion and collisions. The results indicated that creating multi-repair paths and prioritizing packets reduces delay and increases throughput: the delays at the ingress/egress LSPs were low compared with those observed when the signals had not been classified. The proposed scheme therefore provides a means of improving QoS in MPLS path restoration using available network resources. Prioritizing packets in the data plane revealed that the amount of traffic transmitted using medium- and low-priority Label Switched Paths (LSPs) has no impact on the explicit rate of the high-priority LSP, so the problem of a knock-on effect is eliminated.
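The paper's placement algorithm is not described in the abstract; as a toy illustration of the underlying idea of pre-computing edge-disjoint repair paths between an ingress and an egress, one might write the following, using networkx's edge_disjoint_paths as a stand-in rather than the paper's method.

import networkx as nx

def precompute_repair_paths(g, ingress, egress, k=3):
    """Toy sketch: find up to k edge-disjoint paths between ingress and
    egress; the shortest serves as the working path, the rest as repair
    paths. (Illustrative stand-in, not the paper's placement algorithm.)"""
    paths = sorted(nx.edge_disjoint_paths(g, ingress, egress), key=len)
    return paths[0], paths[1:k]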
Abstract:
The problem of water wave scattering by a circular ice floe, floating in fluid of finite depth, is formulated and solved numerically. Unlike previous investigations of such situations, here we allow the thickness of the floe (and the fluid depth) to vary axisymmetrically and also incorporate a realistic non-zero draught. A numerical approximation to the solution of this problem is obtained to an arbitrary degree of accuracy by combining a Rayleigh–Ritz approximation of the vertical motion with an appropriate variational principle. This numerical solution procedure builds upon the work of Bennetts et al. (2007, J. Fluid Mech., 579, 413–443). As part of the numerical formulation, we utilize a Fourier cosine expansion of the azimuthal motion, resulting in a system of ordinary differential equations to solve in the radial coordinate for each azimuthal mode. The displayed results concentrate on the response of the floe rather than the scattered wave field, and show that the effects of introducing the new features of varying floe thickness and a realistic draught are significant.
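Schematically, and in generic notation rather than the paper's, the Fourier cosine expansion referred to takes the form

\phi(r,\theta,z) \approx \sum_{m=0}^{M} \phi_m(r,z)\cos(m\theta),

so that, once the Rayleigh–Ritz approximation has removed the vertical dependence, each azimuthal mode m contributes an independent system of ordinary differential equations in the radial coordinate r.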
Abstract:
In terrestrial television transmission, multiple paths of various lengths can occur between the transmitter and the receiver. Such paths occur because of reflections from objects outside the direct transmission path. The multipath signals arriving at the receiver are all detected along with the intended signal, causing time-displaced replicas called 'ghosts' to appear on the television picture. With an increasing number of people living within built-up areas, ghosting is becoming commonplace and deghosting is therefore becoming increasingly important. This thesis uses a deterministic time domain approach to deghosting, resulting in a simple solution to the problem of removing ghosts. A new video detector is presented which reduces the synchronous detector local oscillator phase error, caused by any practical size of ghost, to a lower level than has previously been achieved. With the new detector, dispersion of the video signal is minimised, and a known closed-form time domain description of the individual ghost components within the detected video is subsequently obtained. Developed from mathematical descriptions of the detected video, a new specific deghoster filter structure is presented which is capable of removing both the in-phase (I) and the phase quadrature (Q) induced ghost signals arising from VSB operation. The new deghoster filter requires much less hardware than any previous deghoster capable of removing both I and Q ghost components. A new channel identification algorithm, based upon simple correlation techniques, was also developed to find the delay and complex amplitude characteristics of individual ghosts. The result of the channel identification is then passed to the new I and Q deghoster filter for ghost cancellation. Five papers have been published from the research work performed for this thesis.
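The thesis's algorithm operates on complex (I/Q) ghosts; purely to illustrate the correlation idea, a real-valued toy sketch of estimating a single ghost's delay and relative amplitude might look as follows (all names are hypothetical, and float arrays are assumed).

import numpy as np

def estimate_ghost(reference, received):
    """Toy correlation-based channel identification: locate the ghost as
    the largest cross-correlation peak away from lag zero and estimate
    its amplitude relative to the direct path (real-valued sketch)."""
    corr = np.correlate(received, reference, mode="full")
    lags = np.arange(-len(reference) + 1, len(received))
    direct = corr[lags == 0][0]   # direct-path peak at lag 0
    corr[lags == 0] = 0.0         # suppress it, then find the ghost peak
    k = int(np.argmax(np.abs(corr)))
    return int(lags[k]), corr[k] / direct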
Abstract:
Recently, major processor manufacturers have announced a dramatic shift in their paradigm for increasing computing power over the coming years. Instead of focusing on faster clock speeds and more powerful single-core CPUs, the trend clearly goes towards multi-core systems. This will also result in a paradigm shift in the development of algorithms for computationally expensive tasks, such as data mining applications. Obviously, work on parallel algorithms is not new per se, but concentrated efforts in the many application domains are still missing. Multi-core systems, but also clusters of workstations and even large-scale distributed computing infrastructures, provide new opportunities and pose new challenges for the design of parallel and distributed algorithms. Since data mining and machine learning systems rely on high performance computing systems, research on the corresponding algorithms must be at the forefront of parallel algorithm research in order to keep pushing data mining and machine learning applications to be more powerful and, especially for the former, interactive. To bring together researchers and practitioners working in this exciting field, a workshop on parallel data mining was organized as part of PKDD/ECML 2006 (Berlin, Germany). The six contributions selected for the program describe various aspects of data mining and machine learning approaches featuring low to high degrees of parallelism. The first contribution addresses the classic problem of distributed association rule mining and focuses on communication efficiency to improve the state of the art. After this, a parallelization technique for speeding up decision tree construction by means of thread-level parallelism for shared memory systems is presented. The next paper discusses the design of a parallel approach to the frequent subgraph mining problem for distributed memory systems; this approach is based on a hierarchical communication topology to solve issues related to multi-domain computational environments. The fourth paper describes the combined use and customization of software packages to facilitate top-down parallelism in the tuning of Support Vector Machines (SVMs), and the next contribution presents an interesting idea concerning parallel training of Conditional Random Fields (CRFs) and motivates their use in labeling sequential data. The last contribution focuses on very efficient feature selection: it describes a parallel algorithm for feature selection from random subsets. Selecting the papers included in this volume would not have been possible without the help of an international Program Committee that provided detailed reviews for each paper. We would also like to thank Matthew Otey, who helped with publicity for the workshop.
Abstract:
For linear multivariable time-invariant continuous or discrete-time singular systems it is customary to use a proportional feedback control in order to achieve a desired closed loop behaviour. Derivative feedback is rarely considered. This paper examines how derivative feedback in descriptor systems can be used to alter the structure of the system pencil under various controllability conditions. It is shown that derivative and proportional feedback controls can be constructed such that the closed loop system has a given form and is also regular and has index at most 1. This property ensures the solvability of the resulting system of dynamic-algebraic equations. The construction procedures used to establish the theory are based only on orthogonal matrix decompositions and can therefore be implemented in a numerically stable way. The problem of pole placement with derivative feedback alone and in combination with proportional state feedback is also investigated. A computational algorithm for improving the “conditioning” of the regularized closed loop system is derived.
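Schematically, in generic notation rather than the paper's: for a descriptor system E\dot{x} = Ax + Bu, combined proportional and derivative feedback u = Fx - G\dot{x} gives the closed loop

(E + BG)\,\dot{x} = (A + BF)\,x,

so F and G can be chosen to reshape the pencil \lambda(E + BG) - (A + BF) so that it is regular and of index at most 1, which is what guarantees the solvability of the resulting dynamic-algebraic equations.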
Abstract:
This paper presents novel observer-based techniques for the estimation of flow demands in gas networks, from sparse pressure telemetry. A completely observable model is explored, constructed by incorporating difference equations that assume the flow demands are steady. Since the flow demands usually vary slowly with time, this is a reasonable approximation. Two techniques for constructing robust observers are employed: robust eigenstructure assignment and singular value assignment. These techniques help to reduce the effects of the system approximation. Modelling error may be further reduced by making use of known profiles for the flow demands. The theory is extended to deal successfully with the problem of measurement bias. The pressure measurements available are subject to constant biases which degrade the flow demand estimates, and such biases need to be estimated. This is achieved by constructing a further model variation that incorporates the biases into an augmented state vector, but now includes information about the flow demand profiles in a new form.
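Schematically (generic notation, not the paper's), a constant measurement bias can be estimated by appending it to the state with trivial dynamics:

z_k = \begin{bmatrix} x_k \\ b \end{bmatrix},\qquad z_{k+1} = \begin{bmatrix} A & 0 \\ 0 & I \end{bmatrix} z_k + \begin{bmatrix} B \\ 0 \end{bmatrix} u_k,\qquad y_k = \begin{bmatrix} C & I \end{bmatrix} z_k + v_k,

so an observer for the augmented state z_k recovers the flow demands and the biases together, provided the augmented pair remains observable; this is where the flow demand profile information comes in.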
Abstract:
The problem of calculating the probability of error in a DS/SSMA system has been extensively studied for more than two decades. When random sequences are employed, some conditioning must be done before the application of the central limit theorem is attempted, leading to a Gaussian distribution. The authors seek to characterise the multiple access interference as a random walk with a random number of steps, for random and deterministic sequences. Using results from random-walk theory, they model the interference as a K-distributed random variable and use it to calculate the probability of error, in the form of a series, for a DS/SSMA system with a coherent correlation receiver and BPSK modulation under Gaussian noise. The asymptotic properties of the proposed distribution agree with other analyses. This is, to the best of the authors' knowledge, the first attempt to propose a non-Gaussian distribution for the interference. The modelling can be extended to consider multipath fading and general modulation.
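The paper's development is analytical; purely as a numerical illustration of why a random walk with a random number of steps departs from Gaussianity, one can estimate its excess kurtosis by simulation. The Poisson step count and ±1 step sizes below are illustrative assumptions, not the paper's model.

import numpy as np

rng = np.random.default_rng(0)

# Toy interference model: a random walk with a Poisson number of +/-1 steps.
n_steps = rng.poisson(lam=8.0, size=100_000)
walk = np.array([rng.choice([-1.0, 1.0], size=n).sum() for n in n_steps])

mu, sigma = walk.mean(), walk.std()
excess_kurtosis = ((walk - mu) ** 4).mean() / sigma**4 - 3.0
print(f"excess kurtosis: {excess_kurtosis:.3f}")  # positive: heavier tails than Gaussian

For a compound sum of zero-mean, unit-variance steps the excess kurtosis works out to 1/λ (0.125 here), so the tails are heavier than Gaussian, consistent with a non-Gaussian interference model such as the K distribution.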