Abstract:
Landslides are a serious type of geological hazard and cause great damage. In recent years, landslides have become increasingly frequent as the scale of engineering construction has grown, and the resulting losses have risen accordingly. How to prevent landslides has therefore become an important research subject in engineering. Building on the existing theory and technology of anti-slide piles and pre-stressed cable anti-slide piles, this paper improves the method for computing landslide thrust and resolves the irrational pile designs that arise from unreasonable thrust values. Modern pre-stressing technology is introduced, and the load balancing method is used to improve the stress behavior of anti-slide piles; anchor cables, anti-slide piles, and modern pre-stressing technology are combined to stabilize complicated landslides. Selecting appropriate values of landslide thrust is a fundamental step in design. By comparing existing methods for selecting thrust values in anti-slide pile design, an improved method for calculating the design thrust of anti-slide piles is presented, based on the residual thrust method. In this method, the residual landslide thrust behind the piles and the residual sliding resistance in front of the piles are analyzed, the residual landslide thrust behind the piles is distributed equitably, and the selection of the design thrust becomes more reasonable. Pre-stressed cable anti-slide piles, developed from common anti-slide piles, are a common means of landslide prevention: by changing the constraint conditions of the piles, their internal forces are adjusted and the section size is reduced. For landslides with deep slip surfaces and large slopes, however, the method shows its limitations.
Such landslides require long piles and long anchor cables, which are not only uneconomical but can also produce large deformations and leave residual hazards after treatment. To solve this problem, a new kind of anti-slide pile, the inner pre-stressing force anti-slide pile, is presented in this paper. Its principle is that an additional force, generated inside the pile by arranging pre-stressed reinforcement or tendons in a certain pattern within the pile and then stretching them, balances out all or part of the internal force induced by the landslide thrust (the load balancing method). The method converts bending moment, which piles resist poorly, into compressive stress, which they resist well; it greatly improves the stress performance of the piles, reduces the section size, keeps the piles uncracked in normal service or postpones cracking, and improves their durability. Pre-stressed cable anti-slide piles and inner pre-stressing force anti-slide piles are referred to collectively as pre-stressed structure anti-slide piles in this paper, and their design and calculation methods are analyzed. A new calculation method for the design of anti-slide piles is provided. For pre-stressed structure anti-slide piles, a new computation mode is first presented on the basis of the cantilever pile model. In this mode, the constraint form of the load-bearing section is determined according to reservoir conditions so that the required pre-stress of the anchor cables can be computed, and the internal forces of the load-bearing section are analyzed so that the anchorage section of the piles can be determined. The pre-stressed cables of pre-stressed cable anti-slide piles can be arranged as required.
This paper analyzes the load-bearing sections of single-row and double-row pre-stressed cable anti-slide piles and provides a calculation method for their design. Inner pre-stressing force anti-slide piles are a new structural style. Their load-bearing section is divided into four computation modes according to whether external pre-stressed cables are applied and, if so, whether a single or double row is used. The load balancing method is used to analyze these computation modes and to provide a rational design method for inner pre-stressing force anti-slide piles. Both pile types, together with the improved method for selecting the design thrust, are applied to the Mahe landslide at the Yalong Lenggu hydropower station, with good results.
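The residual thrust method the abstract builds on is conventionally computed slice by slice with a transfer coefficient. The sketch below illustrates that idea only; the slice geometry, soil parameters, and safety factor are hypothetical values, not the paper's improved formulation.

```python
import math

def residual_thrust(slices, K=1.15):
    """Transfer-coefficient (residual thrust) method, minimal sketch.

    slices: list of dicts, ordered from crown to toe, each with
      W (slice weight, kN/m), alpha (slip-surface dip, deg),
      phi (friction angle, deg), c (cohesion, kPa), L (base length, m).
    K: safety factor applied to the driving force.
    Returns the residual thrust (kN/m) behind the last slice.
    """
    E = 0.0
    prev_alpha = None
    for s in slices:
        a = math.radians(s["alpha"])
        phi = math.radians(s["phi"])
        sliding = K * s["W"] * math.sin(a)                       # driving force
        resisting = s["W"] * math.cos(a) * math.tan(phi) + s["c"] * s["L"]
        if prev_alpha is None:
            psi = 1.0
        else:
            d = prev_alpha - a
            psi = math.cos(d) - math.sin(d) * math.tan(phi)      # transfer coefficient
        E = max(0.0, sliding - resisting + psi * E)              # thrust cannot go negative
        prev_alpha = a
    return E

# Hypothetical two-slice profile (kN/m, degrees, kPa, m):
slices = [
    {"W": 1000.0, "alpha": 40.0, "phi": 20.0, "c": 10.0, "L": 5.0},
    {"W": 1200.0, "alpha": 30.0, "phi": 20.0, "c": 10.0, "L": 5.0},
]
print(round(residual_thrust(slices), 1))  # design thrust behind the last slice
```

The thrust carried forward from slice to slice is what the paper proposes to distribute more equitably behind the piles.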
Abstract:
This work belongs to a key project of the China Petrochemical Corporation. Although it is difficult and involves a heavy workload, it has important theoretical and practical value. Its goal is to establish the four-dimensional stress field of complex fault block groups and then to predict the forming mechanism and distribution patterns of petroleum pools, applying advanced theories, methods, technology, and sophisticated software in highly explored zones. By means of multi-disciplinary theories, methods, and technologies, multi-source information, and intensive use of computers, the strata framework, structural framework, and pool-forming mechanism and mode of complex fault block groups were investigated, and several results were achieved: (1) The fastigiated model of the Xianhe complex fault block groups was established, and its control on pool accumulation was identified: the Xianhe fastigiated complex fault block groups are the result of the combined stresses of extension, slipping, and reversal; they formed in the early Shahejie stage, were modified and complicated during the Dongying stage, and control the formation and destruction of petroleum pools. (2) By measuring earth stress and rock mechanics parameters in the research region, a model of the four-dimensional stress field and the potential field of migrating fluids was established from the ES3 stage to the present, together with their spatial distribution, temporal evolution, and relation to petroleum accumulation. (3) A fault-sealing model for the Xianhe complex fault block groups was established, revealing the sealing mechanism of controlling faults and supporting pool prediction in complex fault blocks. (4) The pool-forming mode and mechanism in complex fault blocks were established. (5) Petroleum distribution was predicted in three stress-inversion zones, and remaining oil was identified at the high points of two micro-structures and in regions with strong fault-sealing capability. (6) A set of theories, technologies, and methods for complex fault block petroleum pools has been developed, advancing development-geology theory for continental fault-depression lake basins; good economic benefits have been obtained from applications in both eastern and western areas of China.
Abstract:
The Guangxi Longtan Hydropower Station is not only a representative project of China's Western Development and West-to-East Power Transmission programs, but also the largest hydropower station under construction in China after the Three Gorges Project. There are 770 × 10^4 m^3 of creeping rock mass on the left bank slope of the upper reaches, in which nine water inlet tunnels and some underground plant buildings are located. Since the 435 m high excavated slope threatens the security of the dam, its deformation and stability are of great importance to the power station. Based on Autodesk Map 2004, the Longtan Hydropower Station Left Bank Monitoring Information System has been largely completed. By integrating the monitoring information into a Geographic Information System (GIS) environment, managers and engineers can obtain the deformation information of the slope dynamically by querying map symbols; in this way, designers can improve the correctness of their analyses and make sound, strategic decisions. Since the system helps to manage the monitoring data effectively, to save design and construction costs, and to reduce the workload of the engineers, it is a successful combination of hydropower station monitoring information management and computer information system technology. At the same time, on the basis of geological analysis and analysis of the toppling deformation and failure mechanism of the rock mass of the left bank slope, a synthetic space-time analysis and an influencing-factor analysis of the surface and deep rock mass monitoring data of Zone A on the left bank slope are carried out. They show that the main intrinsic factor affecting the deformation of Zone A is the argillite-limestone interbedded toppling structure, and that the main external factors are rainfall and slope excavation.
Furthermore, the Degree of Reinforcement Demand (DRD) is used to evaluate the reinforcement effect for Zone A of the left bank slope according to Engineering Geomechanics Meta-Synthetics (EGMS). The result shows that the slope has been effectively reinforced and is more stable after reinforcement. Finally, after comparison with several forecasting models, a synthetic GRAV forecasting model is presented and used to forecast the deformation of Zone A on the left bank during the power generation period. The results indicate that the GRAV model has good forecasting precision, strong stability, and practical reliability.
Abstract:
The Multifactor Leadership theory developed by Bass (1985) has become the dominant paradigm of leadership research. The empirical results on the effectiveness of transformational and transactional leadership reported in the literature, however, are not consistent. Researchers in China have found a different structure for transformational leadership but have not developed a corresponding model of transactional leadership. This study investigates three key questions in the unique Chinese socio-economic context: 1) What is the structure of transactional leadership in China? 2) What are the differences between Western countries and China? 3) What is the relationship between the transformational and transactional leadership mechanisms? The study examines data collected from 3,500 participants, using Exploratory Factor Analysis (EFA), Confirmatory Factor Analysis (CFA), hierarchical regression analyses, partial correlations, and other statistical methods. The major findings are as follows. First, inductive methods were used to explore the structure of transactional leadership, and the results show that transactional leadership has a four-dimensional structure comprising contingent reward, contingent punishment, process control, and anticipated investment. Reliability analysis, item analysis, EFA, and CFA show that the reliability and validity of the transactional leadership questionnaire we designed are good and that its items are effective and appropriate. In contrast to other research, anticipated investment emphasizes the leader's implicit investment in subordinates, a kind of transaction that is quite distinctive in Chinese culture; the combined content of contingent reward and contingent punishment is broader than contingent reward in Western countries, and process control is broader than management by exception, including goal setting and management during the process.
Second, hierarchical regression analyses showed that transformational and transactional leadership were significantly positively related to in-role performance, extra-role performance, satisfaction, and leadership effectiveness, and negatively related to intention to leave. The effects of transactional and transformational leadership differ: transactional leadership significantly predicted intention to leave when controlling for transformational leadership, while transformational leadership significantly predicted in-role performance, extra-role performance, satisfaction, and leadership effectiveness when controlling for transactional leadership. Third, subordinates' income level and rank moderate the relationship between transformational or transactional leadership and leadership effectiveness. The effectiveness of transactional leadership decreases as subordinate rank increases, while the effectiveness of transformational leadership increases with rank. Transactional leadership is positively related to effectiveness when subordinate income is low, but negatively related when it is high; subordinate income level does not influence the effectiveness of transformational leadership.
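The moderation analyses described above are conventionally estimated by adding an interaction term to a regression. The sketch below illustrates that technique on synthetic data; the variable names, effect sizes, and sample are hypothetical and not taken from the study.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 500

# Hypothetical synthetic data: 'leader' is a transactional-leadership score,
# 'rank' is the subordinate's rank (the proposed moderator), and 'effect'
# is rated leadership effectiveness with a built-in negative moderation.
leader = rng.normal(size=n)
rank = rng.normal(size=n)
effect = 0.5 * leader + 0.2 * rank - 0.3 * leader * rank + rng.normal(scale=0.5, size=n)

# Moderated (hierarchical) regression: the leader*rank interaction term
# carries the moderation effect the abstract describes.
X = np.column_stack([np.ones(n), leader, rank, leader * rank])
beta, *_ = np.linalg.lstsq(X, effect, rcond=None)
b0, b_leader, b_rank, b_interaction = beta

# A negative interaction coefficient means the leadership style's
# effectiveness declines as the moderator (rank) increases.
print(round(b_interaction, 2))
```

In practice such models are fitted in steps (main effects first, interaction second) and the change in explained variance is tested, which is what "hierarchical" refers to here.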
Abstract:
Interface design has become a significant factor that greatly influences the development of on-line shopping, yet in practice both public attention and research remain quite inadequate. Under these circumstances, this study aims to improve understanding of the engineering-psychology factors that will play a crucial role in future on-line shopping representation, and of the relations between them, through the following experimental research. I hope it can serve as a basic reference for the practical application of on-line shopping representation patterns and for continued study. In this thesis, an analysis is made on the basis of engineering-psychology principles from three aspects: the person (users), the task, and the information environment. It is argued that the user's system overview and information-behavior model have a great impact on user activities on the web, and that the representation pattern of an information system affects the formation of the system overview and behavior pattern, and thereby the user's performance in the information system. On this basis, a three-dimensional conceptual model is presented that describes the relations between the crucial factors: media representation pattern, system hierarchy, and the objects in an information unit. Eight hypotheses about the engineering-psychology factors of virtual-reality (VR) representation in on-line shopping systems were then derived, and four experiments were conducted to test them. In experiment one, the influence of three kinds of single-media representation pattern on the formation of the system overview and on information behavior was studied in terms of task performance, operating errors, overall satisfaction, mental workload, and related measures.
In experiment two, the influence of combined-media representation patterns across the system hierarchy on users' behavior was studied. In experiment three, the hierarchical structure of the VR representation pattern and the effects of its width and depth on system behavior were studied. In experiment four, the location relations between different objects in a VR scene (information unit) were studied. The results are as follows. For the structural dimension: increasing width slowed users more than increasing depth in the VR representation pattern; although subjects performed quite slowly in the wider environment, their error rate there was the lowest. For the hierarchical representation patterns: 1. Among the three media representation patterns, no significant differences were found in task completion speed, error rate, satisfaction, or mental workload, but the figure-aided pattern produced the worst results on all of these measures. 2. In the primary stage of the task and at the first level of the hierarchy, subjects performed more slowly in the VR pattern than in the text pattern, but as the task progressed to deeper levels of the hierarchy, performance in the VR pattern became the fastest. 3. The VR pattern outperformed the text pattern at higher levels of the system; the representation pattern at the highest level had the greatest impact on system behavior, whereas VR-only representation in the middle of the hierarchy produced the worst results. 4. More activity errors occurred with single-media than with combined-media representation patterns. 5. Individual differences among subjects affected the effects of the system's representation pattern.
In the VR environment, the behavioral tendency of party A was significantly negatively correlated with the number of errors. For VR-scene representation: physical distance and flashing greatly influenced subjects' task performance, while psychological distance had no notable impact; subjects' accuracy increased when objects with the same relation were in the same structural position, when psychological distance was close, or when the target object flashed (though the last effect was not reliable). Although this thesis limits its topic to the present questions and analysis of on-line shopping, it can also be applied to other relevant purposes on the web. The study addresses only search tasks with definite goals, without considering other task conditions or their relations with other navigation tools. I hope it lays a good foundation for continued research in this area.
Abstract:
Purpose and rationale: The purpose of this exploratory research is to provide a deeper understanding of how the work environment enhances or constrains organisational creativity (creativity and innovation) within the context of the advertising sector. The argument for the research is that the contemporary literature is dominated by quantitative instruments for measuring climate and work environment across many different sectors. The most influential theory in the extant literature is the componential theory of organisational creativity and innovation, which is used as an analytical guide (Amabile, 1997; Figure 8) to conduct an ethnographic study within a creative advertising agency based in Scotland. The theory suggests that creative people (skills, expertise and task motivation) are influenced by the work environment in which they operate. This includes challenging work (+), work group supports (+), supervisory encouragement (+), freedom (+), sufficient resources (+), workload pressures (+ or -), organisational encouragement (+) and organisational impediments (-), which are argued to enhance (+) or constrain (-) both creativity and innovation. An interpretive research design is conducted to confirm, challenge or extend the componential theory of organisational creativity and innovation (Amabile, 1997; Figure 8) and contribute to knowledge as well as practice. Design/methodology/approach: Scholarly activity within the context of the creative industries and the advertising sector is in its infancy, and research from the alternative paradigm using qualitative methods is limited; such research may provide new guidelines for this industry sector. An ethnographic case study design is therefore a suitable methodology to provide a deeper understanding of the subject area and is consistent with a constructivist ontology and an interpretive epistemology.
This ontological position is conducive to the researcher's axiology and values, in that meaning is not discovered as an objective truth but socially constructed from the multiple realities of social actors. Ethnography is the study of people in naturally occurring settings, and the creative advertising agency involved in the research is an appropriate purposive sample within an industry renowned for its creativity and innovation. Qualitative methods such as participant observation (field notes, meetings, rituals, social events and tracking a client brief), material artefacts (documents, websites, annual reports, emails, scrapbooks and photographic evidence) and focused interviews (informal and formal conversations, six taped and transcribed interviews and use of Survey Monkey) are used to provide a written account of the agency's work environment. The analytical process of interpreting the ethnographic text is supported by thematic analysis (selective, axial and open coding) using manual analysis and NVivo9 software. Findings: The findings highlight a complex interaction between the people within the agency and the enhancers and constraints of the work environment in which they operate. This involves the creative work environment (Amabile, 1997; Figure 8) as well as the physical work environment (Cain, 2012; Dul and Ceylan, 2011; Dul et al. 2011) and that of social control and power (Foucault, 1977; Gahan et al. 2007; Knights and Willmott, 2007). The overarching themes to emerge from the data on how the work environment enhances or constrains organisational creativity include creative people (skills, expertise and task motivation), creative process (creative work environment and physical work environment) and creative power (working hours, value of creativity, self-fulfilment and surveillance).
The findings therefore confirm that creative people interact with and are influenced by aspects of the creative work environment outlined by Amabile (1997; Figure 8). However, the results also challenge and extend the theory to include the physical work environment and creative power. Originality/value/implications: Methodologically, no other interpretive research uses an ethnographic case study approach within the context of the advertising sector to explore and provide a deeper understanding of the subject area. The contribution to knowledge, in the form of a new interpretive framework (Figure 16), challenges and extends the existing body of knowledge (Amabile, 1997; Figure 8). Moreover, the contribution to practice includes a flexible set of industry guidelines (Appendix 13) that may be transferable to other organisational settings.
Abstract:
Durbin, J. & Urquhart, C. (2003). Qualitative evaluation of KA24 (Knowledge Access 24). Aberystwyth: Department of Information Studies, University of Wales Aberystwyth. Sponsorship: Knowledge Access 24 (NHS)
Abstract:
Urquhart, C., Spink, S., Thomas, R. & Weightman, A. (2007). Developing a toolkit for assessing the impact of health library services on patient care. Report to LKDN (Libraries and Knowledge Development Network). Aberystwyth: Department of Information Studies, Aberystwyth University. Sponsorship: Libraries and Knowledge Development Network/NHS
Abstract:
Master's dissertation presented to Universidade Fernando Pessoa in partial fulfilment of the requirements for the degree of Mestre em Ciências Empresariais (Master in Business Sciences).
Abstract:
Server performance has become a crucial issue for improving the overall performance of the World-Wide Web. This paper describes Webmonitor, a tool for evaluating and understanding server performance, and presents new results for a realistic workload. Webmonitor measures activity and resource consumption, both within the kernel and in HTTP processes running in user space. Webmonitor is implemented using an efficient combination of sampling and event-driven techniques that exhibit low overhead. Our initial implementation is for the Apache World-Wide Web server running on the Linux operating system. We demonstrate the utility of Webmonitor by measuring and understanding the performance of a Pentium-based PC acting as a dedicated WWW server. Our workload uses a file size distribution with a heavy tail. This captures the fact that Web servers must concurrently handle some requests for large audio and video files, and a large number of requests for small documents, containing text or images. Our results show that in a Web server saturated by client requests, over 90% of the time spent handling HTTP requests is spent in the kernel. Furthermore, keeping TCP connections open, as required by TCP, causes a factor of 2-9 increase in the elapsed time required to service an HTTP request. Data gathered from Webmonitor provide insight into the causes of this performance penalty. Specifically, we observe a significant increase in resource consumption along three dimensions: the number of HTTP processes running at the same time, CPU utilization, and memory utilization. These results emphasize the important role of operating system and network protocol implementation in determining Web server performance.
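The heavy-tailed file size distribution described above can be illustrated with a Pareto sampler; the shape parameter and size unit below are illustrative assumptions, not the paper's measured workload.

```python
import random

random.seed(42)

# Illustrative heavy-tailed file sizes (Pareto, shape ~ 1.1): most files are
# small documents, but rare huge audio/video files dominate the total bytes.
alpha, k_min = 1.1, 1.0  # shape and minimum size (KB); hypothetical values
sizes = [k_min * random.paretovariate(alpha) for _ in range(100_000)]

sizes.sort()
small_half_bytes = sum(sizes[: len(sizes) // 2])
total_bytes = sum(sizes)

# The smaller half of all files accounts for only a small fraction of the
# bytes served -- the property that forces a server to handle many small
# requests concurrently with a few very large ones.
print(round(small_half_bytes / total_bytes, 3))
```

This skew is why saturating a server with such a workload stresses connection handling (many small requests) and transfer capacity (a few huge ones) at the same time.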
Abstract:
One role for workload generation is as a means for understanding how servers and networks respond to variation in load. This enables management and capacity planning based on current and projected usage. This paper applies a number of observations of Web server usage to create a realistic Web workload generation tool which mimics a set of real users accessing a server. The tool, called Surge (Scalable URL Reference Generator), generates references matching empirical measurements of 1) server file size distribution; 2) request size distribution; 3) relative file popularity; 4) embedded file references; 5) temporal locality of reference; and 6) idle periods of individual users. This paper reviews the essential elements required in the generation of a representative Web workload. It also addresses the technical challenges to satisfying this large set of simultaneous constraints on the properties of the reference stream, the solutions we adopted, and their associated accuracy. Finally, we present evidence that Surge exercises servers in a manner significantly different from other Web server benchmarks.
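One of the constraints listed above, relative file popularity, is commonly modeled as a Zipf-like distribution. The sketch below shows that idea in isolation; the file count and exponent are illustrative assumptions, not Surge's calibrated parameters.

```python
import random

def zipf_popularity(n_files, exponent=1.0):
    """Probability that a request hits the file of rank i (1 = most popular)."""
    weights = [1.0 / (rank ** exponent) for rank in range(1, n_files + 1)]
    total = sum(weights)
    return [w / total for w in weights]

random.seed(0)
probs = zipf_popularity(1000)

# Draw a synthetic reference stream: a few hot files absorb most requests,
# which is what gives real Web traces their strong cache locality.
stream = random.choices(range(1000), weights=probs, k=100_000)

hits_top10 = sum(1 for f in stream if f < 10)
print(round(hits_top10 / len(stream), 2))  # roughly 0.39 here: 1% of files, ~39% of hits
```

A full generator must satisfy this constraint jointly with the size, embedding, locality, and idle-period constraints, which is the hard part the paper addresses.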
Abstract:
This paper examines how and why web server performance changes as the workload at the server varies. We measure the performance of a PC acting as a standalone web server, running Apache on top of Linux. We use two important tools to understand what aspects of software architecture and implementation determine performance at the server. The first is a tool that we developed, called WebMonitor, which measures activity and resource consumption, both in the operating system and in the web server. The second is the kernel profiling facility distributed as part of Linux. We vary the workload at the server along two important dimensions: the number of clients concurrently accessing the server, and the size of the documents stored on the server. Our results quantify and show how more clients and larger files stress the web server and operating system in different and surprising ways. Our results also show the importance of fixed costs (i.e., opening and closing TCP connections, and updating the server log) in determining web server performance.
Abstract:
We consider the problem of task assignment in a distributed system (such as a distributed Web server) in which task sizes are drawn from a heavy-tailed distribution. Many task assignment algorithms are based on the heuristic that balancing the load at the server hosts will result in optimal performance. We show this conventional wisdom is less true when the task size distribution is heavy-tailed (as is the case for Web file sizes). We introduce a new task assignment policy, called Size Interval Task Assignment with Variable Load (SITA-V). SITA-V purposely operates the server hosts at different loads, and directs smaller tasks to the lighter-loaded hosts. The result is that SITA-V provably decreases the mean task slowdown by significant factors (up to 1000 or more) where the more heavy-tailed the workload, the greater the improvement factor. We evaluate the tradeoff between improvement in slowdown and increase in waiting time in a system using SITA-V, and show conditions under which SITA-V represents a particularly appealing policy. We conclude with a discussion of the use of SITA-V in a distributed Web server, and show that it is attractive because it has a simple implementation which requires no communication from the server hosts back to the task router.
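The core idea of size-interval assignment can be sketched as follows. This is a hedged illustration in the spirit of SITA-V, not the paper's algorithm: the interval cutoffs and the workload parameters here are arbitrary, whereas SITA-V derives its cutoffs analytically to minimize mean slowdown.

```python
import random

def make_assigner(cutoffs):
    """Return an assignment function for len(cutoffs)+1 hosts.

    Tasks whose sizes fall in the same interval go to the same host,
    so small tasks never queue behind enormous ones.
    """
    def assign(size):
        for host, cut in enumerate(cutoffs):
            if size <= cut:
                return host
        return len(cutoffs)
    return assign

random.seed(1)
# Heavy-tailed (Pareto) task sizes: mostly small, occasionally enormous.
sizes = [random.paretovariate(1.1) for _ in range(10_000)]
assign = make_assigner(cutoffs=[2.0, 10.0])  # 3 hosts; illustrative cutoffs

loads = [0.0, 0.0, 0.0]
counts = [0, 0, 0]
for s in sizes:
    h = assign(s)
    loads[h] += s
    counts[h] += 1

# Host 0 serves the vast majority of tasks but a small share of the total
# work: deliberately unbalanced loads are what let a SITA-V-style policy
# cut the mean slowdown experienced by the many small tasks.
print(counts, [round(load) for load in loads])
```

The contrast with load balancing is visible in the output: the count vector is skewed one way, the load vector the other.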
Abstract:
This paper presents a tool called Gismo (Generator of Internet Streaming Media Objects and workloads). Gismo enables the specification of a number of streaming media access characteristics, including object popularity, temporal correlation of requests, seasonal access patterns, user session durations, user interactivity times, and variable bit-rate (VBR) self-similarity and marginal distributions. The embodiment of these characteristics in Gismo enables the generation of realistic and scalable request streams for use in the benchmarking and comparative evaluation of Internet streaming media delivery techniques. To demonstrate the usefulness of Gismo, we present a case study that shows the importance of various workload characteristics in determining the effectiveness of proxy caching and server patching techniques in reducing bandwidth requirements.
Abstract:
Commonly, research work in routing for delay tolerant networks (DTN) assumes that node encounters are predestined, in the sense that they are the result of unknown, exogenous processes that control the mobility of these nodes. In this paper, we argue that for many applications such an assumption is too restrictive: while the spatio-temporal coordinates of the start and end points of a node's journey are determined by exogenous processes, the specific path that a node may take in space-time, and hence the set of nodes it may encounter, could be controlled in such a way as to improve the performance of DTN routing. To that end, we consider a setting in which each mobile node is governed by a schedule consisting of a list of locations that the node must visit at particular times. Typically, such schedules exhibit some level of slack, which could be leveraged for DTN message delivery purposes. We define the Mobility Coordination Problem (MCP) for DTNs as follows: Given a set of nodes, each with its own schedule, and a set of messages to be exchanged between these nodes, devise a set of node encounters that minimizes message delivery delays while satisfying all node schedules. The MCP for DTNs is general enough that it allows us to model and evaluate some of the existing DTN schemes, including data mules and message ferries. In this paper, we show that MCP for DTNs is NP-hard and propose two detour-based approaches to solve the problem. The first (DMD) is a centralized heuristic that leverages knowledge of the message workload to suggest specific detours to optimize message delivery. The second (DNE) is a distributed heuristic that is oblivious to the message workload, and which selects detours so as to maximize node encounters. We evaluate the performance of these detour-based approaches using extensive simulations based on synthetic workloads as well as real schedules obtained from taxi logs in a major metropolitan area.
Our evaluation shows that our centralized, workload-aware DMD approach yields the best performance, in terms of message delay and delivery success ratio, and that our distributed, workload-oblivious DNE approach yields favorable performance when compared to approaches that require the use of data mules and message ferries.
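The schedule slack that both detour heuristics exploit can be made concrete with a toy model. This sketch is a hypothetical formulation for illustration only (1-D locations, unit speed), not the paper's problem encoding.

```python
from dataclasses import dataclass

# A node must be at location x by time t; it travels at constant speed.
# Slack between consecutive commitments is the extra time available for a
# detour to meet another node, which is what DMD/DNE-style heuristics spend.

@dataclass
class Stop:
    x: float   # location on a line (simplified 1-D geometry)
    t: float   # time by which the node must be there

def leg_slack(a: Stop, b: Stop, speed: float = 1.0) -> float:
    """Time left over on the leg a -> b after direct travel."""
    travel = abs(b.x - a.x) / speed
    return (b.t - a.t) - travel

def can_detour(a: Stop, b: Stop, meet_x: float, speed: float = 1.0) -> bool:
    """Can the node pass through meet_x between stops a and b and still arrive on time?"""
    detour_travel = (abs(meet_x - a.x) + abs(b.x - meet_x)) / speed
    return detour_travel <= (b.t - a.t)

schedule = [Stop(0.0, 0.0), Stop(10.0, 15.0), Stop(10.0, 20.0)]
print(leg_slack(schedule[0], schedule[1]))          # 5.0 units of slack on the first leg
print(can_detour(schedule[0], schedule[1], 12.0))   # True: 14 units of travel fit in 15
```

Deciding which of many feasible detours to take, across all nodes and messages at once, is the coordination problem the paper shows to be NP-hard.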