936 results for Prohibited operating zones


Relevance: 20.00%

Abstract:

Velocity and absorption tomograms are the two most common forms of presentation of radar tomographic data. However, mining personnel, geophysicists included, are often unfamiliar with radar velocity and absorption. In this paper, general formulae are introduced, relating velocity and attenuation coefficient to conductivity and dielectric constant. The formulae are valid for lossy media as well as high-resistivity materials. The transformation of velocity and absorption to conductivity and dielectric constant is illustrated via application of the formulae to radar tomograms from the Hellyer zinc-lead-silver mine, Tasmania, Australia. The resulting conductivity and dielectric constant tomograms constructed at Hellyer demonstrated the potential of radar tomography to delineate sulphide ore zones. (C) 2001 Elsevier Science B.V. All rights reserved.
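The general relations the abstract refers to are, in their usual plane-wave form, the textbook expressions for phase velocity and attenuation in a lossy medium. A minimal sketch using those standard formulae (not necessarily the paper's exact parameterisation):

```python
import math

MU0 = 4e-7 * math.pi         # vacuum permeability (H/m)
EPS0 = 8.8541878128e-12      # vacuum permittivity (F/m)

def velocity_attenuation(freq_hz, sigma, eps_r, mu_r=1.0):
    """Phase velocity (m/s) and attenuation coefficient (Np/m) of a
    plane EM wave in a medium with conductivity sigma (S/m) and
    relative dielectric constant eps_r.  Valid for lossy media as
    well as high-resistivity materials."""
    omega = 2.0 * math.pi * freq_hz
    mu, eps = mu_r * MU0, eps_r * EPS0
    loss_tan = sigma / (omega * eps)                  # loss tangent
    root = math.sqrt(1.0 + loss_tan ** 2)
    beta = omega * math.sqrt(mu * eps / 2.0 * (root + 1.0))   # phase constant
    alpha = omega * math.sqrt(mu * eps / 2.0 * (root - 1.0))  # attenuation
    return omega / beta, alpha
```

In the high-resistivity limit (sigma -> 0) the velocity reduces to c/sqrt(eps_r) and the attenuation vanishes, which is the familiar radar approximation; the inverse mapping from tomographic velocity and absorption back to conductivity and dielectric constant is what the paper applies at Hellyer.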

Relevance: 20.00%

Abstract:

Recent years have seen the introduction of new and varied designs of activated sludge plants. With increasing needs for higher efficiencies and lower costs, the possibility of a plant that operates more effectively has created the need for tools that can be used to evaluate and compare designs at the design stage. One such tool is the operating space diagram. The aim of this paper is to present this tool and demonstrate its application and relevance to design using a simple case study. In the case study, use of the operating space diagram suggested changes in design that would improve the flexibility of the process. It was also useful for designing suitable control strategies.

Relevance: 20.00%

Abstract:

Objectives: Advances in surface electromyography (sEMG) techniques provide a clear indication that refinement of electrode location relative to innervation zones (IZ) is required in order to optimise the accuracy, relevance and repeatability of the sEMG signals. The aim of this study was to identify the IZ for the sternocleidomastoid and anterior scalene muscles to provide guidelines for electrode positioning for future clinical and research applications. Methods: Eleven volunteer subjects participated in this study. Myoelectric signals were detected from the sternal and clavicular heads of the sternocleidomastoid and the anterior scalene muscles bilaterally using a linear array of 8 electrodes during isometric cervical flexion contractions. The signals were reviewed and the IZ(s) were identified, marked on the subjects' skin, and measurements were obtained relative to selected anatomical landmarks. Results: The position of the IZ lay consistently around the mid-point or in the superior portion of the muscles studied. Conclusions: Results suggest that electrodes should be positioned over the lower portion of the muscle and not the mid-point, which has been commonly used in previous studies. Recommendations for sensor placement on these muscles should assist investigators and clinicians to ensure improved validity in future sEMG applications. (C) 2002 Elsevier Science Ireland Ltd. All rights reserved.

Relevance: 20.00%

Abstract:

It has been argued that power-law time-to-failure fits for cumulative Benioff strain, and an evolution in size-frequency statistics in the lead-up to large earthquakes, are evidence that the crust behaves as a Critical Point (CP) system. If so, intermediate-term earthquake prediction is possible. However, this hypothesis has not been proven. If the crust does behave as a CP system, stress correlation lengths should grow in the lead-up to large events through the action of small to moderate ruptures, and drop sharply once a large event occurs. However, this evolution in stress correlation lengths cannot be observed directly. Here we show, using the lattice solid model to describe discontinuous elasto-dynamic systems subjected to shear and compression, that it is possible for correlation lengths to exhibit CP-type evolution. In the case of a granular system subjected to shear, this evolution occurs in the lead-up to the largest event and is accompanied by an increasing rate of moderate-sized events and power-law acceleration of Benioff strain release. In the case of an intact sample subjected to compression, the evolution occurs only after a mature fracture system has developed. The results support the existence of a physical mechanism for intermediate-term earthquake forecasting and suggest this mechanism is fault-system dependent. This offers an explanation of why accelerating Benioff strain release is not observed prior to all large earthquakes. The results prove the existence of an underlying evolution in discontinuous elasto-dynamic systems which is capable of providing a basis for forecasting catastrophic failure and earthquakes.
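The power-law time-to-failure fit for cumulative Benioff strain usually refers to the Bufe-Varnes form eps(t) = A - B*(t_f - t)^m with 0 < m < 1. A minimal sketch of both quantities, using the standard Gutenberg-Richter energy-magnitude relation log10(E) = 1.5*M + 4.8 (the exact constants used in the paper are an assumption here):

```python
import math

def cumulative_benioff(times_mags):
    """Cumulative Benioff strain: running sum of the square root of each
    event's radiated energy E, with E (in joules) from the
    Gutenberg-Richter relation log10(E) = 1.5*M + 4.8."""
    total, out = 0.0, []
    for t, mag in sorted(times_mags):
        total += math.sqrt(10.0 ** (1.5 * mag + 4.8))
        out.append((t, total))
    return out

def power_law_model(t, t_f, a, b, m):
    """Time-to-failure model eps(t) = a - b*(t_f - t)**m.  For 0 < m < 1
    the strain release accelerates toward the failure time t_f."""
    return a - b * (t_f - t) ** m
```

Fitting (t_f, a, b, m) to an observed cumulative-strain curve, and watching whether the fit improves as the largest event approaches, is the essence of the forecasting test discussed above.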

Relevance: 20.00%

Abstract:

In order to understand the earthquake nucleation process, we need to understand the effective frictional behavior of faults with complex geometry and fault gouge zones. One important aspect of this is the interaction between the friction law governing the behavior of the fault on the microscopic level and the resulting macroscopic behavior of the fault zone. Numerical simulations offer a possibility to investigate the behavior of faults on many different scales and thus provide a means to gain insight into fault zone dynamics on scales which are not accessible to laboratory experiments. Numerical experiments have been performed to investigate the influence of the geometric configuration of faults with rate- and state-dependent friction at the particle contacts on the effective frictional behavior of these faults. The numerical experiments are designed to be similar to laboratory experiments by DIETERICH and KILGORE (1994) in which a slide-hold-slide cycle was performed between two blocks of material and the resulting peak friction was plotted vs. holding time. Simulations with a flat fault without a fault gouge have been performed to verify the implementation; these have shown close agreement with comparable laboratory experiments. The simulations performed with a fault containing fault gouge have demonstrated a strong dependence of the critical slip distance D-c on the roughness of the fault surfaces and are in qualitative agreement with laboratory experiments.
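The frictional healing that slide-hold-slide experiments probe follows directly from rate- and state-dependent friction with an aging-law state variable: at rest the state grows with hold time, so peak friction on resliding increases roughly as b*ln(t). A minimal sketch with illustrative parameter values (the constants are assumptions, not the paper's calibrated values):

```python
import math

def peak_friction_after_hold(t_hold, mu0=0.6, b=0.015, v0=1e-6, d_c=1e-5):
    """Peak friction after holding for t_hold seconds under a
    Dieterich-style rate-and-state law with aging-law state evolution
    (d theta/dt = 1 at rest).  Resliding at the pre-hold velocity v0
    leaves only the state term, giving the classic ~ b*ln(t) healing
    trend reported by Dieterich and Kilgore (1994)."""
    theta0 = d_c / v0            # steady-state value before the hold
    theta = theta0 + t_hold      # state grows linearly during the hold
    return mu0 + b * math.log(v0 * theta / d_c)
```

Plotting this against log hold time reproduces the linear healing trend that the simulated slide-hold-slide cycles are compared with.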

Relevance: 20.00%

Abstract:

As has happened in other countries, since the early 1990s Portuguese companies have developed corporate environmental reporting practices in response to internal and external factors. This paper is based on empirical research directed both at the study of the environmental reporting practices developed by Portuguese companies and at the identification of the factors that explain the extent to which these companies disclose environmental information. The study focuses on the environmental disclosures made in the annual reports of a sample of 109 large firms operating in Portugal during the period 2002-04. Using the content analysis technique, we developed an index to assess the presence and breadth of environmental disclosures in companies' annual reports. Based on the extant literature, several firm attributes were selected and their influence on the level of environmental disclosure was tested empirically. The selected explanatory variables were firm size, industry membership, profitability, foreign ownership, stock market listing and environmental certification. The results reveal that, although the level of environmental information disclosed during the period 2002-04 is low, the extent of environmental disclosure has increased, as has the number of Portuguese companies that disclose environmental information. Moreover, firm size and stock market listing are positively related to the extent of environmental disclosure. This study adds to the international research on environmental disclosure by providing empirical data from Portugal, a country for which empirical evidence is still scarce, extending the scope of the current understanding of environmental reporting practices.
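The index construction and the per-variable association test can be illustrated in a few lines; the checklist items and the simple OLS slope below are hypothetical stand-ins for the paper's actual content-analysis categories and statistical model:

```python
def disclosure_index(report_items, checklist):
    """Unweighted disclosure index: share of checklist items that a
    company's annual report discloses.  The checklist categories used
    in any real study would come from the content-analysis protocol."""
    return sum(1 for item in checklist if item in report_items) / len(checklist)

def ols_slope(x, y):
    """Slope of a simple least-squares regression of y on x -- the
    basic association test applied to each explanatory variable
    (e.g. firm size vs. disclosure index)."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
    var = sum((xi - mx) ** 2 for xi in x)
    return cov / var
```

A positive slope of the index on (log) firm size would correspond to the size effect the abstract reports.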

Relevance: 20.00%

Abstract:

In this paper, we present results on the use of multilayered a-SiC:H heterostructures as a device for wavelength-division demultiplexing of optical signals. These devices are useful in optical communications applications that use the wavelength division multiplexing technique to encode multiple signals into the same transmission medium. The device is composed of two stacked p-i-n photodiodes, both optimized for the selective collection of photogenerated carriers. Band gap engineering was used to adjust the photogeneration and recombination rate profiles of the intrinsic absorber regions of each photodiode to short and long wavelength absorption in the visible spectrum. The photocurrent signal using different input optical channels was analyzed at reverse and forward bias and under steady state illumination. A demux algorithm based on the voltage-controlled sensitivity of the device was proposed and tested. An electrical model of the WDM device is presented and supported by the solution of the respective circuit equations.
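The voltage-controlled demux idea can be sketched as a linear two-channel recovery: measure the photocurrent at two bias points with known (calibrated) sensitivities and invert the resulting 2x2 system. The sensitivity values and the linear calibration model below are illustrative assumptions, not the paper's measured device data:

```python
def demux_two_channels(i_rev, i_fwd, s):
    """Recover the two optical channel powers from photocurrents measured
    under reverse and forward bias.  s[(bias, ch)] are device
    sensitivities (A/W) to each channel at each bias, assumed known
    from calibration.  Solves the 2x2 linear system
        I_bias = s[bias,'short']*P_short + s[bias,'long']*P_long."""
    a, b = s[('rev', 'short')], s[('rev', 'long')]
    c, d = s[('fwd', 'short')], s[('fwd', 'long')]
    det = a * d - b * c                      # must be nonzero: the two
    p_short = (i_rev * d - i_fwd * b) / det  # biases must weight the
    p_long = (i_fwd * a - i_rev * c) / det   # channels differently
    return p_short, p_long
```

The scheme only works because bias changes the relative sensitivity to the two wavelengths, which is exactly the voltage-controlled behaviour the stacked p-i-n structure is engineered to provide.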

Relevance: 20.00%

Abstract:

Power system organization has gone through huge changes in recent years. A significant increase in distributed generation (DG) and operation in the scope of liberalized markets are two relevant driving forces for these changes. More recently, the smart grid (SG) concept has gained increased importance and is seen as a paradigm able to support power system requirements for the future. This paper proposes a computational architecture to support day-ahead Virtual Power Player (VPP) bid formation in the smart grid context. This architecture includes a forecasting module, a resource optimization and Locational Marginal Price (LMP) computation module, and a bid formation module. Due to the characteristics of the problems involved, the implementation of this architecture requires the use of Artificial Intelligence (AI) techniques. Artificial Neural Networks (ANN) are used for resource and load forecasting and Evolutionary Particle Swarm Optimization (EPSO) is used for energy resource scheduling. The paper presents a case study that considers a 33-bus distribution network with 67 distributed generators, 32 loads and 9 storage units.
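For the scheduling step, a plain particle swarm loop conveys the idea; EPSO additionally evolves its strategic weights through selection and mutation, which is omitted here for brevity. This is a generic sketch under an assumed box-constrained cost function, not the paper's implementation:

```python
import random

def pso_minimize(cost, dim, bounds, n_particles=20, iters=100, seed=1):
    """Minimal particle swarm optimisation: each particle tracks its
    personal best and is pulled toward it and toward the swarm's global
    best, with positions clipped to the feasible box."""
    rng = random.Random(seed)
    lo, hi = bounds
    pos = [[rng.uniform(lo, hi) for _ in range(dim)] for _ in range(n_particles)]
    vel = [[0.0] * dim for _ in range(n_particles)]
    pbest = [p[:] for p in pos]
    pbest_cost = [cost(p) for p in pos]
    g = min(range(n_particles), key=lambda i: pbest_cost[i])
    gbest, gbest_cost = pbest[g][:], pbest_cost[g]
    w, c1, c2 = 0.7, 1.5, 1.5          # inertia and acceleration weights
    for _ in range(iters):
        for i in range(n_particles):
            for d in range(dim):
                vel[i][d] = (w * vel[i][d]
                             + c1 * rng.random() * (pbest[i][d] - pos[i][d])
                             + c2 * rng.random() * (gbest[d] - pos[i][d]))
                pos[i][d] = min(hi, max(lo, pos[i][d] + vel[i][d]))
            c = cost(pos[i])
            if c < pbest_cost[i]:
                pbest[i], pbest_cost[i] = pos[i][:], c
                if c < gbest_cost:
                    gbest, gbest_cost = pos[i][:], c
    return gbest, gbest_cost
```

In the VPP setting each particle would encode a candidate dispatch of the distributed generators and storage units, and the cost function would evaluate operation cost subject to network constraints.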

Relevance: 20.00%

Abstract:

In recent decades, all over the world, competition in the electric power sector has deeply changed the way this sector's agents play their roles. In most countries, electricity sector deregulation was conducted in stages, beginning with the clients of higher voltage levels and larger electricity consumption, and later extended to all electrical consumers. The sector liberalization and the operation of competitive electricity markets were expected to lower prices and improve quality of service, leading to greater consumer satisfaction. Transmission and distribution remain noncompetitive business areas, due to the large infrastructure investments required. However, the industry has yet to clearly establish the best business model for transmission in a competitive environment. After generation, the electricity needs to be delivered to the electrical system nodes where demand requires it, taking into consideration transmission constraints and electrical losses. If the amount of power flowing through a certain line is close to or surpasses the safety limits, then cheap but distant generation might have to be replaced by more expensive closer generation to reduce the exceeded power flows. In a congested area, the optimal price of electricity rises to the marginal cost of the local generation or to the level needed to ration demand to the amount of available electricity. Even without congestion, some power will be lost in the transmission system through heat dissipation, so prices reflect that it is more expensive to supply electricity at the far end of a heavily loaded line than close to an electric power generation site. Locational marginal pricing (LMP), resulting from bidding competition, represents electrical and economic values at nodes or in areas that may provide economic indicator signals to the market agents. This article proposes a data-mining-based methodology that helps characterize zonal prices in real power transmission networks.
To test our methodology, we used an LMP database from the California Independent System Operator (CAISO) for 2009 to identify economic zones. (CAISO is a nonprofit public benefit corporation charged with operating the majority of California's high-voltage wholesale power grid.) To group the buses into typical classes, each representing a set of buses with approximately the same LMP value, we used two-step and k-means clustering algorithms. By analyzing the various LMP components, our goal was to extract knowledge to support the ISO in investment and network-expansion planning.
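A one-dimensional k-means pass over bus LMP values is enough to convey the grouping step; this toy sketch (which assumes at least k buses and reasonably spread values) is not CAISO-specific:

```python
def kmeans_1d(values, k, iters=50):
    """Plain k-means on scalar LMP values: repeatedly assign each bus to
    its nearest centroid and move each centroid to its cluster mean,
    yielding k typical price classes."""
    # Seed centroids by taking evenly spaced sorted values.
    centroids = sorted(values)[::max(1, len(values) // k)][:k]
    clusters = [[] for _ in range(k)]
    for _ in range(iters):
        clusters = [[] for _ in range(k)]
        for v in values:
            j = min(range(k), key=lambda c: abs(v - centroids[c]))
            clusters[j].append(v)
        new = [sum(c) / len(c) if c else centroids[j]
               for j, c in enumerate(clusters)]
        if new == centroids:      # converged: assignments are stable
            break
        centroids = new
    return centroids, clusters
```

Buses landing in the same cluster form a candidate economic zone; running the same grouping per LMP component (energy, congestion, losses) separates zones driven by congestion from those driven by losses.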

Relevance: 20.00%

Abstract:

Lighting is one of the most important factors in human interaction with the environment. Poor lighting may increase the risk of accidents and can also cause a variety of symptoms, including rapid fatigue, headaches, eyestrain, tired eyes, dry eyes, ocular surface symptoms (watery and irritated eyes), decreased concentration and stress, as well as specific disorders such as reduced sharpness of vision (blurred and double vision) and slowness in changing focus. Apart from the advantages for the health and welfare of workers, good lighting also leads to faster job performance, fewer errors, better safety, fewer accidents and less absenteeism. The overall effect is better productivity. Good lighting involves both quantity and quality requirements, and should be appropriate to the activity/task being carried out, bearing in mind the comfort and visual efficiency of the worker.

Relevance: 20.00%

Abstract:

Purpose - The study evaluates the pre- and post-training lesion localisation ability of a group of novice observers. Parallels are drawn with the performance of inexperienced radiographers taking part in preliminary clinical evaluation (PCE) and 'red-dot' systems operating within radiography practice. Materials and methods - Thirty-four novice observers searched 92 images for simulated lesions. Pre-training and post-training evaluations were completed following the free-response receiver operating characteristic (FROC) method. Training consisted of observer performance methodology, the characteristics of the simulated lesions and information on lesion frequency. Jackknife alternative FROC (JAFROC) and highest-rating inferred ROC analyses were performed to evaluate performance differences on lesion-based and case-based decisions. The significance level of the test was set at 0.05 to control the probability of Type I error. Results - JAFROC analysis (F(3,33) = 26.34, p < 0.0001) and highest-rating inferred ROC analysis (F(3,33) = 10.65, p = 0.0026) revealed a statistically significant difference in lesion detection performance. The JAFROC figure of merit was 0.563 (95% CI 0.512, 0.614) pre-training and 0.677 (95% CI 0.639, 0.715) post-training. The highest-rating inferred ROC figure of merit was 0.728 (95% CI 0.701, 0.755) pre-training and 0.772 (95% CI 0.750, 0.793) post-training. Conclusions - This study has demonstrated that novice observer performance can improve significantly with training. The study design may have relevance to the assessment of inexperienced radiographers taking part in PCE or a commenting scheme for trauma.
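The case-based (highest-rating inferred ROC) figure of merit is equivalent to the Mann-Whitney statistic on per-case scores: the probability that a diseased case receives a higher rating than a normal one, counting ties as one half. A minimal empirical sketch, not the JAFROC software itself:

```python
def inferred_roc_auc(diseased_ratings, normal_ratings):
    """Empirical ROC area from highest-rating-per-case scores, computed
    as the Mann-Whitney statistic: for every (diseased, normal) case
    pair, score 1 if the diseased case is rated higher, 0.5 on ties."""
    wins = 0.0
    for d in diseased_ratings:
        for n in normal_ratings:
            if d > n:
                wins += 1.0
            elif d == n:
                wins += 0.5
    return wins / (len(diseased_ratings) * len(normal_ratings))
```

An area of 0.5 corresponds to chance-level discrimination and 1.0 to perfect separation, which is the scale on which the 0.728 and 0.772 figures of merit above are read.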

Relevance: 20.00%

Abstract:

Embedded systems are increasingly complex and dynamic, imposing progressively higher development time and costs. Tuning a particular system for deployment is thus becoming more demanding, all the more so for systems which have to adapt themselves to evolving requirements and changing service requests. In this perspective, run-time monitoring of the system behaviour becomes an important requirement, allowing the actual scheduling progress and resource utilization to be captured dynamically. For this to succeed, operating systems need to expose their internal behaviour and state, making them available to external applications, and a run-time monitoring mechanism must be available. However, such a mechanism can impose a burden on the system itself if not used wisely. In this paper we explore this problem and propose a framework intended to provide this run-time mechanism while achieving code separation, run-time efficiency and flexibility for the final developer.

Relevance: 20.00%

Abstract:

Our day-to-day life depends on several embedded devices, and in the near future many more objects will have computation and communication capabilities, enabling an Internet of Things. Correspondingly, with the increasing interaction of these devices around us, developing novel applications is set to become challenging with current software infrastructures. In this paper, we argue that a new paradigm for operating systems needs to be conceptualized to provide a conducive base for application development on cyber-physical systems. We demonstrate its need and importance using a few use-case scenarios and present the design principles behind, and an architecture of, a co-operating system (CoS) that can serve as an example of this new paradigm.