937 results for Distribution transformer modeling
Abstract:
Background How accurately do people perceive extreme wind speeds and how does that perception affect the perceived risk? Prior research on human–wind interaction has focused on comfort levels in urban settings or knock-down thresholds. No systematic experimental research has attempted to assess people's ability to estimate extreme wind speeds and perceptions of their associated risks. Method We exposed 76 people to 10, 20, 30, 40, 50, and 60 mph (4.5, 8.9, 13.4, 17.9, 22.3, and 26.8 m/s) winds in randomized orders and asked them to estimate wind speed and the corresponding risk they felt. Results Multilevel modeling showed that people were accurate at lower wind speeds but overestimated wind speeds at higher levels. Wind speed perceptions mediated the direct relationship between actual wind speeds and perceptions of risk (i.e., the greater the perceived wind speed, the greater the perceived risk). The number of tropical cyclones people had experienced moderated the strength of the actual–perceived wind speed relationship; consequently, mediation was stronger for people who had experienced fewer storms. Conclusion These findings provide a clearer understanding of wind and risk perception, which can aid development of public policy solutions toward communicating the severity and risks associated with natural disasters.
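The mediation claim (actual wind speed → perceived wind speed → perceived risk) can be sketched with the product-of-coefficients approach. The slopes (1.2 and 0.5) and noise levels below are invented for illustration, and the sketch deliberately ignores the study's multilevel structure and moderation by storm experience:

```python
import random

def slope(x, y):
    """Ordinary least-squares slope of y on x."""
    mx, my = sum(x) / len(x), sum(y) / len(y)
    return (sum((a - mx) * (b - my) for a, b in zip(x, y))
            / sum((a - mx) ** 2 for a in x))

rng = random.Random(1)
actual = [10, 20, 30, 40, 50, 60] * 50                     # mph, as in the experiment
perceived = [1.2 * s + rng.gauss(0, 2) for s in actual]    # assumed 20% overestimation
risk = [0.5 * p + rng.gauss(0, 1) for p in perceived]      # assumed risk scaling

a_path = slope(actual, perceived)      # actual -> perceived path
b_path = slope(perceived, risk)        # perceived -> risk path
indirect = a_path * b_path             # product-of-coefficients indirect effect
```

With synthetic data this size, the estimated paths recover the generating slopes closely, so the indirect effect lands near 1.2 × 0.5 = 0.6.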
Abstract:
Standard Monte Carlo (sMC) simulation models have been widely used in AEC industry research to address system uncertainties. Although the benefits of probabilistic simulation analyses over deterministic methods are well documented, the sMC simulation technique is quite sensitive to the probability distributions of the input variables. This sensitivity becomes highly pronounced when the region of interest within the joint probability distribution (a function of the input variables) is small. In such cases, the standard Monte Carlo approach is often impractical from a computational standpoint. In this paper, a comparative analysis of standard Monte Carlo simulation and Markov Chain Monte Carlo with subset simulation (MCMC/ss) is presented. The MCMC/ss technique is a more complex simulation method (relative to sMC) in which a structured sampling algorithm replaces completely randomized sampling; consequently, gains in computational efficiency can be made. The two simulation methods are compared via theoretical case studies.
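A minimal sketch of subset simulation on a scalar toy problem (estimating a small tail probability P(g(X) > b) for X ~ N(0,1)) shows the structured-sampling idea: rare events are reached through a chain of more probable intermediate levels, each populated by a modified Metropolis sampler. This is an illustration of the general technique, not the paper's formulation:

```python
import math
import random

def subset_simulation(g, b_fail, n=1000, p0=0.1, seed=0):
    """Estimate p = P(g(X) > b_fail) for X ~ N(0,1) by subset simulation."""
    rng = random.Random(seed)
    xs = [rng.gauss(0.0, 1.0) for _ in range(n)]   # level 0: plain Monte Carlo
    p_est = 1.0
    for _ in range(20):                            # cap on the number of levels
        xs.sort(key=g, reverse=True)
        n_keep = int(p0 * n)
        b = g(xs[n_keep - 1])                      # adaptive intermediate threshold
        if b >= b_fail:                            # final level: count real failures
            n_fail = sum(1 for x in xs if g(x) > b_fail)
            return p_est * n_fail / n
        p_est *= p0                                # each level contributes factor p0
        seeds, xs = xs[:n_keep], []
        # modified Metropolis: sample from N(0,1) conditioned on g(x) > b
        for s in seeds:
            x = s
            for _ in range(n // n_keep):
                cand = x + rng.gauss(0.0, 1.0)
                accept = rng.random() < min(1.0, math.exp((x * x - cand * cand) / 2))
                if accept and g(cand) > b:
                    x = cand
                xs.append(x)
    return p_est
```

For g(x) = x and b = 3.5, the true probability is about 2.3e-4; plain Monte Carlo with n = 1000 would usually see zero failures, while the chained levels reach the tail with the same total sample budget per level.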
Abstract:
Sediment samples from 13 sampling sites in Deception Bay, Australia were analysed for the presence of heavy metals. Enrichment factors, modified contamination indices and Nemerow pollution indices were calculated for each sampling site to determine sediment quality. The results indicate significant pollution of most sites by lead (average enrichment factor (EF) of 13), but there is also enrichment of arsenic (average EF 2.3), zinc (average EF 2.7) and other heavy metals. The modified degree of contamination index (average 1.0) suggests that there is little contamination. By contrast, the Nemerow pollution index (average 5.8) suggests that Deception Bay is heavily contaminated. Cluster analysis was undertaken to identify groups of elements; strong correlations between some elements, and two distinct clusters of sampling sites based on sediment type, were evident. These results have implications for pollution in complex marine environments where there is significant influx of sand and sediment into an estuarine environment.
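The three sediment-quality measures named above have standard closed forms, which helps explain why they can disagree (the Nemerow index is dominated by the single worst element, while the modified degree of contamination averages across elements). A minimal sketch, with background values and element ordering left to the analyst:

```python
import math

def enrichment_factor(c_metal, c_ref, bg_metal, bg_ref):
    """EF: metal/reference ratio in the sample over the same ratio in
    background material, using a conservative reference element (e.g. Al, Fe)."""
    return (c_metal / c_ref) / (bg_metal / bg_ref)

def modified_contamination_degree(conc, background):
    """mCd: mean of the single-element contamination factors C_i / B_i."""
    cf = [c / b for c, b in zip(conc, background)]
    return sum(cf) / len(cf)

def nemerow_index(conc, background):
    """Nemerow PI: root-mean-square of the mean and the maximum single
    pollution index, so one badly polluted element dominates the score."""
    pi = [c / b for c, b in zip(conc, background)]
    mean_pi = sum(pi) / len(pi)
    return math.sqrt((mean_pi ** 2 + max(pi) ** 2) / 2)
```

For example, a site where one element is at twice background and the rest are at background yields a modest mCd but a noticeably larger Nemerow index, mirroring the divergence reported above.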
Abstract:
Rapidly increasing electricity demands and capacity shortage of transmission and distribution facilities are the main driving forces for the growth of Distributed Generation (DG) integration in power grids. One of the reasons for choosing a DG is its ability to support voltage in a distribution system. Selection of effective DG characteristics and DG parameters is a significant concern of distribution system planners to obtain maximum potential benefits from the DG unit. This paper addresses the issue of improving the network voltage profile in distribution systems by installing a DG of the most suitable size, at a suitable location. An analytical approach is developed based on algebraic equations for uniformly distributed loads to determine the optimal operation, size and location of the DG in order to achieve required levels of network voltage. The developed method is simple to use for conceptual design and analysis of distribution system expansion with a DG and suitable for a quick estimation of DG parameters (such as optimal operating angle, size and location of a DG system) in a radial network. A practical network is used to verify the proposed technique and test results are presented.
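As a rough illustration of the voltage-profile reasoning (not the paper's analytical equations), consider a per-unit radial feeder with a uniformly distributed, purely resistive load: the current through the segment at distance s from the source is i·(L − s), so the drop at x integrates to r·i·(Lx − x²/2), and a DG injecting current at location a relieves the drop everywhere upstream of a. A brute-force search over size and location then minimises the worst voltage deviation; all parameter values are illustrative:

```python
def drop_without_dg(x, L, r, i):
    """Voltage drop at distance x: integral of r * i * (L - s) from 0 to x."""
    return r * i * (L * x - x * x / 2.0)

def drop_with_dg(x, L, r, i, a, I_dg):
    """DG at location a injecting I_dg relieves the drop over min(x, a)."""
    return drop_without_dg(x, L, r, i) - r * I_dg * min(x, a)

def best_dg(L=1.0, r=1.0, i=1.0, steps=50):
    """Brute-force the DG size and location minimising the worst absolute
    voltage deviation along the feeder (grid search, illustrative only)."""
    best = (float("inf"), 0.0, 0.0)
    for ja in range(1, steps + 1):
        a = L * ja / steps
        for jI in range(steps + 1):
            I_dg = i * L * jI / steps          # candidate DG up to total feeder load
            worst = max(abs(drop_with_dg(L * k / steps, L, r, i, a, I_dg))
                        for k in range(steps + 1))
            if worst < best[0]:
                best = (worst, a, I_dg)
    return best
```

Without DG the worst drop on the unit feeder is r·i·L²/2 = 0.5 p.u.; a well-placed DG cuts this several-fold, which is the intuition behind sizing and siting a single DG for voltage support.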
Abstract:
Due to rapidly diminishing international supplies of fossil fuels such as petroleum and diesel, the cost of fuel is constantly increasing, raising the cost of living because many industries rely heavily on motor vehicles. Many technologies have been developed to replace part or all of a fossil fuel with biofuels. One dual-fuel technology is ethanol fumigation in diesel engines, which injects ethanol into the intake air stream of the engine. Its advantage is that it avoids costly modification of the engine's high-pressure diesel injection system while reducing the volume of diesel required and potentially increasing power output and efficiency. This paper investigates the performance of a diesel engine converted to implement ethanol fumigation. The study uses existing experimental data together with computer-modeled results generated with the program AVL Boost. Data from both the experiments and the numerical simulation indicate desirable results for peak pressure and indicated mean effective pressure (IMEP). Increasing the ethanol substitution elevated combustion pressure and the IMEP, while varying the ethanol injection location produced negligible change. These increases in cylinder pressure led to higher work output and total efficiency as ethanol substitution was increased. In comparing the numerical and experimental results, the simulation showed slightly elevated values, attributable to inaccuracies in the heat-release models. Future work is required to improve the combustion model and to investigate the effect of varying the ethanol injection location.
Abstract:
The porosity and pore size distribution of coals determine many of their properties, from gas release to their behavior on carbonization, and yet most methods of determining pore size distribution can only examine a restricted size range. Even then, only accessible pores can be investigated with these methods. Small-angle neutron scattering (SANS) and ultra small-angle neutron scattering (USANS) are increasingly used to characterize the size distribution of all of the pores non-destructively. Here we have used USANS/SANS to examine 24 well-characterized bituminous and subbituminous coals: three from the eastern US, two from Poland, one from New Zealand and the rest from the Sydney and Bowen Basins in Eastern Australia, and determined the relationships of the scattering intensity corresponding to different pore sizes with other coal properties. The range of pore radii examinable with these techniques is 2.5nm to 7μm. We confirm that there is a wide range of pore sizes in coal. The pore size distribution was found to be strongly affected by both rank and type (expressed as either hydrogen or vitrinite content) in the size range 250nm to 7μm and 5 to 10nm, but weakly in intermediate regions. The results suggest that different mechanisms control coal porosity on different scales. Contrast-matching USANS and SANS were also used to determine the size distribution of the fraction of the pores in these coals that are inaccessible to deuterated methane, CD4, at ambient temperature. In some coals most of the small (~10nm) pores were found to be inaccessible to CD4 on the time scale of the measurement (~30min–16h). This inaccessibility suggests that in these coals a considerable fraction of inherent methane may be trapped for extended periods of time, thus reducing the effectiveness of methane release from (or sorption by) these coals. Although the number of small pores was smaller in higher rank coals, the fraction of total pores that was inaccessible was not rank dependent.
In the Australian coals, at the 10nm to 50nm size scales the pores in inertinites appeared to be completely accessible to CD4, whereas the pores in the vitrinite were about 75% inaccessible. Unlike the results for total porosity, which showed no regional effects on relationships between porosity and coal properties, clear regional differences in the relationships between the fraction of closed porosity and coal properties were found. The 10 to 50nm-sized pores of inertinites of the US and Polish coals examined appeared less accessible to methane than those of the inertinites of Australian coals. This difference in pore accessibility in inertinites may explain why empirical relationships between fluidity and coking properties developed using Carboniferous coals do not apply to Australian coals.
Abstract:
The catalytic action of putrescine specific amine oxidases acting in tandem with 4-aminobutyraldehyde dehydrogenase is explored as a degradative pathway in Rhodococcus opacus. By limiting the nitrogen source, increased catalytic activity was induced leading to a coordinated response in the oxidative deamination of putrescine to 4-aminobutyraldehyde and subsequent dehydrogenation to 4-aminobutyrate. Isolating the dehydrogenase by ion exchange chromatography and gel filtration revealed that the enzyme acts principally on linear aliphatic aldehydes possessing an amino moiety. Michaelis-Menten kinetic analysis delivered a Michaelis constant (KM=0.014mM) and maximum rate (Vmax=11.2μmol/min/mg) for the conversion of 4-aminobutyraldehyde to 4-aminobutyrate. The dehydrogenase identified by MALDI-TOF mass spectrometric analysis (E value=0.031, 23% coverage) belongs to a functionally related genomic cluster that includes the amine oxidase, suggesting their association in a directed cell response. Key regulatory, stress and transport encoding genes have been identified, along with candidate dehydrogenases and transaminases for the further conversion of 4-aminobutyrate to succinate. Genomic analysis has revealed highly similar metabolic gene clustering among members of Actinobacteria, providing insight into putrescine degradation notably among Micrococcaceae, Rhodococci and Corynebacterium by a pathway that was previously uncharacterised in bacteria.
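The reported kinetic constants plug directly into the Michaelis-Menten rate law v = Vmax·S/(KM + S), and KM and Vmax are conventionally recovered from rate measurements via the double-reciprocal (Lineweaver-Burk) linearisation. A sketch using the paper's reported values as defaults (the substrate concentrations in the usage example are invented):

```python
def michaelis_menten(s, km=0.014, vmax=11.2):
    """Initial rate v = Vmax*S/(KM + S); KM in mM, Vmax in umol/min/mg.
    Defaults are the values reported for 4-aminobutyraldehyde."""
    return vmax * s / (km + s)

def fit_lineweaver_burk(s_vals, v_vals):
    """Recover (KM, Vmax) from the linearisation
    1/v = (KM/Vmax)*(1/s) + 1/Vmax by least squares."""
    xs = [1.0 / s for s in s_vals]
    ys = [1.0 / v for v in v_vals]
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    b = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
         / sum((x - mx) ** 2 for x in xs))       # slope = KM/Vmax
    a = my - b * mx                              # intercept = 1/Vmax
    return b / a, 1.0 / a
```

At S = KM the rate is exactly Vmax/2 (5.6 μmol/min/mg here), and fitting noiseless synthetic rates recovers the constants exactly, since the reciprocal relation is perfectly linear.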
Abstract:
This paper presents two novel nonlinear models of u-shaped anti-roll tanks for ships, and their linearizations. In addition, a third simplified nonlinear model is presented. The models are derived using Lagrangian mechanics. This formulation not only simplifies the modeling process, but also allows one to obtain models that satisfy energy-related physical properties. The proposed nonlinear models and their linearizations are validated using model-scale experimental data. Unlike other models in the literature, the nonlinear models in this paper are valid for large roll amplitudes. Even at moderate roll angles, the nonlinear models have three orders of magnitude lower mean square error relative to experimental data than the linear models.
Abstract:
Process models are usually depicted as directed graphs, with nodes representing activities and directed edges representing control flow. While structured processes with pre-defined control flow have been studied in detail, flexible processes including ad-hoc activities need further investigation. This paper presents the flexible process graph, a novel approach to modeling processes in the context of dynamic environments and adaptive process participants' behavior. The approach allows defining execution constraints, which are more restrictive than traditional ad-hoc processes and less restrictive than traditional control flow, thereby balancing structured control flow with unstructured ad-hoc activities. The flexible process graph focuses on what can be done to perform a process; process participants' routing decisions are based on the current process state. As a formal grounding, the approach uses hypergraphs, where each edge can associate any number of nodes. Hypergraphs are used to formally define the execution semantics of processes. We provide a process scenario to motivate and illustrate the approach.
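The hypergraph idea can be sketched as a small data structure: activities are nodes, and each hyperedge relates any number of them. The enabling rule below (an activity may run once some hyperedge containing it has all its other members completed) is one plausible state-based semantics chosen for illustration, not the paper's formal definition:

```python
from dataclasses import dataclass, field

@dataclass
class FlexibleProcessGraph:
    """Activities as nodes; each hyperedge associates any number of them."""
    nodes: set = field(default_factory=set)
    hyperedges: list = field(default_factory=list)   # list of frozensets

    def add_edge(self, *activities):
        self.nodes.update(activities)
        self.hyperedges.append(frozenset(activities))

    def enabled(self, done):
        """Activities enabled in the current state: a is enabled if some
        hyperedge containing a has all its *other* members already done."""
        done = set(done)
        out = set()
        for e in self.hyperedges:
            for a in e - done:
                if (e - {a}) <= done:
                    out.add(a)
        return out
```

A singleton hyperedge {A} makes A immediately available, while a hyperedge {A, B, C} only enables its last unfinished member, giving routing that depends on the process state rather than on a fixed control-flow order.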
Abstract:
Industrial transformers are among the most critical assets in the power and heavy industries, and their failures can cause enormous losses. Poor joints in a transformer's electrical circuit can cause overheating and result in stress concentration in the structure, a major cause of catastrophic failure. Little research has focused on the mechanical properties of industrial transformers under overheating conditions. In this paper, both the mechanical and the thermal properties of industrial transformers are jointly investigated using Finite Element Analysis (FEA). Dynamic response analysis is conducted on a modified transformer FEA model, and the computational results are compared with experimental results from the literature to validate the simulation model. Based on the FEA model, thermal stress is calculated under different temperature conditions. These results provide insight into transformer failure due to overheating and are therefore significant for assessing winding faults, especially in the manufacturing and maintenance of large transformers.
Abstract:
Lean construction and building information modeling (BIM) are quite different initiatives, but both are having profound impacts on the construction industry. A rigorous analysis of the myriad specific interactions between them indicates that a synergy exists which, if properly understood in theoretical terms, can be exploited to improve construction processes beyond the degree to which it might be improved by application of either of these paradigms independently. Using a matrix that juxtaposes BIM functionalities with prescriptive lean construction principles, 56 interactions have been identified, all but four of which represent constructive interaction. Although evidence for the majority of these has been found, the matrix is not considered complete but rather a framework for research to explore the degree of validity of the interactions. Construction executives, managers, designers, and developers of information technology systems for construction can also benefit from the framework as an aid to recognizing the potential synergies when planning their lean and BIM adoption strategies.
Abstract:
Unstable density-driven flow can lead to enhanced solute transport in groundwater. Only recently has the complex fingering pattern associated with free convection been documented in field settings. Electrical resistivity (ER) tomography has been used to capture a snapshot of convective instabilities at a single point in time, but a thorough transient analysis is still lacking in the literature. We present the results of a 2 year experimental study at a shallow aquifer in the United Arab Emirates that was designed to specifically explore the transient nature of free convection. ER tomography data documented the presence of convective fingers following a significant rainfall event. We demonstrate that the complex fingering pattern had completely disappeared a year after the rainfall event. The observation is supported by an analysis of the aquifer halite budget and hydrodynamic modeling of the transient character of the fingering instabilities. Modeling results show that the transient dynamics of the gravitational instabilities (their initial development, infiltration into the underlying lower-density groundwater, and subsequent decay) are in agreement with the timing observed in the time-lapse ER measurements. All experimental observations and modeling results are consistent with the hypothesis that a dense brine that infiltrated into the aquifer from a surficial source was the cause of free convection at this site, and that the finite nature of the dense brine source and dispersive mixing led to the decay of instabilities with time. This study highlights the importance of the transience of free convection phenomena and suggests that these processes are more rapid than was previously understood.
Abstract:
Designing systems for multiple stakeholders requires frequent collaboration with those stakeholders from the start. In many cases, at least some stakeholders lack a professional habit of formal modeling. We report observations from two case studies of stakeholder involvement in early design in which non-formal techniques supported strong collaboration, resulting in deep understanding of requirements and of the feasibility of solutions.
Abstract:
Motivation: Task analysis for designing modern collaborative work needs a more fine-grained approach, especially in a complex task domain such as collaborative scientific authoring, where a single overall goal can be accomplished only through collaboration between multiple roles, each requiring its own expertise. We analyzed and reconsidered roles, activities, and objects when designing for complex collaboration contexts. Our main focus is a generic approach to designing for multiple roles and subtasks in a domain with a shared overall goal, which requires a detailed approach; collaborative authoring is our current example. This research is incremental: an existing task analysis approach (GTA) is reconsidered by applying it to a case of complex collaboration. Our analysis shows that designing for collaboration indeed requires a refined approach to task modeling: GTA will, in future, need to consider tasks at the lowest level that can be delegated or mandated. These tasks need to be analyzed and redesigned in more detail, along with the relevant task objects.
Abstract:
Process choreographies describe interactions between different business partners and the dependencies between these interactions. While various proposals have been made for capturing choreographies at an implementation level, it remains unclear how choreographies should be described at a conceptual level. Although the Business Process Modeling Notation (BPMN) is already in use for describing choreographies in terms of interconnected interface behavior models, this paper introduces interaction modeling using BPMN. Such interaction models do not suffer from incompatibility issues and are better suited for human modelers. BPMN extensions are proposed, and a mapping from interaction models to interface behavior models is presented.