885 results for Location-dependent control-flow patterns
Abstract:
The South Florida Water Management District (SFWMD) is responsible for managing over 2500 miles of waterways and hundreds of water control structures. Many of these control structures are experiencing erosion, known as scour, of the sediment downstream of the structure. Scour of the riverbed downstream of gated spillway structures can cause serious damage, as it can expose the foundation of the structure and lead to collapse. This type of scour has been studied previously, but it continues to pose a risk to water control structures and needs further study. Laboratory experiments were conducted to investigate the effectiveness of two-dimensional air diffusers and plate extensions (without air injection) on a 1/30 scale model of one of SFWMD's gated spillway structures, the S65E gated spillway. A literature review examining the results of similar studies was conducted, and the experimental design for this research was based on previous work done on the same model. The hydraulic scour channel used to conduct experiments contains a head tank, flow straighteners, gated spillway, stilling basin, scour chamber, sediment trap, and tailwater tank. Experiments were performed with two types of air diffusers. The first was a hollow, acrylic, triangular end sill with air injection holes on the upstream face, allowing for air injection upstream. The second diffuser was a hollow, acrylic rectangle that extended from the triangular end sill with air injection holes in the top face, allowing for vertical air injection, perpendicular to the flow. Detailed flow and bed measurements were taken for six trials for each diffuser, ranging from no air injection to 5 rows of 70 holes of 0.04" diameter. It was found that with both diffusers, the maximum amount of air injection reduced scour the most. Detailed velocity measurements were taken for each case, and turbulence statistics were analyzed to determine why air injection reduces scour. 
It was determined that air injection reduces streamwise velocity and turbulence. Another set of experiments was performed using an acrylic extension plate with no air injection, to minimize energy costs. Ten different plate lengths were tested, and it was found that the location of deepest scour moved further downstream with each increase in plate length. The 32-cm plate is recommended here. Detailed velocity measurements were taken after the cases with the 32-cm plate and with no plate had reached equilibrium, in order to better understand the flow patterns and determine what causes the scour reduction with the extension plates. The extension plate reduces the volume of scour but, more importantly, moves the deepest point of scour downstream from the structure, lessening the risk of damage.
Abstract:
The research described in this paper is directed toward increasing the productivity of draglines through automation. In particular, it focuses on the swing-to-dump, dump, and return-to-dig phases of the dragline operational cycle by developing a swing automation system. In typical operation the dragline boom can be in motion for up to 80% of the total cycle time, which provides considerable scope for improving cycle time through automated or partially automated boom motion control. This paper describes machine-vision-based sensor technology and control algorithms under development to solve the problem of continuous, real-time bucket location and control. Incorporating this capability into existing dragline control systems will then enable true automation of dragline swing and dump operations.
Abstract:
Component software has many benefits, most notably increased software re-use; however, the component software process places heavy burdens on programming language technology, which modern object-oriented programming languages do not address. In particular, software components require specifications that are both sufficiently expressive and sufficiently abstract, and, where possible, these specifications should be checked formally by the programming language. This dissertation presents a programming language called Mentok that provides two novel programming language features enabling improved specification of stateful component roles. Negotiable interfaces are interface types extended with protocols, and allow specification of changing method availability, including some patterns of out-calls and re-entrance. Type layers are extensions to module signatures that allow specification of abstract control flow constraints through the interfaces of a component-based application. Development of Mentok's unique language features included creation of MentokC, the Mentok compiler, and formalization of key properties of Mentok in mini-languages called MentokP and MentokL.
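The protocol idea behind negotiable interfaces can be illustrated outside Mentok. The sketch below is a hypothetical Python analogue (the class, state names, and protocol table are invented for illustration; Mentok checks such constraints statically at the type level, whereas this sketch can only enforce them at runtime):

```python
# Hypothetical runtime analogue of a "negotiable interface": an
# interface whose available methods change as the component moves
# through its protocol. The protocol table and states are invented
# examples, not Mentok syntax.

class ProtocolError(Exception):
    pass

class NegotiableInterface:
    # method name -> (states in which the call is legal, resulting state)
    PROTOCOL = {
        "open":  ({"closed"}, "open"),
        "read":  ({"open"},   "open"),
        "close": ({"open"},   "closed"),
    }

    def __init__(self):
        self.state = "closed"

    def _check(self, method):
        allowed, next_state = self.PROTOCOL[method]
        if self.state not in allowed:
            raise ProtocolError(
                f"{method}() not available in state {self.state!r}")
        self.state = next_state

    def open(self):
        self._check("open")

    def read(self):
        self._check("read")
        return "data"

    def close(self):
        self._check("close")

c = NegotiableInterface()
c.open()
assert c.read() == "data"   # legal: 'read' is available while open
c.close()
try:
    c.read()                # illegal: 'read' unavailable once closed
except ProtocolError:
    print("protocol violation caught")
```

A static checker such as MentokC would reject the final call at compile time instead of raising at runtime.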
Abstract:
We have developed a bioreactor vessel design which has the advantages of simplicity and ease of assembly and disassembly and, with an appropriately determined flow rate, even allows a scaffold to be suspended freely regardless of its weight. This article reports our experimental and numerical investigations to evaluate the performance of a newly developed non-perfusion conical bioreactor by visualizing the flow through scaffolds with 45° and 90° fiber lay down patterns. The experiments were conducted at Reynolds numbers (Re) of 121, 170, and 218, based on the local velocity and the width of the scaffolds. The flow fields were captured using short-time exposures of 60 µm particles suspended in the bioreactor and illuminated by a thin laser sheet. The effects of scaffold fiber lay down pattern and Reynolds number were obtained and compared to results from a computational fluid dynamics (CFD) software package. The objectives of this article are twofold: first, to investigate the hypothesis that there may be an insufficient exchange of medium within the interior of the scaffold when using our non-perfusion bioreactor; and second, to compare the flows within and around scaffolds of 45° and 90° fiber lay down patterns. Scaffold porosity was also found to influence the flow patterns. It was thereby shown that fluidic transport can be achieved within scaffolds with our bioreactor design, despite it being a non-perfusion vessel. Fluid velocities were generally of the same order of magnitude as, or one order lower than, the inlet flow velocity. Additionally, the 90° fiber lay down pattern scaffold was found to allow slightly higher fluid velocities within, as compared to the 45° fiber lay down pattern scaffold. This was due to the architecture and pore arrangement of the 90° fiber lay down pattern scaffold, which allows fluid to flow directly through (channel-like flow).
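The scaffold Reynolds number above is defined as Re = Uw/ν, with U the local velocity and w the scaffold width. As a rough sketch (the width and fluid properties below are assumed round values for a water-like medium, not figures from the paper), the local velocities implied by the three reported Re values can be recovered:

```python
# Re = U * w / nu. The width and viscosity here are assumptions for
# illustration only: w = 5 mm, water-like medium at room temperature.

NU = 1.0e-6   # kinematic viscosity of water, m^2/s (approx., 20 degC)
W = 0.005     # assumed scaffold width, m

def reynolds(u, width=W, nu=NU):
    """Re = U * w / nu for local velocity u (m/s)."""
    return u * width / nu

def velocity_for(re, width=W, nu=NU):
    """Local velocity (m/s) implied by a target Reynolds number."""
    return re * nu / width

for re in (121, 170, 218):   # the three Re values studied above
    u = velocity_for(re)
    print(f"Re = {re:3d}  ->  U ~ {u * 1000:.1f} mm/s")
```

Under these assumed values the studied Re range corresponds to local velocities of a few tens of mm/s; with the paper's actual scaffold width the numbers would differ proportionally.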
Abstract:
Under certain circumstances, an industrial hopper which operates under the "funnel-flow" regime can be converted to the "mass-flow" regime with the addition of a flow-corrective insert. This paper is concerned with calculating granular flow patterns near the outlet of hoppers that incorporate a particular type of insert, the cone-in-cone insert. The flow is considered to be quasi-static, and governed by the Coulomb-Mohr yield condition together with the non-dilatant double-shearing theory. In two dimensions, the hoppers are wedge-shaped, and as such the formulation for the wedge-in-wedge hopper also includes the case of asymmetrical hoppers. A perturbation approach, valid for high angles of internal friction, is used for both two-dimensional and axially symmetric flows, with analytic results possible for both leading order and correction terms. This perturbation scheme is compared with numerical solutions to the governing equations, and is shown to work very well for angles of internal friction in excess of 45 degrees.
Abstract:
Traditional workflow systems focus on providing support for the control-flow perspective of a business process, with other aspects such as data management and work distribution receiving markedly less attention. A guide to desirable workflow characteristics is provided by the well-known workflow patterns which are derived from a comprehensive survey of contemporary tools and modelling formalisms. In this paper we describe the approach taken to designing the newYAWL workflow system, an offering that aims to provide comprehensive support for the control-flow, data and resource perspectives based on the workflow patterns. The semantics of the newYAWL workflow language are based on Coloured Petri Nets thus facilitating the direct enactment and analysis of processes described in terms of newYAWL language constructs. As part of this discussion, we explain how the operational semantics for each of the language elements are embodied in the newYAWL system and indicate the facilities required to support them in an operational environment. We also review the experiences associated with developing a complete operational design for an offering of this scale using formal techniques.
Abstract:
Crashes on motorways contribute to a significant proportion (40-50%) of non-recurrent motorway congestion. Hence, reducing crashes will help address congestion issues (Meyer, 2008). Crash likelihood estimation studies commonly focus on traffic conditions in a short time window around the time of a crash, while longer-term pre-crash traffic flow trends are neglected. In this paper we show, through data mining techniques, that a relationship between pre-crash traffic flow patterns and crash occurrence on motorways exists, and that this knowledge has the potential to improve the accuracy of existing models and opens the path for new development approaches. The data for the analysis were extracted from records collected between 2007 and 2009 on the Shibuya and Shinjuku lines of the Tokyo Metropolitan Expressway in Japan. The dataset includes a total of 824 rear-end and sideswipe crashes that were matched with the traffic flow data of the hour prior to each crash using an incident detection algorithm. Traffic flow trends (traffic speed/occupancy time series) revealed that crashes could be clustered with regard to the dominant traffic flow pattern prior to the crash. Using the k-means clustering method allowed the crashes to be clustered based on their flow trends rather than their distance. Four major trends were found in the clustering results. Based on these findings, crash likelihood estimation algorithms can be fine-tuned based on the monitored traffic flow conditions with a sliding window of 60 minutes to increase the accuracy of the results and minimize false alarms.
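The clustering step described above can be sketched in miniature: k-means over fixed-length pre-crash speed time series, so crashes group by the shape of the speed trend. The profiles below are invented toy data, not the Tokyo Metropolitan Expressway records, and the seeding strategy is a deterministic simplification:

```python
# Minimal k-means over whole time series (Euclidean distance between
# series), sketching how pre-crash speed trends can be clustered.
# Toy data and deterministic seeding; illustration only.

def kmeans(series, k, iters=20):
    # deterministic seeding: evenly spaced series as initial centers
    centers = [list(series[i * len(series) // k]) for i in range(k)]
    for _ in range(iters):
        # assign each series to its nearest center
        labels = [min(range(k),
                      key=lambda c: sum((a - b) ** 2
                                        for a, b in zip(s, centers[c])))
                  for s in series]
        # recompute each center as the per-timestep mean of its members
        for c in range(k):
            members = [s for s, lab in zip(series, labels) if lab == c]
            if members:
                centers[c] = [sum(col) / len(members)
                              for col in zip(*members)]
    return labels, centers

# Toy pre-crash speed profiles (km/h over 6 steps):
# steady free flow vs. a breakdown before the crash
profiles = [
    [95, 94, 96, 95, 93, 94],
    [92, 93, 91, 92, 94, 93],
    [90, 85, 70, 50, 35, 30],
    [88, 80, 65, 45, 38, 32],
]
labels, _ = kmeans(profiles, k=2)
print("clusters:", labels)   # the two free-flow and two breakdown
                             # profiles land in separate clusters
```

The published study would additionally normalise the series and choose k from the data; this sketch only shows the grouping mechanism.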
Abstract:
Crashes that occur on motorways contribute to a significant proportion (40-50%) of non-recurrent motorway congestion. Hence, reducing the frequency of crashes assists in addressing congestion issues (Meyer, 2008). Crash likelihood estimation studies commonly focus on traffic conditions in a short time window around the time of a crash, while longer-term pre-crash traffic flow trends are neglected. In this paper we show, through data mining techniques, that a relationship between pre-crash traffic flow patterns and crash occurrence on motorways exists. We compare these patterns with normal traffic trends and show that this knowledge has the potential to improve the accuracy of existing models and opens the path for new development approaches. The data for the analysis were extracted from records collected between 2007 and 2009 on the Shibuya and Shinjuku lines of the Tokyo Metropolitan Expressway in Japan. The dataset includes a total of 824 rear-end and sideswipe crashes that were matched with the corresponding traffic flow data using an incident detection algorithm. Traffic trends (traffic speed time series) revealed that crashes can be clustered with regard to the dominant traffic patterns prior to the crash. Using the K-Means clustering method with a Euclidean distance function allowed the crashes to be clustered. Then, normal-situation data were extracted based on the time distribution of crashes and clustered for comparison with the “high risk” clusters. Five major trends were found in the clustering results for both high-risk and normal conditions. The study discovered that the traffic regimes differed in their speed trends. Based on these findings, crash likelihood estimation models can be fine-tuned based on the monitored traffic conditions with a sliding window of 30 minutes to increase the accuracy of the results and minimize false alarms.
Abstract:
Crashes that occur on motorways contribute to a significant proportion (40-50%) of non-recurrent motorway congestion. Hence, reducing the frequency of crashes assists in addressing congestion issues (Meyer, 2008). Analysing traffic conditions and discovering risky traffic trends and patterns are essential foundations of crash likelihood estimation studies and still require more attention and investigation. In this paper we show, through data mining techniques, that there is a relationship between pre-crash traffic flow patterns and crash occurrence on motorways, compare these patterns with normal traffic trends, and demonstrate that this knowledge has the potential to improve the accuracy of existing crash likelihood estimation models and opens the path for new development approaches. The data for the analysis were extracted from records collected between 2007 and 2009 on the Shibuya and Shinjuku lines of the Tokyo Metropolitan Expressway in Japan. The dataset includes a total of 824 rear-end and sideswipe crashes that were matched with the corresponding traffic flow data using an incident detection algorithm. Traffic trends (traffic speed time series) revealed that crashes can be clustered with regard to the dominant traffic patterns prior to the crash occurrence. The K-Means clustering algorithm was applied to determine dominant pre-crash traffic patterns. In the first phase of this research, traffic regimes were identified by analysing crashes and normal traffic situations using half an hour of speed data at locations upstream of crashes. In the second phase, different combinations of speed risk indicators were investigated to distinguish crashes from normal traffic situations more precisely. Five major trends were found in the first phase for both high-risk and normal conditions. The study discovered that the traffic regimes differed in their speed trends. Moreover, the second phase shows that the spatiotemporal difference of speed is a better risk indicator than the other combinations of speed-related risk indicators. Based on these findings, crash likelihood estimation models can be fine-tuned to increase the accuracy of estimations and minimize false alarms.
Abstract:
It is widely acknowledged that effective asset management requires an interdisciplinary approach, in which synergies should exist between traditional disciplines such as accounting, engineering, finance, humanities, logistics, and information systems technologies. Asset management is also an important, yet complex, business practice. Business process modelling is proposed as an approach to manage the complexity of asset management through the modelling of asset management processes. A sound foundation for the systematic application and analysis of business process modelling in asset management is, however, yet to be developed. Fundamentally, a business process consists of activities (termed functions), events/states, and control flow logic. As both events/states and control flow logic are somewhat dependent on the functions themselves, it is a logical step to first identify the functions within a process. This research addresses the current gap in knowledge by developing a method to identify functions common to various industry types (termed core functions). This lays the foundation for extracting such functions, so as to identify both commonalities and variation points in asset management processes. The method combines manual text mining with a taxonomy-based approach. An example is presented.
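The core-function idea can be sketched as a set intersection across industries: functions shared by every industry are core, and the remainder are candidate variation points. The function and industry names below are invented examples, not the taxonomy from this research:

```python
# Toy sketch: per-industry asset management functions (invented
# examples). Core functions are those common to all industries;
# everything else marks a variation point.

processes = {
    "utilities": {"inspect asset", "schedule maintenance",
                  "record failure", "calibrate meter"},
    "rail":      {"inspect asset", "schedule maintenance",
                  "record failure", "grind track"},
    "mining":    {"inspect asset", "schedule maintenance",
                  "record failure", "survey pit"},
}

# functions present in every industry's process
core = set.intersection(*processes.values())

# industry-specific functions are candidate variation points
variation_points = {ind: funcs - core for ind, funcs in processes.items()}

print("core functions:", sorted(core))
print("variation points:", variation_points)
```

In the research itself the candidate functions come from manual text mining of process documentation and are reconciled through a taxonomy before any such comparison is made.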
Abstract:
Flow patterns and aerodynamic characteristics behind three side-by-side square cylinders have been found to depend on the unequal gap spacings (g1 = s1/d and g2 = s2/d) between the three cylinders and the Reynolds number (Re), using the Lattice Boltzmann method. The effect of the Reynolds number on the flow behind the three cylinders is numerically studied for 75 ≤ Re ≤ 175 at chosen unequal gap spacings (g1, g2) = (1.5, 1), (3, 4) and (7, 6). We also investigate the effect of g2 while keeping g1 fixed for Re = 150. It is found that the Reynolds number has a strong effect on the flow at the small unequal gap spacing (g1, g2) = (1.5, 1.0). It is also found that the secondary cylinder interaction frequency contributes significantly at this unequal gap spacing for all chosen Reynolds numbers. At the intermediate unequal gap spacing (g1, g2) = (3, 4), the primary vortex shedding frequency plays the major role and the effect of the secondary cylinder interaction frequencies almost disappears. Some vortices merge near the exit and, as a result, a small modulation is found in the drag and lift coefficients. This means that increasing the Reynolds number and the unequal gap spacing weakens the wake interaction between the cylinders. At the large unequal gap spacing (g1, g2) = (7, 6), the flow is fully periodic and no small modulation is found in the drag and lift coefficient signals. It is found that the jet flows at unequal gap spacings strongly influence the wake interaction as the Reynolds number varies. These unequal gap spacings produce distinct wake patterns at different Reynolds numbers: flip-flopping, in-phase and anti-phase modulation synchronized, and in-phase and anti-phase synchronized. It is also observed that in the case of equal gap spacing between the cylinders, the effect of gap spacing is stronger than that of the Reynolds number. In the case of unequal gap spacing, on the other hand, the wake patterns depend strongly on both the unequal gap spacing and the Reynolds number. 
The vorticity contour visualization, time history analysis of the drag and lift coefficients, power spectrum analysis of the lift coefficient, and force statistics are systematically discussed for all chosen unequal gap spacings and Reynolds numbers to fully understand this valuable and practical problem.
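The power spectrum analysis of the lift coefficient mentioned above extracts the dominant vortex shedding frequency from the force signal. A minimal stand-in (using a synthetic lift signal and a zero-crossing frequency estimate rather than an FFT; the signal and frequency below are invented, not simulation data) is:

```python
# Estimate the dominant frequency of a lift coefficient signal by
# counting upward zero crossings. Synthetic signal for illustration;
# the actual study uses power spectra of simulated lift coefficients.

import math

def dominant_frequency(signal, dt):
    """Estimate frequency (Hz) from upward zero crossings."""
    crossings = sum(1 for a, b in zip(signal, signal[1:]) if a < 0 <= b)
    duration = dt * (len(signal) - 1)
    return crossings / duration

dt = 0.001                               # sampling interval, s
t = [i * dt for i in range(10000)]       # ~10 s of samples
f_shed = 7.0                             # assumed shedding frequency, Hz
cl = [0.8 * math.sin(2 * math.pi * f_shed * x) for x in t]

print(f"estimated shedding frequency: {dominant_frequency(cl, dt):.2f} Hz")
```

For the modulated signals at small gap spacings, a full spectrum (e.g. an FFT) is needed to separate the primary shedding peak from the secondary cylinder interaction frequencies; a zero-crossing count only recovers the dominant tone.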
Abstract:
Solid–interstitial fluid interaction, which depends on tissue permeability, is significant to the strain-rate-dependent mechanical behavior of humeral head (shoulder) cartilage. Due to anatomical and biomechanical similarities to that of the human shoulder, kangaroos present a suitable animal model. Therefore, indentation experiments were conducted on kangaroo shoulder cartilage tissues from low (10−4/s) to moderately high (10−2/s) strain-rates. A porohyperelastic model was developed based on the experimental characterization; and a permeability function that takes into account the effect of strain-rate on permeability (strain-rate-dependent permeability) was introduced into the model to investigate the effect of rate-dependent fluid flow on tissue response. The prediction of the model with the strain-rate-dependent permeability was compared with those of the models using constant permeability and strain-dependent permeability. Compared to the model with constant permeability, the models with strain-dependent and strain-rate-dependent permeability were able to better capture the experimental variation at all strain-rates (p<0.05). Significant differences were not identified between models with strain-dependent and strain-rate-dependent permeability at strain-rate of 5×10−3/s (p=0.179). However, at strain-rate of 10−2/s, the model with strain-rate-dependent permeability was significantly better at capturing the experimental results (p<0.005). The findings thus revealed the significance of rate-dependent fluid flow on tissue behavior at large strain-rates, which provides insights into the mechanical deformation mechanisms of cartilage tissues.
Abstract:
Data-flow analysis is an integral part of any aggressive optimizing compiler. We propose a framework for improving the precision of data-flow analysis in the presence of complex control-flow. We initially perform data-flow analysis to determine those control-flow merges which cause the loss in data-flow analysis precision. The control-flow graph of the program is then restructured such that performing data-flow analysis on the resulting restructured graph gives more precise results. The proposed framework is both simple, involving the familiar notion of product automata, and general, since it is applicable to any forward data-flow analysis. Apart from proving that our restructuring process is correct, we also show that restructuring is effective in that it necessarily leads to more optimization opportunities. Furthermore, the framework handles the trade-off between the increase in data-flow precision and the code size increase inherent in the restructuring. We show that determining an optimal restructuring is NP-hard, and propose and evaluate a greedy strategy. The framework has been implemented in the Scale research compiler, and instantiated for the specific problem of Constant Propagation. On the SPECINT 2000 benchmark suite we observe an average speedup of 4% in the running times over the Wegman-Zadeck conditional constant propagation algorithm and 2% over a purely path-profile-guided approach.
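The precision loss at a control-flow merge that such restructuring targets can be shown with a toy constant-propagation lattice. The program and lattice encoding below are illustrative, not the Scale compiler implementation: one branch sets (x, y) = (1, 2), the other (2, 1), so x + y is 3 on every path, yet the merged analysis cannot see it.

```python
# Toy flat constant lattice: concrete constants, plus TOP for "not a
# known constant". Merging branch facts before evaluating z = x + y
# loses the constant; duplicating the merge (restructuring) keeps
# per-path facts long enough to recover it.

TOP = "TOP"

def meet(a, b):
    """Meet of two flat-lattice values: equal constants survive."""
    return a if a == b else TOP

def meet_env(e1, e2):
    """Variable-wise meet of two analysis environments."""
    return {v: meet(e1[v], e2[v]) for v in e1}

branch1 = {"x": 1, "y": 2}
branch2 = {"x": 2, "y": 1}

# Standard analysis: join at the merge point, then evaluate z = x + y
merged = meet_env(branch1, branch2)
z_merged = (TOP if TOP in (merged["x"], merged["y"])
            else merged["x"] + merged["y"])

# Restructured graph: the merge node is duplicated, so z is evaluated
# on each incoming path and only the results are joined
z_restructured = meet(branch1["x"] + branch1["y"],
                      branch2["x"] + branch2["y"])

print("merged analysis:", z_merged)          # TOP: constant lost
print("after restructuring:", z_restructured)  # 3: constant recovered
```

The code-size cost of the real framework comes from exactly this duplication of merge nodes, which is why the paper treats choosing which merges to split as an optimization problem.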