937 results for Voluntary standard systems
Abstract:
Real-time networked control systems (NCSs) over data networks are being increasingly implemented on a massive scale in industrial applications. Along with this trend, wireless network technologies have been promoted for modern wireless NCSs (WNCSs). However, popular wireless network standards such as IEEE 802.11/15/16 are not designed for real-time communications. Key issues in real-time applications include limited transmission reliability and poor transmission delay performance. Considering the unique features of real-time control systems, this paper develops a conditional retransmission enabled transport protocol (CRETP) to improve the delay performance of the transmission control protocol (TCP) and also the reliability performance of the user datagram protocol (UDP) and its variants. Key features of the CRETP include a connectionless mechanism with acknowledgement (ACK), conditional retransmission and detection of ineffective data packets on the receiver side.
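The conditional-retransmission idea can be illustrated with a short sketch (an illustrative model only, not the authors' CRETP implementation; the class and parameter names are hypothetical): the sender retransmits a lost packet only if it can still arrive before its control deadline, and the receiver discards stale ("ineffective") samples that are older than the newest one already delivered.

```python
import time

class CRETPSender:
    """Illustrative sketch of conditional retransmission: a lost packet
    is retransmitted only if it can still arrive before its deadline."""

    def __init__(self, rtt_estimate, deadline):
        self.rtt = rtt_estimate      # estimated round-trip time (s)
        self.deadline = deadline     # per-packet usefulness deadline (s)

    def should_retransmit(self, send_time, now=None):
        now = now if now is not None else time.monotonic()
        elapsed = now - send_time
        # Retransmit only if a retransmitted copy (one-way trip) can
        # still arrive in time; otherwise it would be ineffective.
        return elapsed + self.rtt / 2 < self.deadline

def is_effective(seq, newest_delivered_seq):
    """Receiver-side check: a control sample older than the newest
    delivered one carries no useful information and is dropped."""
    return seq > newest_delivered_seq
```

With a 20 ms round trip and a 100 ms deadline, a packet lost 50 ms ago is still worth resending, while one lost 95 ms ago is not.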
Abstract:
In this paper, the level of lean manufacturing implementation by Saudi manufacturing companies is investigated, the extent of application of lean manufacturing practices is identified, and the benefits and barriers of lean implementation are evaluated. The results reported in this paper are based on data collected from a survey using a standard questionnaire administered to 120 manufacturers in Saudi Arabia. Evidence indicates that large companies are more likely to implement and gain the advantages of lean manufacturing than small- and medium-sized companies. The most implemented lean manufacturing tools are Computerized Planning Systems, TQM, Maintenance Optimization and CIP. The main barriers to lean manufacturing implementation include organizational culture, lack of management commitment and lack of skilled workers. Results also show that benefits gained from lean manufacturing implementation are significant and are correlated with the level of implementation of lean strategies.
Abstract:
This research explores music in space, as experienced through performing and music-making with interactive systems. It explores how musical parameters may be presented spatially and displayed visually with a view to their exploration by a musician during performance. Spatial arrangements of musical components, especially pitches and harmonies, have been widely studied in the literature, but the current capabilities of interactive systems allow the improvisational exploration of these musical spaces as part of a performance practice. This research focuses on the quantised spatial organisation of musical parameters that can be categorised as grid music systems (GMSs), and on interactive music systems based on them. The research explores and surveys existing and historical uses of GMSs, and develops and demonstrates the use of a novel grid music system designed for whole-body interaction. Grid music systems map spatialised input onto a two-dimensional grid layout to construct patterned music. GMSs are navigated to construct a sequence of parametric steps, for example a series of pitches, rhythmic values, a chord sequence, or terraced dynamic steps. While they are conceptually simple when only controlling one musical dimension, grid systems may be layered to enable complex and satisfying musical results. These systems have proved a viable, effective, accessible and engaging means of music-making for the general user as well as the musician. GMSs have been widely used in electronic and digital music technologies, where they have generally been applied to small portable devices and software systems such as step sequencers and drum machines. This research shows that by scaling up a grid music system, music-making and musical improvisation are enhanced, gaining several advantages: (1) Full body location becomes the spatial input to the grid. The system becomes a partially immersive one in four related ways: spatially, graphically, sonically and musically.
(2) Detection of body location by tracking enables hands-free operation, thereby allowing the playing of a musical instrument in addition to “playing” the grid system. (3) Visual information regarding musical parameters may be enhanced so that the performer may fully engage with existing spatial knowledge of musical materials. The result is that existing spatial knowledge is overlaid on, and combined with, music-space. Music-space is a new concept produced by the research, and is similar to notions of other musical spaces including soundscape, acoustic space, Smalley's “circumspace” and “immersive space” (2007, 48-52), and Lotis's “ambiophony” (2003), but is rather more textural and “alive”—and therefore very conducive to interaction. Music-space is that space occupied by music, set within normal space, which may be perceived by a person located within, or moving around in that space. Music-space has a perceivable “texture” made of tensions and relaxations, and contains spatial patterns of these formed by musical elements such as notes, harmonies, and sounds, changing over time. The music may be performed by live musicians, created electronically, or be prerecorded. Large-scale GMSs have the capability not only to interactively display musical information as music representative space, but to allow music-space to co-exist with it. Moving around the grid, the performer may interact in real time with musical materials in music-space, as they form over squares or move in paths. Additionally he/she may sense the textural matrix of the music-space while being immersed in surround sound covering the grid. The HarmonyGrid is a new computer-based interactive performance system developed during this research that provides a generative music-making system intended to accompany, or play along with, an improvising musician. This large-scale GMS employs full-body motion tracking over a projected grid. 
Playing with the system creates an enhanced performance employing live interactive music, along with graphical and spatial activity. Although one other experimental system provides certain aspects of immersive music-making, currently only the HarmonyGrid provides an environment to explore and experience music-space in a GMS.
Abstract:
In this editorial letter, we provide the readers of Information Systems with a bird's-eye introduction to Process-aware Information Systems (PAIS) – a sub-field of Information Systems that has drawn growing attention in the past two decades, both as an engineering and as a management discipline. Against this backdrop, we briefly discuss how the papers included in this special issue contribute to extending the body of knowledge in this field.
Abstract:
Background and Objective: A number of bone filling materials containing calcium (Ca++) and phosphate (P) ions have been used in the repair of periodontal bone defects; however, the effect that local release of Ca++ and P ions has on biological reactions is not fully understood. In this study, we investigated the effects of various levels of Ca++ and P ions on the proliferation, osteogenic differentiation, and mineralization of human periodontal ligament cells (hPDLCs). Materials and Methods: hPDLCs were obtained using an explant culture method. Defined concentrations and ratios of ionic Ca++ to inorganic P were added to standard culture and osteogenic induction media. The ability of hPDLCs to proliferate in these growth media was assayed using the Cell Counting Kit-8 (CCK-8). Cell apoptosis was evaluated by the FITC-Annexin V/PI double staining method. Osteogenic differentiation and mineralization were investigated by morphological observations, alkaline phosphatase (ALP) activity, and Alizarin red S/von Kossa staining. The mRNA expression of osteogenic-related markers was analyzed using reverse transcriptase polymerase chain reaction (RT-PCR). Results: Within the ranges of Ca++ and P ion concentrations tested, we observed that increased concentrations of Ca++ and P ions enhanced cell proliferation and formation of mineralized matrix nodules, whereas ALP activity was reduced. The RT-PCR results showed that elevated concentrations of Ca++ and P ions led to a general increase of Runx2 mRNA expression and decreased ALP mRNA expression, but gave no clear trend in OCN mRNA levels. Conclusion: The concentrations and ratios of Ca++ and P ions could significantly influence proliferation, differentiation, and mineralization of hPDLCs. Within the range of concentrations tested, we found that the combination of 9.0 mM Ca++ ions and 4.5 mM P ions was optimal for proliferation, differentiation, and mineralization of hPDLCs.
Abstract:
Determination of the placement and rating of transformers and feeders is the main objective of basic distribution network planning. The bus voltage and the feeder current are two constraints which should be maintained within their standard ranges. Distribution network planning becomes harder when the planning area is located far from the sources of power generation and the infrastructure, mainly as a consequence of voltage drop, line loss and reduced system reliability. A long distance to the supplied loads causes a significant voltage drop across the distribution lines. Capacitors and Voltage Regulators (VRs) can be installed to decrease the voltage drop. This long distance also increases the probability of failure, which lowers network reliability. Cross-Connections (CC) and Distributed Generators (DGs) are devices which can be employed to improve system reliability. Another main factor which should be considered in planning distribution networks (in both rural and urban areas) is load growth. To accommodate this factor, transformers and feeders are conventionally upgraded, which incurs a large cost. Installation of DGs and capacitors in a distribution network can alleviate this issue while providing other benefits. In this research, a comprehensive planning approach is presented for distribution networks. Since the distribution network is composed of low and medium voltage networks, both are included in this procedure. However, the main focus of this research is on medium voltage network planning. The main objective is to minimize the investment cost, the line loss, and the reliability indices over a study timeframe while supporting load growth. The investment cost is related to the distribution network elements such as the transformers, feeders, capacitors, VRs, CCs, and DGs. The voltage drop and the feeder current, as constraints, are maintained within their standard ranges.
In addition to minimizing the reliability and line loss costs, the planned network should support a continual growth of loads, which is an essential concern in planning distribution networks. In this thesis, a novel segmentation-based strategy is proposed for including this factor. Using this strategy, the computation time is significantly reduced compared with the exhaustive search method while the accuracy remains acceptable. In addition to being applicable to load growth, this strategy is appropriate for the inclusion of practical (dynamic) load characteristics, as demonstrated in this thesis. The allocation and sizing problem has a discrete nature with several local minima, which highlights the importance of selecting a proper optimization method. A modified discrete particle swarm optimization, a heuristic method, is introduced in this research to solve this complex planning problem. Discrete nonlinear programming and a genetic algorithm, an analytical and a heuristic method respectively, are also applied to this problem to evaluate the proposed optimization method.
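The discrete PSO mentioned in the abstract can be illustrated with a minimal binary-PSO sketch. This is the classic Kennedy-Eberhart binary variant, not the thesis's modified algorithm; the inertia and acceleration coefficients are typical textbook values, and the cost function stands in for the (much richer) planning objective.

```python
import math
import random

def binary_pso(cost, n_bits, n_particles=20, iters=50, seed=1):
    """Minimal binary PSO: each bit flips to 1 with a probability
    given by a sigmoid of its velocity; returns (best_bits, best_cost)."""
    rng = random.Random(seed)
    X = [[rng.randint(0, 1) for _ in range(n_bits)] for _ in range(n_particles)]
    V = [[0.0] * n_bits for _ in range(n_particles)]
    pbest = [x[:] for x in X]
    pbest_cost = [cost(x) for x in X]
    g = min(range(n_particles), key=lambda i: pbest_cost[i])
    gbest, gbest_cost = pbest[g][:], pbest_cost[g]
    for _ in range(iters):
        for i in range(n_particles):
            for d in range(n_bits):
                r1, r2 = rng.random(), rng.random()
                # velocity update: inertia + cognitive + social terms
                V[i][d] = (0.7 * V[i][d]
                           + 1.5 * r1 * (pbest[i][d] - X[i][d])
                           + 1.5 * r2 * (gbest[d] - X[i][d]))
                # sigmoid of velocity gives the probability of bit = 1
                X[i][d] = 1 if rng.random() < 1 / (1 + math.exp(-V[i][d])) else 0
            c = cost(X[i])
            if c < pbest_cost[i]:
                pbest[i], pbest_cost[i] = X[i][:], c
                if c < gbest_cost:
                    gbest, gbest_cost = X[i][:], c
    return gbest, gbest_cost
```

In a planning context, each bit could encode the presence of a device (capacitor, DG, cross-connection) at a candidate bus, with `cost` combining investment, loss and reliability terms.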
Abstract:
Metallic materials exposed to oxygen-enriched atmospheres – as commonly used in the medical, aerospace, aviation and numerous chemical processing industries – represent a significant fire hazard which must be addressed during design, maintenance and operation. Hence, accurate knowledge of metallic materials flammability is required. Reduced gravity (i.e. space-based) operations present additional unique concerns, where the absence of gravity must also be taken into account. The flammability of metallic materials has historically been quantified using three standardised test methods developed by NASA, ASTM and ISO. These tests typically involve the forceful (promoted) ignition of a test sample (typically a 3.2 mm diameter cylindrical rod) in pressurised oxygen. A test sample is defined as flammable when it undergoes burning that is independent of the ignition process utilised. In the standardised tests, this is indicated by the propagation of burning further than a defined amount, or "burn criterion". The burn criterion in use at the onset of this project was arbitrarily selected, and did not accurately reflect the length a sample must burn in order to be burning independent of the ignition event and, in some cases, required complete consumption of the test sample for a metallic material to be considered flammable. It has been demonstrated that a) a metallic material's propensity to support burning is altered by any increase in test sample temperature greater than ~250-300 °C and b) promoted ignition causes an increase in temperature of the test sample in the region closest to the igniter, a region referred to as the Heat Affected Zone (HAZ). If a test sample continues to burn past the HAZ (where the HAZ is defined as the region of the test sample above the igniter that undergoes an increase in temperature of greater than or equal to 250 °C by the end of the ignition event), it is burning independent of the igniter, and should be considered flammable.
The extent of the HAZ, therefore, can be used to justify the selection of the burn criterion. A two-dimensional mathematical model was developed in order to predict the extent of the HAZ created in a standard test sample by a typical igniter. The model was validated against previous theoretical and experimental work performed in collaboration with NASA, and then used to predict the extent of the HAZ for different metallic materials in several configurations. The extent of HAZ predicted varied significantly, ranging from ~2-27 mm depending on the test sample thermal properties and test conditions (i.e. pressure). The magnitude of the HAZ was found to increase with increasing thermal diffusivity, and decreasing pressure (due to slower ignition times). Based upon the findings of this work, a new burn criterion requiring 30 mm of the test sample to be consumed (from the top of the ignition promoter) was recommended and validated. This new burn criterion was subsequently included in the latest revision of the ASTM G124 and NASA 6001B international test standards that are used to evaluate metallic material flammability in oxygen. These revisions also have the added benefit of enabling the conduct of reduced gravity metallic material flammability testing in strict accordance with the ASTM G124 standard, allowing measurement and comparison of the relative flammability (i.e. Lowest Burn Pressure (LBP), Highest No-Burn Pressure (HNBP) and average Regression Rate of the Melting Interface (RRMI)) of metallic materials in normal and reduced gravity, as well as determination of the applicability of normal gravity test results to reduced gravity use environments. This is important, as currently most space-based applications will typically use normal gravity information in order to qualify systems and/or components for reduced gravity use. This is shown here to be non-conservative for metallic materials which are more flammable in reduced gravity.
The flammability of two metallic materials, Inconel® 718 and 316 stainless steel (both commonly used to manufacture components for oxygen service in both terrestrial and space-based systems) was evaluated in normal and reduced gravity using the new ASTM G124-10 test standard. This allowed direct comparison of the flammability of the two metallic materials in normal gravity and reduced gravity respectively. The results of this work clearly show, for the first time, that metallic materials are more flammable in reduced gravity than in normal gravity when testing is conducted as described in the ASTM G124-10 test standard. This was shown to be the case in terms of both higher regression rates (i.e. faster consumption of the test sample – fuel), and burning at lower pressures in reduced gravity. Specifically, it was found that the LBP for 3.2 mm diameter Inconel® 718 and 316 stainless steel test samples decreased by 50% from 3.45 MPa (500 psia) in normal gravity to 1.72 MPa (250 psia) in reduced gravity for the Inconel® 718, and 25% from 3.45 MPa (500 psia) in normal gravity to 2.76 MPa (400 psia) in reduced gravity for the 316 stainless steel. The average RRMI increased by factors of 2.2 (27.2 mm/s in 2.24 MPa (325 psia) oxygen in reduced gravity compared to 12.8 mm/s in 4.48 MPa (650 psia) oxygen in normal gravity) for the Inconel® 718 and 1.6 (15.0 mm/s in 2.76 MPa (400 psia) oxygen in reduced gravity compared to 9.5 mm/s in 5.17 MPa (750 psia) oxygen in normal gravity) for the 316 stainless steel. Reasons for the increased flammability of metallic materials in reduced gravity compared to normal gravity are discussed, based upon the observations made during reduced gravity testing and previous work. Finally, the implications (for fire safety and engineering applications) of these results are presented and discussed, in particular, examining methods for mitigating the risk of a fire in reduced gravity.
Abstract:
Increasing competitiveness in global markets has forced manufacturing organizations to produce high-quality products more quickly and at a competitive cost. In order to reach these goals, they need good quality components from suppliers at optimum price and lead time. This has forced companies to adopt improvement practices such as lean manufacturing, Just in Time (JIT) and effective supply chain management. Applying new improvement techniques and tools causes higher establishment costs and more Information Delay (ID). On the other hand, these new techniques may reduce the risk of stock-outs and affect supply chain flexibility to give a better overall performance. But industry practitioners are unable to measure the overall effects of those improvement techniques with a standard evaluation model. So an effective overall supply chain performance evaluation model is essential for suppliers as well as manufacturers to assess their companies under different supply chain strategies. However, literature on lean supply chain performance evaluation is comparatively limited. Moreover, most of the models assumed random values for performance variables. The purpose of this paper is to propose an effective supply chain performance evaluation model using triangular linguistic fuzzy numbers and to recommend optimum ranges for performance variables for lean implementation. The model initially considers all the supply chain performance criteria (input, output and flexibility), converts the values to triangular linguistic fuzzy numbers and evaluates overall supply chain performance under different situations. Results show that with the proposed performance measurement model, the improvement area for each variable can be accurately identified.
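The triangular linguistic fuzzy-number idea can be sketched briefly. This is a generic illustration, not the paper's model: the linguistic scale values, the centroid defuzzification, and the weighting scheme below are common textbook choices and should be treated as assumptions.

```python
from dataclasses import dataclass

@dataclass
class TFN:
    """Triangular fuzzy number with lower, modal and upper values."""
    l: float
    m: float
    u: float

    def __add__(self, other):
        # fuzzy addition is component-wise for triangular numbers
        return TFN(self.l + other.l, self.m + other.m, self.u + other.u)

    def scale(self, k):
        return TFN(self.l * k, self.m * k, self.u * k)

    def defuzzify(self):
        # centroid of a triangular membership function
        return (self.l + self.m + self.u) / 3

# hypothetical linguistic scale for performance ratings (0-100)
SCALE = {"poor": TFN(0, 0, 25), "fair": TFN(0, 25, 50),
         "good": TFN(25, 50, 75), "very good": TFN(50, 75, 100)}

def overall_score(ratings, weights):
    """Weighted aggregation of linguistic ratings into one crisp score."""
    total = TFN(0.0, 0.0, 0.0)
    for term, w in zip(ratings, weights):
        total = total + SCALE[term].scale(w)
    return total.defuzzify()
```

For example, two equally weighted criteria rated "good" and "very good" aggregate to a crisp overall score of 62.5 on this scale.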
Abstract:
To sustain the ongoing rapid growth of video information, there is an emerging demand for sophisticated content-based video indexing systems. However, current video indexing solutions are still immature and lack a standard. This doctoral research is based on an integrated multi-modal approach for sports video indexing and retrieval. By combining specific features extractable from multiple audio-visual modalities, generic structure and specific events can be detected and classified. During browsing and retrieval, users benefit from the integration of high-level semantics and some descriptive mid-level features such as whistle and close-up view of player(s).
Abstract:
The decision to represent the USDL abstract syntax as a metamodel, shown as a set of UML diagrams, has two main benefits: the ability to show a well-understood standard graphical representation of the concepts and their relationships to one another, and the ability to use object-oriented frameworks such as the Eclipse Modeling Framework (EMF) to assist in the automated generation of tool support for USDL service descriptions.
Abstract:
This paper formulates a node-based smoothed conforming point interpolation method (NS-CPIM) for solid mechanics. In the proposed NS-CPIM, higher-order conforming PIM shape functions (CPIM) are constructed to produce a continuous and piecewise-quadratic displacement field over the whole problem domain, whereby the smoothed strain field is obtained through a smoothing operation over each smoothing domain associated with the domain nodes. The smoothed Galerkin weak form is then developed to create the discretized system equations. Numerical studies have demonstrated the following good properties of NS-CPIM: (1) it passes both the standard and quadratic patch tests; (2) it provides an upper bound on the strain energy; (3) it avoids volumetric locking; (4) it provides higher accuracy than the node-based smoothed schemes of the original PIMs.
Abstract:
The Australian Government is about to release Australia’s first sustainable population policy. Sustainable population growth, among other things, implies sustainable energy demand. Current modelling of future energy demand both in Australia and by agencies such as the International Energy Agency sees population growth as one of the key drivers of energy demand. Simply increasing the demand for energy in response to population policy is sustainable only if there is a radical restructuring of the energy system away from energy sources associated with environmental degradation towards one more reliant on renewable fuels and less reliant on fossil fuels. Energy policy can also address the present nexus between energy consumption per person and population growth through an aggressive energy efficiency policy. The paper considers the link between population policies and energy policies and considers how the overall goal of sustainability can be achieved. The methods applied in this analysis draw on the literature of sustainable development to develop elements of an energy planning framework to support a sustainable population policy. Rather than simply accept that energy demand is a function of population increase moderated by an assumed rate of energy efficiency improvement, the focus is on considering what rate of energy efficiency improvement is necessary to significantly reduce the standard connections between population growth and growth in energy demand and what policies are necessary to achieve this situation. Energy efficiency policies can only moderate unsustainable aspects of energy demand and other policies are essential to restructure existing energy systems into on-going sustainable forms. Policies to achieve these objectives are considered. This analysis shows that energy policy, population policy and sustainable development policies are closely integrated. 
Present policy and planning agencies do not reflect this integration, and energy and population policies in Australia have largely developed independently, so whether the outcome is sustainable is largely a matter of chance. A genuinely sustainable population policy recognises the inter-dependence between population and energy policies, and it is essential that this is reflected in integrated policy and planning agencies.
Abstract:
The final shape of the "Internet of Things" ubiquitous computing promises relies on a cybernetic system of inputs (in the form of sensory information), computation or decision making (based on the prefiguration of rules, contexts, and user-generated or defined metadata), and outputs (associated action from ubiquitous computing devices). My interest in this paper lies in the computational intelligences that suture these positions together, and how positioning these intelligences as autonomous agents extends the dialogue between human-users and ubiquitous computing technology. Drawing specifically on the scenarios surrounding the employment of ubiquitous computing within aged care, I argue that agency is something that cannot be traded without serious consideration of the associated ethics.
Abstract:
This article presents a two-stage analytical framework that integrates ecological crop (animal) growth and economic frontier production models to analyse the productive efficiency of crop (animal) production systems. The ecological crop (animal) growth model estimates "potential" output levels given the genetic characteristics of crops (animals) and the physical conditions of locations where the crops (animals) are grown (reared). The economic frontier production model estimates "best practice" production levels, taking into account economic, institutional and social factors that cause farm and spatial heterogeneity. In the first stage, both ecological crop growth and economic frontier production models are estimated to calculate three measures of productive efficiency: (1) technical efficiency, as the ratio of actual to "best practice" output levels; (2) agronomic efficiency, as the ratio of actual to "potential" output levels; and (3) agro-economic efficiency, as the ratio of "best practice" to "potential" output levels. Also in the first stage, the economic frontier production model identifies factors that determine technical efficiency. In the second stage, agro-economic efficiency is analysed econometrically in relation to economic, institutional and social factors that cause farm and spatial heterogeneity. The proposed framework has several important advantages in comparison with existing proposals. Firstly, it allows the systematic incorporation of all physical, economic, institutional and social factors that cause farm and spatial heterogeneity in analysing the productive performance of crop and animal production systems. Secondly, the location-specific physical factors are not modelled symmetrically as other economic inputs of production. Thirdly, climate change and technological advancements in crop and animal sciences can be modelled in a "forward-looking" manner. 
Fourthly, knowledge in agronomy and data from experimental studies can be utilised for socio-economic policy analysis. The proposed framework can be easily applied in empirical studies due to the current availability of ecological crop (animal) growth models, farm or secondary data, and econometric software packages. The article highlights several directions of empirical studies that researchers may pursue in the future.
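The three efficiency measures defined above have a simple multiplicative relationship, which a short sketch makes explicit (the output figures in the usage note are hypothetical; only the ratio definitions come from the abstract):

```python
def technical_efficiency(actual, best_practice):
    """TE = actual output / "best practice" (frontier) output."""
    return actual / best_practice

def agronomic_efficiency(actual, potential):
    """AE = actual output / "potential" (ecologically attainable) output."""
    return actual / potential

def agro_economic_efficiency(best_practice, potential):
    """AEE = "best practice" output / "potential" output.

    Note the identity AE = TE * AEE: a farm's shortfall from the
    ecological potential decomposes into a managerial gap (TE) and a
    gap attributable to economic/institutional/social constraints (AEE).
    """
    return best_practice / potential
```

For a hypothetical farm producing 4 t/ha against a frontier of 5 t/ha and an ecological potential of 8 t/ha, TE = 0.8, AEE = 0.625, and AE = 0.8 × 0.625 = 0.5.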