622 results for membrane-forming systems
Abstract:
In this paper, a new approach is proposed for interpreting regional frequencies in multi-machine power systems. The method uses generator aggregation and system reduction based on coherent generators in each area. The structure of the reduced system can then be identified, and a Kalman estimator is designed for the reduced system to estimate the inter-area modes from synchronized phasor measurement data. The proposed method is tested on a six-machine, three-area test system, and the results show that inter-area oscillations are estimated with high accuracy.
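The abstract does not give the estimator's details; as a rough illustration of the general idea, the sketch below runs a linear Kalman filter over synthetic PMU-like samples of a single damped inter-area mode. All model parameters and signal values are invented for the example and are not taken from the paper.

```python
import numpy as np

# Minimal discrete-time Kalman filter tracking one damped oscillatory
# (inter-area) mode from noisy measurements. Values are illustrative.
dt = 0.02                      # assumed PMU reporting interval (s)
f_mode, zeta = 0.6, 0.05       # assumed modal frequency (Hz) and damping
w = 2 * np.pi * f_mode
# Continuous dynamics x' = [[0, 1], [-w^2, -2*zeta*w]] x, crude Euler discretisation
A = np.eye(2) + dt * np.array([[0.0, 1.0], [-w**2, -2 * zeta * w]])
H = np.array([[1.0, 0.0]])     # only the modal displacement is measured
Q = 1e-5 * np.eye(2)           # process noise covariance (tuning parameter)
R = np.array([[1e-3]])         # measurement noise covariance (tuning parameter)

x_hat = np.zeros((2, 1))
P = np.eye(2)

def kalman_step(z):
    """One predict/update cycle for a scalar measurement z."""
    global x_hat, P
    # Predict
    x_hat = A @ x_hat
    P = A @ P @ A.T + Q
    # Update
    S = H @ P @ H.T + R
    K = P @ H.T @ np.linalg.inv(S)
    x_hat = x_hat + K @ (np.array([[z]]) - H @ x_hat)
    P = (np.eye(2) - K @ H) @ P
    return x_hat.ravel()

# Feed simulated noisy measurements of the mode
t = np.arange(0, 10, dt)
true = np.exp(-zeta * w * t) * np.cos(w * np.sqrt(1 - zeta**2) * t)
for z in true + 0.03 * np.random.randn(t.size):
    est = kalman_step(z)
```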
Abstract:
This brief consumer marketing case study was published in a consumer marketing textbook.
Abstract:
To analyse mechanotransduction resulting from tensile loading under defined conditions, various devices for in vitro cell stimulation have been developed. This work aimed to determine the strain distribution on the membrane of a commercially available device and its consistency with rising cycle numbers, as well as the amount of strain transferred to adherent cells. The strains and their behaviour within the stimulation device were determined using digital image correlation (DIC). The strain transferred to cells was measured on eGFP-transfected bone marrow-derived cells imaged with a fluorescence microscope. The analysis was performed by determining the coordinates of prominent positions on the cells, calculating vectors between the coordinates and their length changes with increasing applied tensile strain. The stimulation device was found to apply homogeneous (mean of standard deviations approx. 2% of mean strain) and reproducible strains in the central well area. However, on average, only half of the applied strain was transferred to the bone marrow-derived cells. Furthermore, the strain measured within the device increased significantly with an increasing number of cycles while the membrane's Young's modulus decreased, indicating permanent changes in the material during extended use. Thus, strain magnitudes do not match the system readout and results require careful interpretation, especially at high cycle numbers.
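As a loose illustration of the described analysis (tracking prominent points on a cell and comparing vector lengths before and after stretch), the following sketch uses made-up coordinates; it is not the authors' DIC pipeline.

```python
import numpy as np

# Sketch of the cell-strain estimate: compare point-to-point vector lengths
# in the unloaded and stretched states. Coordinates are invented example data.
ref = np.array([[10.0, 12.0], [42.0, 15.0], [25.0, 40.0]])      # unloaded (px)
loaded = np.array([[10.1, 12.0], [44.0, 15.2], [25.6, 41.5]])   # stretched (px)

def pairwise_strains(p_ref, p_load):
    """Engineering strain of every point-to-point vector."""
    strains = []
    n = len(p_ref)
    for i in range(n):
        for j in range(i + 1, n):
            l0 = np.linalg.norm(p_ref[j] - p_ref[i])
            l1 = np.linalg.norm(p_load[j] - p_load[i])
            strains.append((l1 - l0) / l0)
    return np.array(strains)

eps = pairwise_strains(ref, loaded)
print(f"mean cell strain: {eps.mean():.3%}, spread: {eps.std():.3%}")
```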
Abstract:
Introduction and aims: For a scaffold material to be considered effective and efficient for tissue engineering, it must be biocompatible as well as bioinductive. Silk fiber is a natural biocompatible material suitable for scaffold fabrication; however, silk is tissue-conductive and lacks tissue-inductive properties. One proposed method to make the scaffold tissue-inductive is to introduce plasmids or viruses encoding a specific growth factor into the scaffold. In this study, we constructed adenoviruses encoding bone morphogenetic protein-7 (BMP-7) and incorporated these into silk scaffolds. The osteo-inductive and new bone formation properties of these constructs were assessed in vivo in a critical-sized skull defect animal model. Materials and methods: Silk fibroin scaffolds containing adenovirus particles encoding BMP-7 were prepared. The release of the adenovirus particles from the scaffolds was quantified by tissue-culture infective dose (TCID50), and the bioactivity of the released viruses was evaluated on human bone marrow mesenchymal stromal cells (BMSCs). To demonstrate the in vivo bone-forming ability of the virus-carrying silk fibroin scaffold, the scaffold constructs were implanted into calvarial defects in SCID mice. Results: In vitro studies demonstrated that the virus-carrying silk fibroin scaffold released virus particles over a 3-week period while preserving their bioactivity. In vivo testing of the scaffold constructs in critical-sized skull defect areas revealed that silk scaffolds were capable of delivering the adenovirus encoding BMP-7, resulting in significantly enhanced new bone formation. Conclusions: Silk scaffolds carrying BMP-7-encoding adenoviruses can effectively transfect cells and enhance both in vitro and in vivo osteogenesis. The findings of this study indicate that silk fibroin is a promising biomaterial for gene delivery to repair critical-sized bone defects.
Abstract:
This paper presents a preliminary crash avoidance framework for heavy equipment control systems. Safe equipment operation is a major concern on construction sites, since fatal on-site injuries are an industry-wide problem. The proposed framework has the potential to enable active safety in equipment operation. The framework contains algorithms for spatial modeling, object tracking, and path planning. Beyond generating spatial models in fractions of a second, these algorithms can successfully track objects in an environment and produce a collision-free 3D motion trajectory for equipment.
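The abstract does not specify which planning algorithm is used; the toy sketch below merely illustrates collision-free path search on an occupancy grid using breadth-first search, with an invented grid, start and goal.

```python
from collections import deque

# Toy collision-free path search on a small occupancy grid (1 = obstacle).
# The grid, start and goal are invented; this is not the paper's planner.
grid = [
    [0, 0, 0, 1, 0],
    [1, 1, 0, 1, 0],
    [0, 0, 0, 0, 0],
    [0, 1, 1, 1, 0],
    [0, 0, 0, 0, 0],
]
start, goal = (0, 0), (4, 4)

def plan(grid, start, goal):
    """Breadth-first search; returns a list of grid cells or None."""
    rows, cols = len(grid), len(grid[0])
    parent = {start: None}
    queue = deque([start])
    while queue:
        cell = queue.popleft()
        if cell == goal:
            path = []
            while cell is not None:
                path.append(cell)
                cell = parent[cell]
            return path[::-1]
        r, c = cell
        for nr, nc in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            if 0 <= nr < rows and 0 <= nc < cols and grid[nr][nc] == 0 \
                    and (nr, nc) not in parent:
                parent[(nr, nc)] = cell
                queue.append((nr, nc))
    return None

print(plan(grid, start, goal))
```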
Abstract:
It could be said that road congestion is one of the most significant problems within any modern metropolitan area. For several decades now, around the globe, congestion in metropolitan areas has been worsening for two main reasons. Firstly, road congestion has significantly increased due to a higher demand for road space because of growth in populations, economic activity and incomes (Hensher & Puckett, 2007). This factor, in conjunction with a significant lack of investment in new road and public transport infrastructure, has seen the road network capacities of cities exceeded by traffic volumes and thus, resulted in increased traffic congestion. This relentless increase in road traffic congestion has resulted in a dramatic increase in costs for both the road users and ultimately the metropolitan areas concerned (Bureau of Transport and Regional Economics, 2007). In response to this issue, several major cities around the world, including London, Stockholm and Singapore, have implemented congestion-charging schemes in order to combat the effects of road congestion. A congestion-charging scheme provides a mechanism for regulating traffic flows into the congested areas of a city, whilst simultaneously generating public revenue that can be used to improve both the public transport and road networks of the region. The aim of this paper was to assess the concept of congestion-charging, whilst reflecting on the experiences of various cities that have already implemented such systems. The findings from this paper have been used to inform the design of a congestion-charging scheme for the city of Brisbane in Australia in a supplementary study (Whitehead, Bunker, & Chung, 2011). The first section of this paper examines the background to road congestion; the theory behind different congestion-charging schemes; and the various technologies involved with the concept. The second section of this paper details the experiences, in relation to implementing a congestion-charging scheme, from the city of Stockholm in Sweden. This research has been crucial in forming a list of recommendations and lessons learnt for the design of a congestion-charging scheme in Australia. It is these recommendations that directly inform the proposed design of the Brisbane Cordon Scheme detailed in Whitehead et al. (2011).
Abstract:
Linking real-time schedulability directly to the Quality of Control (QoC), the ultimate goal of a control system, a hierarchical feedback QoC management framework with the Fixed Priority (FP) and the Earliest-Deadline-First (EDF) policies as plug-ins is proposed in this paper for real-time control systems with multiple control tasks. It uses a task decomposition model for continuous QoC evaluation even in overload conditions, and then employs heuristic rules to adjust the period of each of the control tasks for QoC improvement. If the total requested workload exceeds the desired value, global adaptation of control periods is triggered for workload maintenance. A sufficient stability condition is derived for a class of control systems with delay and period switching of the heuristic rules. Examples are given to demonstrate the proposed approach.
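As a rough sketch of the global period-adaptation step described above (not the authors' exact heuristic), the following example stretches all task periods proportionally whenever the requested utilization exceeds a desired bound; the task data and the bound are illustrative.

```python
# Global period adaptation for workload maintenance: if the requested CPU
# utilisation of all control tasks exceeds the desired bound, stretch every
# period proportionally. Task parameters are invented example values.
tasks = {            # task -> (execution time C_i, current period T_i) in ms
    "loop_A": (2.0, 10.0),
    "loop_B": (3.0, 20.0),
    "loop_C": (1.5, 15.0),
}
U_DESIRED = 0.45     # assumed workload set-point

def rescale_periods(tasks, u_desired):
    """Return new periods that bring total utilisation down to u_desired."""
    u = sum(c / t for c, t in tasks.values())
    if u <= u_desired:
        return {name: t for name, (c, t) in tasks.items()}   # nothing to do
    factor = u / u_desired                                    # stretch factor > 1
    return {name: t * factor for name, (c, t) in tasks.items()}

print(rescale_periods(tasks, U_DESIRED))
```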
Abstract:
Video surveillance technology, based on Closed Circuit Television (CCTV) cameras, is one of the fastest growing markets in the field of security technologies. However, the existing video surveillance systems are still not at a stage where they can be used for crime prevention. The systems rely heavily on human observers and are therefore limited by factors such as fatigue and monitoring capabilities over long periods of time. To overcome this limitation, it is necessary to have “intelligent” processes which are able to highlight the salient data and filter out normal conditions that do not pose a threat to security. In order to create such intelligent systems, an understanding of human behaviour, specifically suspicious behaviour, is required. One of the challenges in achieving this is that human behaviour can only be understood correctly in the context in which it appears. Although context has been exploited in the general computer vision domain, it has not been widely used in the automatic suspicious behaviour detection domain. It is therefore essential that context be formulated, stored and used by the system in order to understand human behaviour. Finally, since surveillance systems can be modelled as large-scale data stream systems, it is difficult to have a complete knowledge base. In this case, the systems need to not only continuously update their knowledge but also be able to retrieve the extracted information related to the given context. To address these issues, a context-based approach for detecting suspicious behaviour is proposed. In this approach, contextual information is exploited in order to make a better detection. The proposed approach utilises a data stream clustering algorithm in order to discover the behaviour classes and their frequency of occurrence from the incoming behaviour instances. Contextual information is then used in addition to the above information to detect suspicious behaviour. The proposed approach is able to detect observed, unobserved and contextual suspicious behaviour. Two case studies using video feeds taken from the CAVIAR dataset and the Z-block building, Queensland University of Technology, are presented in order to test the proposed approach. These experiments show that, by using information about context, the proposed system is able to make a more accurate detection, especially for behaviours which are suspicious only in some contexts while being normal in others. Moreover, this information gives critical feedback to the system designers to refine the system. Finally, the proposed modified Clustream algorithm enables the system both to continuously update its knowledge and to effectively retrieve the information learned in a given context. The outcomes from this research are: (a) a context-based framework for automatically detecting suspicious behaviour which can be used by an intelligent video surveillance system in making decisions; (b) a modified Clustream data stream clustering algorithm which continuously updates the system knowledge and is able to retrieve contextually related information effectively; and (c) an update-describe approach which extends the capability of existing human local motion features, called interest-point-based features, to the data stream environment.
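The modified Clustream algorithm itself is not given in the abstract; the sketch below only illustrates the underlying micro-cluster bookkeeping that stream clustering of behaviour instances relies on, with an invented radius threshold and random feature vectors.

```python
import numpy as np

# Micro-cluster summaries (count, linear sum, squared sum): a new behaviour
# feature vector is absorbed into the nearest cluster if it is close enough,
# otherwise a new cluster is opened. Threshold and data are illustrative.
RADIUS = 1.0
clusters = []   # each cluster: {"n": int, "ls": vector, "ss": vector}

def absorb(x):
    """Update the micro-cluster summaries with one feature vector."""
    x = np.asarray(x, dtype=float)
    if clusters:
        centres = [c["ls"] / c["n"] for c in clusters]
        d = [np.linalg.norm(x - m) for m in centres]
        i = int(np.argmin(d))
        if d[i] <= RADIUS:                      # close enough: update summary
            clusters[i]["n"] += 1
            clusters[i]["ls"] += x
            clusters[i]["ss"] += x * x
            return i
    clusters.append({"n": 1, "ls": x.copy(), "ss": x * x})
    return len(clusters) - 1

for point in np.random.randn(200, 3):
    absorb(point)
print(len(clusters), "micro-clusters")
```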
Abstract:
The functional properties of cartilaginous tissues are determined predominantly by the content, distribution, and organization of proteoglycan and collagen in the extracellular matrix. Extracellular matrix accumulates in tissue-engineered cartilage constructs by metabolism and transport of matrix molecules, processes that are modulated by physical and chemical factors. Constructs incubated under free-swelling conditions with freely permeable or highly permeable membranes exhibit symmetric surface regions of soft tissue. The variation in tissue properties with depth from the surfaces suggests the hypothesis that the transport processes mediated by the boundary conditions govern the distribution of proteoglycan in such constructs. A continuum model (DiMicco and Sah in Transport in Porous Media 50:57-73, 2003) was extended to test the effects of membrane permeability and perfusion on proteoglycan accumulation in tissue-engineered cartilage. The concentrations of soluble, bound, and degraded proteoglycan were analyzed as functions of time, space, and non-dimensional parameters for several experimental configurations. The results of the model suggest that the boundary condition at the membrane surface and the rate of perfusion, described by non-dimensional parameters, are important determinants of the pattern of proteoglycan accumulation. With perfusion, the proteoglycan profile is skewed, and decreases or increases in magnitude depending on the level of flow-based stimulation. Utilization of a semi-permeable membrane, with or without unidirectional flow, may lead to tissues with depth-increasing proteoglycan content, resembling native articular cartilage.
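The paper's equations are not reproduced in the abstract; the following is a generic one-dimensional transport-reaction balance of the kind such continuum models use, with illustrative symbols rather than the authors' notation.

```latex
% Soluble proteoglycan c_s(x,t): diffusion, advection from perfusion,
% synthesis by cells, and binding into the matrix (bound fraction c_b).
% All symbols and terms are illustrative, not the paper's notation.
\begin{align}
\frac{\partial c_s}{\partial t} &=
  D \frac{\partial^2 c_s}{\partial x^2}
  - v \frac{\partial c_s}{\partial x}
  + k_{\mathrm{syn}} \rho_{\mathrm{cell}}
  - k_{\mathrm{bind}} c_s, \\
\frac{\partial c_b}{\partial t} &= k_{\mathrm{bind}} c_s - k_{\mathrm{deg}} c_b.
\end{align}
```

In this kind of model, a membrane boundary condition such as $-D\,\partial c_s/\partial x = k_m (c_s - c_\infty)$, with $k_m$ representing membrane permeability, is what couples the surface condition to the depth profile of accumulated proteoglycan.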
Abstract:
The application of variable structure control (VSC) for power system stabilization is studied in this paper. It is the application, aspects and constraints of VSC which are of particular interest. A variable structure control methodology is proposed for power system stabilization. The method is implemented using thyristor controlled series compensators. A three-machine power system is stabilized using a switching-line control for large disturbances, which becomes a sliding control as the disturbance becomes smaller. The results demonstrate the effectiveness of the proposed methodology as a useful tool for suppressing oscillations in power systems.
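The paper's control law is not given in the abstract; as a generic illustration of a variable structure (switching/sliding) law, the sketch below stabilizes a double integrator by switching about the line s = c*x1 + x2 = 0. The plant and gains are invented and unrelated to the three-machine system.

```python
import numpy as np

# Generic sliding-mode style control: drive the state to the switching line
# s = c*x1 + x2 = 0 and slide along it. Values are purely illustrative.
c, k = 2.0, 5.0
dt = 1e-3

def vsc_input(x1, x2):
    s = c * x1 + x2                  # distance to the switching line
    return -k * np.sign(s)           # switching control that enforces sliding

# Double-integrator example plant: x1' = x2, x2' = u
x1, x2 = 1.0, 0.0
for _ in range(5000):
    u = vsc_input(x1, x2)
    x1, x2 = x1 + dt * x2, x2 + dt * u
print(round(x1, 3), round(x2, 3))    # state is driven toward the origin
```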
Abstract:
Different international plant protection organisations advocate different schemes for conducting pest risk assessments. Most of these schemes use a structured questionnaire in which experts are asked to score several items using an ordinal scale. The scores are then combined using a range of procedures, such as simple arithmetic means, weighted averages, multiplication of scores, and cumulative sums. The most useful schemes will correctly identify pests that are harmful and those that are not. As the quality of a pest risk assessment can depend on the characteristics of the scoring system used by the risk assessors (i.e., on the number of points of the scale and on the method used for combining the component scores), it is important to assess and compare the performance of different scoring systems. In this article, we propose a new method for assessing scoring systems. Its principle is to simulate virtual data using a stochastic model and then to estimate sensitivity and specificity values for different scoring systems from these data. The value of our approach is illustrated in a case study in which several scoring systems were compared. Data for this analysis were generated using a probabilistic model describing the pest introduction process. The generated data were then used to simulate the outcome of scoring systems and to assess the accuracy of the decisions about positive and negative introduction. The results showed that ordinal scales with at most 5 or 6 points were sufficient and that the multiplication-based scoring systems performed better than their sum-based counterparts. The proposed method could be used in the future to assess a great diversity of scoring systems.
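As a rough sketch of the simulation-based assessment idea (not the authors' stochastic introduction model), the example below generates virtual pests with known harmfulness, scores them on an ordinal scale, combines the item scores by summation or by multiplication, and compares sensitivity and specificity; all distributions and thresholds are illustrative.

```python
import numpy as np

# Simulate virtual pests, score them, and compare sum- vs product-based
# combination rules by their sensitivity and specificity.
rng = np.random.default_rng(0)
n_pests, n_items, scale_max = 5000, 6, 5

harmful = rng.random(n_pests) < 0.3                       # ground truth label
# Harmful pests tend to receive higher ordinal scores on each item
p_high = np.where(harmful[:, None], 0.7, 0.3)
scores = 1 + rng.binomial(scale_max - 1, p_high, size=(n_pests, n_items))

def sens_spec(risk, truth):
    """Sensitivity/specificity of 'flag the top 30% riskiest' decisions."""
    flagged = risk >= np.quantile(risk, 0.7)
    sens = (flagged & truth).sum() / truth.sum()
    spec = (~flagged & ~truth).sum() / (~truth).sum()
    return sens, spec

for name, risk in [("sum", scores.sum(axis=1)),
                   ("product", scores.prod(axis=1))]:
    s, p = sens_spec(risk, harmful)
    print(f"{name:8s} sensitivity={s:.2f} specificity={p:.2f}")
```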
Abstract:
In this contribution, a stability analysis for a dynamic voltage restorer (DVR) connected to a weak ac system containing a dynamic load is presented using continuation techniques and bifurcation theory. The system dynamics are explored through the continuation of periodic solutions of the associated dynamic equations. The switching process in the DVR converter is taken into account to trace the stability regions through a suitable mathematical representation of the DVR converter. The stability regions in the Thevenin equivalent plane are computed. In addition, the stability regions in the control-gain space, as well as the contour lines for different Floquet multipliers, are computed. Moreover, the DVR converter model employed in this contribution avoids the need for the very complicated iterative-map approaches used in conventional bifurcation analysis of converters. The continuation method and the DVR model can take into account dynamic and nonlinear loads and any network topology, since the analysis is carried out directly from the state-space equations. The bifurcation approach is shown to be both computationally efficient and robust, since it eliminates the need for numerically critical and long-lasting transient simulations.
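Floquet multipliers like those mentioned above are typically obtained from the monodromy matrix of the periodic system; the sketch below does this for a damped Mathieu oscillator (not the DVR model) by integrating the variational equations over one period. All parameter values are illustrative.

```python
import numpy as np
from scipy.integrate import solve_ivp

# Integrate the variational equations of a periodically forced linear system
# over one period; the eigenvalues of the resulting monodromy matrix are the
# Floquet multipliers (stable if all lie inside the unit circle).
omega, delta, eps = 2 * np.pi, 0.1, 0.4
T = 2 * np.pi / omega

def variational(t, Phi):
    A = np.array([[0.0, 1.0],
                  [-(1.0 + eps * np.cos(omega * t)), -delta]])
    return (A @ Phi.reshape(2, 2)).ravel()

sol = solve_ivp(variational, (0.0, T), np.eye(2).ravel(), rtol=1e-9, atol=1e-12)
monodromy = sol.y[:, -1].reshape(2, 2)
multipliers = np.linalg.eigvals(monodromy)
print("Floquet multipliers:", multipliers)
print("stable:", np.all(np.abs(multipliers) < 1.0))
```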
Abstract:
In today’s information society, electronic tools, such as computer networks for the rapid transfer of data and composite databases for information storage and management, are critical in ensuring effective environmental management. In particular, environmental policies and programs of federal, state, and local governments need a large volume of up-to-date information on the quality of water, air, and soil in order to conserve and protect natural resources and to carry out meteorological work. In line with this, the utilization of information and communication technologies (ICTs) is crucial to preserve and improve the quality of life. Handling tasks in the field of environmental protection often requires a range of environmental and technical information for complex, shared decision making in a multidisciplinary team environment. In this regard, e-government provides a foundation for transformative ICT initiatives that can lead to better environmental governance, better services, and increased public participation in environmental decision-making processes.
Abstract:
Countless factors affect the inner workings of a city, so in an attempt to gain an understanding of place and make sound decisions, planners need to utilize decision support systems (DSS) or planning support systems (PSS). PSS were originally developed as DSS in academia for experimental purposes but, like many other technologies, they became one of the most innovative technologies, in parallel with rapid developments in software engineering and advances in networks and hardware. Particularly in the last decade, awareness of PSS has been dramatically heightened by the increasing demand for a better, more reliable and, furthermore, a more transparent decision-making process (Klosterman, Siebert, Hoque, Kim, & Parveen, 2003). Urban planning as an activity has a quite different perspective from the PSS point of view. The unique nature of planning requires that the spatial dimension be considered within the context of PSS. Additionally, the rapid changes in socio-economic structure cannot be easily monitored or controlled without an effective PSS.
Abstract:
This thesis conceptualises Use for IS (Information Systems) success. While Use in this study describes the extent to which an IS is incorporated into the user’s processes or tasks, success of an IS is the measure of the degree to which the person using the system is better off. For IS success, the conceptualisation of Use offers new perspectives on describing and measuring Use. We test the philosophies of the conceptualisation using empirical evidence in an Enterprise Systems (ES) context. Results from the empirical analysis contribute insights to the existing body of knowledge on the role of Use and demonstrate Use as an important factor and measure of IS success. System Use is a central theme in IS research. For instance, Use is regarded as an important dimension of IS success. Despite its recognition, the Use dimension of IS success reportedly suffers from an all too simplistic definition, misconception, poor specification of its complex nature, and an inadequacy of measurement approaches (Bokhari 2005; DeLone and McLean 2003; Zigurs 1993). Given the above, Burton-Jones and Straub (2006) urge scholars to revisit the concept of system Use, consider a stronger theoretical treatment, and submit the construct to further validation in its intended nomological net. On those considerations, this study re-conceptualises Use for IS success. The new conceptualisation adopts a work-process system-centric lens and draws upon the characteristics of modern system types, key user groups and their information needs, and the incorporation of IS in work processes. With these characteristics, the definition of Use and how it may be measured is systematically established. Use is conceptualised as a second-order measurement construct determined by three sub-dimensions: attitude of its users, depth, and amount of Use. The construct is positioned in a modified IS success research model, in an attempt to demonstrate its central role in determining IS success in an ES setting. A two-stage mixed-methods research design—incorporating a sequential explanatory strategy—was adopted to collect empirical data and to test the research model. The first empirical investigation involved an experiment and a survey of ES end users at a leading tertiary education institute in Australia. The second, a qualitative investigation, involved a series of interviews with real-world operational managers in large Indian private-sector companies to canvass their day-to-day experiences with ES. The research strategy adopted has a stronger quantitative leaning. The survey analysis results demonstrate the aptness of Use as an antecedent and a consequence of IS success, and furthermore, as a mediator between the quality of IS and the impacts of IS on individuals. Qualitative data analysis on the other hand, is used to derive a framework for classifying the diversity of ES Use behaviour. The qualitative results establish that workers Use IS in their context to orientate, negotiate, or innovate. The implications are twofold. For research, this study contributes to cumulative IS success knowledge an approach for defining, contextualising, measuring, and validating Use. For practice, research findings not only provide insights for educators when incorporating ES for higher education, but also demonstrate how operational managers incorporate ES into their work practices. 
Research findings leave the way open for future, larger-scale research into how industry practitioners interact with an ES to complete their work in varied organisational environments.