Abstract:
With the advent of large-scale wind farms and their integration into electrical grids, more uncertainties, constraints and objectives must be considered in power system development. It is therefore necessary to introduce risk-control strategies into the planning of transmission systems connected with wind power generators. This paper presents a probability-based multi-objective model equipped with three risk-control strategies. The model is developed to evaluate and enhance the ability of the transmission system to protect against overload risks when wind power is integrated into the power system. The model involves: (i) defining the uncertainties associated with wind power generators with probability measures and calculating the probabilistic power flow with the combined use of cumulants and Gram-Charlier series; (ii) developing three risk-control strategies by specifying the smallest acceptable non-overload probability for each branch and for the whole system, and specifying the non-overload margin for all branches in the whole system; (iii) formulating an overload risk index based on the non-overload probability and the non-overload margin so defined; and (iv) developing a multi-objective transmission system expansion planning (TSEP) model with objective functions composed of transmission investment and the overload risk index. The presented work represents a superior risk-control model for TSEP in terms of security, reliability and economy. The transmission expansion planning model with the three risk-control strategies demonstrates its feasibility in case studies using two typical power systems.
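To make the probabilistic step concrete, here is a minimal sketch (not the paper's implementation; the cumulant values and branch limit below are illustrative assumptions) of approximating a branch-flow distribution from its first four cumulants with a Gram-Charlier Type A series and reading off the non-overload probability:

```python
# Sketch: non-overload probability P(flow <= limit) from the first four
# cumulants of a branch flow via a Gram-Charlier Type A expansion.
import math

def gram_charlier_cdf(x, k1, k2, k3, k4):
    """CDF approximation from cumulants k1 (mean), k2 (variance), k3, k4."""
    sigma = math.sqrt(k2)
    z = (x - k1) / sigma
    phi = math.exp(-0.5 * z * z) / math.sqrt(2.0 * math.pi)  # normal pdf
    Phi = 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))         # normal cdf
    he2 = z * z - 1.0                  # probabilists' Hermite polynomials
    he3 = z ** 3 - 3.0 * z
    c3 = k3 / (6.0 * sigma ** 3)       # skewness correction coefficient
    c4 = k4 / (24.0 * sigma ** 4)      # kurtosis correction coefficient
    return Phi - phi * (c3 * he2 + c4 * he3)

# Illustrative branch: mean flow 80 MW, variance 100 MW^2, mild skew and
# kurtosis, thermal limit 100 MW.
p_no_overload = gram_charlier_cdf(100.0, k1=80.0, k2=100.0, k3=50.0, k4=200.0)
print(f"non-overload probability ~ {p_no_overload:.4f}")
```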
Abstract:
Broad, early definitions of sustainable development have caused confusion and hesitation among local authorities and planning professionals. This confusion has arisen because loosely defined principles of sustainable development have been employed when setting policies and planning projects, and when gauging the efficiencies of these policies in the light of designated sustainability goals. The question of how this theory-rhetoric-practice gap can be filled is the main focus of this chapter. It examines the triple bottom line approach, one of the sustainability accounting approaches widely employed by governmental organisations, and the applicability of this approach to sustainable urban development. The chapter introduces the ‘Integrated Land Use and Transportation Indexing Model’ that incorporates triple bottom line considerations with environmental impact assessment techniques via a geographic information systems-based decision support system. This model helps decision-makers in selecting policy options according to their economic, environmental and social impacts. Its main purpose is to provide valuable knowledge about the spatial dimensions of sustainable development, and to provide fine-detail outputs on the possible impacts of urban development proposals on sustainability levels. In order to embrace sustainable urban development policy considerations, the model is sensitive to the relationship between urban form, travel patterns and socio-economic attributes. Finally, the model is useful in picturing the holistic state of urban settings in terms of their sustainability levels, and in assessing the degree of compatibility of selected scenarios with the desired sustainable urban future.
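As a rough illustration of the triple-bottom-line aggregation idea, the sketch below combines economic, environmental and social indicator scores into a composite index per policy option; the option names, scores and equal weights are invented, not taken from the chapter's indexing model:

```python
# Sketch: composite triple-bottom-line index per policy option.
options = {
    "infill_development": {"economic": 0.7, "environmental": 0.6, "social": 0.8},
    "greenfield_growth":  {"economic": 0.8, "environmental": 0.3, "social": 0.5},
}
weights = {"economic": 1 / 3, "environmental": 1 / 3, "social": 1 / 3}

def composite_index(scores):
    # weighted sum across the three bottom lines, assuming scores in [0, 1]
    return sum(weights[dim] * val for dim, val in scores.items())

for name, scores in options.items():
    print(f"{name}: index = {composite_index(scores):.2f}")
```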
Abstract:
This paper presents a maintenance optimisation method for a multi-state series-parallel system considering economic dependence and state-dependent inspection intervals. The objective function considered in the paper is the average revenue per unit time calculated based on the semi-regenerative theory and the universal generating function (UGF). A new algorithm using the stochastic ordering is also developed in this paper to reduce the search space of maintenance strategies and to enhance the efficiency of optimisation algorithms. A numerical simulation is presented in the study to evaluate the efficiency of the proposed maintenance strategy and optimisation algorithms. The simulation result reveals that maintenance strategies with opportunistic maintenance and state-dependent inspection intervals are more cost-effective when the influence of economic dependence and inspection cost is significant. The study further demonstrates that the optimisation algorithm proposed in this paper has higher computational efficiency than the commonly employed heuristic algorithms.
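A minimal sketch of the universal generating function composition step may help; the component state data and the series-parallel layout below are illustrative, and the paper's revenue and semi-regenerative calculations are not reproduced:

```python
# Sketch: UGF of a multi-state series-parallel system. Each UGF is a list
# of (probability, performance) terms; parallel elements sum capacities,
# series elements are limited by the weakest capacity (min).
from itertools import product

def compose(u1, u2, op):
    """Combine two UGFs with a structure operator, merging equal performances."""
    out = {}
    for (p1, g1), (p2, g2) in product(u1, u2):
        g = op(g1, g2)
        out[g] = out.get(g, 0.0) + p1 * p2
    return [(p, g) for g, p in sorted(out.items())]

# Two parallel pumps in series with a valve (all data hypothetical).
pump = [(0.1, 0.0), (0.3, 50.0), (0.6, 100.0)]   # (probability, capacity)
valve = [(0.05, 0.0), (0.95, 120.0)]
subsystem = compose(pump, pump, lambda a, b: a + b)        # parallel: sum
system = compose(subsystem, valve, lambda a, b: min(a, b)) # series: min
for p, g in system:
    print(f"P(output = {g:6.1f}) = {p:.4f}")
```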
Abstract:
A baculovirus-insect cell expression system potentially provides the means to produce prophylactic HIV-1 virus-like particle (VLP) vaccines inexpensively and in large quantities. However, the system must be optimized to maximize yields and increase process efficiency. In this study, we optimized the production of two novel, chimeric HIV-1 VLP vaccine candidates (GagRT and GagTN) in insect cells. This was done by monitoring the effects of four specific factors on VLP expression: insect cell line, cell density, multiplicity of infection (MOI), and infection time. The use of western blots, Gag p24 ELISA, and four-factorial ANOVA allowed the determination of the most favorable conditions for chimeric VLP production, as well as which factors affected VLP expression most significantly. Both VLP vaccine candidates favored similar optimal conditions, demonstrating higher yields of VLPs when produced in the Trichoplusia ni Pro insect cell line, at a cell density of 1 × 10⁶ cells/mL, and an infection time of 96 h post infection. It was found that cell density and infection time were major influencing factors, but that MOI did not affect VLP expression significantly. This work provides a potentially valuable guideline for HIV-1 protein vaccine optimization, as well as for general optimization of a baculovirus-based expression system to produce complex recombinant proteins.
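For readers unfamiliar with the screening step, a hedged sketch of a four-factor ANOVA follows; the synthetic data, factor levels and column names are stand-ins (e.g. "TnPro" for the Trichoplusia ni Pro line), not the study's measurements:

```python
# Sketch: four-factorial ANOVA screening which factors drive p24-measured
# VLP yield. Data are synthetic; only the analysis pattern is illustrated.
import numpy as np
import pandas as pd
import statsmodels.api as sm
from statsmodels.formula.api import ols

rng = np.random.default_rng(1)
# Hypothetical full-factorial screen: 2 cell lines x 2 densities x 2 MOIs
# x 2 harvest times, 3 replicates each.
grid = pd.MultiIndex.from_product(
    [["Sf9", "TnPro"], [1e6, 3e6], [1, 5], [72, 96], range(3)],
    names=["cell_line", "density", "moi", "harvest_h", "rep"],
).to_frame(index=False)
# Synthetic response: cell line, density and harvest time matter; MOI does not.
grid["p24"] = (10 + 3 * (grid.cell_line == "TnPro") + 2 * (grid.density == 1e6)
               + 4 * (grid.harvest_h == 96) + rng.normal(0, 1, len(grid)))

model = ols("p24 ~ C(cell_line) + C(density) + C(moi) + C(harvest_h)",
            data=grid).fit()
print(sm.stats.anova_lm(model, typ=2))   # main-effect F tests and p-values
```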
Abstract:
There is a growing interest in the use of megavoltage cone-beam computed tomography (MV CBCT) data for radiotherapy treatment planning. To calculate accurate dose distributions, knowledge of the electron density (ED) of the tissues being irradiated is required. In the case of MV CBCT, it is necessary to determine a calibration relating CT number to ED, utilizing the photon beam produced for MV CBCT. A number of different parameters can affect this calibration. This study was undertaken on the Siemens MV CBCT system, MVision, to evaluate the effect of the following parameters on the reconstructed CT pixel value to ED calibration: the number of monitor units (MUs) used (5, 8, 15 and 60 MUs), the image reconstruction filter (head and neck, and pelvis), reconstruction matrix size (256 by 256 and 512 by 512), and the addition of extra solid water surrounding the ED phantom. A Gammex electron density CT phantom containing EDs from 0.292 to 1.707 was imaged under each of these conditions. A linear relationship between MV CBCT pixel value and ED was demonstrated for all MU settings and over the full range of EDs. Changes in MU number did not dramatically alter the MV CBCT ED calibration. The use of different reconstruction filters was found to affect the MV CBCT ED calibration, as was the addition of solid water surrounding the phantom. Dose distributions from treatment plans calculated on a 15 MU head-and-neck reconstruction filter MV CBCT image, using ED calibration curves derived either from the matching image parameters or from a 15 MU pelvis reconstruction filter, showed small and clinically insignificant differences. Thus, the use of a single MV CBCT ED calibration curve is unlikely to result in any clinical differences. However, to ensure minimal uncertainties in dose reporting, parameter-specific MV CBCT ED calibration measurements could be carried out.
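The calibration itself is a straight-line fit; the sketch below is a minimal illustration in which the pixel values (and the intermediate ED inserts beyond the quoted 0.292 and 1.707) are made up:

```python
# Sketch: fit a linear pixel-value-to-ED calibration through phantom inserts.
import numpy as np

ed = np.array([0.292, 0.681, 1.000, 1.280, 1.707])       # phantom insert EDs
pixel = np.array([120.0, 480.0, 770.0, 1020.0, 1400.0])  # hypothetical means

slope, intercept = np.polyfit(pixel, ed, 1)  # linear CT-number-to-ED model

def pixel_to_ed(p):
    return slope * p + intercept

print(f"ED at pixel value 900 ~ {pixel_to_ed(900.0):.3f}")
```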
Abstract:
Mesenchymal stem cells (MSC) are emerging as a leading cellular therapy for a number of diseases. However, for such treatments to become available as a routine therapeutic option, efficient and cost-effective means for industrial manufacture of MSC are required. At present, clinical grade MSC are manufactured through a process of manual cell culture in specialized cGMP facilities. This process is open, extremely labor intensive, costly, and impractical for anything more than a small number of patients. While it has been shown that MSC can be cultivated in stirred bioreactor systems using microcarriers, providing a route to process scale-up, the degree of numerical expansion achieved has generally been limited. Furthermore, little attention has been given to the issue of primary cell isolation from complex tissues such as placenta. In this article we describe the initial development of a closed process for bulk isolation of MSC from human placenta, and subsequent cultivation on microcarriers in scalable single-use bioreactor systems. Based on our initial data, we estimate that a single placenta may be sufficient to produce over 7,000 doses of therapeutic MSC using a large-scale process.
Abstract:
The three-component reaction-diffusion system introduced in [C. P. Schenk et al., Phys. Rev. Lett., 78 (1997), pp. 3781–3784] has become a paradigm model in pattern formation. It exhibits a rich variety of dynamics of fronts, pulses, and spots. The front and pulse interactions range in type from weak, in which the localized structures interact only through their exponentially small tails, to strong interactions, in which they annihilate or collide and in which all components are far from equilibrium in the domains between the localized structures. Intermediate to these two extremes sits the semistrong interaction regime, in which the activator component of the front is near equilibrium in the intervals between adjacent fronts but both inhibitor components are far from equilibrium there, and hence their concentration profiles drive the front evolution. In this paper, we focus on dynamically evolving N-front solutions in the semistrong regime. The primary result is the use of a renormalization group method to rigorously derive the system of N coupled ODEs that governs the positions of the fronts. The operators associated with the linearization about the N-front solutions have N small eigenvalues, and the N-front solutions may be decomposed into a component in the space spanned by the associated eigenfunctions and a component projected onto the complement of this space. This decomposition is carried out iteratively at a sequence of times. The former projections yield the ODEs for the front positions, while the latter projections are associated with remainders that we show stay small in a suitable norm during each iteration of the renormalization group method. Our results also help extend the application of the renormalization group method from the weak interaction regime for which it was initially developed to the semistrong interaction regime. The second set of results that we present is a detailed analysis of this system of ODEs, providing a classification of the possible front interactions in the cases of N = 1, 2, 3, 4, as well as how front solutions interact with the stationary pulse solutions studied earlier in [A. Doelman, P. van Heijster, and T. J. Kaper, J. Dynam. Differential Equations, 21 (2009), pp. 73–115; P. van Heijster, A. Doelman, and T. J. Kaper, Phys. D, 237 (2008), pp. 3335–3368]. Moreover, we present some results on the general case of N-front interactions.
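As a schematic of the kind of reduced system derived, the sketch below integrates N coupled ODEs for front positions with a toy nearest-neighbour exponential interaction law; the actual right-hand side in the paper comes from the renormalization group derivation, so this is illustrative only:

```python
# Sketch: N coupled ODEs for front positions x_1 < ... < x_N with a toy
# exponentially decaying nearest-neighbour interaction (illustrative law).
import numpy as np
from scipy.integrate import solve_ivp

def front_ode(t, x, c=1.0, lam=1.0):
    """dx_i/dt from exponentially decaying pulls of the two neighbours."""
    n = len(x)
    dx = np.zeros(n)
    for i in range(n):
        if i > 0:                  # pull toward the left neighbour
            dx[i] -= c * np.exp(-lam * (x[i] - x[i - 1]))
        if i < n - 1:              # pull toward the right neighbour
            dx[i] += c * np.exp(-lam * (x[i + 1] - x[i]))
    return dx

x0 = np.array([0.0, 2.0, 5.0, 9.0])   # initial 4-front configuration
sol = solve_ivp(front_ode, (0.0, 50.0), x0)
print("final front positions:", sol.y[:, -1])
```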
Abstract:
In this article, we analyze the three-component reaction-diffusion system originally developed by Schenk et al. (PRL 78:3781–3784, 1997). The system consists of bistable activator-inhibitor equations with an additional inhibitor that diffuses more rapidly than the standard inhibitor (or recovery variable). It has been used by several authors as a prototype three-component system that generates rich pulse dynamics and interactions, and this richness is the main motivation for the analysis we present. We demonstrate the existence of stationary one-pulse and two-pulse solutions, and travelling one-pulse solutions, on the real line, and we determine the parameter regimes in which they exist. Also, for one-pulse solutions, we analyze various bifurcations, including the saddle-node bifurcation in which they are created, as well as the bifurcation from a stationary to a travelling pulse, which we show can be either subcritical or supercritical. For two-pulse solutions, we show that the third component is essential, since the reduced bistable two-component system does not support them. We also analyze the saddle-node bifurcation in which two-pulse solutions are created. The analytical method used to construct all of these pulse solutions is geometric singular perturbation theory, which allows us to show that these solutions lie in the transverse intersections of invariant manifolds in the phase space of the associated six-dimensional travelling wave system. Finally, as we illustrate with numerical simulations, these solutions form the backbone of the rich pulse dynamics this system exhibits, including pulse replication, pulse annihilation, breathing pulses, and pulse scattering, among others.
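For orientation, a minimal direct simulation of a Schenk-type three-component system is sketched below, in one common scaling and with illustrative parameter values that are not tuned to the regimes analysed in the paper:

```python
# Sketch of an explicit finite-difference simulation of
#   U_t = U_xx + U - U^3 - k3*V - k4*W + k1
#   tau*V_t = Dv*V_xx + U - V
#   theta*W_t = Dw*W_xx + U - W
# All parameter values below are illustrative assumptions.
import numpy as np

L, n = 100.0, 256
x = np.linspace(-L / 2, L / 2, n)
dx = x[1] - x[0]
dt, steps = 1e-3, 10_000
Dv, Dw, tau, theta = 2.0, 50.0, 1.0, 1.0
k1, k3, k4 = -0.1, 1.0, 1.0

def lap(f):
    """1-D Laplacian with crude no-flux boundaries."""
    g = np.empty_like(f)
    g[1:-1] = (f[2:] - 2.0 * f[1:-1] + f[:-2]) / dx**2
    g[0], g[-1] = g[1], g[-2]
    return g

U = -1.0 + 2.0 * np.exp(-x**2 / 20.0)   # localized initial bump
V, W = U.copy(), U.copy()
for _ in range(steps):
    U = U + dt * (lap(U) + U - U**3 - k3 * V - k4 * W + k1)
    V = V + (dt / tau) * (Dv * lap(V) + U - V)
    W = W + (dt / theta) * (Dw * lap(W) + U - W)
print("U range after t = %.1f:" % (steps * dt), U.min(), U.max())
```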
Abstract:
In this article, we analyze the stability and the associated bifurcations of several types of pulse solutions in a singularly perturbed three-component reaction-diffusion equation that has its origin as a model for gas discharge dynamics. Due to the richness and complexity of the dynamics generated by this model, it has in recent years become a paradigm model for the study of pulse interactions. A mathematical analysis of pulse interactions is based on detailed information on the existence and stability of isolated pulse solutions. The existence of these isolated pulse solutions was established in previous work. Here, the pulse solutions are studied via an Evans function associated with the linearized stability problem. Evans functions for stability problems in singularly perturbed reaction-diffusion models can be decomposed into a fast and a slow component, and their zeroes can be determined explicitly by the NLEP method. In the context of the present model, we have extended the NLEP method so that it can be applied to multi-pulse and multi-front solutions of singularly perturbed reaction-diffusion equations with more than one slow component. The bulk of this article is devoted to the analysis of the stability characteristics and the bifurcations of the pulse solutions. Our methods enable us to obtain explicit, analytical information on the various types of bifurcations, such as saddle-node bifurcations, Hopf bifurcations in which breathing pulse solutions are created, and bifurcations into travelling pulse solutions, which can be both subcritical and supercritical.
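The numerical counterpart of such a stability question can be illustrated with a scalar stand-in (this is not the Evans-function/NLEP machinery of the paper): discretise the linearisation about a stationary front and inspect the leading eigenvalues, where the translation mode appears as an eigenvalue at zero:

```python
# Sketch: spectrum of the linearisation A v = v'' + (1 - 3 u^2) v about the
# stationary front u = tanh(x / sqrt(2)) of u_t = u_xx + u - u^3.
# The largest eigenvalue sits at (numerically near) zero, the translation
# mode; anything to its right would signal instability.
import numpy as np

n, L = 800, 40.0
x = np.linspace(-L / 2, L / 2, n)
dx = x[1] - x[0]
u = np.tanh(x / np.sqrt(2.0))             # exact stationary front

# Tridiagonal second-difference matrix plus the potential term.
A = (np.diag(np.full(n - 1, 1.0), -1) - 2.0 * np.eye(n)
     + np.diag(np.full(n - 1, 1.0), 1)) / dx**2
A += np.diag(1.0 - 3.0 * u**2)

eigs = np.linalg.eigvalsh(A)              # operator is symmetric
print("five largest eigenvalues:", eigs[-5:])
```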
Abstract:
Evaluating the validity of formative variables has presented ongoing challenges for researchers. In this paper we use global criterion measures to compare and critically evaluate two alternative formative measures of System Quality. One model is based on the ISO-9126 software quality standard, and the other is based on a leading information systems research model. We find that despite both models having a strong provenance, many of the items appear to be non-significant in our study. We examine the implications of this by evaluating the quality of the criterion variables we used, and the performance of PLS when evaluating formative models with a large number of items. We find that our respondents had difficulty distinguishing between global criterion variables measuring different aspects of overall System Quality. Also, because formative indicators “compete with one another” in PLS, it may be difficult to develop a set of measures which are all significant for a complex formative construct with a broad scope and a large number of items. Overall, we suggest that there is cautious evidence that both sets of measures are valid and largely equivalent, although questions still remain about the measures, the use of criterion variables, and the use of PLS for this type of model evaluation.
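The "competing indicators" phenomenon is easy to reproduce on synthetic data; the toy sketch below uses OLS as a stand-in for PLS outer weighting, purely to show how collinear formative indicators split the explained variance and individually lose significance:

```python
# Sketch: collinear formative indicators "compete" against a global
# criterion, so many individual weights look non-significant even though
# the block as a whole predicts well. Data are synthetic.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 200
base = rng.normal(size=n)
# Six System Quality indicators sharing a common core -> high collinearity.
X = np.column_stack([base + 0.5 * rng.normal(size=n) for _ in range(6)])
y = X.mean(axis=1) + 0.3 * rng.normal(size=n)   # global criterion measure

fit = sm.OLS(y, sm.add_constant(X)).fit()
print(fit.rsquared)      # the block predicts the criterion well...
print(fit.pvalues[1:])   # ...yet several individual weights appear "n.s."
```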
Abstract:
Readily accepted knowledge regarding crash causation is consistently omitted from efforts to model and subsequently understand motor vehicle crash occurrence and its contributing factors. For instance, distracted and impaired driving accounts for a significant proportion of crash occurrence, yet is rarely modeled explicitly. In addition, spatially allocated influences such as local law enforcement efforts, proximity to bars and schools, and roadside chronic distractions (advertising, pedestrians, etc.) play a role in contributing to crash occurrence and yet are routinely absent from crash models. By and large, these well-established omitted effects are simply assumed to contribute to model error, with the predominant focus on modeling the engineering and operational effects of transportation facilities (e.g. AADT, number of lanes, speed limits, width of lanes, etc.). The typical analytical approach, with a variety of statistical enhancements, has been to model crashes that occur at system locations as negative binomial (NB) distributed events that arise from a singular, underlying crash generating process. These models and their statistical kin dominate the literature; however, it is argued in this paper that these models fail to capture the underlying complexity of motor vehicle crash causes, and thus thwart deeper insights regarding crash causation and prevention. This paper first describes hypothetical scenarios that collectively illustrate why current models mislead highway safety researchers and engineers. It is argued that current model shortcomings are significant and will lead to poor decision-making. Exploiting our current state of knowledge of crash causation, crash counts are postulated to arise from three processes: observed network features, unobserved spatial effects, and ‘apparent’ random influences that reflect largely behavioral influences of drivers. It is argued, furthermore, that these three processes can in theory be modeled separately to gain deeper insight into crash causes, and that the resulting model represents a more realistic depiction of reality than the state-of-practice NB regression. An admittedly imperfect empirical model that mixes three independent crash occurrence processes is shown to outperform the classical NB model. The questioning of current modeling assumptions and the implications of the latent mixture model for current practice are the most important contributions of this paper, with an initial but rather vulnerable attempt to model the latent mixtures as a secondary contribution.
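For reference, the state-of-practice baseline the paper argues against can be fitted in a few lines; the sketch below uses synthetic, overdispersed counts and hypothetical covariates rather than the paper's data:

```python
# Sketch: the classical negative binomial crash-count model on segment
# covariates (synthetic data; only the modelling pattern is illustrated).
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(7)
n = 500
df = pd.DataFrame({
    "aadt": rng.uniform(2_000, 60_000, n),       # hypothetical covariates
    "lanes": rng.integers(2, 7, n),
    "speed_limit": rng.choice([50, 60, 80, 100], n),
})
mu = np.exp(-6.0 + 0.8 * np.log(df.aadt) + 0.05 * df.lanes)
df["crashes"] = rng.poisson(mu * rng.gamma(2.0, 0.5, n))  # overdispersed

nb = smf.negativebinomial("crashes ~ np.log(aadt) + lanes + speed_limit",
                          data=df).fit(disp=0)
print(nb.params)
```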
Abstract:
The reliable operation of the electrical system at Callide Power Station is critically important to the normal everyday running of the Station. This study applied reliability principles to analyse the electrical system at Callide Power Station. It was found that the level of expected outage cost increased exponentially with a declining level of maintenance, leading to the conclusion that even in a harsh economic electricity market, where CS Energy pushes its plants to the limit, maintenance must not be neglected. A number of system configurations were found to increase the reliability of the system and reduce the expected outage costs. A number of other advantages of applying reliability principles to the Callide electrical system configuration were also identified.
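A toy numeric sketch of the headline relationship (all figures invented) shows how an exponential failure-rate response to declining maintenance drives expected outage cost:

```python
# Sketch: expected outage cost rising exponentially as maintenance declines.
# Both the cost per outage and the failure-rate model are hypothetical.
import math

outage_cost = 250_000.0   # hypothetical cost per forced outage ($)
base_rate = 0.2           # hypothetical failures/year at full maintenance

for maintenance in (1.0, 0.8, 0.6, 0.4, 0.2):
    rate = base_rate * math.exp(3.0 * (1.0 - maintenance))
    print(f"maintenance {maintenance:.0%}: expected cost "
          f"${rate * outage_cost:,.0f}/year")
```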
Abstract:
Waste management and minimisation is considered to be an important issue for achieving sustainability in the construction industry. Retrofit projects generate less waste than demolitions and new builds, but they possess unique features and require waste management approaches that are different to traditional new builds. With the increasing demand for more energy efficient and environmentally sustainable office spaces, the office building retrofit market is growing in capital cities around Australia, with a high level of refurbishment needed for existing aging properties. Restricted site space and uncertain delivery processes in these projects make it a major challenge to manage waste effectively. The labour-intensive nature of retrofit projects creates the need for the involvement of small and medium enterprises (SMEs) as subcontractors in on-site works. SMEs are familiar with on-site waste generation but are not as actively motivated and engaged in waste management activities as the stakeholders in other construction projects in the industry. SMEs’ responsibilities for waste management in office building retrofit projects need to be identified and adapted to the work delivery processes and the waste management system supported by project stakeholders. The existing literature provides an understanding of how to manage construction waste that is already generated and how to increase the waste recovery rate for office building retrofit projects. However, previous research has not developed theories or practical solutions that can guide project stakeholders to understand the specific waste generation process and effectively plan for and manage waste in ongoing project works. No appropriate method has been established for the potential role and capability of SMEs to manage and minimise waste from their subcontracting works. This research probes into the characteristics of office building retrofit project delivery with the aim of developing specific tools to manage waste and incorporate SMEs in this process in an appropriate and effective way. Based on an extensive literature review, the research firstly developed a questionnaire survey to identify the critical factors of on-site waste generation in office building retrofit projects. Semi-structured interviews were then utilised to validate the critical waste factors and establish the interrelationships between the factors. The interviews also served the important function of identifying the current problems of waste management in the industry and the performance of SMEs in this area. Interviewees’ opinions on remedies to the problems were also collected. Building on the findings from the questionnaire survey and semi-structured interviews, two waste planning and management strategies were identified for the dismantling phase and fit-out phase of office building retrofit projects, respectively. Two models were then established to organise SMEs’ waste management activities: a work process-based integrated waste planning model for the dismantling phase and a system dynamics model for the fit-out phase. In order to apply the models in practice, procedures were developed to guide SMEs’ work flow in on-site waste planning and management. In addition, a collaboration framework was established for SMEs and other project stakeholders for effective waste planning and management. Furthermore, an organisational engagement strategy was developed to improve SME waste management practices.
Three case studies were conducted to validate and finalise the research deliverables. This research extends the current literature that mostly covers waste management plans in new build projects, by presenting the knowledge and understanding of addressing waste problems in retrofit projects. It provides practical tools and guidance for industry practitioners to effectively manage the waste generation processes in office building retrofit projects. It can also promote industry-level recognition of the role of SMEs and their performance in on-site waste management.
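As a flavour of the fit-out-phase system dynamics model, the minimal stock-and-flow sketch below treats on-site waste as a stock fed by fit-out works and drained by SME sorting and recycling capacity; all rates are hypothetical:

```python
# Sketch: stock-and-flow view of on-site waste during fit-out works.
waste = 0.0                  # tonnes on site (the stock)
generation = 4.0             # tonnes/week from fit-out works (inflow)
recycling_capacity = 3.0     # tonnes/week SMEs can sort and remove (outflow)

for week in range(1, 13):
    removed = min(waste + generation, recycling_capacity)
    waste = waste + generation - removed   # the stock accumulates the gap
    print(f"week {week:2d}: on-site waste stock = {waste:4.1f} t")
```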
Abstract:
Due to the increased complexity, scale, and functionality of information and telecommunication (IT) infrastructures, new exploits and vulnerabilities are discovered every day. These vulnerabilities are most often exploited by malicious actors to penetrate IT infrastructures, mainly to disrupt business or steal intellectual property. Recent incidents prove that it is no longer sufficient to perform manual security tests of the IT infrastructure based on sporadic security audits. Instead, networks should be continuously tested against possible attacks. In this paper we present current results and challenges towards realizing automated and scalable solutions to identify possible attack scenarios in an IT infrastructure. Namely, we define an extensible framework which uses public vulnerability databases to identify probable multi-step attacks in an IT infrastructure, and provide recommendations in the form of patching strategies, topology changes, and configuration updates.
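A minimal sketch of the multi-step attack discovery idea follows; the hosts, reachability edges and CVE-style labels are invented (a real framework would pull them from public vulnerability databases such as the NVD), and a breadth-first search enumerates attack paths from the attacker's foothold to a target asset:

```python
# Sketch: enumerate multi-step attack paths over a host/vulnerability graph.
from collections import deque

# edge: (from_host, to_host, vulnerability that lets the attacker hop)
edges = [
    ("internet", "web01", "CVE-XXXX-0001 (RCE in web app)"),
    ("web01", "app01", "CVE-XXXX-0002 (weak service creds)"),
    ("app01", "db01", "CVE-XXXX-0003 (SQL injection)"),
    ("web01", "db01", "CVE-XXXX-0004 (exposed admin port)"),
]

def attack_paths(start, target):
    """BFS over the graph; each path records (host reached, vuln used)."""
    paths, queue = [], deque([(start, [])])
    while queue:
        host, path = queue.popleft()
        if host == target:
            paths.append(path)
            continue
        for src, dst, vuln in edges:
            if src == host and dst not in [h for h, _ in path]:
                queue.append((dst, path + [(dst, vuln)]))
    return paths

for p in attack_paths("internet", "db01"):
    print(" -> ".join(f"{h} via {v}" for h, v in p))
```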
Abstract:
Power system restoration after a large-area outage involves many factors, and the procedure is usually very complicated. A decision-making support system could therefore be developed to find the optimal black-start strategy. In order to evaluate candidate black-start strategies, some indices, usually both qualitative and quantitative, are employed. However, it may not be possible to directly synthesize these indices, and different extents of interaction may exist among them. In the existing black-start decision-making methods, qualitative and quantitative indices cannot be well synthesized, and the interactions among different indices are not taken into account. The vague set, an extended version of the well-developed fuzzy set, can be employed to deal with decision-making problems with interacting attributes. Against this background, the vague set is first employed in this work to represent the indices so as to facilitate comparisons among them. Then, the concept of the vague-valued fuzzy measure is presented, and on that basis a mathematical model for black-start decision-making is developed. Compared with the existing methods, the proposed method can deal with the interactions among indices and represent fuzzy information more reasonably. Finally, an actual power system is used to demonstrate the basic features of the developed model and method.
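To illustrate the vague-set representation, the sketch below scores hypothetical black-start candidates whose index values are intervals [t, 1 - f]; it uses a plain weighted score rather than the paper's vague-valued fuzzy measure, which additionally captures interactions among indices:

```python
# Sketch: vague-set scoring of black-start candidates. Each index value is
# a pair (t, f) with t + f <= 1: t = evidence for, f = evidence against.
# Candidates, index values and weights are invented for illustration.
candidates = {
    # (t, f) pairs for, say, start-up speed / reliability / capacity
    "unit_A": [(0.7, 0.2), (0.6, 0.3), (0.8, 0.1)],
    "unit_B": [(0.5, 0.3), (0.8, 0.1), (0.6, 0.2)],
}
weights = [0.5, 0.3, 0.2]

def score(pairs):
    # score of a vague value (t, f) is t - f; aggregate with index weights
    return sum(w * (t - f) for w, (t, f) in zip(weights, pairs))

for name, pairs in candidates.items():
    print(name, round(score(pairs), 3))
```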