892 results for Large modeling projects
Abstract:
Automatic signature verification is a well-established and active area of research with numerous applications such as bank check verification, ATM access, etc. This paper proposes a novel approach to the problem of automatic off-line signature verification and forgery detection. The proposed approach is based on fuzzy modeling that employs the Takagi-Sugeno (TS) model. Signature verification and forgery detection are carried out using angle features extracted with a box approach. Each feature corresponds to a fuzzy set. The features are fuzzified by an exponential membership function involved in the TS model, which is modified to include structural parameters. The structural parameters are devised to take account of possible variations due to handwriting styles and moods. The membership functions constitute weights in the TS model. Optimizing the output of the TS model with respect to the structural parameters yields the solution for the parameters. We have also derived two TS models by considering a rule for each input feature in the first formulation (multiple rules) and a single rule for all input features in the second formulation. In this work, we have found that the TS model with multiple rules is better than the TS model with a single rule at detecting three types of forgery (random, skilled and unskilled) from a large database of sample signatures, in addition to verifying genuine signatures. We have also devised three approaches, viz., an innovative approach and two intuitive approaches, using the TS model with multiple rules for improved performance. (C) 2004 Pattern Recognition Society. Published by Elsevier Ltd. All rights reserved.
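To make the TS-model scoring idea concrete, the following is a minimal Python sketch: exponential membership values of angle features act as rule weights over linear rule consequents, one rule per feature (the "multiple rules" formulation). The exact membership parameterisation, the structural parameters s and t, and all numerical values are illustrative assumptions, not the paper's formulation.

```python
import numpy as np

def exponential_membership(x, mean, spread, s=1.0, t=0.0):
    """Exponential membership with illustrative structural parameters s, t
    (assumed form; the paper's exact parameterisation may differ)."""
    return np.exp(-s * ((x - mean - t) ** 2) / (2.0 * spread ** 2 + 1e-12))

def ts_output_multiple_rules(features, means, spreads, consequents, s=1.0, t=0.0):
    """One rule per feature: each feature's membership weights its rule consequent."""
    w = exponential_membership(features, means, spreads, s, t)   # rule firing strengths
    y = consequents[:, 0] + consequents[:, 1] * features         # linear consequents y_i = c0 + c1 * x_i
    return np.sum(w * y) / (np.sum(w) + 1e-12)                   # weighted TS output

# Toy usage: score a test signature against reference statistics of genuine samples.
rng = np.random.default_rng(0)
genuine = rng.normal(45.0, 2.0, size=(20, 8))          # 20 genuine signatures, 8 angle features (synthetic)
means, spreads = genuine.mean(axis=0), genuine.std(axis=0)
consequents = np.column_stack([np.ones(8), np.zeros(8)])  # trivial consequents, for illustration only
test = rng.normal(45.0, 2.0, size=8)
score = ts_output_multiple_rules(test, means, spreads, consequents)
print("TS similarity score:", score)  # in practice thresholded to accept or reject the signature
```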
Abstract:
Much mineral processing equipment employs the basic principles of gravity concentration in a flowing fluid film a few millimetres thick in small open channels, where the particles are distributed along the flow height according to their physical properties and the fluid flow characteristics. Fluid flow behaviour and slurry transportation characteristics in open channels have been a research topic for many years in many engineering disciplines. However, the open channels used in the mineral processing industries differ in channel size and flow velocity. An understanding of water split behaviour is therefore essential in modeling flowing film concentrators. In this paper, an attempt has been made to model the water split behaviour in an inclined open rectangular channel, resembling the actual size and flow velocity used by the mineral processing industries, based on Prandtl's mixing length approach. (c) 2006 Elsevier B.V. All rights reserved.
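As a rough illustration of the kind of water-split estimate such a model produces, the sketch below evaluates the logarithmic velocity profile implied by Prandtl's mixing-length hypothesis over a millimetre-scale film and integrates it above a splitter height. The friction velocity, roughness length and film thickness are assumed values, not results from the paper.

```python
import numpy as np

kappa = 0.41        # von Karman constant
u_star = 0.05       # friction velocity, m/s (assumed)
h = 0.005           # film thickness, m (a few millimetres, as in the abstract)
y0 = 1.0e-5         # hydrodynamic roughness length, m (assumed)

y = np.linspace(y0, h, 2000)
u = (u_star / kappa) * np.log(y / y0)     # mixing-length log law: u(y) = (u*/kappa) ln(y/y0)
dy = y[1] - y[0]

def fraction_above(split_height):
    """Fraction of the volumetric flow per unit width carried above the splitter height."""
    q_total = np.sum(u) * dy
    q_upper = np.sum(u[y >= split_height]) * dy
    return q_upper / q_total

print("water split above 2 mm:", round(fraction_above(0.002), 3))
```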
Abstract:
Brugada syndrome (BS) is a genetic disease identified by an abnormal electrocardiogram (ECG), mainly ECGs showing right bundle branch block and ST-elevation in the right precordial leads. BS can lead to an increased risk of sudden cardiac death. Experimental studies on human ventricular myocardium with BS have been limited due to difficulties in obtaining data, so computer simulation is an important alternative. Most previous BS simulations were based on animal heart cell models. However, due to species differences, human heart cell models are needed, especially a model with a three-dimensional whole-heart anatomical structure. In this study, we developed a model of the human ventricular action potential (AP) by refining the ten Tusscher et al (2004 Am. J. Physiol. Heart Circ. Physiol. 286 H1573-89) model to incorporate newly available experimental data on some major ionic currents of human ventricular myocytes. The modified channels include the L-type calcium current (I_CaL), fast sodium current (I_Na), transient outward potassium current (I_to), rapidly and slowly delayed rectifier potassium currents (I_Kr and I_Ks) and inward rectifier potassium current (I_K1). Transmural heterogeneity of APs for epicardial, endocardial and mid-myocardial (M) cells was simulated by varying the maximum conductances of I_Ks and I_to. The modified AP models were then used to simulate the effects of BS on the cellular AP and on body surface potentials using a three-dimensional dynamic heart-torso model. Our main findings are as follows. (1) BS has little effect on the AP of endocardial or mid-myocardial cells, but has a large impact on the AP of epicardial cells. (2) A likely region of BS with abnormal cell APs is near the right ventricular outflow tract, and the resulting ST-segment elevation is located in the median precordium area. These simulation results are consistent with experimental findings reported in the literature. The model can reproduce a variety of electrophysiological behaviors and provides a good basis for understanding the genesis of the abnormal ECG under BS.
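The transmural heterogeneity described above reduces, at the parameter level, to scaling the maximum conductances of I_Ks and I_to per cell type. The sketch below shows that pattern only; the base values and multipliers are order-of-magnitude placeholders, not the conductances fitted in the paper.

```python
# Transmural heterogeneity as per-cell-type conductance scaling (illustrative values only).
BASE_PARAMS = {"g_Na": 14.838, "g_K1": 5.405, "g_Ks": 0.392, "g_to": 0.294}  # nS/pF, placeholders

CELL_TYPE_SCALING = {
    "epicardial":     {"g_Ks": 1.0,  "g_to": 1.0},
    "endocardial":    {"g_Ks": 1.0,  "g_to": 0.25},  # weaker I_to notch endocardially (assumed factor)
    "mid-myocardial": {"g_Ks": 0.25, "g_to": 1.0},   # reduced I_Ks prolongs the M-cell AP (assumed factor)
}

def cell_parameters(cell_type):
    """Return the parameter set for one transmural cell type."""
    params = dict(BASE_PARAMS)
    for name, factor in CELL_TYPE_SCALING[cell_type].items():
        params[name] = BASE_PARAMS[name] * factor
    return params

for cell_type in CELL_TYPE_SCALING:
    print(cell_type, cell_parameters(cell_type))
```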
Abstract:
The ability to grow microscopic spherical birefringent crystals of vaterite, a calcium carbonate mineral, has allowed the development of an optical microrheometer based on optical tweezers. However, since these crystals are birefringent and, worse, are expected to have non-uniform birefringence, computational modeling of the microrheometer is a highly challenging task. Modeling the microrheometer, and optical tweezers in general, typically requires large numbers of repeated calculations for the same trapped particle. This places strong demands on the efficiency of the computational methods used. While our usual method of choice for computational modelling of optical tweezers, the T-matrix method, meets this requirement of efficiency, it is restricted to homogeneous isotropic particles. General methods that can model complex structures such as the vaterite particles, for example finite-difference time-domain (FDTD) or finite-difference frequency-domain (FDFD) methods, are inefficient. Therefore, we have developed a hybrid FDFD/T-matrix method that combines the generality of volume-discretisation methods such as FDFD with the efficiency of the T-matrix method. We have used this hybrid method to calculate optical forces and torques on model vaterite spheres in optical traps. We present and compare the results of computational modelling and experimental measurements.
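The efficiency argument for the T-matrix approach is that the expensive particle characterisation is done once, after which each new trap configuration is only a matrix-vector product. The sketch below illustrates that workflow with placeholder arrays; the mode count, the T-matrix itself and the incident-beam coefficients are illustrative stand-ins, and the force/torque evaluation from the coefficients is omitted.

```python
import numpy as np

rng = np.random.default_rng(1)
n_modes = 60                                      # number of spherical wavefunction modes kept (assumed)
T = 0.01 * rng.normal(size=(n_modes, n_modes))    # placeholder T-matrix; in practice from the FDFD/T-matrix hybrid

def scattered_coefficients(incident_a):
    """Scattered-field expansion coefficients p = T a for one beam/particle configuration."""
    return T @ incident_a

# Sweep the particle through many trap positions: T stays fixed, only the
# incident-beam expansion coefficients change, so each step is cheap.
for step in range(5):
    a = rng.normal(size=n_modes)                  # placeholder incident-beam coefficients at this position
    p = scattered_coefficients(a)
    # optical force and torque would be evaluated from (a, p) here; omitted in this sketch
    print(step, np.linalg.norm(p))
```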
Abstract:
The development of models in Earth Sciences, e.g. for earthquake prediction and for the simulation of mantle convection, is far from being finalized. Therefore there is a need for a modelling environment that allows scientists to implement and test new models in an easy but flexible way. After being verified, the models should be easy to apply within their scope, typically by setting input parameters through a GUI or web services. It should be possible to link certain parameters to external data sources, such as databases and other simulation codes. Moreover, as typically large-scale meshes have to be used to achieve appropriate resolutions, the computational efficiency of the underlying numerical methods is important. Conceptually this leads to a software system with three major layers: the application layer, the mathematical layer, and the numerical algorithm layer. The latter is implemented as a C/C++ library to solve a basic, computationally intensive linear problem, such as a linear partial differential equation. The mathematical layer allows the model developer to define his model and to implement high-level solution algorithms (e.g. Newton-Raphson scheme, Crank-Nicolson scheme) or to choose these algorithms from an algorithm library. The kernels of the model are generic, typically linear, solvers provided through the numerical algorithm layer. Finally, to provide an easy-to-use application environment, a web interface is (semi-automatically) built to edit the XML input file for the modelling code. In the talk, we will discuss the advantages and disadvantages of this concept in more detail. We will also present the modelling environment escript, which is a prototype implementation of such a software system in Python (see www.python.org). Key components of escript are the Data class and the PDE class. Objects of the Data class allow generating, holding, accessing, and manipulating data in such a way that the representation best suited to the particular context remains transparent to the user. They are also the key to establishing connections with external data sources. PDE class objects describe (linear) partial differential equations to be solved by a numerical library. The current implementation of escript has been linked to the finite element code Finley to solve general linear partial differential equations. We will give a few simple examples to illustrate the usage of escript. Moreover, we show the usage of escript together with Finley for the modelling of interacting fault systems and for the simulation of mantle convection.
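The usage pattern described above (a PDE-class object defined on a Finley domain and handed to the numerical layer) looks roughly like the sketch below. It assumes the publicly documented esys-escript interface (LinearPDE, kronecker, esys.finley.Rectangle); class and keyword names may differ between escript versions, so treat this as indicative rather than authoritative.

```python
# Minimal escript/Finley sketch under the assumptions stated above.
from esys.escript import kronecker
from esys.escript.linearPDEs import LinearPDE
from esys.finley import Rectangle

domain = Rectangle(n0=40, n1=40, l0=1.0, l1=1.0)   # Finley supplies the finite element mesh/domain

pde = LinearPDE(domain)
pde.setValue(A=kronecker(domain), D=1.0, Y=1.0)    # -div(A grad u) + D u = Y, natural boundary conditions
u = pde.getSolution()                              # a Data-class object holding the FEM solution
print(u)
```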
Abstract:
Rural electrification projects and programmes in many countries have suffered from design, planning, implementation and operational flaws as a result of ineffective project planning and a lack of systematic project risk analysis. This paper presents a hierarchical risk-management framework for effectively managing large-scale development projects. The proposed framework first identifies, with the involvement of stakeholders, the risk factors for a rural electrification programme at three different levels (national, state and site). Subsequently it develops a qualitative risk prioritising scheme through probability and severity mapping and provides mitigating measures for the most vulnerable risks. The study concludes that the hierarchical risk-management approach provides an effective framework for managing large-scale rural electrification programmes. © IAIA 2007.
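The probability-severity mapping amounts to ranking risks by a combined exposure score within each level of the hierarchy. The sketch below shows that step with hypothetical risk factors and ratings; it is not the paper's risk register.

```python
# Qualitative probability-severity prioritisation (hypothetical entries, 1-5 scales).
risks = [
    # (level, risk factor, probability, severity)
    ("national", "policy/regulatory change",    3, 5),
    ("state",    "delay in subsidy release",    4, 4),
    ("site",     "non-payment by consumers",    4, 3),
    ("site",     "equipment theft or vandalism", 2, 4),
]

prioritised = sorted(risks, key=lambda r: r[2] * r[3], reverse=True)
for level, factor, p, s in prioritised:
    print(f"{level:8s} {factor:30s} exposure = {p * s}")  # highest exposure risks get mitigation first
```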
Abstract:
Conventional project management techniques are not always sufficient to ensure time, cost and quality achievement of large-scale construction projects, due to complexity in the planning, design and implementation processes. The main reasons for project non-achievement are changes in scope and design, changes in government policies and regulations, unforeseen inflation, and underestimation or improper estimation. Projects exposed to such an uncertain environment can be managed effectively by applying risk management throughout the project's life cycle. However, the effectiveness of risk management depends on the technique through which the effects of risk factors are analysed and quantified. This study proposes the Analytic Hierarchy Process (AHP), a multiple-attribute decision-making technique, as a tool for risk analysis, because it can handle conflicting subjective and objective factors within a single decision model. This provides a decision support system (DSS) that helps project management make the right decision at the right time, ensuring project success in line with organisation policy, project objectives and a competitive business environment. The whole methodology is explained through a case application of a cross-country petroleum pipeline project in India, and its effectiveness in project management is demonstrated.
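The core AHP step is deriving priority weights for risk factors from a pairwise comparison matrix and checking the judgements for consistency. The sketch below uses a hypothetical three-factor matrix, not data from the pipeline case.

```python
import numpy as np

# Hypothetical Saaty-scale pairwise comparisons for three risk factors.
A = np.array([
    [1.0, 3.0, 5.0],
    [1/3, 1.0, 2.0],
    [1/5, 1/2, 1.0],
])

eigvals, eigvecs = np.linalg.eig(A)
k = np.argmax(eigvals.real)
weights = np.abs(eigvecs[:, k].real)
weights /= weights.sum()                       # principal-eigenvector priority weights

n = A.shape[0]
ci = (eigvals.real[k] - n) / (n - 1)           # consistency index
cr = ci / 0.58                                 # Saaty random index RI = 0.58 for n = 3
print("weights:", weights.round(3), "consistency ratio:", round(cr, 3))  # CR < 0.1 is conventionally acceptable
```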
Abstract:
This study highlights the variables associated with the implementation of renewable energy (RE) projects for sustainable development in India, using an interpretive structural modeling (ISM) based approach to model the interactions among the variables that impact RE adoption. These variables have been categorized as enablers that help to enhance the implementation of RE projects for sustainable development. A major finding is that public awareness regarding RE for sustainable development is a very significant enabler. For successful implementation of RE projects, it has been observed that top management should focus on improving high driving-power enablers (leadership, strategic planning, public awareness, management commitment, availability of finance, government support, and support from interest groups).
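In an ISM study, driving power and dependence are read off the final reachability matrix (the MICMAC step). The sketch below shows that computation with a hypothetical matrix and enabler names; it does not reproduce the study's actual variables or relationships.

```python
import numpy as np

enablers = ["public awareness", "government support", "finance", "leadership"]  # hypothetical subset
R = np.array([            # R[i, j] = 1 if enabler i influences / leads to enabler j (hypothetical)
    [1, 1, 1, 1],
    [1, 1, 1, 0],
    [0, 0, 1, 0],
    [1, 0, 1, 1],
])

driving_power = R.sum(axis=1)   # row sums: how many variables an enabler reaches
dependence    = R.sum(axis=0)   # column sums: how many variables reach it

for name, drv, dep in zip(enablers, driving_power, dependence):
    print(f"{name:20s} driving power = {drv}, dependence = {dep}")
```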
Abstract:
The objective of this work was to design, construct and commission a new ablative pyrolysis reactor and a high-efficiency product collection system. The reactor was to have a nominal throughput of 10 kg/hr of dry biomass and be inherently scalable up to an industrial-scale application of 10 tonnes/hr. The whole process consists of a bladed ablative pyrolysis reactor, two high-efficiency cyclones for char removal, and a disk-and-doughnut quench column combined with a wet-walled electrostatic precipitator, mounted directly on top, for liquids collection. In order to aid design and scale-up calculations, detailed mathematical modelling of the reaction system was undertaken, enabling sizes, efficiencies and operating conditions to be determined. Specifically, a modular approach was taken due to the iterative nature of some of the design methodologies, with the output from one module being the input to the next. Separate modules were developed for the determination of the biomass ablation rate, specification of the reactor capacity, cyclone design, quench column design and electrostatic precipitator design. These models enabled a rigorous design protocol to be developed, capable of specifying the required reactor and product collection system size for specified biomass throughputs, operating conditions and collection efficiencies. The reactor proved capable of generating an ablation rate of 0.63 mm/s for pine wood at a temperature of 525 °C with a relative velocity of 12.1 m/s between the heated surface and the reacting biomass particle. The reactor achieved a maximum throughput of 2.3 kg/hr, which was the maximum the biomass feeder could supply. The reactor is capable of being operated at a far higher throughput, but this would require a new feeder and drive motor to be purchased. Modelling showed that the reactor is capable of achieving a throughput of approximately 30 kg/hr. This is an area that should be considered in the future, as the reactor is currently operating well below its theoretical maximum. Calculations show that the current product collection system could operate efficiently up to a maximum feed rate of 10 kg/hr, provided the inert gas supply was adjusted accordingly to keep the vapour residence time in the electrostatic precipitator above one second. Operation above 10 kg/hr would require some modifications to the product collection system. Eight experimental runs were documented and considered successful; more were attempted but had to be abandoned due to equipment failure. This does not detract from the fact that the reactor and product collection system design was extremely efficient. The maximum total liquid yield was 64.9% on a dry-wood-fed basis. It is considered that the liquid yield would have been higher had there been sufficient development time to overcome certain operational difficulties, and if longer operating runs had been attempted to offset product losses caused by the difficulty of collecting all available product from a large-scale collection unit. The liquids collection system was highly efficient, and modelling determined a liquid collection efficiency above 99% on a mass basis. This was validated experimentally: a dry ice/acetone condenser and a cotton wool filter downstream of the collection unit allowed the mass of condensable product exiting the product collection unit to be measured, confirming a collection efficiency in excess of 99% on a mass basis.
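As a rough illustration of the ablation-rate/reactor-capacity module described above, one can relate throughput to the ablation rate, the heated contact area and the density of the feedstock. The relation, the contact area and the wood density below are assumptions for illustration; only the ablation rate and temperature come from the abstract.

```python
# Assumed capacity relation: throughput = ablation rate x heated contact area x dry wood density.
ablation_rate = 0.63e-3      # m/s, measured for pine at 525 C (from the abstract)
contact_area = 0.01          # m^2 of biomass in contact with the heated surface (assumed)
wood_density = 500.0         # kg/m^3, typical dry pine (assumed)

throughput_kg_per_hr = ablation_rate * contact_area * wood_density * 3600.0
print(f"Estimated capacity: {throughput_kg_per_hr:.1f} kg/hr")  # about 11 kg/hr with these assumptions
```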
Abstract:
An increasing number of organisational researchers have turned to social capital theory in an attempt to better understand the impetus for knowledge sharing at the individual and organisational levels. This thesis extends that research by investigating the impact of social capital on knowledge sharing at the group level in the organisational project context. The objective of the thesis is to investigate the importance of social capital in fostering tacit knowledge sharing among the team members of a project. The analytical focus is on the Nahapiet and Ghoshal framework of social capital, but it also includes elements of other scholars' work. In brief, social capital is defined as an asset that is embedded in the network of relationships possessed by an individual or social unit. It is argued that the main dimensions of social capital relevant to knowledge sharing are the structural, cognitive, and relational dimensions, because these, among other things, foster the exchange and combination of knowledge and resources among the team members. Empirically, the study is based on the grounded theory method. Data were collected from five projects in large, medium, and small ICT companies in Malaysia. Underpinned by the constant comparative method, data were derived from 55 interviews and observations. The data were analysed using open, axial, and selective coding. The analysis also involved counting the frequency of occurrence of the codes generated by grounded theory, to identify the important items and categories under the social capital dimensions and knowledge sharing, and to further explain sub-groups within the data. The analysis shows that the most important dimension for tacit knowledge sharing is structural capital. Most importantly, the findings also suggest that structural capital is a prerequisite of cognitive capital and relational capital at the group level in an organisational project. It was also found that in a project context, relational capital is hard to realise because it requires time and frequent interactions among the team members. The findings from the quantitative analysis show that frequent meetings and interactions, relationships, positions, shared visions, shared objectives, and collaboration are among the factors that foster the sharing of tacit knowledge among team members. In conclusion, the present study adds to the existing literature on social capital in two main ways. Firstly, it distinguishes the dimensions of social capital and identifies structural capital as the most important dimension of social capital and a prerequisite of cognitive and relational capital in a project context. Secondly, it identifies the causal sequence in the dimensions of social capital, suggesting avenues for further theoretical and empirical work in this emerging area of inquiry.
Abstract:
Many planning and control tools, especially for network analysis, have been developed over the last four decades. The majority of them were created in military organizations to solve the problem of planning and controlling research and development projects. The original version of the network model (i.e. CPM/PERT) was transplanted to the construction industry without consideration of the special nature and environment of construction projects. It suited the purpose of setting targets and defining objectives, but it failed to satisfy the requirements of detailed planning and control at the site level. Several analytical and heuristic rule-based methods were designed and combined with the structure of CPM to eliminate its deficiencies. None of them provides a complete solution to the problem of resource, time and cost control. VERT was designed to deal with new ventures and is suitable for project evaluation at the development stage. CYCLONE, on the other hand, is concerned with the design and micro-analysis of the production process. This work presents an extensive critical review of the available planning techniques and addresses the problem of planning for site operation and control. Based on an outline of the nature of site control, this research developed a simulation-based network model which combines parts of the logic of both VERT and CYCLONE. Several new node types were designed to model the availability and flow of resources and the overhead and operating costs, together with special nodes for evaluating time and cost. A large software package was written to handle the input, the simulation process and the output of the model. This package is designed to be used on any microcomputer running the MS-DOS operating system. Data from real-life projects were used to demonstrate the capability of the technique. Finally, a set of conclusions is drawn regarding the features and limitations of the proposed model, and recommendations for future work are outlined at the end of this thesis.
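The simulation-based network idea can be illustrated by sampling stochastic activity durations and propagating them through the precedence logic to obtain a completion-time distribution. The tiny network, durations and percentile below are hypothetical; the thesis model additionally includes resource-flow, cost and evaluation nodes that this sketch omits.

```python
import random

activities = {             # name: (predecessors, (min, mode, max) duration in days) -- hypothetical network
    "A": ([],         (2, 3, 5)),
    "B": (["A"],      (4, 6, 10)),
    "C": (["A"],      (3, 4, 6)),
    "D": (["B", "C"], (1, 2, 4)),
}

def one_run():
    """Sample triangular durations and propagate finish times through the precedence network."""
    finish = {}
    for name, (preds, (lo, mode, hi)) in activities.items():   # listed in topological order
        start = max((finish[p] for p in preds), default=0.0)
        finish[name] = start + random.triangular(lo, hi, mode)
    return finish["D"]

durations = sorted(one_run() for _ in range(5000))
print("mean completion:", round(sum(durations) / len(durations), 2))
print("P80 completion:", round(durations[int(0.8 * len(durations))], 2))
```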
Abstract:
WiMAX has been introduced as a competitive alternative among metropolitan broadband wireless access technologies. It is connection-oriented and can provide very high data rates, large service coverage, and flexible quality of service (QoS). Due to the large number of connections and the flexible QoS supported by WiMAX, uplink access in WiMAX networks is very challenging, since the medium access control (MAC) protocol must efficiently manage the bandwidth and related channel allocations. In this paper, we propose and investigate a cost-effective WiMAX bandwidth management scheme, named the WiMAX partial sharing scheme (WPSS), in order to provide good QoS while achieving better bandwidth utilization and network throughput. The proposed bandwidth management scheme is compared with a simple but inefficient scheme, named the WiMAX complete sharing scheme (WCPS). A maximum entropy (ME) based analytical model (MEAM) is proposed for the performance evaluation of the two bandwidth management schemes. The reason for using MEAM is that it can efficiently model a large-scale system in which the number of stations or connections is generally very high, whereas traditional simulation and analytical approaches (e.g., Markov models) cannot perform well due to their high computational complexity. We model the bandwidth management schemes as a queuing network model (QNM) consisting of interacting multiclass queues for different service classes. Closed-form expressions for the state and blocking probability distributions are derived for these schemes. Simulation results verify the MEAM numerical results and show that WPSS can significantly improve the network's performance compared to WCPS.
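As an illustrative baseline only (not the authors' maximum-entropy model), per-class blocking on a shared uplink under a complete-sharing policy can be computed with the classical Kaufman-Roberts recursion for multiclass loss systems. The capacity, per-class bandwidth demands and offered loads below are assumptions.

```python
# Kaufman-Roberts recursion for a complete-sharing multiclass loss link (illustrative parameters).
C = 100                          # uplink capacity in bandwidth units (assumed)
classes = [                      # (bandwidth units per connection, offered load in Erlangs) -- assumed mix
    (1, 40.0),                   # e.g. best-effort
    (4, 10.0),                   # e.g. rtPS video
    (10, 2.0),                   # e.g. UGS
]

q = [0.0] * (C + 1)              # unnormalised occupancy distribution
q[0] = 1.0
for n in range(1, C + 1):
    q[n] = sum(b * a * q[n - b] for b, a in classes if n - b >= 0) / n
norm = sum(q)

for b, a in classes:             # a class is blocked when fewer than b units remain free
    blocking = sum(q[n] for n in range(C - b + 1, C + 1)) / norm
    print(f"class needing {b:2d} units: blocking = {blocking:.4f}")
```

A partial-sharing policy such as WPSS would additionally cap or reserve bandwidth per class, which this baseline does not capture.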
Abstract:
Practitioners assess the performance of entities in increasingly large and complicated datasets. Even if non-parametric models, such as Data Envelopment Analysis, were ever considered simple push-button technologies, this is impossible when many variables are available or when data have to be compiled from several sources. This paper introduces the 'COOPER-framework', a comprehensive model for carrying out non-parametric projects. The framework consists of six interrelated phases: Concepts and objectives, On structuring data, Operational models, Performance comparison model, Evaluation, and Result and deployment. Each of the phases describes the necessary steps a researcher should examine for a well-defined and repeatable analysis. The COOPER-framework provides the novice analyst with guidance, structure and advice for a sound non-parametric analysis, while the more experienced analyst benefits from a checklist that ensures important issues are not forgotten. In addition, by the use of a standardized framework, non-parametric assessments will be more reliable, more repeatable, more manageable, faster and less costly. © 2010 Elsevier B.V. All rights reserved.
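The 'Operational models' phase typically reduces to solving one linear programme per decision-making unit. The sketch below computes input-oriented CCR efficiency scores in the envelopment form for a hypothetical toy dataset; it is a generic DEA illustration, not part of the COOPER-framework itself.

```python
import numpy as np
from scipy.optimize import linprog

X = np.array([[2.0, 4.0, 3.0, 5.0],    # inputs  (rows = inputs, columns = DMUs) -- toy data
              [3.0, 1.0, 2.0, 4.0]])
Y = np.array([[1.0, 2.0, 2.0, 3.0]])   # outputs (rows = outputs, columns = DMUs)

def ccr_efficiency(k):
    """min theta  s.t.  X lam <= theta x_k,  Y lam >= y_k,  lam >= 0 (variables: [theta, lam])."""
    n_dmu = X.shape[1]
    c = np.concatenate([[1.0], np.zeros(n_dmu)])
    A_ub = np.vstack([
        np.hstack([-X[:, [k]], X]),                        # X lam - theta x_k <= 0
        np.hstack([np.zeros((Y.shape[0], 1)), -Y]),        # -Y lam <= -y_k
    ])
    b_ub = np.concatenate([np.zeros(X.shape[0]), -Y[:, k]])
    bounds = [(None, None)] + [(0, None)] * n_dmu
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=bounds, method="highs")
    return res.fun

for k in range(X.shape[1]):
    print(f"DMU {k}: efficiency = {ccr_efficiency(k):.3f}")
```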
Abstract:
Post-disaster housing reconstruction projects face several challenges. Resources and material supplies are often scarce; several different types of organizations are involved; and projects must be completed as quickly as possible to foster recovery. Within this context, the chapter aims to increase the understanding of relief supply chain design in reconstruction. In addition, the chapter introduces a community-based, beneficiary perspective to relief supply chains by evaluating the implications of local components for supply chain design in reconstruction. This is achieved through secondary data analysis based on the evaluation reports of two major housing reconstruction projects that took place in Europe during the last decade. A comparative analysis of the organizational designs of these projects highlights the ways in which users can be involved. The performance of reconstruction supply chains seems to depend to a large extent on the way beneficiaries are integrated in supply chain design, with their integration impacting positively on the effectiveness of reconstruction supply chains.
Abstract:
Despite considerable and growing interest in academic researchers and practising managers jointly generating knowledge (which we term ‘co-production’), our searches of the management literature revealed few articles based on primary data or multiple cases. Given the increasing commitment to co-production by academics, managers and those funding research, it seems important to strengthen the evidence base about practice and performance in co-production. Literature on collaborative research was reviewed to develop a framework to structure the analysis of this data and to relate findings to the limited body of prior research on collaborative research practice and performance. This paper presents empirical data from four completed, large-scale co-production projects. Despite major differences between the cases, we find that the key success factors and the indicators of performance are remarkably similar. We demonstrate many complex influences between factors, between outcomes, and between factors and outcomes, and discuss the features that are distinctive to co-production. Our empirical findings are broadly consonant with prior literature, but go further in trying to understand the consequences of success factors for performance. A second contribution of this paper is the development of a conceptually and methodologically rigorous process for investigating collaborative research, linking process and performance. The paper closes with a discussion of the study's limitations and opportunities for further research.