834 results for test-process features
Abstract:
Agents offer a new and exciting way of understanding the world of work. In this paper we describe the development of agent-based simulation models designed to help understand the relationship between people management practices and retail performance. We report on the current development of our simulation models, which includes new features concerning the evolution of customers over time. To test these features we have conducted a series of experiments dealing with customer pool sizes, standard and noise-reduction modes, and the spread of customers’ word of mouth. To validate and evaluate our model, we introduce a new performance measure specific to retail operations. We show that by varying different parameters in our model we can simulate a range of customer experiences leading to significant differences in performance measures. Ultimately, we are interested in better understanding the impact of changes in staff behavior due to changes in store management practices. Our multi-disciplinary research team draws upon expertise from work psychologists and computer scientists. Although we are working within a relatively novel and complex domain, it is clear that intelligent agents offer potential for fostering sustainable organizational capabilities in the future.
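The experiments above vary customer pool size, word-of-mouth spread and service quality. A minimal toy sketch of such an agent loop follows; it is not the authors' model, and every parameter and update rule here is hypothetical:

```python
import random

def simulate(pool_size, wom_spread, service_level, steps=100, seed=1):
    # Hypothetical toy model, not the paper's: each customer has a
    # propensity to visit; a good or bad visit nudges it up or down, and
    # word of mouth passes half that nudge to `wom_spread` random others.
    rng = random.Random(seed)
    propensity = [0.5] * pool_size
    visits = 0
    for _ in range(steps):
        for i in range(pool_size):
            if rng.random() < propensity[i]:
                visits += 1
                delta = 0.05 if rng.random() < service_level else -0.05
                propensity[i] = min(1.0, max(0.0, propensity[i] + delta))
                for j in rng.sample(range(pool_size), wom_spread):
                    propensity[j] = min(1.0, max(0.0, propensity[j] + delta / 2))
    return visits

# better service should retain more customers and generate more visits
print(simulate(30, 2, 0.9), simulate(30, 2, 0.1))
```

Varying `pool_size`, `wom_spread` and `service_level` in a loop like this is how such experiments sweep the parameter space.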
Abstract:
Process modeling is an emergent area of Information Systems research that is characterized by an abundance of conceptual work and little empirical research. To fill this gap, this paper reports on the development and validation of an instrument to measure user acceptance of process modeling grammars. We advance an extended model for a multi-stage measurement instrument development procedure, which incorporates feedback from both expert and user panels. We identify two main contributions: First, we provide a validated measurement instrument for the study of user acceptance of process modeling grammars, which can be used to assist further empirical studies that investigate phenomena associated with the business process modeling domain. Second, in doing so, we describe in detail a procedural model for developing measurement instruments that ensures high levels of reliability and validity, which may assist fellow scholars in executing their empirical research.
Abstract:
IEC 61850 Process Bus technology has the potential to improve cost, performance and reliability of substation design. Substantial costs associated with copper wiring (designing, documentation, construction, commissioning and troubleshooting) can be reduced with the application of digital Process Bus technology, especially those based upon international standards. An IEC 61850-9-2 based sampled value Process Bus is an enabling technology for the application of Non-Conventional Instrument Transformers (NCIT). Retaining the output of the NCIT in its native digital form, rather than conversion to an analogue output, allows for improved transient performance, dynamic range, safety, reliability and reduced cost. In this paper we report on a pilot installation using NCITs communicating across a switched Ethernet network using the UCAIug Implementation Guideline for IEC 61850-9-2 (9-2 Light Edition or 9-2LE). This system was commissioned in a 275 kV Line Reactor bay at Powerlink Queensland’s Braemar substation in 2009, with sampled value protection IEDs 'shadowing' the existing protection system. The results of commissioning tests and twelve months of service experience using a Fibre Optic Current Transformer (FOCT) from Smart Digital Optics (SDO) are presented, including the response of the system to fault conditions. A number of remaining issues to be resolved to enable wide-scale deployment of NCITs and IEC 61850-9-2 Process Bus technology are also discussed.
Abstract:
Paired speaking tests are now commonly used in both high-stakes testing and classroom assessment contexts. The co-construction of discourse by candidates is regarded as a strength of paired speaking tests, as candidates have the opportunity to display a wider range of interactional competencies, including turn taking, initiating topics and engaging in extended discourse with a partner, rather than an examiner. However, the impact of the interlocutor in such jointly negotiated discourse and the implications for assessing interactional competence are areas of concern. This article reports on the features of interactional competence that were salient to four trained raters of 12 paired speaking tests through the analysis of rater notes, stimulated verbal recalls and rater discussions. Findings enabled the identification of features of the performance noted by raters when awarding scores for interactional competence, and the particular features associated with higher and lower scores. A number of these features were seen by the raters as mutual achievements, which raises the issue of the extent to which it is possible to assess individual contributions to the co-constructed performance. The findings have implications for defining the construct of interactional competence in paired speaking tests and operationalising this in rating scales.
Abstract:
Age-related Macular Degeneration (AMD) is one of the major causes of vision loss and blindness in the ageing population. Currently there is no cure for AMD; however, early detection and subsequent treatment may prevent severe vision loss or slow the progression of the disease. AMD can be classified into two types: dry and wet. Most people with macular degeneration are affected by the dry form. Early symptoms of AMD are the formation of drusen and yellow pigmentation. These lesions are identified by manual inspection of fundus images by ophthalmologists, a time-consuming and tiresome process; an automated AMD screening tool can therefore significantly aid clinicians in their diagnosis. This study proposes an automated dry AMD detection system using various entropies (Shannon, Kapur, Renyi and Yager), Higher Order Spectra (HOS) bispectra features, Fractal Dimension (FD), and Gabor wavelet features extracted from greyscale fundus images. The features are ranked using t-test, Kullback–Leibler Divergence (KLD), Chernoff Bound and Bhattacharyya Distance (CBBD), Receiver Operating Characteristic (ROC) curve-based and Wilcoxon ranking methods in order to select optimum features, and classified into normal and AMD classes using Naive Bayes (NB), k-Nearest Neighbour (k-NN), Probabilistic Neural Network (PNN), Decision Tree (DT) and Support Vector Machine (SVM) classifiers. The performance of the proposed system is evaluated using private (Kasturba Medical Hospital, Manipal, India), Automated Retinal Image Analysis (ARIA) and STructured Analysis of the Retina (STARE) datasets. The proposed system yielded the highest average classification accuracies of 90.19%, 95.07% and 95% with 42, 54 and 38 optimally ranked features using the SVM classifier for the private, ARIA and STARE datasets respectively.
This automated AMD detection system can be used for mass fundus image screening and aid clinicians by making better use of their expertise on selected images that require further examination.
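Of the ranking methods listed, the t-test is the simplest to sketch. The following is an illustrative reimplementation, not the study's code; the synthetic data, the feature count and the injected class difference are all invented for the demonstration:

```python
import numpy as np

def t_statistic(x, y):
    # Welch's t-statistic between two groups for a single feature
    nx, ny = len(x), len(y)
    return (x.mean() - y.mean()) / np.sqrt(x.var(ddof=1) / nx + y.var(ddof=1) / ny)

def rank_features(X_normal, X_amd):
    # rank features by absolute t-statistic, most discriminative first
    scores = np.array([abs(t_statistic(X_normal[:, j], X_amd[:, j]))
                       for j in range(X_normal.shape[1])])
    return np.argsort(-scores)

rng = np.random.default_rng(0)
X_normal = rng.normal(0.0, 1.0, size=(40, 5))
X_amd = rng.normal(0.0, 1.0, size=(40, 5))
X_amd[:, 2] += 3.0   # make feature 2 strongly class-separating
order = rank_features(X_normal, X_amd)
print(order[0])  # feature 2 ranks first
```

Feeding only the top-k ranked features to the classifier is what "42, 54 and 38 optimally ranked features" refers to in the accuracy figures above.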
Abstract:
Selection of features that will permit accurate pattern classification is a difficult task. However, if a particular data set is represented by discrete-valued features, it becomes possible to determine empirically the contribution that each feature makes to the discrimination between classes. This paper extends the discrimination bound method so that both the maximum and average discrimination expected on unseen test data can be estimated. These estimation techniques are the basis of a backwards elimination algorithm that can be used to rank features in order of their discriminative power. Two problems are used to demonstrate this feature selection process: classification of the Mushroom Database, and a real-world, pregnancy-related medical risk prediction task - assessment of the risk of perinatal death.
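A minimal sketch of backwards elimination over discrete features follows. This is an illustrative stand-in, not the paper's discrimination-bound estimator: here the discrimination of a feature subset is simply the fraction of cross-class sample pairs that differ on at least one retained feature, and the feature whose removal hurts that least is dropped at each step.

```python
from itertools import combinations

def discrimination(data, labels, features):
    # fraction of cross-class sample pairs distinguishable by the retained features
    pairs = disc = 0
    for i, j in combinations(range(len(data)), 2):
        if labels[i] != labels[j]:
            pairs += 1
            if any(data[i][f] != data[j][f] for f in features):
                disc += 1
    return disc / pairs if pairs else 1.0

def rank_by_elimination(data, labels):
    # repeatedly drop the feature whose removal preserves the most
    # discrimination; reversing the removal order ranks the features
    remaining = list(range(len(data[0])))
    removed = []
    while len(remaining) > 1:
        f = max(remaining, key=lambda g: discrimination(
            data, labels, [h for h in remaining if h != g]))
        remaining.remove(f)
        removed.append(f)
    return remaining + removed[::-1]  # most discriminative first

data = [(0, 0, 0), (0, 0, 1), (0, 1, 0),   # class A
        (1, 1, 1), (1, 1, 0), (1, 0, 1)]   # class B
labels = ["A", "A", "A", "B", "B", "B"]
ranking = rank_by_elimination(data, labels)
print(ranking[0])  # feature 0 alone separates the classes, so it ranks first
```

On real data the paper's estimated maximum/average discrimination on unseen data would replace the simple pairwise score used here.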
Abstract:
Business processes are prone to continuous and unexpected changes. Process workers may start executing a process differently, for example to adjust to changes in workload, season, guidelines or regulations. Early detection of business process changes based on their event logs – also known as business process drift detection – enables analysts to identify and act upon changes that may otherwise affect process performance. Previous methods for business process drift detection are based on an exploration of a potentially large feature space, and in some cases they require users to manually identify the specific features that characterize the drift. Depending on the explored feature set, these methods may miss certain types of changes. This paper proposes a fully automated and statistically grounded method for detecting process drift. The core idea is to perform statistical tests over the distributions of runs observed in two consecutive time windows. By adaptively sizing the windows, the method trades off classification accuracy against drift detection delay. A validation on synthetic and real-life logs shows that the method accurately detects typical change patterns and scales well enough to be applicable for online drift detection.
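The core idea, a statistical test over run distributions in two consecutive windows, can be sketched as follows. This is an illustrative simplification: a chi-square statistic over trace-variant frequencies with fixed-size windows, whereas the paper's method sizes its windows adaptively.

```python
from collections import Counter

def chi_square_stat(window_a, window_b):
    # chi-square statistic comparing the trace-variant frequency
    # distributions of two consecutive windows of the event log
    counts_a, counts_b = Counter(window_a), Counter(window_b)
    na, nb = len(window_a), len(window_b)
    stat = 0.0
    for v in set(counts_a) | set(counts_b):
        a, b = counts_a[v], counts_b[v]
        exp_a = (a + b) * na / (na + nb)   # expected count under "no drift"
        exp_b = (a + b) * nb / (na + nb)
        stat += (a - exp_a) ** 2 / exp_a + (b - exp_b) ** 2 / exp_b
    return stat

# toy log: each trace is reduced to its variant (sequence of activities)
before = ["ABC"] * 40 + ["ABD"] * 10   # window before the drift
after = ["ABC"] * 10 + ["ABD"] * 40    # window after the drift
print(chi_square_stat(before, after) > 3.84)  # exceeds the 5% critical value (1 df): drift
```

Sliding both windows along the log and flagging every point where the statistic crosses the critical value gives a basic online drift detector.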
Abstract:
One of the foremost design considerations in microelectronics miniaturization is the use of embedded passives, which provide a practical solution. In a typical circuit, over 80 percent of the electronic components are passives such as resistors, inductors, and capacitors, which can take up almost 50 percent of the entire printed circuit board area. By integrating passive components within the substrate instead of on the surface, embedded passives reduce the system real estate, eliminate the need for discrete components and their assembly, enhance electrical performance and reliability, and potentially reduce the overall cost. Moreover, the technology is lead-free. Even with these advantages, embedded passive technology is at a relatively immature stage, and more characterization and optimization are needed for practical applications leading to its commercialization. This paper presents an entire process from design and fabrication to electrical characterization and reliability testing of embedded passives on multilayered microvia organic substrate. Two test vehicles focusing on resistors and capacitors have been designed and fabricated. Embedded capacitors in this study are made with a polymer/ceramic nanocomposite (BaTiO3) material to take advantage of the low processing temperature of polymers and the relatively high dielectric constant of ceramics; the values of these capacitors range from 50 pF to 1.5 nF, with capacitance per area of approximately 1.5 nF/cm². Limited high-frequency measurement of these capacitors was performed. Furthermore, reliability assessments of thermal shock and temperature-humidity tests based on JEDEC standards were carried out. Resistors used in this work are of three types: 1) carbon-ink-based polymer thick film (PTF), 2) resistor foils with known sheet resistivities which are laminated to the printed wiring board (PWB) during a sequential build-up (SBU) process, and 3) thin-film resistors plated by an electroless method.
Realization of embedded resistors on conventional board-level high-loss epoxy (loss tangent ~0.015 at 1 GHz) and a proposed low-loss BCB dielectric (~0.0008 at > 40 GHz) has been explored in this study. Ni-P and Ni-W-P alloys were plated using conventional electroless plating, and NiCr and NiCrAlSi foils were used for the foil transfer process. For the first time, Benzocyclobutene (BCB) has been proposed as a board-level dielectric for an advanced System-on-Package (SOP) module, primarily due to its attractive low-loss (for RF applications) and thin-film (for high-density wiring) properties. Although embedded passives are more reliable by eliminating solder joint interconnects, they also introduce other concerns such as cracks, delamination and component instability. More layers may be needed to accommodate the embedded passives, and the various materials within the substrate may cause significant thermo-mechanical stress due to coefficient of thermal expansion (CTE) mismatch. In this work, numerical models of embedded capacitors have been developed to qualitatively examine the effects of process conditions and the electrical performance changes due to thermo-mechanical deformations. Also, a prototype working product with a board-level design including embedded resistor and capacitor features is underway. Preliminary results of these are presented.
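The reported capacitance density of roughly 1.5 nF/cm² can be sanity-checked with the parallel-plate formula C/A = ε0·εr/d. The relative permittivity and film thickness below are assumed for illustration only; the abstract does not give them:

```python
EPS0 = 8.854e-12  # vacuum permittivity, F/m

def capacitance_per_area(eps_r, thickness_m):
    # parallel-plate capacitance per unit area: C/A = eps0 * eps_r / d, in F/m^2
    return EPS0 * eps_r / thickness_m

# hypothetical values: eps_r = 20 for a polymer/ceramic nanocomposite,
# dielectric thickness d = 12 um
c_per_m2 = capacitance_per_area(20, 12e-6)
c_per_cm2_nF = c_per_m2 * 1e9 / 1e4   # F/m^2 -> nF/cm^2
print(round(c_per_cm2_nF, 2))  # close to the reported ~1.5 nF/cm^2
```

The point of the check is only that a modest composite permittivity and a film thickness in the tens of microns are consistent with the reported density.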
Abstract:
Laminated composite structures are susceptible to damage under impact, with attendant property degradation. While studies on damage tolerance behaviour are emphasised and the findings reported, citations correlating impacts with fracture features are limited. In the present study, therefore, attempts have been made to depict how the transition of the fracture features takes place depending on the type and extent of defect introduced into the carbon-epoxy system. The test specimens were subjected to differing levels of low-energy pendulum impacts so as to obtain specimens with varying initial impact histories. Into such specimens, additional defects in the form of slits of varying depths were introduced by a mechanical process. The test coupons were then allowed to fail by impact. The fracture surfaces were studied under a scanning electron microscope. The fractographic features that appear, based on the induced/inserted defects, are presented in this paper. It was noticed that the energy absorbed for final fracture could be associated with the defect introduced into the system. It was also observed that the size of the mechanically inserted defect had a significant influence on the features of the fracture surface.
Abstract:
The utilisation of computational fluid dynamics (CFD) in process safety has increased significantly in recent years. The modelling of accidental explosions via CFD has in many cases replaced the classical Multi-Energy and Baker–Strehlow methods. The benefits obtained with CFD modelling can be diminished if proper modelling of the initial phase of the explosion is neglected. In the early stages of an explosion, the flame propagates in a quasi-laminar regime. Proper modelling of this initial laminar phase is key to predicting the peak pressure and the time to peak pressure. The present work suggests a modelling approach for the initial laminar phase in explosion scenarios. Findings are compared with experimental data for two classical explosion test cases which resemble the common features of chemical process areas (confinement and congestion). A detailed analysis of the threshold for the transition from the laminar to the turbulent regime is also carried out. The modelling is implemented in a fully 3D compressible Navier-Stokes formulation. Combustion is treated using a laminar flamelet approach based on the Bray, Moss and Libby (BML) formulation. A novel modified porosity approach developed for the unstructured solver is also considered. Results agree satisfactorily with experiments and the modelling is found to be robust. © 2013 The Institution of Chemical Engineers.
Abstract:
The wave energy industry is progressing towards an advanced stage of development, with consideration being given to the selection of suitable sites for the first commercial installations. An informed and accurate characterisation of the wave energy resource is an essential aspect of this process. Ireland is exposed to an energetic wave climate; however, many features of this resource are not well understood. This thesis assesses and characterises the wave energy resource that has been measured and modelled at the Atlantic Marine Energy Test Site, a facility for conducting sea trials of floating wave energy converters that is being developed near Belmullet, on the west coast of Ireland. This characterisation is undertaken through the analysis of metocean datasets that have previously been unavailable for exposed Irish sites. A number of commonly made assumptions in the calculation of wave power are contested, and the uncertainties resulting from their application are demonstrated. The relationship between commonly used wave period parameters is studied, and its importance in the calculation of wave power quantified, while it is also shown that a disconnect exists between the sea states which occur most frequently at the site and those that contribute most to the incident wave energy. Additionally, observations of the extreme wave conditions that have occurred at the site and estimates of the future storms that devices will need to withstand are presented. The implications of these results for the design and operation of wave energy converters are discussed. The foremost contribution of this thesis is an enhanced understanding of the fundamental nature of the wave energy resource at the Atlantic Marine Energy Test Site. The results presented here also have wider relevance, and can be considered typical of other, similarly exposed, locations on Ireland’s west coast.
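Wave power calculations of the kind discussed above commonly use the deep-water energy-flux formula P = ρ·g²·Hm0²·Te/(64π) per metre of wave crest, which is exactly where the choice of wave period parameter (Te directly, or an assumed ratio to Tz or Tp) matters. A sketch with an illustrative sea state whose values are not taken from the thesis:

```python
import math

RHO = 1025.0  # seawater density, kg/m^3
G = 9.81      # gravitational acceleration, m/s^2

def wave_power_kw_per_m(hm0, te):
    # deep-water wave energy flux per metre of crest:
    # P = rho * g^2 * Hm0^2 * Te / (64 * pi), converted to kW/m
    return RHO * G**2 * hm0**2 * te / (64 * math.pi) / 1000.0

# hypothetical sea state: significant wave height 3 m, energy period 10 s
print(round(wave_power_kw_per_m(3.0, 10.0), 1))  # roughly 44 kW per metre of crest
```

Because P scales linearly with Te, substituting an approximate period parameter directly propagates its error into the resource estimate, which is the thesis's point about contested assumptions.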
Abstract:
Master's thesis in Technological Chemistry, presented to the Universidade de Lisboa through the Faculdade de Ciências, 2016
Abstract:
Code clones are portions of source code that are similar to the original program code. The presence of code clones is considered a bad feature of software, as it makes software maintenance difficult. Methods for code clone detection have gained immense significance in the last few years, as they play a significant role in engineering applications such as analysis of program code, program understanding, plagiarism detection, error detection, code compaction and many similar tasks. Despite this, several features of code clones, if properly utilized, can make the software development process easier. In this work, we point out one such feature of code clones, which highlights the relevance of code clones in test sequence identification. Here, program slicing is used in code clone detection. In addition, a classification of code clones is presented, and the benefit of using program slicing in code clone detection is also discussed.
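The paper uses program slicing for clone detection. As a generic illustration of the underlying idea only (not the paper's slicing-based method), a token-based detector that catches Type-2 clones, i.e. copies that differ only in identifier names, can be sketched as:

```python
import difflib
import re

KEYWORDS = {"def", "return", "for", "in", "if", "else", "while"}

def normalize(code):
    # tokenize, then map every identifier to a placeholder so that
    # renamed copies (Type-2 clones) produce identical token streams
    tokens = re.findall(r"[A-Za-z_]\w*|[^\sA-Za-z_]", code)
    return ["ID" if re.match(r"[A-Za-z_]", t) and t not in KEYWORDS else t
            for t in tokens]

def clone_similarity(a, b):
    # similarity ratio between normalized token streams (1.0 = clone)
    return difflib.SequenceMatcher(None, normalize(a), normalize(b)).ratio()

original = "def add(a, b): return a + b"
renamed = "def total(x, y): return x + y"
unrelated = "def loop(n):\n    for i in n: print(i)"
print(clone_similarity(original, renamed))    # 1.0: identical up to renaming
print(clone_similarity(original, unrelated))  # well below 1.0
```

Slicing-based approaches like the paper's go further, comparing dependence-preserving slices rather than raw token streams, so they can also find non-contiguous clones.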
Abstract:
This paper presents a writer identification scheme for Malayalam documents. As the success rate of such a scheme is highly dependent on the features extracted from the documents, the process of feature selection and extraction is highly relevant. The paper describes a set of novel features developed exclusively for the Malayalam language. The features were studied in detail, resulting in a comparative study of all the features. The features are fused to form the feature vector, or knowledge vector, which is then used in all phases of the writer identification scheme. The scheme has been tested on a test bed of 280 writers, of whom 50 contributed only one page, 215 at least two pages and 15 at least four pages. To perform a comparative evaluation of the scheme, the test was also conducted using the WD-LBP method. A recognition rate of around 95% was obtained for the proposed approach.
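Fusing feature groups into a single knowledge vector and matching it against known writers can be sketched as below. This is a generic illustration, not the paper's Malayalam-specific features or matcher; the toy feature groups and the nearest-neighbour rule are invented for the example:

```python
import numpy as np

def fuse(feature_sets):
    # concatenate per-document feature groups into one "knowledge vector",
    # z-normalising each group so no group dominates the distance metric
    normed = []
    for F in feature_sets:
        F = np.asarray(F, dtype=float)
        normed.append((F - F.mean(axis=0)) / (F.std(axis=0) + 1e-9))
    return np.hstack(normed)

def identify(train_vecs, train_writers, query_vec):
    # 1-nearest-neighbour writer identification over the fused vectors
    dists = np.linalg.norm(train_vecs - query_vec, axis=1)
    return train_writers[int(np.argmin(dists))]

# toy data: two invented feature groups for 4 training pages + 1 query page
g1 = [[0.1, 0.2], [0.0, 0.1], [9.9, 10.0], [10.1, 9.8], [0.2, 0.0]]
g2 = [[5.1], [4.9], [0.2], [0.0], [5.0]]
writers = ["A", "A", "B", "B"]
vecs = fuse([g1, g2])
print(identify(vecs[:4], writers, vecs[4]))  # query matches writer "A"
```

Per-group normalisation before concatenation is the usual safeguard when fused features live on very different scales.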