613 results for formal method
Abstract:
Background & Research Focus
Managing knowledge for innovation and organisational benefit has been extensively investigated in studies of large firms (Smith, Collins & Clark, 2005; Zucker et al., 2007), but research on small- and medium-sized enterprises (SMEs) remains limited. There are some investigations of knowledge management in SMEs, yet the question of where the potential challenges lie for managing knowledge more effectively within these firms remains largely unanswered. Effective knowledge management (KM) processes and systems lead to improved performance in pursuing distinct capabilities that contribute to firm-level innovation (Nassim 2009; Zucker et al. 2007; Verona and Ravasi 2003). Managing internal and external knowledge in a way that links it closely to the innovation process can assist the creation and implementation of new products and services. KM is particularly important in knowledge-intensive firms where the knowledge requirements are highly specialised, diverse and often emergent. However, the KM processes of small firms, which are often the source of new knowledge and an important element of the value networks of larger companies, have not been closely studied. To address this gap, which is of increasing importance with the growing number of small firms, we need to further investigate knowledge management processes and the ways that firms find, capture, apply and integrate knowledge from multiple sources for their innovation process. This study builds on the previous literature, applies existing frameworks, and takes the process and activity view of knowledge management as its point of departure (see among others Kraaijenbrink, Wijnhoven & Groen, 2007; Enberg, Lindkvist, & Tell, 2006; Lu, Wang & Mao, 2007). The paper attempts to develop a better understanding of the challenges of knowledge management within the innovation process in small knowledge-oriented firms, and aims to explore knowledge management processes and practices in firms engaged in new product/service development programs. Consistent with the exploratory character of the study, the research question is: How is knowledge integrated, sourced and recombined from internal and external sources for innovation and new product development?
Research Method
The research took an exploratory case study approach and developed a theoretical framework to investigate the knowledge situation of knowledge-intensive firms. Equipped with this conceptual foundation, the research adopted a multiple case study method, investigating four diverse Australian knowledge-intensive firms from the IT, biotechnology, nanotechnology and biochemistry industries. The multiple case study method allowed us to document in some depth the knowledge management experience of these firms. Case study data were collected through a review of company published data and semi-structured interviews with managers, using an interview guide to ensure uniform coverage of the research themes. The interview guide was developed after the framework, drawing on a review of the methodologies and issues covered by similar studies in other countries, and used some questions common to these studies. It was framed to gather data on knowledge management activity within the business, focusing on the identification, acquisition and utilisation of knowledge, while also collecting a range of information about each subject. The focus of the case studies was on the use of external and internal knowledge to support knowledge-intensive products and services.
Key Findings
First, a conceptual and strategic knowledge management framework was developed. The knowledge determinants relate to the nature of knowledge, the organisational context, and the mechanism of the linkages between internal and external knowledge. Overall, a number of key observations were derived from this study, demonstrating the challenges of managing knowledge and the importance of KM as a management tool for the innovation process in knowledge-oriented firms. To summarise, the findings suggest that the knowledge management process in these firms is very much project-focused, is not embedded within the overall organisational routines, and is based mainly on ad hoc and informal processes. Our findings highlighted the lack of a formal knowledge management process within the sampled firms, which points to the need for more specialised KM capabilities in these firms. We observed the need for an effective knowledge transfer support system to facilitate knowledge sharing and, in particular, the capture and transfer of tacit knowledge from one team member to another. In sum, our findings indicate that building effective and adaptive IT systems to manage and share knowledge is one of the biggest challenges for these small firms. There is also little explicit strategy in small knowledge-intensive firms targeted at systematic KM at either the strategic or the operational level. Therefore, a strategic approach to managing knowledge for innovation, together with leadership and management, is essential to achieving effective KM. In particular, the findings demonstrate that gathering tacit knowledge, internal and external to the organisation, and applying processes to ensure the availability of knowledge for innovation teams drives down the risks and cost of innovation. KM activities and tools, such as KM systems, environmental scanning, benchmarking, intranets, firm-wide databases and communities of practice used to acquire knowledge and to make it accessible, were elements of KM in these firms.
Practical Implications
The case study method used in this study provides practical insight into the knowledge management process within Australian knowledge-intensive firms. It also provides useful lessons that other firms can apply in managing knowledge more effectively in the innovation process. The findings should be helpful for small firms searching for a practical method for managing and integrating their specialised knowledge. Drawing on the results of this exploratory study, and to address the challenges of knowledge management, the paper proposes five practices for managing knowledge more efficiently to improve innovation: (1) knowledge-based firms must be strategic in their knowledge management processes for innovation; (2) leadership and management should encourage a variety of knowledge management practices; (3) capturing and sharing tacit knowledge is critical and should be managed; (4) team knowledge integration practices should be developed; and (5) knowledge management and integration through communication networks and technology systems should be encouraged and strengthened. In sum, the main managerial contribution of the paper is the recognition of knowledge determinants and processes, and their effects on effective knowledge management within the firm. This may serve as a useful benchmark in the strategic planning of the firm as it utilises new and specialised knowledge.
Abstract:
BACKGROUND: The use of salivary diagnostics is increasing because of its noninvasiveness, ease of sampling, and the relatively low risk of contracting infectious organisms. Saliva has been used as a biological fluid to identify and validate RNA targets in head and neck cancer patients. The goal of this study was to develop a robust, easy, and cost-effective method for isolating high yields of total RNA from saliva for downstream expression studies. METHODS: Oral whole saliva (200 µL) was collected from healthy controls (n = 6) and from patients with head and neck cancer (n = 8). The method developed in-house used QIAzol lysis reagent (Qiagen) to extract RNA from saliva (both cell-free supernatants and cell pellets), followed by isopropyl alcohol precipitation, cDNA synthesis, and real-time PCR analyses for the genes encoding beta-actin ("housekeeping" gene) and histatin (a salivary gland-specific gene). RESULTS: The in-house QIAzol lysis reagent produced a high yield of total RNA (0.89–7.1 µg) from saliva (cell-free saliva and cell pellet) after DNase treatment. The ratio of the absorbance measured at 260 nm to that at 280 nm ranged from 1.6 to 1.9. The commercial kit produced a 10-fold lower RNA yield. Using our method with the QIAzol lysis reagent, we were also able to isolate RNA from archived saliva samples that had been stored without RNase inhibitors at −80 °C for >2 years. CONCLUSIONS: Our in-house QIAzol method is robust, is simple, provides RNA at high yields, and can be implemented to allow saliva transcriptomic studies to be translated into a clinical setting.
Abstract:
The measurements of plasma natriuretic peptides (NT-proBNP, proBNP and BNP) are used to diagnose heart failure, but these peptides are expensive to produce. We describe a rapid, cheap and facile production of proteins for immunoassays of heart failure. DNA encoding N-terminally His-tagged NT-proBNP and proBNP was cloned into the pJexpress404 vector. ProBNP and NT-proBNP peptides were expressed in Escherichia coli, purified and refolded in vitro. The analytical performance of these peptides was comparable with that of commercial analytes (the NT-proBNP EC50 is 2.6 ng/ml for the recombinant peptide and 5.3 ng/ml for the commercial material; the EC50 values for recombinant and commercial proBNP are 3.6 and 5.7 ng/ml, respectively). The total yield of purified refolded NT-proBNP peptide was 1.75 mg/l and that of proBNP was 0.088 mg/l. This approach may also be useful in expressing other protein analytes for immunoassay applications. The aim was to develop a cost-effective protein expression method in E. coli to obtain high yields of NT-proBNP (1.75 mg/l) and proBNP (0.088 mg/l) peptides for immunoassay use.
Abstract:
The textual turn is a good friend of expert spectating, where it assumes the role of writing-productive apparatus, but no friend at all of expert practices or practitioners (Melrose, 2003).
Introduction
The challenge of time-based embodied performance when the artefact is unstable
As a former full-time professional practitioner with an embodied dance practice as performer, choreographer and artistic director for three decades, I somewhat unexpectedly entered the world of academia in 2000 after completing a practice-based PhD, which was described by its examiners as ‘pioneering’. Like many artists my intention was to deepen and extend my practice through formal research into my work and its context (which was intercultural) and to privilege the artist’s voice in a research world where it was too often silent. Practice as research, practice-based research, and practice-led research were not yet fully named. The field was in its infancy, and my biggest challenge was to find a serviceable methodology which did not betray my intention to keep practice at the centre of the research. Over the last 15 years, practice-led doctoral research, where examinable creative work is placed alongside an accompanying (exegetical) written component, has come a long way. It has been extensively debated, with a range of theories and models proposed (Barrett & Bolt, 2007; Pakes, 2003, 2004; Piccini, 2005; Philips, Stock & Vincs, 2009; Stock, 2009, 2010; Riley & Hunter, 2009; Haseman, 2006; Hecq, 2012). Much of this writing is based around epistemological concerns, where the research methodologies proposed normally incorporate a contextualisation of the creative work in its field of practice and, more importantly, validation and interrogation of the processes of the practice as the central ‘data gathering’ method. It is now widely accepted, at least in the Australian creative arts context, that knowledge claims in creative practice research arise from the material activities of the practice itself (Carter, 2004). The creative work explicated as the tangible outcome of that practice is sometimes referred to as the ‘artefact’. Although the making of the artefact, according to Colbert (2009, p. 7), is influenced by “personal, experiential and iterative processes”, mapping them through a research pathway is “difficult to predict [for] the adjustments made to the artefact in the light of emerging knowledge and insights cannot be foreshadowed”. Linking the process and the practice outcome most often occurs through the textual intervention of an exegesis which builds, and/or builds on, theoretical concerns arising in and from the work. This linking produces what Barrett (2007) refers to as “situated knowledge… that operates in relation to established knowledge” (p. 145). But what if those material forms or ‘artefacts’ are not objects or code or digitised forms, but live within the bodies of artist/researchers where the nature of the practice itself is live, ephemeral and constantly transforming, as in dance and physical performance? Even more unsettling is when the ‘artefact’ is literally embedded and embodied in the work and in the maker/researcher; when subject and object are merged. To complicate matters, the performing arts are necessarily collaborative, relying not only on technical mastery and creative/interpretive processes, but on social and artistic relationships which collectively make up the ‘artefact’.
This chapter explores issues surrounding live dance and physical performance when placed in a research setting, specifically the complexities of being required to translate embodied dance findings into textual form. Exploring how embodied knowledge can be shared in a research context for those with no experiential knowledge of communicating through and in dance, I draw on theories of “dance enaction” (Warburton, 2011) together with notions of “affective intensities” and “performance mastery” (Melrose, 2003), “intentional activity” (Pakes, 2004) and the place of memory. In seeking ways to capture in another form the knowledge residing in live dance practice, thus making implicit knowledge explicit, I further propose there is a process of triple translation as the performance (the living ‘artefact’) is documented in multi-faceted ways to produce something durable which can be re-visited. This translation becomes more complex if the embodied knowledge resides in culturally specific practices, formed by world views and processes quite different from accepted norms and conventions (even radical ones) of international doctoral research inquiry. But whatever the combination of cultural, virtual and genre-related dance practices being researched, embodiment is central to the process, outcome and findings, and the question remains of how we will use text and what forms that text might take.
Abstract:
In a tag-based recommender system, the multi-dimensional
Abstract:
Demand response can be used to provide regulation services in the electricity markets. Retailers can bid in a day-ahead market and respond to a real-time regulation signal by load control. This paper proposes a new stochastic ranking method to provide regulation services via demand response. A pool of thermostatically controllable appliances (TCAs), such as air conditioners and water heaters, is adjusted using a direct load control method. The selection of appliances is based on a probabilistic ranking technique utilizing attributes such as the temperature variation and statuses of the TCAs. These attributes are stochastically forecasted for the next time step using day-ahead information. System performance is analyzed with a sample regulation signal. The network's capability to provide regulation services in different seasons is analyzed, and the effect of network size on the regulation services is also investigated.
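The direct load control scheme above ranks thermostatically controllable appliances by forecast attributes before switching them. The Python sketch below illustrates the general idea only; the class fields, the scoring rule and the per-appliance power figure are illustrative assumptions, not the paper's probabilistic ranking model.

```python
from dataclasses import dataclass

@dataclass
class TCA:
    """Thermostatically controllable appliance (e.g. air conditioner, water heater)."""
    name: str
    is_on: bool              # current switch status
    temp_deviation: float    # forecast deviation from set-point at the next step (deg C)
    temp_limit: float        # allowable deviation band (deg C)

def ranking_score(tca: TCA) -> float:
    """Illustrative priority score: appliances far from their comfort limit
    and currently running are better curtailment candidates."""
    headroom = max(tca.temp_limit - abs(tca.temp_deviation), 0.0)
    return headroom if tca.is_on else 0.0

def select_for_regulation(pool: list[TCA], required_kw: float, kw_per_tca: float) -> list[TCA]:
    """Pick the highest-ranked appliances until the regulation request is met."""
    chosen, shed = [], 0.0
    for tca in sorted(pool, key=ranking_score, reverse=True):
        if shed >= required_kw or ranking_score(tca) <= 0.0:
            break
        chosen.append(tca)
        shed += kw_per_tca
    return chosen
```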
Abstract:
The ambiguity acceptance test is an important quality control procedure in high precision GNSS data processing. Although the ambiguity acceptance test methods have been extensively investigated, its threshold determine method is still not well understood. Currently, the threshold is determined with the empirical approach or the fixed failure rate (FF-) approach. The empirical approach is simple but lacking in theoretical basis, while the FF-approach is theoretical rigorous but computationally demanding. Hence, the key of the threshold determination problem is how to efficiently determine the threshold in a reasonable way. In this study, a new threshold determination method named threshold function method is proposed to reduce the complexity of the FF-approach. The threshold function method simplifies the FF-approach by a modeling procedure and an approximation procedure. The modeling procedure uses a rational function model to describe the relationship between the FF-difference test threshold and the integer least-squares (ILS) success rate. The approximation procedure replaces the ILS success rate with the easy-to-calculate integer bootstrapping (IB) success rate. Corresponding modeling error and approximation error are analysed with simulation data to avoid nuisance biases and unrealistic stochastic model impact. The results indicate the proposed method can greatly simplify the FF-approach without introducing significant modeling error. The threshold function method makes the fixed failure rate threshold determination method feasible for real-time applications.
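The threshold function idea above maps an easy-to-compute success rate to a difference-test threshold. A minimal sketch of that pipeline follows: the integer bootstrapping success-rate formula is the standard one, but the rational-function form and its coefficients a, b, c are placeholders for fitted model parameters that the abstract does not give.

```python
import math

def ib_success_rate(cond_std: list[float]) -> float:
    """Integer bootstrapping success rate from the conditional standard deviations
    of the (decorrelated) ambiguities: P_IB = prod_i (2*Phi(1/(2*sigma_i)) - 1)."""
    phi = lambda x: 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))
    p = 1.0
    for s in cond_std:
        p *= 2.0 * phi(1.0 / (2.0 * s)) - 1.0
    return p

def difference_test_threshold(p_ib: float, a: float, b: float, c: float) -> float:
    """Placeholder rational-function threshold model mu(P) = (a + b*P) / (1 + c*P).
    The exact functional form and coefficients used in the paper are not
    reproduced here; a, b, c stand in for fitted parameters."""
    return (a + b * p_ib) / (1.0 + c * p_ib)
```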
Abstract:
The phenomenon which dialogism addresses is human interaction. It enables us to conceptualise human interaction as intersubjective, symbolic, cultural, transformative and conflictual, in short, as complex. The complexity of human interaction is evident in all domains of human life, for example, in therapy, education, health intervention, communication, and coordination at all levels. A dialogical approach starts by acknowledging that the social world is perspectival, that people and groups inhabit different social realities. This book stands apart from the proliferation of recent books on dialogism, because rather than applying dialogism to this or that domain, the present volume focuses on dialogicality itself to interrogate the concepts and methods which are taken for granted in the burgeoning literature.
Abstract:
Low voltage distribution networks feature a high degree of load unbalance, and the addition of rooftop photovoltaics is driving further unbalance in the network. Single-phase consumers are distributed across the phases, but even if the consumer distribution was well balanced when the network was constructed, changes will occur over time. Distribution transformer losses are increased by unbalanced loadings. The estimation of transformer losses is a necessary part of the routine upgrading and replacement of transformers, and the identification of the phase connections of households allows a precise estimation of the phase loadings and total transformer loss. This paper presents a new technique, and preliminary test results, for automatically identifying the phase of each customer by correlating voltage information from the utility's transformer system with voltage information from customer smart meters. The techniques are novel as they are based purely on time series of electrical voltage measurements taken at the household and at the distribution transformer. Experimental results using a combination of electrical power and current measurements from real smart meter datasets demonstrate the performance of our techniques.
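A minimal sketch of the correlation step described above, assuming synchronized voltage time series are available at the household meter and at each transformer phase; the paper's full technique also draws on power and current data, which this sketch omits.

```python
import numpy as np

def identify_phase(meter_v: np.ndarray, transformer_v: np.ndarray) -> int:
    """Assign a customer to the transformer phase whose voltage time series is
    most strongly correlated with the customer's smart-meter voltage series.

    meter_v:        shape (T,)   household voltage samples
    transformer_v:  shape (3, T) per-phase voltage samples at the transformer
    Returns the index (0, 1 or 2) of the best-matching phase.
    """
    correlations = [np.corrcoef(meter_v, transformer_v[p])[0, 1] for p in range(3)]
    return int(np.argmax(correlations))
```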
Abstract:
A fractional FitzHugh–Nagumo monodomain model with zero Dirichlet boundary conditions is presented, generalising the standard monodomain model that describes the propagation of the electrical potential in heterogeneous cardiac tissue. The model consists of a coupled fractional Riesz space nonlinear reaction-diffusion model and a system of ordinary differential equations describing the ionic fluxes as a function of the membrane potential. We solve this model by decoupling the space-fractional partial differential equation from the system of ordinary differential equations at each time step; this amounts to treating the fractional Riesz space nonlinear reaction-diffusion model as if its nonlinear source term were only locally Lipschitz continuous. The fractional Riesz space nonlinear reaction-diffusion model is solved using an implicit numerical method with the shifted Grünwald–Letnikov approximation, and the stability and convergence are discussed in detail in the context of the local Lipschitz property. Some numerical examples are given to show the consistency of our computational approach.
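For reference, the commonly used shifted Grünwald–Letnikov discretisation of the Riesz space-fractional derivative of order α (1 < α ≤ 2) on a uniform grid of spacing h takes the form below; the paper's exact scheme and boundary treatment may differ in detail.

$$
\frac{\partial^{\alpha} u}{\partial |x|^{\alpha}}\bigg|_{x_i}
\approx -\frac{1}{2\cos(\pi\alpha/2)\,h^{\alpha}}
\left(\sum_{k=0}^{i+1} g_k^{(\alpha)}\, u_{i-k+1}
 + \sum_{k=0}^{N-i+1} g_k^{(\alpha)}\, u_{i+k-1}\right),
\qquad g_k^{(\alpha)} = (-1)^k \binom{\alpha}{k}.
$$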
Abstract:
The application of robotics to protein crystallization trials has resulted in the production of millions of images. Manual inspection of these images to find crystals and other interesting outcomes is a major rate-limiting step. As a result there has been intense activity in developing automated algorithms to analyse these images. The very first step for most systems that have been described in the literature is to delineate each droplet. Here, a novel approach that reaches over 97% success rate and subsecond processing times is presented. This will form the seed of a new high-throughput system to scrutinize massive crystallization campaigns automatically.
Abstract:
Ambiguity validation, as an important procedure in integer ambiguity resolution, tests the correctness of the fixed integer ambiguities of phase measurements before they are used in the positioning computation. Most existing investigations of ambiguity validation focus on the test statistic. How to determine the threshold more reasonably is less well understood, although it is one of the most important topics in ambiguity validation. Currently, there are two threshold determination methods in the ambiguity validation procedure: the empirical approach and the fixed failure rate (FF-) approach. The empirical approach is simple but lacks a theoretical basis. The fixed failure rate approach has a rigorous probability theory basis, but it employs a more complicated procedure. This paper focuses on how to determine the threshold easily and reasonably. Both the FF-ratio test and the FF-difference test are investigated in this research, and the extensive simulation results show that the FF-difference test can achieve comparable or even better performance than the well-known FF-ratio test. Another benefit of adopting the FF-difference test is that its threshold can be expressed as a function of the integer least-squares (ILS) success rate with a specified failure rate tolerance. Thus, a new threshold determination method, named the threshold function, is proposed for the FF-difference test. The threshold function method preserves the fixed failure rate characteristic and is also easy to apply. The performance of the threshold function is validated with simulated data. The validation results show that with the threshold function method, the impact of the modelling error on the failure rate is less than 0.08%. Overall, the threshold function for the FF-difference test is a very promising threshold determination method, and it makes the FF-approach applicable for real-time GNSS positioning applications.
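As a concrete illustration of the acceptance rule discussed above, the sketch below applies a difference test to the best and second-best integer candidates; the variable names are assumptions, and the threshold is whatever the fixed failure rate or threshold-function procedure supplies.

```python
import numpy as np

def difference_test(a_float: np.ndarray, q_inv: np.ndarray,
                    best: np.ndarray, second: np.ndarray, threshold: float) -> bool:
    """Difference-test sketch: accept the best integer candidate only if the gap
    between the second-best and best weighted squared residuals exceeds the
    threshold delivered by the fixed-failure-rate (or threshold-function) method.

    a_float : float ambiguity estimates
    q_inv   : inverse of the ambiguity variance-covariance matrix
    """
    def sq_norm(cand: np.ndarray) -> float:
        r = a_float - cand
        return float(r @ q_inv @ r)          # ||a_float - cand||^2 in the Q metric
    return sq_norm(second) - sq_norm(best) >= threshold
```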
Abstract:
This paper describes our participation in the Chinese word segmentation task of CIPS-SIGHAN 2010. We implemented an n-gram mutual information (NGMI) based segmentation algorithm with mixed features from unsupervised, supervised and dictionary-based segmentation methods. The algorithm is also combined with a simple strategy for out-of-vocabulary (OOV) word recognition. The evaluation on both open and closed training shows encouraging results for our system. However, the results for OOV word recognition in the closed training evaluation were unsatisfactory.
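NGMI builds on mutual-information statistics over character n-grams. The sketch below shows only the simplest ingredient, the pointwise mutual information of adjacent character pairs, which a segmenter can threshold to propose word boundaries; the full NGMI formulation and the mixed supervised/dictionary features are not reproduced here.

```python
import math
from collections import Counter

def pointwise_mi(corpus: list[str]) -> dict[tuple[str, str], float]:
    """Pointwise mutual information of adjacent character pairs:
    PMI(x, y) = log( p(xy) / (p(x) * p(y)) ).
    A low PMI between two adjacent characters suggests a boundary between them."""
    unigrams, bigrams, total = Counter(), Counter(), 0
    for sent in corpus:
        unigrams.update(sent)
        bigrams.update(zip(sent, sent[1:]))
        total += len(sent)
    n_bi = sum(bigrams.values())
    pmi = {}
    for (x, y), count in bigrams.items():
        p_xy = count / n_bi
        p_x, p_y = unigrams[x] / total, unigrams[y] / total
        pmi[(x, y)] = math.log(p_xy / (p_x * p_y))
    return pmi
```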
Abstract:
Nowadays, the integration of small-scale electricity generators, known as Distributed Generation (DG), into distribution networks has become increasingly popular. This tendency, together with the falling price of DG units, gives DG a better chance of participating in the voltage regulation process, in parallel with other regulating devices already available in distribution systems. The voltage control issue turns out to be a very challenging problem for distribution engineers, since existing control coordination schemes need to be reconsidered to take the DG operation into account. In this paper, a control coordination approach is proposed that is able to utilize the DG as a voltage regulator and, at the same time, minimize the interaction of the DG with other DG units or other active devices, such as an On-load Tap Changing Transformer (OLTC). The proposed technique has been developed based on the protection principles of magnitude grading and time grading for coordinating the responses of DG and other regulating devices, and uses Advanced Line Drop Compensators (ALDCs) for implementation. A distribution feeder with a tap-changing transformer and DG units has been extracted from a practical system to test the proposed control technique. The results show that the proposed method provides an effective solution for coordinating DG with other DG units or voltage regulating devices, and that the integration of protection principles considerably reduces the control interaction required to achieve the desired voltage correction.
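To illustrate the grading concept borrowed from protection practice, the sketch below assigns each regulating device a deadband (magnitude grading) and a response delay (time grading) so that only the intended device reacts to a given sustained deviation; the class, field names and decision rule are illustrative assumptions rather than the paper's control algorithm.

```python
from dataclasses import dataclass

@dataclass
class Regulator:
    """A voltage-regulating device (DG inverter, OLTC, ...) with a graded response."""
    name: str
    delay_s: float        # time grading: how long a violation must persist before acting
    deadband_pu: float    # magnitude grading: deviation the device is allowed to ignore

def responders(devices: list[Regulator], deviation_pu: float, elapsed_s: float) -> list[str]:
    """Return the devices that would act on a voltage deviation sustained for elapsed_s.
    Grading the delays and deadbands keeps neighbouring devices from reacting to the
    same event simultaneously, which is the coordination idea described above."""
    return [d.name for d in devices
            if abs(deviation_pu) > d.deadband_pu and elapsed_s >= d.delay_s]
```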
Abstract:
With the increasing availability of high quality digital cameras that are easily operated by non-professional photographers, the use of digital images to assess endpoints in clinical research on skin lesions is gaining acceptance. However, rigorous protocols and descriptions of experience with digital image collection and assessment are not readily available, particularly for research conducted in remote settings. We describe the development and evaluation of a protocol for digital image collection by the non-professional photographer in a remote-setting research trial, together with a novel methodology for assessment of clinical outcomes by an expert panel blinded to treatment allocation.