123 results for analytical method validation

at Queensland University of Technology - ePrints Archive


Relevance:

100.00%

Publisher:

Abstract:

Three recent papers published in Chemical Engineering Journal studied the solution of a model of diffusion and nonlinear reaction using three different methods. Two of these studies obtained series solutions using specialized mathematical methods, known as the Adomian decomposition method and the homotopy analysis method. Subsequently it was shown that the solution of the same particular model could be written in terms of a transcendental function called Gauss’ hypergeometric function. These three previous approaches focused on one particular reactive transport model, which ignored advective transport and considered only one specific reaction term. Here we generalize these previous approaches and develop an exact analytical solution for a general class of steady-state reactive transport models that incorporate (i) combined advective and diffusive transport, and (ii) any sufficiently differentiable reaction term R(C). The new solution is a convergent Maclaurin series, which can be derived without any specialized mathematical methods and does not necessarily involve the computation of any transcendental function. Applying the Maclaurin series solution to certain case studies shows that the previously published solutions are particular cases of the more general solution outlined here. We also demonstrate the accuracy of the Maclaurin series solution by comparison with numerical solutions for particular cases.
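The Maclaurin coefficients for such a model can be generated by a simple recurrence once the governing equation is solved for the highest derivative. The sketch below assumes a model of the form D*C'' - v*C' = R(C) with the illustrative reaction term R(C) = C**2; the parameter values and initial data are hypothetical, chosen only to show the mechanics, not taken from the paper.

```python
# Maclaurin-series sketch for an assumed steady-state reactive transport
# model D*C'' - v*C' = R(C), with R(C) = C**2 as an illustrative reaction
# term (parameters and initial data are hypothetical).
D, v = 1.0, 0.5
a = [1.0, 0.0]          # a[0] = C(0), a[1] = C'(0)

N = 10
for n in range(N):
    # Cauchy product gives the series coefficients of C**2
    conv = sum(a[k] * a[n - k] for k in range(n + 1))
    # Matching x**n terms: D*(n+2)*(n+1)*a[n+2] - v*(n+1)*a[n+1] = conv
    a.append((v * (n + 1) * a[n + 1] + conv) / (D * (n + 2) * (n + 1)))

def C(x):
    """Truncated Maclaurin series solution."""
    return sum(c * x**i for i, c in enumerate(a))
```

For small x the truncated series satisfies the assumed equation to high accuracy, which mirrors how a series solution can be checked against numerical solutions.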

Relevance:

100.00%

Publisher:

Abstract:

The emergence of highly chloroquine (CQ) resistant P. vivax in Southeast Asia has created an urgent need for an improved understanding of the mechanisms of drug resistance in these parasites, the development of robust tools for defining the spread of resistance, and the discovery of new antimalarial agents. The ex vivo Schizont Maturation Test (SMT), originally developed for the study of P. falciparum, has been modified for P. vivax. We retrospectively analysed the results from 760 parasite isolates assessed by the modified SMT to investigate the relationship between parasite growth dynamics and parasite susceptibility to antimalarial drugs. Previous observations of the stage-specific activity of CQ against P. vivax were confirmed, and shown to have profound consequences for interpretation of the assay. Using a nonlinear model, we show that increased assay duration and a higher proportion of ring stages in the initial blood sample were associated with decreased effective concentration (EC50) values of CQ, and we identify a threshold beyond which these associations no longer hold. Thus, the starting composition of parasites in the SMT and the duration of the assay can have a profound effect on the calculated EC50 for CQ. Our findings indicate that EC50 values do not truly reflect the sensitivity of the parasite to CQ when the assay duration is less than 34 hours, or when the proportion of ring-stage parasites at the start of the assay does not exceed 66%. Application of this threshold modelling approach suggests that similar issues may occur for susceptibility testing of amodiaquine and mefloquine. The statistical methodology developed here also provides a novel means of detecting stage-specific drug activity for new antimalarials.
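EC50 estimation of the kind described is typically done by fitting a nonlinear concentration-response curve. The sketch below fits a Hill-type model with SciPy; the concentrations and responses are simulated illustrative values, not data from the study.

```python
import numpy as np
from scipy.optimize import curve_fit

# Hill-type concentration-response model; ec50 is the concentration at
# which the response falls to half its maximum.
def hill(conc, top, ec50, slope):
    return top / (1.0 + (conc / ec50) ** slope)

# Hypothetical CQ concentrations (nM) and simulated schizont-maturation
# responses (illustrative only).
conc = np.array([1, 5, 10, 25, 50, 100, 250, 500], dtype=float)
resp = hill(conc, 1.0, 60.0, 1.5)

popt, _ = curve_fit(hill, conc, resp, p0=[1.0, 50.0, 1.0])
top_fit, ec50_fit, slope_fit = popt
```

Because the simulated data are noiseless, the fit recovers the generating EC50; with real assay data, confidence intervals and stage-composition covariates would be needed, as the abstract emphasises.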

Relevance:

90.00%

Publisher:

Abstract:

A simple and sensitive spectrophotometric method for the simultaneous determination of the sweeteners acesulfame-K, sodium cyclamate and saccharin sodium in foodstuff samples has been developed. The method relies on the different kinetic rates of the analytes in their oxidative reaction with KMnO4, which produces the green manganate product in alkaline solution. Because the kinetic rates of acesulfame-K, sodium cyclamate and saccharin sodium were similar and their kinetic data seriously overlapped, chemometrics methods, such as partial least squares (PLS), principal component regression (PCR) and classical least squares (CLS), were applied to resolve the kinetic data. The results showed that the PLS prediction model performed somewhat better than the others. The proposed method was then applied to the determination of the three sweeteners in foodstuff samples, and the results compared well with those obtained by the reference HPLC method.

Relevance:

90.00%

Publisher:

Abstract:

With a view to assessing the vulnerability of columns to low-elevation vehicular impacts, a non-linear explicit numerical model has been developed and validated using existing experimental results. The numerical model accounts for the effects of strain rate and confinement of the reinforced concrete, which are fundamental to successful prediction of the impact response. The sensitivity of the material model parameters used for the validation is also scrutinised, and numerical tests are performed to examine their suitability for simulating shear failure conditions. Conflicting views on strain gradient effects are discussed, and the validation process is extended to investigate the ability of equations developed under concentric loading conditions to simulate flexural failure events. Numerical results have been verified against corresponding experimental data on impact force–time histories, mid-span and residual deflections, and support reactions. A universal technique is proposed that can be applied to determine the vulnerability of impacted columns in collisions with new-generation vehicles under the most common impact modes. Additionally, the observed failure characteristics of the impacted columns are explained using extended outcomes. Based on the overall results, an analytical method is suggested to quantify the vulnerability of the columns.

Relevance:

90.00%

Publisher:

Abstract:

Columns are among the key load-bearing elements that are highly susceptible to vehicle impacts. The resulting severe damage to columns may lead to failures of the supporting structure that are catastrophic in nature. However, the columns in existing structures are seldom designed for impact, owing to inadequacies in design guidelines. The impact behaviour of columns designed for gravity loads and for actions other than impact is therefore of interest. A comprehensive investigation is conducted on reinforced concrete columns, with a particular focus on the vulnerability of exposed columns and on implementing mitigation techniques under low- to medium-velocity car and truck impacts. The investigation is based on non-linear explicit computer simulations of impacted columns, followed by a comprehensive validation process. The impact is simulated using force pulses generated from full-scale vehicle impact tests. A material model capable of simulating triaxial loading conditions is used in the analyses. Circular columns adequate in capacity for five- to twenty-storey buildings, designed according to Australian standards, are considered in the investigation. The crucial parameters associated with routine column designs and the different load combinations applied at the serviceability stage on typical columns are considered in detail. Axially loaded columns are examined at the initial stage, and the investigation is extended to analyse the impact behaviour under single-axis bending and biaxial bending. The impact capacity reduction under varying axial loads is also investigated. The effects of the various load combinations are quantified, and the residual capacity of the impacted columns, based on the status of the damage, together with mitigation techniques, is also presented.
In addition, the contribution of each individual parameter to the failure load is scrutinised, and analytical equations are developed to identify the critical impulses in terms of the geometrical and material properties of the impacted column. In particular, an innovative technique was developed and introduced to improve the accuracy of the equations where other techniques fail owing to the shape of the error distribution. Above all, the equations can be used to quantify the critical impulse for three consecutive points (load combinations) located on the interaction diagram for one particular column. Consequently, linear interpolation can be used to quantify the critical impulse for loading points that lie in between on the interaction diagram. Given a known force and impulse pair for an average impact duration, this method can be extended to assess the vulnerability of columns for a general vehicle population, based on an analytical method that can quantify the critical peak forces under different impact durations. The contribution of this research is therefore not limited to producing simplified yet rational design guidelines and equations; it also provides a comprehensive solution for quantifying impact capacity while delivering new insight to the scientific community for dealing with impacts.
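The interpolation step described in the abstract can be illustrated in a few lines. The axial-load ratios and critical impulses below are hypothetical placeholders, not values from the thesis.

```python
import numpy as np

# Hypothetical critical impulses (kN*s) computed at three load combinations
# (expressed here as axial-load ratios) on a column's interaction diagram.
axial_ratio = np.array([0.2, 0.5, 0.8])
crit_impulse = np.array([12.0, 9.5, 6.0])

# Linear interpolation gives the critical impulse for an in-between point
i_est = np.interp(0.65, axial_ratio, crit_impulse)
```

With the critical impulse known at a few anchor points, vulnerability at any intermediate load combination follows by interpolation rather than by rerunning the explicit simulations.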

Relevance:

90.00%

Publisher:

Abstract:

This article explores power within legal education scholarship. It suggests that power relations are not effectively reflected on within this scholarship, and it provokes legal educators to consider power more explicitly and effectively. It then outlines in depth a conceptual and methodological approach, based on Michel Foucault’s concept of ‘governmentality’, to assist in such an analysis. By detailing the conceptual moves required to research power in legal education more effectively, this article seeks to stimulate new reflection and thought about the practice and scholarship of legal education, and to allow political interventions to become more ethically sensitive and potentially more effective.

Relevance:

90.00%

Publisher:

Abstract:

An analytical method for the detection of carbonaceous gases by a non-dispersive infrared (NDIR) sensor has been developed. Calibration plots for six carbonaceous gases, including CO2, CH4, CO, C2H2, C2H4 and C2H6, were obtained and the reproducibility determined to verify the feasibility of this gas monitoring method. The results show that the squared correlation coefficients for the six gas measurements are greater than 0.999. The reproducibility is excellent, indicating that this analytical method is useful for determining the concentrations of carbonaceous gases.
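A linear calibration of the kind summarised by those squared correlation coefficients can be sketched as follows. The standard concentrations and detector responses are hypothetical, invented only to show the fit and the R-squared check.

```python
import numpy as np

# Sketch of a linear calibration for one channel of an NDIR sensor.
# Hypothetical CO2 standards (ppm) and detector responses (arbitrary units).
conc = np.array([0, 200, 400, 600, 800, 1000], dtype=float)
resp = np.array([0.002, 0.101, 0.198, 0.305, 0.399, 0.501])

slope, intercept = np.polyfit(conc, resp, 1)
pred = slope * conc + intercept
r2 = 1 - np.sum((resp - pred) ** 2) / np.sum((resp - resp.mean()) ** 2)

# Concentration of an unknown sample from its measured response:
unknown = (0.250 - intercept) / slope
```

An R-squared above 0.999, as reported for all six gases, indicates the response is effectively linear over the calibrated range, so unknowns can be read back through the inverted calibration line.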

Relevance:

90.00%

Publisher:

Abstract:

Reactive oxygen species are generated during ischaemia-reperfusion of tissue. Oxidation of thymidine by hydroxyl radicals (HO•) leads to the formation of 5,6-dihydroxy-5,6-dihydrothymidine (thymidine glycol). Thymidine glycol is excreted in urine and can be used as a biomarker of oxidative DNA damage. Time-dependent changes in urinary excretion rates of thymidine glycol were determined in six patients after kidney transplantation and in six healthy controls. A new analytical method was developed involving affinity chromatography and subsequent reverse-phase high-performance liquid chromatography (RP-HPLC) with a post-column chemical reaction detector and endpoint fluorescence detection. The detection limit of this fluorimetric assay was 1.6 ng thymidine glycol per ml urine, which corresponds to about half the physiological excretion level in healthy controls. After kidney transplantation the urinary excretion rate of thymidine glycol increased gradually, reaching a maximum around 48 h, and remained elevated until the end of the 10-day observation period. Severe proteinuria, with an excretion rate of up to 7.2 g of total protein per mmol creatinine, was also observed immediately after transplantation and declined within the first 24 h of allograft function (0.35 ± 0.26 g/mmol creatinine). The protein excretion pattern, based on separation of urinary proteins by sodium dodecyl sulphate-polyacrylamide gel electrophoresis (SDS-PAGE), as well as the excretion of individual biomarker proteins, indicated nonselective glomerular and tubular damage. The increased excretion of thymidine glycol after kidney transplantation may be explained by ischaemia-reperfusion-induced oxidative DNA damage of the transplanted kidney.

Relevance:

90.00%

Publisher:

Abstract:

Introduction: QC, EQA and method evaluation are integral to the delivery of quality patient results. To ensure QUT graduates have a solid grounding in these key areas of practice, a theory-to-practice approach is used to progressively develop and consolidate these skills. Methods: Using a BCG assay for serum albumin, each student undertakes an eight-week project analysing two levels of QC alongside ‘patient’ samples. Results are assessed using both single rules and multirules. Concomitantly with the QC analyses, an EQA project is undertaken; students analyse two EQA samples, twice in the semester. Results are submitted using cloud software, and data for the full ‘peer group’ are returned to students in spreadsheets and incomplete Youden plots. Youden plots are completed with target values and calculated ALP values, and analysed for ‘lab’ and method performance. The method has a low-level positive bias, which leads to the need to investigate an alternative method. Building directly on the EQA of the first project, and using the scenario of a lab that services renal patients, students undertake a method validation comparing BCP and BCG assays in another eight-week project. Precision and patient-comparison studies allow students to assess whether the BCP method addresses the proportional bias of the BCG method and is overall a ‘better’ alternative for analysing serum albumin, accounting for pragmatic factors, such as cost, as well as performance characteristics. Results: Students develop an understanding of the purpose and importance of QC and EQA in delivering quality results, the need to optimise testing to deliver quality results and, importantly, a working knowledge of the analyses that go into ensuring this quality. In parallel with developing these key workplace competencies, students become confident, competent practitioners, able to pipette accurately and precisely and to organise themselves in a busy, time-pressured work environment.
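The single-rule and multirule QC assessment the students perform can be sketched as follows. Only two common Westgard rules are shown, and the z-scores are hypothetical.

```python
# Sketch of two common Westgard QC rules applied to control z-scores,
# where z = (observed - target mean) / SD. Rule names follow Westgard
# notation; the z-scores below are hypothetical.

def rule_1_3s(z_scores):
    """Reject if any single control result exceeds +/-3 SD."""
    return any(abs(z) > 3 for z in z_scores)

def rule_2_2s(z_scores):
    """Reject if two consecutive controls exceed +/-2 SD on the same side."""
    return any(z_scores[i] > 2 and z_scores[i + 1] > 2 or
               z_scores[i] < -2 and z_scores[i + 1] < -2
               for i in range(len(z_scores) - 1))

run = [0.4, -1.1, 2.3, 2.6, 0.2]   # hypothetical z-scores for one week
violations = {'1_3s': rule_1_3s(run), '2_2s': rule_2_2s(run)}
```

Here no single value breaches 3 SD, but two consecutive values above 2 SD trip the 2-2s rule, the kind of systematic-error signal a single-rule check would miss.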

Relevance:

90.00%

Publisher:

Abstract:

Quantification of pyridoxal-5´-phosphate (PLP) in biological samples is challenging due to the presence of endogenous PLP in the matrices used for preparation of calibrators and quality control samples (QCs). Hence, we have developed an LC-MS/MS method for accurate and precise measurement of PLP concentrations in samples (20 µL) of human whole blood that addresses this issue by using a surrogate matrix and minimizing the matrix effect. We used a surrogate matrix comprising 2% bovine serum albumin (BSA) in phosphate-buffered saline (PBS) for making calibrators and QCs, and the concentrations were adjusted to include the endogenous PLP concentration in the surrogate matrix according to the method of standard addition. PLP was separated from the other components of the sample matrix using protein precipitation with trichloroacetic acid 10% w/v. After centrifugation, supernatants were injected directly into the LC-MS/MS system. Calibration curves were linear and recovery was > 92%. QCs were accurate, precise, and stable for four freeze-thaw cycles and following storage at room temperature for 17 h or at -80 °C for 3 months. There was no significant matrix effect across 9 different individual human blood samples. Our novel LC-MS/MS method satisfied all of the criteria specified in the 2012 EMEA guideline on bioanalytical method validation.
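The standard-addition correction for endogenous analyte in the surrogate matrix can be sketched as a simple regression. The spiked concentrations and instrument responses below are hypothetical, chosen only to show how the endogenous level is recovered from the x-intercept.

```python
import numpy as np

# Standard-addition sketch: the surrogate matrix contains an endogenous
# level of analyte, so spiked calibrators are regressed and the endogenous
# concentration recovered from the x-intercept (values are hypothetical).
added = np.array([0.0, 10.0, 20.0, 40.0, 80.0])    # spiked PLP (nM)
signal = np.array([5.2, 10.1, 15.3, 25.2, 45.1])   # instrument response

slope, intercept = np.polyfit(added, signal, 1)
endogenous = intercept / slope    # magnitude of the x-intercept
```

Adjusting nominal calibrator concentrations upward by this endogenous amount, as the abstract describes, keeps the calibration accurate despite the surrogate matrix not being analyte-free.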

Relevance:

90.00%

Publisher:

Abstract:

Diffusion in a composite slab consisting of a large number of layers provides an ideal prototype problem for developing and analysing two-scale modelling approaches for heterogeneous media. Numerous analytical techniques have been proposed for solving the transient diffusion equation in a one-dimensional composite slab consisting of an arbitrary number of layers. Most of these approaches, however, require the solution of a complex transcendental equation arising from a matrix determinant for the eigenvalues that is difficult to solve numerically for a large number of layers. To overcome this issue, in this paper, we present a semi-analytical method based on the Laplace transform and an orthogonal eigenfunction expansion. The proposed approach uses eigenvalues local to each layer that can be obtained either explicitly, or by solving simple transcendental equations. The semi-analytical solution is applicable to both perfect and imperfect contact at the interfaces between adjacent layers and either Dirichlet, Neumann or Robin boundary conditions at the ends of the slab. The solution approach is verified for several test cases and is shown to work well for a large number of layers. The work is concluded with an application to macroscopic modelling where the solution of a fine-scale multilayered medium consisting of two hundred layers is compared against an “up-scaled” variant of the same problem involving only ten layers.
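As a minimal single-layer analogue of the local eigenfunction expansions discussed above, the classical series solution for transient diffusion in one homogeneous slab with Dirichlet ends can be written down directly; the multilayer method in the paper couples expansions of this kind across the layer interfaces.

```python
import numpy as np

# Transient diffusion u_t = D*u_xx on 0 < x < L with u(0,t) = u(L,t) = 0
# and u(x,0) = 1. Separation of variables gives the series
#   u(x,t) = sum over odd n of (4/(n*pi)) * sin(n*pi*x/L)
#                              * exp(-D*(n*pi/L)**2 * t)
def u(x, t, D=1.0, L=1.0, nterms=200):
    n = np.arange(1, 2 * nterms, 2)      # odd modes only
    lam = n * np.pi / L                  # eigenvalues, explicit here
    return float(np.sum(4.0 / (n * np.pi) * np.sin(lam * x)
                        * np.exp(-D * lam**2 * t)))

val = u(0.5, 0.05)
```

For a single Dirichlet layer the eigenvalues are explicit, which is the simplest instance of the paper's point that layer-local eigenvalues are available either explicitly or from simple transcendental equations.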

Relevance:

80.00%

Publisher:

Abstract:

In what follows, I put forward an argument for an analytical method for social science that operates at the level of genre. I argue that generic convergence, generic hybridity, and generic instability provide us with powerful perspectives on changes in political, cultural, and economic relationships, most specifically at the level of institutions. Such a perspective can help us identify the transitional elements, relationships, and trajectories that define the place of our current system in history, thereby grounding our understanding of possible futures. In historically contextualising our present with this method, my concern is to indicate possibilities for the future. Systemic contradictions indicate possibility spaces within which systemic change must and will emerge. We live in a system currently dominated by many fully expressed contradictions, and so in the presence of many possible futures. The contradictions of the current age are expressed most overtly in the public genres of power politics. Contemporary public policy, indeed politics in general, is an excellent focus for any investigation of possible futures, precisely because of its future-oriented function. It is overtly hortatory; it is designed ‘to get people to do things’ (Muntigl in press: 147). There is no point in trying to get people to do things in the past. Consequently, policy discourse is inherently oriented towards creating some future state of affairs (Graham in press), along with concomitant ways of being, knowing, representing, and acting (Fairclough 2000).

Relevance:

80.00%

Publisher:

Abstract:

Principal Topic: According to Shane & Venkataraman (2000), entrepreneurship consists of the recognition and exploitation of venture ideas - or opportunities, as they are often called - to create future goods and services. This definition puts venture ideas at the heart of entrepreneurship research. Substantial research has been done on venture ideas in order to enhance our understanding of this phenomenon (e.g. Choi & Shepherd, 2004; Shane, 2000; Shepherd & DeTienne, 2005). However, we are yet to learn what factors drive entrepreneurs' perceptions of the relative attractiveness of venture ideas, and how important different idea characteristics are for such assessments. Ruef (2002) recognized that there is an uneven distribution of the venture ideas undertaken by entrepreneurs in the USA. A majority introduce either a new product/service or access a new market or market segment; a smaller percentage introduce a new method of production, organizing, or distribution. This implies that some forms of venture ideas are perceived by entrepreneurs as more important or valuable than others. However, Ruef does not provide any information regarding why some forms of venture ideas are more common than others among entrepreneurs. Therefore, this study empirically investigates what factors affect the attractiveness of venture ideas, as well as their relative importance. Based on two key characteristics of venture ideas, namely newness and relatedness, our study investigates how different types and degrees of newness and relatedness affect the attractiveness of venture ideas as perceived by expert entrepreneurs. Methodology/Key Propositions: According to Schumpeter (1934), entrepreneurs introduce different types of venture ideas, such as new products/services, new methods of production, entry into new markets/customer segments, and new methods of promotion.
Further, according to Schumpeter (1934) and Kirzner (1973), venture ideas introduced to the market range along a continuum from innovative to imitative. The distinction between these two extremes highlights an important property of venture ideas, namely their newness. Entrepreneurs, in order to gain competitive advantage or above-average returns, introduce venture ideas which may be new to the world, new to the market they seek to enter, substantially improved from current offerings, or imitative of existing offerings. Expert entrepreneurs may be more attracted to venture ideas that exhibit a high degree of newness, because higher newness is coupled with increased market potential (Drucker, 1985). Moreover, certain individual characteristics also affect the attractiveness of a venture idea. According to Shane (2000), an individual's prior knowledge is closely associated with the recognition of venture ideas. Sarasvathy's (2001) effectuation theory proposes a high degree of relatedness between venture ideas and the resource position of the individual. Thus, entrepreneurs may be more attracted to venture ideas that are closely aligned with the knowledge and/or resources they already possess. On the other hand, the potential financial gain (Shepherd & DeTienne, 2005) may be larger for ideas that are not close to the entrepreneur's home turf. Therefore, potential financial gain is a stimulus that has to be considered separately. We aim to examine how entrepreneurs weigh considerations of different forms of newness and relatedness, as well as potential financial gain, in assessing the attractiveness of venture ideas. We use conjoint analysis to determine how expert entrepreneurs develop preferences for venture ideas involving different degrees of newness, relatedness and potential gain. This analytical method makes it possible to measure the trade-offs they make when choosing a particular venture idea.
The conjoint analysis estimates respondents' preferences in terms of utilities (or part-worths) for each level of newness, relatedness and potential gain of venture ideas. A sample of 50 expert entrepreneurs who were awarded young entrepreneurship awards in Sri Lanka in 2007 is used for interviews. Each respondent is presented with 32 scenarios, each explicating a different combination of the possible profiles under consideration. Conjoint software (SPSS) is used to analyse the data. Results and Implications: Data collection for this study is still underway. However, the results will provide information regarding the attractiveness of each level of newness, relatedness and potential gain of a venture idea, and their relative importance in a business model. Additionally, these results have important implications for entrepreneurs, consultants and other stakeholders as regards the importance of the different attributes of venture ideas at their different levels, allowing them to make decisions accordingly.
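Part-worth utilities in a conjoint study are commonly estimated by regressing profile ratings on dummy-coded attribute levels. The toy example below uses two binary attributes and hypothetical ratings purely to show the mechanics; the actual study uses SPSS conjoint software with more attributes and levels.

```python
import numpy as np

# Toy conjoint sketch: estimate part-worth utilities for two binary
# attributes (newness: low/high; relatedness: low/high) from rated
# profiles via ordinary least squares. Ratings are hypothetical.
#
# Columns: intercept, newness_high, relatedness_high
X = np.array([[1, 0, 0],
              [1, 0, 1],
              [1, 1, 0],
              [1, 1, 1]], dtype=float)
ratings = np.array([3.0, 5.0, 4.0, 6.5])

partworths, *_ = np.linalg.lstsq(X, ratings, rcond=None)
```

The fitted coefficients are the part-worths: here the relatedness increment (2.25) outweighs the newness increment (1.25), which is exactly the kind of trade-off comparison the conjoint design is meant to expose.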

Relevance:

80.00%

Publisher:

Abstract:

In a resource constrained business world, strategic choices must be made on process improvement and service delivery. There are calls for more agile forms of enterprises and much effort is being directed at moving organizations from a complex landscape of disparate application systems to that of an integrated and flexible enterprise accessing complex systems landscapes through service oriented architecture (SOA). This paper describes the analysis of strategies to detect supporting business services. These services can then be delivered in a variety of ways: web-services, new application services or outsourced services. The focus of this paper is on strategy analysis to identify those strategies that are common to lines of business and thus can be supported through shared services. A case study of a state government is used to show the analytical method and the detection of shared strategies.

Relevance:

80.00%

Publisher:

Abstract:

Determination of the placement and rating of transformers and feeders is the main objective of basic distribution network planning. The bus voltage and the feeder current are two constraints that should be maintained within their standard ranges. Distribution network planning becomes harder when the planning area is located far from the sources of power generation and from existing infrastructure, mainly as a consequence of voltage drop, line loss and reduced system reliability. Long distances to supplied loads cause a significant voltage drop across the distribution lines; capacitors and voltage regulators (VRs) can be installed to reduce it. Long distances also increase the probability of failure, and this high probability lowers network reliability. Cross-connections (CCs) and distributed generators (DGs) are devices that can be employed to improve system reliability. Another main factor that should be considered in the planning of distribution networks (in both rural and urban areas) is load growth. To accommodate this factor, transformers and feeders are conventionally upgraded, which incurs a large cost. Installation of DGs and capacitors in a distribution network can alleviate this issue while delivering other benefits. In this research, a comprehensive planning approach is presented for distribution networks. Since a distribution network is composed of low- and medium-voltage networks, both are included in this procedure; however, the main focus of this research is on medium-voltage network planning. The main objective is to minimize the investment cost, the line loss, and the reliability indices over a study timeframe while supporting load growth. The investment cost relates to the distribution network elements such as the transformers, feeders, capacitors, VRs, CCs, and DGs. The voltage drop and the feeder current, as the constraints, are maintained within their standard ranges.
In addition to minimizing the reliability and line-loss costs, the planned network should support continual load growth, which is an essential concern in planning distribution networks. In this thesis, a novel segmentation-based strategy is proposed for including this factor. Using this strategy, the computation time is significantly reduced compared with the exhaustive search method, while the accuracy remains acceptable. In addition to being applicable to load growth, this strategy is appropriate for the inclusion of practical (dynamic) load characteristics, as demonstrated in this thesis. The allocation and sizing problem has a discrete nature with several local minima, which highlights the importance of selecting a proper optimization method. A modified discrete particle swarm optimization, a heuristic method, is introduced in this research to solve this complex planning problem. Discrete nonlinear programming and a genetic algorithm, an analytical and a heuristic method respectively, are also applied to this problem to evaluate the proposed optimization method.
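A generic binary particle swarm optimizer conveys the flavour of the discrete heuristic search described above. This is a sketch of standard binary PSO on a toy placement objective, not the thesis's modified algorithm; the swarm parameters, bit length and objective are all illustrative.

```python
import numpy as np

# Binary PSO sketch: choose which of 8 candidate buses receive a capacitor
# so as to minimise a toy cost function (a generic illustration only).
rng = np.random.default_rng(1)
n_particles, n_bits, iters = 20, 8, 60
target = np.array([1, 0, 1, 1, 0, 0, 1, 0])   # hypothetical optimal placement

def cost(x):
    return int(np.sum(x != target))           # toy objective: bit mismatches

X = rng.integers(0, 2, (n_particles, n_bits))
V = rng.normal(0, 1, (n_particles, n_bits))
pbest, pcost = X.copy(), np.array([cost(x) for x in X])
gbest = pbest[pcost.argmin()].copy()

for _ in range(iters):
    r1, r2 = rng.random(V.shape), rng.random(V.shape)
    V = 0.7 * V + 1.5 * r1 * (pbest - X) + 1.5 * r2 * (gbest - X)
    # sigmoid transfer: velocity sets the probability that each bit is 1
    X = (rng.random(V.shape) < 1.0 / (1.0 + np.exp(-V))).astype(int)
    c = np.array([cost(x) for x in X])
    better = c < pcost
    pbest[better], pcost[better] = X[better], c[better]
    gbest = pbest[pcost.argmin()].copy()
```

In the thesis's setting the cost function would instead evaluate investment, loss and reliability costs for a candidate placement, and the update rule is modified, but the discrete position encoding and personal/global best bookkeeping follow this pattern.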