247 results for two-tier board model


Relevance:

100.00%

Publisher:

Abstract:

Objective: The Brief Michigan Alcoholism Screening Test (bMAST) is a 10-item test derived from the 25-item Michigan Alcoholism Screening Test (MAST). It is widely used in the assessment of alcohol dependence. In the absence of previous validation studies, the principal aim of this study was to assess the validity and reliability of the bMAST as a measure of the severity of problem drinking. Method: There were 6,594 patients (4,854 men, 1,740 women), referred to a hospital alcohol and drug service for alcohol-use disorders, who voluntarily participated in this study. Results: An exploratory factor analysis defined a two-factor solution, consisting of Perception of Current Drinking and Drinking Consequences factors. Structural equation modeling confirmed that the fit of a nine-item, two-factor model was superior to the original one-factor model. Concurrent validity was assessed through simultaneous administration of the Alcohol Use Disorders Identification Test (AUDIT) and associations with alcohol consumption and clinically assessed features of alcohol dependence. The two-factor bMAST model showed moderate correlations with the AUDIT. The two-factor bMAST and AUDIT were similarly associated with quantity of alcohol consumption and clinically assessed dependence severity features. No differences were observed between the existing weighted scoring system and the proposed simple scoring system. Conclusions: In this study, both the existing bMAST total score and the two-factor model identified were as effective as the AUDIT in assessing problem drinking severity. There are additional advantages to employing the two-factor bMAST in the assessment and treatment planning of patients seeking treatment for alcohol-use disorders. (J. Stud. Alcohol Drugs 68: 771-779, 2007)
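To illustrate the scoring comparison mentioned in the conclusions, the sketch below contrasts a weighted total with a simple (unit) total for a 10-item instrument. The item weights and responses are simulated for illustration; they are not the published bMAST weights or the study's data.

```python
# Illustrative comparison of a weighted vs. simple (unit) scoring system
# for a 10-item screening test. Weights and responses are hypothetical.
import numpy as np

rng = np.random.default_rng(0)

n_items = 10
hypothetical_weights = np.array([2, 2, 5, 1, 2, 2, 2, 5, 5, 2])  # assumed weights

# Simulate yes/no (1/0) responses for 100 respondents.
responses = rng.integers(0, 2, size=(100, n_items))

weighted_total = responses @ hypothetical_weights  # weighted scoring
simple_total = responses.sum(axis=1)               # unit (simple) scoring

# If the two systems rank respondents similarly, their correlation is high,
# consistent with the reported equivalence of the two scoring systems.
r = np.corrcoef(weighted_total, simple_total)[0, 1]
print(f"Correlation between weighted and simple totals: {r:.2f}")
```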

Relevance:

100.00%

Publisher:

Abstract:

Current estimates of soil C storage potential are based on models or factors that assume linearity between C input levels and C stocks at steady state, implying that SOC stocks could increase without limit as C input levels increase. However, some soils show little or no increase in steady-state SOC stock with increasing C input levels, suggesting that SOC can become saturated with respect to C input. We used long-term field experiment data to assess alternative hypotheses of soil carbon storage with three simple models: a linear model (no saturation), a one-pool whole-soil C saturation model, and a two-pool mixed model with C saturation of a single C pool, but not the whole soil. The one-pool C saturation model best fit the combined data from 14 sites; four individual sites were best fit with the linear model, and no sites were best fit by the mixed model. These results indicate that existing agricultural field experiments generally have too small a range in C input levels to show saturation behavior, and appear to verify the accepted linear relationship between soil C and C input used to model SOM dynamics. However, all sites combined, and the site with the widest range in C input levels, were best fit with the C saturation model. Nevertheless, the same site produced distinct effective stabilization capacity curves rather than an absolute C saturation level. We conclude that saturation of soil C does occur, and therefore the greatest efficiency in soil C sequestration will be in soils further from C saturation.
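As an illustration of the model comparison described above, the sketch below fits a linear model and a one-pool saturation model to synthetic input-stock data and compares them by a simple AIC. The functional forms, parameter values and data are assumptions for exposition, not those fitted in the study.

```python
# Minimal sketch: linear vs. one-pool C-saturation model for steady-state
# SOC stock as a function of annual C input, on synthetic data.
import numpy as np
from scipy.optimize import curve_fit

def linear(I, k):
    return k * I                # SOC grows without limit as input grows

def saturation(I, c_max, k):
    return c_max * I / (k + I)  # SOC approaches an upper limit c_max

# Synthetic observations: inputs (Mg C/ha/yr) and steady-state SOC (Mg C/ha).
I_obs = np.array([0.5, 1.0, 2.0, 3.0, 4.0, 6.0, 8.0])
C_obs = np.array([12.0, 20.0, 31.0, 37.0, 41.0, 45.0, 47.0])

for name, f, p0 in [("linear", linear, [5.0]),
                    ("saturation", saturation, [60.0, 2.0])]:
    popt, _ = curve_fit(f, I_obs, C_obs, p0=p0)
    rss = np.sum((C_obs - f(I_obs, *popt)) ** 2)
    n, k = len(C_obs), len(popt)
    aic = n * np.log(rss / n) + 2 * k  # simple AIC for least-squares fits
    print(f"{name}: RSS={rss:.1f}, AIC={aic:.1f}")
```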

Relevance:

100.00%

Publisher:

Abstract:

The present rate of technological advance continues to place significant demands on data storage devices. The sheer amount of digital data being generated each year, along with consumer expectations, fuels these demands. At present, most digital data is stored magnetically, in the form of hard disk drives or on magnetic tape. The increase in areal density (AD) of magnetic hard disk drives over the past 50 years has been of the order of 100 million times, and current devices store data at ADs of the order of hundreds of gigabits per square inch. However, it has been known for some time that progress in this form of data storage is approaching fundamental limits. The main limitation relates to the lower size limit that an individual bit can have for stable storage. Various techniques for overcoming these fundamental limits are currently the focus of considerable research effort. Most attempt to improve current data storage methods, or modify these slightly for higher density storage. Alternatively, three-dimensional optical data storage is a promising field for the information storage needs of the future, offering very high density, high speed memory. There are two ways in which data may be recorded in a three-dimensional optical medium: either bit-by-bit (similar in principle to an optical disc medium such as CD or DVD) or by using pages of bit data. Bit-by-bit techniques for three-dimensional storage offer high density but are inherently slow due to the serial nature of data access. Page-based techniques, where a two-dimensional page of data bits is written in one write operation, can offer significantly higher data rates, due to their parallel nature. Holographic Data Storage (HDS) is one such page-oriented optical memory technique. This field of research has been active for several decades, but few commercial products are presently available. Another page-oriented optical memory technique involves recording pages of data as phase masks in a photorefractive medium. A photorefractive material is one in which the refractive index can be modified by light of the appropriate wavelength and intensity, and this property can be used to store information in these materials. In phase mask storage, two-dimensional pages of data are recorded into a photorefractive crystal as refractive index changes in the medium. A low-intensity readout beam propagating through the medium will have its intensity profile modified by these refractive index changes, and a CCD camera can be used to monitor the readout beam and thus read the stored data. The main aim of this research was to investigate data storage using phase masks in the photorefractive crystal lithium niobate (LiNbO3). Firstly, the experimental methods for storing the two-dimensional pages of data (a set of vertical stripes of varying lengths) in the medium are presented. The laser beam used for writing, whose intensity profile is modified by an amplitude mask containing a pattern of the information to be stored, illuminates the lithium niobate crystal, and the photorefractive effect causes the patterns to be stored as refractive index changes in the medium. These patterns are read out non-destructively using a low-intensity probe beam and a CCD camera. A common complication of information storage in photorefractive crystals is the issue of destructive readout. This is a problem particularly for holographic data storage, where the readout beam should be at the same wavelength as the beam used for writing.
Since the charge carriers in the medium are still sensitive to the read light field, the readout beam erases the stored information. A method to avoid this is thermal fixing. Here the photorefractive medium is heated to temperatures above 150 °C; this process forms an ionic grating in the medium. This ionic grating is insensitive to the readout beam, and therefore the information is not erased during readout. A non-contact method for determining temperature change in a lithium niobate crystal is presented in this thesis. The temperature-dependent birefringent properties of the medium cause intensity oscillations to be observed for a beam propagating through the medium during a change in temperature. It is shown that each oscillation corresponds to a particular temperature change, and by counting the number of oscillations observed, the temperature change of the medium can be deduced. The presented technique for measuring temperature change could easily be applied to a situation where thermal fixing of data in a photorefractive medium is required. Furthermore, by using an expanded beam and monitoring the intensity oscillations over a wide region, it is shown that the temperature in various locations of the crystal can be monitored simultaneously. This technique could be used to deduce temperature gradients in the medium. It is shown that the three-dimensional nature of the recording medium causes interesting degradation effects to occur when the patterns are written for a longer-than-optimal time. This degradation results in the splitting of the vertical stripes in the data pattern, and for long writing exposure times this process can result in the complete deterioration of the information in the medium. It is shown that simply by using incoherent illumination, the original pattern can be recovered from the degraded state. The reason for the recovery is that the refractive index changes causing the degradation are of a smaller magnitude, since they are induced by the write field components scattered from the written structures. During incoherent erasure, the lower magnitude refractive index changes are neutralised first, allowing the original pattern to be recovered. The degradation process is shown to be reversed during the recovery process, and a simple relationship is found relating the times at which particular features appear during degradation and recovery. A further outcome of this work is that a minimum stripe width of 30 µm is required for accurate storage and recovery of the information in the medium; any smaller size results in incomplete recovery. The degradation and recovery process could be applied to image scrambling or cryptography for optical information storage. A two-dimensional numerical model based on the finite-difference beam propagation method (FD-BPM) is presented and used to gain insight into the pattern storage process. The model shows that the degradation of the patterns is due to the complicated path taken by the write beam as it propagates through the crystal, and in particular the scattering of this beam from the induced refractive index structures in the medium. The model indicates that the highest quality pattern storage would be achieved with a thin 0.5 mm medium; however, this type of medium would also remove the degradation property of the patterns and the subsequent recovery process.
To overcome the simplistic treatment of the refractive index change in the FD-BPM model, a fully three-dimensional photorefractive model developed by Devaux is presented. This model provides significant insight into the pattern storage, particularly for the degradation and recovery process, and confirms the theory that the recovery of the degraded patterns is possible because the refractive index changes responsible for the degradation are of a smaller magnitude. Finally, detailed analysis of the pattern formation and degradation dynamics for periodic patterns of various periodicities is presented. It is shown that stripe widths in the write beam of greater than 150 µm result in the formation of different types of refractive index changes, compared with stripes of smaller widths. As a result, it is shown that the pattern storage method discussed in this thesis has an upper feature size limit of 150 µm for accurate and reliable pattern storage.
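Since beam-propagation modelling is central to interpreting the degradation results, a minimal split-step scalar beam-propagation sketch is given below. It is a simplified one-dimensional illustration, not the thesis's two-dimensional FD-BPM or Devaux's three-dimensional photorefractive model; the wavelength, index perturbation and grid values are assumptions.

```python
# Minimal 1-D split-step beam propagation sketch: a write beam scattering
# from an induced refractive-index structure (toy stripe profile).
import numpy as np

wavelength = 532e-9                 # assumed write wavelength (m)
n0 = 2.2                            # background index (approx. LiNbO3)
k0 = 2 * np.pi / wavelength
nx, dx, dz, nz = 1024, 0.1e-6, 1e-6, 500

x = (np.arange(nx) - nx // 2) * dx
kx = 2 * np.pi * np.fft.fftfreq(nx, dx)

# Gaussian input beam and a toy stripe-like index perturbation.
E = np.exp(-(x / 10e-6) ** 2)
dn = 1e-4 * (np.mod(x, 30e-6) < 15e-6)   # stripes with 30 µm period

for _ in range(nz):
    # Diffraction half-step: paraxial propagator applied in Fourier space.
    E = np.fft.ifft(np.fft.fft(E) * np.exp(-1j * kx**2 * dz / (2 * k0 * n0)))
    # Refraction half-step: phase screen from the induced index change.
    E *= np.exp(1j * k0 * dn * dz)

print("Output intensity peak:", np.max(np.abs(E) ** 2))
```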

Relevance:

100.00%

Publisher:

Abstract:

Metallic materials exposed to oxygen-enriched atmospheres – as commonly used in the medical, aerospace, aviation and numerous chemical processing industries – represent a significant fire hazard which must be addressed during design, maintenance and operation. Hence, accurate knowledge of metallic material flammability is required. Reduced gravity (i.e. space-based) operations present additional unique concerns, where the absence of gravity must also be taken into account. The flammability of metallic materials has historically been quantified using three standardised test methods developed by NASA, ASTM and ISO. These tests typically involve the forceful (promoted) ignition of a test sample (typically a 3.2 mm diameter cylindrical rod) in pressurised oxygen. A test sample is defined as flammable when it undergoes burning that is independent of the ignition process utilised. In the standardised tests, this is indicated by the propagation of burning further than a defined amount, or "burn criterion". The burn criterion in use at the onset of this project was arbitrarily selected, and did not accurately reflect the length a sample must burn in order to be burning independently of the ignition event; in some cases, it required complete consumption of the test sample for a metallic material to be considered flammable. It has been demonstrated that a) a metallic material's propensity to support burning is altered by any increase in test sample temperature greater than ~250-300 °C and b) promoted ignition causes an increase in temperature of the test sample in the region closest to the igniter, a region referred to as the Heat Affected Zone (HAZ). If a test sample continues to burn past the HAZ (where the HAZ is defined as the region of the test sample above the igniter that undergoes an increase in temperature of greater than or equal to 250 °C by the end of the ignition event), it is burning independently of the igniter, and should be considered flammable. The extent of the HAZ, therefore, can be used to justify the selection of the burn criterion. A two-dimensional mathematical model was developed in order to predict the extent of the HAZ created in a standard test sample by a typical igniter. The model was validated against previous theoretical and experimental work performed in collaboration with NASA, and then used to predict the extent of the HAZ for different metallic materials in several configurations. The extent of the HAZ predicted varied significantly, ranging from ~2-27 mm depending on the test sample thermal properties and test conditions (i.e. pressure). The magnitude of the HAZ was found to increase with increasing thermal diffusivity and with decreasing pressure (due to slower ignition times). Based upon the findings of this work, a new burn criterion requiring 30 mm of the test sample to be consumed (from the top of the ignition promoter) was recommended and validated. This new burn criterion was subsequently included in the latest revisions of the ASTM G124 and NASA 6001B international test standards that are used to evaluate metallic material flammability in oxygen. These revisions also have the added benefit of enabling the conduct of reduced gravity metallic material flammability testing in strict accordance with the ASTM G124 standard, allowing measurement and comparison of the relative flammability (i.e.
Lowest Burn Pressure (LBP), Highest No-Burn Pressure (HNBP) and average Regression Rate of the Melting Interface (RRMI)) of metallic materials in normal and reduced gravity, as well as determination of the applicability of normal gravity test results to reduced gravity use environments. This is important, as currently most space-based applications typically use normal gravity information in order to qualify systems and/or components for reduced gravity use. This is shown here to be non-conservative for metallic materials, which are more flammable in reduced gravity. The flammability of two metallic materials, Inconel® 718 and 316 stainless steel (both commonly used to manufacture components for oxygen service in both terrestrial and space-based systems), was evaluated in normal and reduced gravity using the new ASTM G124-10 test standard. This allowed direct comparison of the flammability of the two metallic materials in normal gravity and reduced gravity respectively. The results of this work clearly show, for the first time, that metallic materials are more flammable in reduced gravity than in normal gravity when testing is conducted as described in the ASTM G124-10 test standard. This was shown to be the case in terms of both higher regression rates (i.e. faster consumption of the test sample, the fuel) and burning at lower pressures in reduced gravity. Specifically, it was found that the LBP for 3.2 mm diameter Inconel® 718 and 316 stainless steel test samples decreased by 50%, from 3.45 MPa (500 psia) in normal gravity to 1.72 MPa (250 psia) in reduced gravity, for the Inconel® 718, and by 20%, from 3.45 MPa (500 psia) in normal gravity to 2.76 MPa (400 psia) in reduced gravity, for the 316 stainless steel. The average RRMI increased by factors of 2.2 (27.2 mm/s in 2.24 MPa (325 psia) oxygen in reduced gravity compared to 12.8 mm/s in 4.48 MPa (650 psia) oxygen in normal gravity) for the Inconel® 718 and 1.6 (15.0 mm/s in 2.76 MPa (400 psia) oxygen in reduced gravity compared to 9.5 mm/s in 5.17 MPa (750 psia) oxygen in normal gravity) for the 316 stainless steel. Reasons for the increased flammability of metallic materials in reduced gravity compared to normal gravity are discussed, based upon the observations made during reduced gravity testing and previous work. Finally, the implications of these results for fire safety and engineering applications are presented and discussed, in particular examining methods for mitigating the risk of a fire in reduced gravity.
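To make the HAZ criterion concrete, the sketch below uses a one-dimensional explicit finite-difference conduction model to estimate how far a 250 °C rise extends along a rod during a short ignition event. It is a rough illustration only: the validated model described above is two-dimensional, and the diffusivity, end temperature and ignition duration used here are assumed values.

```python
# Rough 1-D transient conduction estimate of Heat Affected Zone extent:
# distance from the igniter end over which the temperature rise is >= 250 K.
import numpy as np

alpha = 4e-6        # assumed thermal diffusivity (m^2/s), nickel-alloy order
L, nx = 0.05, 200   # 50 mm rod, grid points
dx = L / nx
dt = 0.4 * dx**2 / alpha          # below the explicit stability limit of 0.5
t_ignition = 5.0                  # assumed ignition-event duration (s)

T = np.zeros(nx)                  # temperature rise above ambient (K)
T[0] = 1500.0                     # assumed temperature rise at igniter end

for _ in range(int(t_ignition / dt)):
    T[1:-1] += alpha * dt / dx**2 * (T[2:] - 2 * T[1:-1] + T[:-2])
    T[0] = 1500.0                 # igniter end held hot during ignition

haz_extent = dx * np.argmax(T < 250.0)  # first grid point below the criterion
print(f"Estimated HAZ extent: {haz_extent * 1e3:.1f} mm")
```

With these assumed values the estimate falls within the ~2-27 mm range quoted above, and it shows directly why the HAZ grows with thermal diffusivity and with ignition duration.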

Relevance:

100.00%

Publisher:

Abstract:

Travel in passenger cars is a ubiquitous aspect of the daily activities of many people. During the 2009 influenza A (H1N1) pandemic, a case of probable transmission during car travel was reported in Australia, to which spread via the airborne route may have contributed. However, there are no data to indicate the likely risks of such events, and how they may vary and be mitigated. To address this knowledge gap, we estimated the risk of airborne influenza transmission in two cars (a 1989 model and a 2005 model) by employing ventilation measurements and a variation of the Wells-Riley model. Results suggested that infection risk can be reduced by not recirculating air; however, estimated risk ranged from 59% to 99.9% for a 90-minute trip when air was recirculated in the newer vehicle. These results have implications for interrupting in-car transmission of other illnesses spread by the airborne route.
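For reference, the classic Wells-Riley relation underlying such risk estimates can be evaluated directly. The sketch below uses assumed parameter values (quanta generation rate, breathing rate, outdoor-air supply), not the study's measured ventilation data or its modified model.

```python
# Classic Wells-Riley airborne infection risk: P = 1 - exp(-I*q*p*t/Q).
# All parameter values below are illustrative assumptions.
import math

def wells_riley(I=1, q=67.0, p=0.54, t_hr=1.5, Q=10.0):
    """Infectors I, quanta generation rate q (quanta/h), breathing rate
    p (m^3/h), exposure time t (h), outdoor air supply Q (m^3/h)."""
    return 1.0 - math.exp(-I * q * p * t_hr / Q)

# Low outdoor-air supply (recirculation) vs. high fresh-air ventilation
# for an assumed 90-minute trip with one infectious occupant.
print(f"Recirculating (Q = 2 m^3/h):   {wells_riley(Q=2.0):.1%}")
print(f"Fresh air     (Q = 100 m^3/h): {wells_riley(Q=100.0):.1%}")
```

The exponent scales inversely with the outdoor-air supply Q, which is why switching off recirculation reduces the estimated risk so strongly.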

Relevance:

100.00%

Publisher:

Abstract:

Cognitive radio is an emerging technology proposing the concept of dynamic spectrum access as a solution to the looming problem of spectrum scarcity caused by the growth in wireless communication systems. Under the proposed concept, non-licensed, secondary users (SU) can access spectrum owned by licensed, primary users (PU) so long as interference to the PU is kept minimal. Spectrum sensing is a crucial task in cognitive radio whereby the SU senses the spectrum to detect the presence or absence of any PU signal. Conventional spectrum sensing assumes that the PU signal is ‘stationary’ and remains in the same activity state during the sensing cycle, while an emerging trend models the PU as ‘non-stationary’, undergoing state changes. Existing studies have focused on non-stationary PU during the transmission period; however, very little research has considered the impact on spectrum sensing when the PU is non-stationary during the sensing period. The concept of PU duty cycle is developed as a tool to analyse the performance of spectrum sensing detectors when detecting non-stationary PU signals. New detectors are also proposed to optimise detection with respect to the duty cycle exhibited by the PU. This research consists of two major investigations. The first stage investigates the impact of duty cycle on the performance of existing detectors and the extent of the problem in existing studies. The second stage develops new detection models and frameworks to ensure the integrity of spectrum sensing when detecting non-stationary PU signals. The first investigation demonstrates that the conventional signal model formulated for a stationary PU does not accurately reflect the behaviour of a non-stationary PU. Therefore the performance calculated and assumed to be achievable by the conventional detector does not reflect the actual performance achieved. Through analysing the statistical properties of duty cycle, performance degradation is proved to be a problem that cannot be easily neglected in existing sensing studies when the PU is modelled as non-stationary. The second investigation presents detectors that are aware of the duty cycle exhibited by a non-stationary PU. A two-stage detection model is proposed to improve the detection performance and robustness to changes in duty cycle. This detector is most suitable for applications that require long sensing periods. A second detector, the duty cycle based energy detector, is formulated by integrating the distribution of duty cycle into the test statistic of the energy detector, and is suitable for short sensing periods. The decision threshold is optimised with respect to the traffic model of the PU; hence the proposed detector can calculate average detection performance that reflects realistic results. A detection framework for the application of spectrum sensing optimisation is proposed to provide clear guidance on the constraints on the sensing and detection models. Following this framework will ensure the signal model accurately reflects practical behaviour while the detection model implemented is also suitable for the desired detection assumption. Based on this framework, a spectrum sensing optimisation algorithm is further developed to maximise sensing efficiency for a non-stationary PU. New optimisation constraints are derived to account for any PU state changes within the sensing cycle while implementing the proposed duty cycle based detector.
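As background to the duty-cycle analysis, the sketch below implements the conventional energy detector that the proposed detectors extend, with the threshold set by the usual Gaussian approximation for a target false-alarm probability. The duty cycle, SNR and sample count are illustrative assumptions, and the duty-cycle-based test statistic proposed in the work is not reproduced here.

```python
# Conventional energy detector with a PU that is active for only part of
# the sensing window (duty cycle < 1), i.e. non-stationary during sensing.
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(1)
N, pfa, noise_var = 1000, 0.05, 1.0

# Threshold from the Gaussian approximation:
# gamma = sigma^2 * (1 + Q^{-1}(Pfa) * sqrt(2/N)).
gamma = noise_var * (1 + norm.isf(pfa) * np.sqrt(2.0 / N))

duty_cycle = 0.3                         # assumed fraction of window with PU on
n_on = int(duty_cycle * N)
signal = np.zeros(N)
signal[:n_on] = np.sqrt(0.5) * rng.standard_normal(n_on)  # assumed PU power 0.5
x = signal + rng.standard_normal(N) * np.sqrt(noise_var)

T = np.mean(x**2)                        # test statistic: average energy
print("Detected PU" if T > gamma else "Missed PU",
      f"(T = {T:.3f}, gamma = {gamma:.3f})")
```

Lowering the duty cycle shrinks the average received energy, so a detector designed for a stationary PU overstates its achievable performance, which is the degradation the first investigation quantifies.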

Relevance:

100.00%

Publisher:

Abstract:

This study compared the performance of one local and three robust optimality criteria, in terms of the standard error, for a one-parameter and a two-parameter nonlinear model with uncertainty in the parameter values. The designs were also compared under misspecification of the prior parameter distribution. The impact of different correlations between the parameters on the optimal design was examined for the two-parameter model. The designs and standard errors were solved analytically whenever possible and numerically otherwise.
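As a concrete instance of a local (parameter-dependent) optimal design for a one-parameter nonlinear model, the classic exponential-decay example is sketched below. It is a textbook illustration chosen for exposition, not necessarily one of the models examined in the study.

```latex
% Assumed illustrative model: exponential decay \eta(x,\theta) = e^{-\theta x}.
% The asymptotic variance of \hat\theta is inversely proportional to the
% squared sensitivity of the mean function:
\[
\frac{\partial \eta}{\partial \theta} = -x\, e^{-\theta x},
\qquad
\operatorname{Var}(\hat\theta) \;\propto\;
\left[\left(\frac{\partial \eta}{\partial \theta}\right)^{2}\right]^{-1}.
\]
% The sensitivity magnitude x e^{-\theta x} is maximised at x^* = 1/\theta,
% so the locally optimal design places all observations at x^* = 1/\theta.
% Because x^* depends on the unknown \theta, a misspecified prior shifts the
% design away from the optimum; this is the motivation for robust criteria
% that average the criterion over a prior distribution for \theta.
```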

Relevance:

100.00%

Publisher:

Abstract:

Reliable communication is one of the major concerns in wireless sensor networks (WSNs). Multipath routing is an effective way to improve communication reliability in WSNs. However, most existing multipath routing protocols for sensor networks are reactive and require dynamic route discovery. If there are many sensor nodes between a source and a destination, the route discovery process creates a long end-to-end transmission delay, which causes difficulties in some time-critical applications. To overcome this difficulty, efficient route update and maintenance processes are proposed in this paper. The proposal aims to limit the amount of routing overhead with a two-tier routing architecture and to introduce a combination of piggyback and trigger updates to replace the periodic update process, which is the main source of unnecessary routing overhead. Simulations are carried out to demonstrate the effectiveness of the proposed processes in reducing the total amount of routing overhead relative to existing popular routing protocols.
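A minimal sketch of the piggyback-plus-trigger idea is given below. The class and method names are hypothetical, and real protocol logic (tier management, packet formats, neighbour tables) is omitted; the point is only that route state travels on data packets that are sent anyway, and dedicated control messages are emitted only on topology changes rather than on a timer.

```python
# Sketch: piggybacked route state on data packets, plus event-triggered
# updates, replacing periodic update messages. Names are hypothetical.
class TwoTierNode:
    def __init__(self, node_id):
        self.node_id = node_id
        self.route_table = {}      # neighbour -> link alive?
        self.overhead_msgs = 0     # dedicated control messages sent

    def send_data(self, payload):
        # Piggyback: attach current route state to an outgoing data packet,
        # costing no extra control message.
        return {"src": self.node_id, "data": payload,
                "route_info": dict(self.route_table)}

    def on_topology_change(self, neighbour, alive):
        # Trigger update: a control message only when something changes.
        self.route_table[neighbour] = alive
        self.overhead_msgs += 1    # one event-driven message, no timers

node = TwoTierNode("s1")
node.on_topology_change("s2", alive=True)
pkt = node.send_data("temp=21.5")
print(pkt, "| control messages:", node.overhead_msgs)
```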

Relevance:

100.00%

Publisher:

Abstract:

Carrion-breeding Sarcophagidae (Diptera) can be used to estimate the post-mortem interval (PMI) in forensic cases. Difficulties with accurate morphological identification at any life stage, and a lack of documented thermobiological profiles, have limited the current usefulness of these flies. The molecular-based approach of DNA barcoding, which utilises a 648-bp fragment of the mitochondrial cytochrome oxidase subunit I gene, was previously evaluated in a pilot study for discriminating between 16 Australian sarcophagids. The current study comprehensively evaluated DNA barcoding on a larger taxon set of 588 adult Australian sarcophagids. A total of 39 of the 84 known Australian species were represented by 580 specimens, including 92% of the potentially forensically important species. A further eight specimens could not be reliably identified and were included as six unidentifiable taxa. A neighbour-joining phylogenetic tree was generated and nucleotide sequence divergences were calculated using the Kimura two-parameter distance model. All species except Sarcophaga (Fergusonimyia) bancroftorum, known for high morphological variability, were resolved as reciprocally monophyletic (99.2% of cases), with most having bootstrap support of 100. Excluding S. bancroftorum, the mean intraspecific and interspecific variation ranged from 0.00-1.12% and 2.81-11.23% respectively, allowing for species discrimination. DNA barcoding was therefore validated as a suitable method for the molecular identification of the Australian Sarcophagidae, which will aid in the implementation of this fauna in forensic entomology.
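For reference, the Kimura two-parameter distance used for these divergence estimates takes the standard form below.

```latex
% Kimura two-parameter (K2P) distance between two aligned sequences,
% where P is the proportion of sites differing by a transition and
% Q the proportion differing by a transversion:
\[
d \;=\; -\tfrac{1}{2}\ln\!\left(1 - 2P - Q\right)
        \;-\; \tfrac{1}{4}\ln\!\left(1 - 2Q\right).
\]
```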

Relevance:

100.00%

Publisher:

Abstract:

Portable water-filled road barriers (PWFB) are roadside structures placed in temporary construction zones to separate the work site from moving traffic. Recent changes in governing standards require PWFB to meet strict compliance criteria in terms of lateral displacement of the road barriers and vehicle redirectionality. Actual road safety barrier tests can be very costly, so researchers resort to Finite Element Analysis (FEA) in the initial design phase prior to real vehicle tests. Much research has been conducted on concrete barriers and flexible steel barriers using FEA, but little has been done pertaining to PWFB. This research probes a new method of modelling the joint mechanism in PWFB. Two methods of modelling the joining mechanism are presented and discussed in relation to their practicality and accuracy for real-world applications. Moreover, the effects of the physical gap and the mass of the barrier were investigated. Outcomes from this research will benefit PWFB research and give road barrier designers better knowledge for developing the next generation of road safety structures.

Relevance:

100.00%

Publisher:

Abstract:

Background and significance: Nurses' job dissatisfaction is associated with negative nursing and patient outcomes. One of the most powerful reasons for nurses to stay in an organisation is satisfaction with leadership. However, nurses are frequently promoted to leadership positions without appropriate preparation for the role. Although a number of leadership programs have been described, none have been tested for effectiveness using a randomised controlled trial methodology. Aims: The aims of this research were to develop an evidence-based leadership program and to test its effectiveness on nurse unit managers' (NUMs') and nursing staff's (NS's) job satisfaction, and on the leader behaviour scores of nurse unit managers. Methods: First, the study used a comprehensive literature review to examine the evidence on job satisfaction, leadership and front-line manager competencies. From this evidence a summary of leadership practices was developed to construct a two-component leadership model. The components of this model were then combined with the evidence distilled from previous leadership development programs to develop a Leadership Development Program (LDP). This evidence informed the program's design, its contents, teaching strategies and learning environment. Central to the LDP were the evidence-based leadership practices associated with increasing nurses' job satisfaction. A randomised controlled trial (RCT) design was employed for this research to test the effectiveness of the LDP. An RCT is one of the most powerful tools of research, and the use of this method makes this study unique, as an RCT has never previously been used to evaluate any leadership program for front-line nurse managers. Thirty-nine consenting nurse unit managers from a large tertiary hospital were randomly allocated to receive either the leadership program or only the program's written information about leadership. Demographic baseline data were collected from participants in the NUM groups and the nursing staff who reported to them. Validated questionnaires measuring job satisfaction and leader behaviours were administered to the nurse unit managers and to the NS at baseline, at three months after the commencement of the intervention, and at six months after the commencement of the intervention. Independent and paired t-tests were used to analyse continuous outcome variables, and chi-square tests were used for categorical data. Results: The study found that the nurse unit managers' overall job satisfaction score was higher at 3 months (p = 0.016) and at 6 months (p = 0.027) post commencement of the intervention in the intervention group compared with the control group. Similarly, at 3-month testing, mean scores in the intervention group were higher in five of the six "positive" sub-categories of the leader behaviour scale when compared to the control group. There was a significant difference in one sub-category, effectiveness (p = 0.015). No differences were observed in leadership behaviour scores between groups by 6 months post commencement of the intervention. Over time, at 3-month and 6-month testing, there were significant increases in four transformational leader behaviour scores and in one positive transactional leader behaviour score in the intervention group. Over time, at 3-month testing, there were significant increases in the three leader behaviour outcome scores; however, at 6-month testing only one of these leader behaviour outcome scores remained significantly increased.
Job satisfaction scores were not significantly increased between the NS groups at three months and at six months post commencement of the intervention. However, over time within the intervention group, at 6-month testing there was a significant increase in the job satisfaction scores of NS. There were no significant increases in NUM leader behaviour scores in the intervention group, as rated by the nursing staff who reported to them. Over time, at 3-month testing, NS rated nurse unit managers' leader behaviour scores significantly lower in two leader behaviours and two leader behaviour outcome scores. At 6-month testing, over time, one leader behaviour score was rated significantly lower and the non-transactional leader behaviour was rated significantly higher. Discussion: The study represents the first attempt to test the effectiveness of a leadership development program (LDP) for nurse unit managers using an RCT. The program's design, contents, teaching strategies and learning environment were based on a summary of the literature. The overall improvement in role satisfaction was sustained for at least 6 months post intervention. The study's results may reflect the program's evidence-based approach to developing the LDP, which increased the nurse unit managers' confidence in their role and thereby their job satisfaction. Two other factors possibly contributed to nurse unit managers' increased job satisfaction scores: the program's teaching strategies, which included the involvement of the executive nursing team of the hospital, and the fact that the LDP provided recognition of the importance of the NUM role within the hospital. Consequently, participating in the program may have led to nurse unit managers feeling valued and rewarded for their service, and hence more satisfied. Leadership behaviours remaining unchanged between groups at the 6-month data collection time may indicate that the LDP needs to be conducted over a longer period. This is suggested because, within the intervention group, over time at 3 and 6 months there were significant increases in self-reported leader behaviours. The lack of significant changes in leader behaviour scores between groups may equally signify that leader behaviours require different interventions to achieve change. Nursing staff results suggest that the LDP's design needs to consider involving NS in the program's aims and progress from the outset. It is also possible that, by including regular feedback from NS to the nurse unit managers during the LDP, NS's job satisfaction and their perception of nurse unit managers' leader behaviours may alter. Conclusion/Implications: This study highlights the value of providing an evidence-based leadership program to nurse unit managers to increase their job satisfaction. The evidence-based leadership program increased job satisfaction, but its effect on leadership behaviour was only seen over time. Further research is required to test interventions that attempt to change leader behaviours. Further research on NS's job satisfaction is also required, to test the indirect effects of LDPs on NS whose nurse unit managers participate in them.
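For illustration, the continuous-outcome comparison described in the methods (an independent-samples t-test between intervention and control groups) can be run as below; the scores are simulated, not the study's data.

```python
# Sketch of an independent-samples t-test on simulated job-satisfaction
# scores for intervention vs. control groups (values are assumptions).
import numpy as np
from scipy import stats

rng = np.random.default_rng(7)
intervention = rng.normal(loc=3.8, scale=0.6, size=20)  # assumed scores
control = rng.normal(loc=3.4, scale=0.6, size=19)

t, p = stats.ttest_ind(intervention, control)
print(f"t = {t:.2f}, p = {p:.3f}")
```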

Relevance:

100.00%

Publisher:

Abstract:

Dose-finding designs estimate the dose level of a drug based on observed adverse events. Relatedness of the adverse event to the drug has been generally ignored in all proposed design methodologies. These designs assume that the adverse events observed during a trial are definitely related to the drug, which can lead to flawed dose-level estimation. We incorporate adverse event relatedness into the so-called continual reassessment method. Adverse events that have ‘doubtful’ or ‘possible’ relationships to the drug are modelled using a two-parameter logistic model with an additive probability mass. Adverse events ‘probably’ or ‘definitely’ related to the drug are modelled using a cumulative logistic model. To search for the maximum tolerated dose, we use the maximum estimated toxicity probability of these two adverse event relatedness categories. We conduct a simulation study that illustrates the characteristics of the design under various scenarios. This article demonstrates that adverse event relatedness is important for improved dose estimation. It opens up further research pathways into continual reassessment design methodologies.
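A sketch of the two relatedness-specific toxicity models described above is given below. The functional forms follow the abstract's description; the dose grid, parameter values and target toxicity are illustrative assumptions, and a full continual reassessment method would re-estimate the model parameters as trial data accrue.

```python
# Illustrative relatedness-specific toxicity curves and dose selection by
# maximum estimated toxicity, per the description above. Values are assumed.
import numpy as np

doses = np.array([1.0, 2.0, 3.5, 5.0, 7.0])   # hypothetical dose levels
target = 0.30                                  # assumed target toxicity prob.

def p_doubtful(d, a=-4.0, b=0.8, mass=0.05):
    # Two-parameter logistic with an additive probability mass, for AEs
    # with a 'doubtful' or 'possible' relationship to the drug.
    return mass + (1 - mass) / (1 + np.exp(-(a + b * d)))

def p_related(d, a=-5.0, b=1.1):
    # Cumulative logistic for AEs 'probably'/'definitely' related to the drug.
    return 1 / (1 + np.exp(-(a + b * d)))

# Take the maximum estimated toxicity of the two relatedness categories at
# each dose, then select the dose whose maximum is closest to the target.
p_max = np.maximum(p_doubtful(doses), p_related(doses))
selected = doses[np.argmin(np.abs(p_max - target))]
print("Estimated toxicity:", np.round(p_max, 3), "| selected dose:", selected)
```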

Relevance:

100.00%

Publisher:

Abstract:

We construct a two-scale mathematical model for modern, high-rate LiFePO4 cathodes. We attempt to validate against experimental data using two forms of the phase-field model developed recently to represent the concentration of Li+ in nano-sized LiFePO4 crystals. We also compare this with the shrinking-core based model we developed previously. Validating against high-rate experimental data, in which electronic and electrolytic resistances have been reduced, is an excellent test of the validity of the crystal-scale model used to represent the phase change that may occur in LiFePO4 material. We obtain poor fits with the shrinking-core based model, even with fitting based on “effective” parameter values. Surprisingly, using the more sophisticated phase-field models on the crystal scale results in poorer fits, though a significant parameter regime could not be investigated due to numerical difficulties. Separate to the fits obtained, using phase-field based models embedded in a two-scale cathodic model results in “many-particle” effects consistent with those reported recently.
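For context, a common regular-solution (Cahn-Hilliard) form of the crystal-scale phase-field chemical potential for Li intercalation is shown below; the two phase-field variants used in the study may differ in detail.

```latex
% A common phase-field chemical potential for Li in LiFePO4 crystals, with
% c the local Li filling fraction, \Omega the regular-solution parameter
% and \kappa the gradient-energy coefficient (illustrative form only):
\[
\mu(c) \;=\; RT \ln\!\frac{c}{1-c}
        \;+\; \Omega\,(1 - 2c)
        \;-\; \kappa\,\nabla^{2} c .
\]
% The double-well bulk term (first two terms) drives phase separation into
% Li-poor and Li-rich phases, while the gradient term penalises sharp
% interfaces; this is the phase change the crystal-scale model represents.
```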

Relevance:

100.00%

Publisher:

Abstract:

The significance of dialogue to public relations is a persistent and widespread theme in both industry and the academy (International Communication Association, 2013). Dialogue is integral to a number of theoretical perspectives in public relations, from the instrumentalist/functionalist through to the influential two-way symmetric model (Grunig & Hunt, 1984). The emergence of the relational perspective – with its emphasis on dialogue as a means of achieving mutually beneficial relationships between organisations and stakeholders – brought attention to dialogue as a discrete concept (see, for example, Ledingham, 2003; and 2006). Dialogue continues to be an implicit element in the development of new perspectives on public relations, such as Holtzhausen and Voto’s (2002) postmodern approach...