49 results for method of successive averages
Abstract:
Measurements of plasma natriuretic peptides (NT-proBNP, proBNP and BNP) are used to diagnose heart failure, but the analytes required for these immunoassays are expensive to produce. We describe rapid, cheap and facile production of proteins for immunoassays of heart failure. DNA encoding N-terminally His-tagged NT-proBNP and proBNP was cloned into the pJexpress404 vector. ProBNP and NT-proBNP peptides were expressed in Escherichia coli, purified and refolded in vitro. The analytical performance of these peptides was comparable with that of commercial analytes (EC50 of 2.6 ng/ml for recombinant NT-proBNP versus 5.3 ng/ml for the commercial material, and 3.6 ng/ml for recombinant proBNP versus 5.7 ng/ml for the commercial material). Total yields of purified, refolded peptide were 1.75 mg/l for NT-proBNP and 0.088 mg/l for proBNP. This cost-effective E. coli expression method thus provides high yields of NT-proBNP and proBNP peptides for immunoassay use, and the approach may also be useful in expressing other protein analytes for immunoassay applications.
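For context on the EC50 figures quoted above, the sketch below shows how an EC50 is commonly estimated from immunoassay dose-response data by fitting a four-parameter logistic (4PL) curve. The concentrations and responses are invented for illustration; this is not the authors' fitting procedure.

```python
# Minimal sketch: estimating an EC50 from immunoassay dose-response data by
# fitting a four-parameter logistic (4PL) curve. The calibrator concentrations
# and responses below are invented for illustration only.
import numpy as np
from scipy.optimize import curve_fit

def four_pl(x, bottom, top, ec50, hill):
    """Four-parameter logistic: response as a function of analyte concentration."""
    return bottom + (top - bottom) / (1.0 + (ec50 / x) ** hill)

# Hypothetical calibrator concentrations (ng/ml) and normalised assay responses
conc = np.array([0.1, 0.3, 1.0, 3.0, 10.0, 30.0])
resp = np.array([0.05, 0.12, 0.35, 0.68, 0.92, 0.99])

# Initial guesses: bottom, top, EC50, Hill slope
params, _ = curve_fit(four_pl, conc, resp, p0=[0.0, 1.0, 3.0, 1.0])
print(f"Estimated EC50: {params[2]:.2f} ng/ml")
```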
Abstract:
Primary objective: To investigate whether assessment method influences the type of post-concussion-like symptoms. Methods and procedures: Participants were 73 Australian undergraduate students (mean age = 24.14 years, SD = 8.84; 75.3% female) with no history of mild traumatic brain injury (mTBI). Participants reported symptoms experienced over the previous 2 weeks in response to an open-ended question (free report), mock interview and standardized checklist (British Columbia Post-concussion Symptom Inventory; BC-PSI). Main outcomes and results: In the free report and checklist conditions, cognitive symptoms were reported significantly less frequently than affective (free report: p < 0.001; checklist: p < 0.001) or somatic symptoms (free report: p < 0.001; checklist: p = 0.004). However, in the mock structured interview condition, cognitive and somatic symptoms were reported significantly less frequently than affective symptoms (both p < 0.001). No participants reported at least one symptom from all three domains when assessed by free report, whereas most participants did so when symptoms were assessed by a mock structured interview (75%) or checklist (90%). Conclusions: Previous studies have shown that the method used to assess symptoms affects the number reported. This study shows that the assessment method also affects the type of reported symptoms.
Abstract:
The size and arrangement of stromal collagen fibrils (CFs) influence the optical properties of the cornea and hence its function. How the collagen is spatially arranged in relation to collagen fibril diameter remains an open question. In the present study, we introduce a new parameter, the edge-fibrillar distance (EFD), which measures how two collagen fibrils are spaced with respect to their closest edges, and we characterise their spatial distribution through the normalized standard deviation of EFD (NSDEFD); both were assessed following the application of two commercially available multipurpose solutions (MPS): ReNu and Hippia. Corneal buttons were soaked separately in ReNu and Hippia MPS for five hours, fixed overnight in 2.5% glutaraldehyde containing cuprolinic blue and processed for transmission electron microscopy. The electron micrographs were processed using a user-coded ImageJ plugin. Statistical analysis was performed to compare the image-processed equivalent diameter (ED), inter-fibrillar distance (IFD) and EFD of the CFs of treated versus normal corneas. The ReNu-soaked cornea showed a partly degenerated epithelium with loose hemidesmosomes and Bowman's collagen. In contrast, the epithelium of the cornea soaked in Hippia was degenerated or lost but showed closely packed Bowman's collagen. Soaking the corneas in either MPS caused a statistically significant decrease in anterior collagen fibril ED and significant changes in IFD and EFD relative to untreated corneas (p < 0.05 for all comparisons). The EFD measurement directly conveys the gap between the peripheries of the collagen fibrils and their spatial distribution; in combination with ED, it shows how the corneal collagen fibrils are spaced in relation to their diameters. The spatial distribution parameter NSDEFD indicated that fibrils in ReNu-treated corneas were the most uniformly distributed, followed by normal and then Hippia-treated corneas. The EFD measurement, with its relatively low standard deviation, and NSDEFD, a characteristic of uniform CF distribution, can serve as additional parameters for evaluating collagen organization and assessing the effects of various treatments on corneal health and transparency.
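To make the EFD and NSDEFD definitions concrete, a minimal sketch follows, assuming EFD is the nearest-neighbour centre-to-centre distance minus the two fibril radii and NSDEFD is the standard deviation of EFD normalised by its mean. Coordinates and diameters are invented; this is not the authors' ImageJ plugin.

```python
# Minimal sketch of the edge-fibrillar distance (EFD) idea, assuming EFD is the
# nearest-neighbour centre-to-centre distance minus both fibril radii, and
# NSDEFD is std(EFD) / mean(EFD). Coordinates and diameters are invented.
import numpy as np

centers = np.array([[0.0, 0.0], [40.0, 5.0], [15.0, 35.0], [55.0, 40.0]])  # nm
diameters = np.array([25.0, 28.0, 24.0, 27.0])  # equivalent diameters (nm)

efd = []
for i, c in enumerate(centers):
    dists = np.linalg.norm(centers - c, axis=1)
    dists[i] = np.inf                      # exclude self
    j = np.argmin(dists)                   # nearest neighbour
    edge_gap = dists[j] - (diameters[i] + diameters[j]) / 2.0
    efd.append(edge_gap)

efd = np.array(efd)
nsd_efd = efd.std() / efd.mean()           # normalised standard deviation of EFD
print(f"Mean EFD: {efd.mean():.1f} nm, NSDEFD: {nsd_efd:.2f}")
```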
Abstract:
The objective of this study is to address the question: are those who leave suicide notes representative of the larger population of those who commit suicide? The method involves an analysis of a full population of suicides by residents of Queensland, Australia, for the full year of 2004, with the information drawn from Coronial files. Our overall results suggest, in support of previous research, that those who leave suicide notes are remarkably similar to those who do not. Differences are identified in four areas: first, and in contrast to prior research, females are less likely to leave a suicide note; second, and in support of previous research, Aboriginal Australians are less likely to leave suicide notes; third, and in support of some previous research, those who use gas as a method of suicide are more likely to leave notes, while those who use a vehicle or a train are less likely to leave notes; finally, our findings lend support to research which finds that those with a diagnosed mental illness are less likely to leave notes. The discussion addresses some of the reasons these disparities may have occurred, and continues the debate over the degree to which suicide notes give insight into the larger suicide population.
Abstract:
This paper argues that the Panopticon is an accurate model for, and illustration of, policing and security methods in modern society. Initially, I overview the theoretical concept of the Panopticon as a structure of perceived universal surveillance which facilitates automatic obedience in its subjects, as identified by the theorists Jeremy Bentham and Michel Foucault. The paper subsequently moves to identify how the Panopticon, despite being a theoretical construct, is nevertheless instantiated to an extent through the prevalence of security cameras as a means of sovereign regulation of human conduct; speed enforcement is an everyday example. It could even be contended that increasing surveillance according to the model of the Panopticon would reduce the frequency of offences. However, in the final analysis, the paper considers that even if an approach based on the Panopticon is a more effective method of policing, it is not necessarily a more desirable one.
Abstract:
To understand the diffusion of high-technology products such as PCs, digital cameras and DVD players, it is necessary to consider the dynamics of successive generations of technology. From the consumer's perspective, these technology changes may manifest themselves either as a new generation product substituting for the old (for instance, digital cameras) or as multiple generations of a single product (for example, PCs). To date, research has been confined to aggregate-level sales models. These models consider the demand relationship between one generation of a product and a successor generation. However, they do not give insights into the disaggregate-level decisions by individual households – whether to adopt the newer generation, and if so, when. This paper makes two contributions. It is the first large-scale empirical study to collect household data for successive generations of technologies in an effort to understand the drivers of adoption. Second, in contrast to traditional analysis in diffusion research that conceptualizes technology substitution as an "adoption of innovation" type process, we propose that from a consumer's perspective, technology substitution combines elements of both adoption (adopting the new generation technology) and replacement (replacing the generation I product with generation II).

Key Propositions: In some cases, successive generations are clear "substitutes" for the earlier generation (e.g. PCs: Pentium I to II to III). More commonly, the new generation II technology is a "partial substitute" for the existing generation I technology (e.g. DVD players and VCRs). Some consumers will purchase generation II products as substitutes for their generation I product, while other consumers will purchase generation II products as additional products to be used alongside their generation I product. We propose that substitute generation II purchases combine elements of both adoption and replacement, whereas additional generation II purchases are a solely adoption-driven process. Moreover, drawing on adoption theory, consumer innovativeness is the most important consumer characteristic for adoption timing of new products. Hence, we hypothesize that consumer innovativeness influences the timing of both additional and substitute generation II purchases but has a stronger impact on additional generation II purchases. We further propose that substitute generation II purchases act partially as a replacement purchase for the generation I product. Thus, we hypothesize that households with older generation I products will make substitute generation II purchases earlier.

Methods: We employ Cox hazard modeling to study factors influencing the timing of a household's adoption of generation II products. A separate hazard model is estimated for additional and substitute purchases. The age of the generation I product is calculated based on the most recent household purchase of that product. Control variables include size and income of the household, and age and education of the decision-maker. A sketch of this kind of model appears after the results below.

Results and Implications: Our preliminary results confirm both our hypotheses. Consumer innovativeness has a strong influence on both additional purchases and substitute purchases. Also consistent with our hypotheses, the age of the generation I product has a dramatic influence on substitute purchases of VCR/DVD players and a strong influence for PCs/notebooks. Yet, also as hypothesized, there was no influence on additional purchases.
This implies that there is a clear distinction between additional and substitute purchases of generation II products, each with different drivers. For substitute purchases, product age is a key driver. Therefore marketers of high technology products can utilize data on generation I product age (e.g. from warranty or loyalty programs) to target customers who are more likely to make a purchase.
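A minimal sketch of the kind of Cox proportional-hazards analysis described in the Methods above, using the lifelines Python library. The data frame and covariate names (innovativeness, generation I product age, household income) are hypothetical stand-ins, not the study's variables or data.

```python
# Minimal sketch of a Cox proportional-hazards model for time-to-adoption of a
# generation II product. All data and covariate names are hypothetical.
import pandas as pd
from lifelines import CoxPHFitter

df = pd.DataFrame({
    "months_to_gen2_purchase": [6, 14, 30, 9, 22, 40, 12, 27],
    "purchased": [1, 1, 0, 1, 1, 0, 1, 0],      # 0 = censored (no purchase yet)
    "innovativeness": [4.5, 3.8, 2.1, 4.9, 3.0, 1.8, 4.2, 2.5],
    "gen1_age_years": [5, 3, 1, 6, 4, 2, 5, 1],  # age of the generation I product
    "household_income": [70, 55, 60, 85, 40, 50, 75, 45],
})

cph = CoxPHFitter()
cph.fit(df, duration_col="months_to_gen2_purchase", event_col="purchased")
cph.print_summary()  # hazard ratios for each covariate
```

In this framing, a hazard ratio above 1 for gen1_age_years would correspond to the hypothesis that households with older generation I products make substitute purchases earlier.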
Abstract:
Purpose: The cornea is known to be susceptible to forces exerted by eyelids. There have been previous attempts to quantify eyelid pressure but the reliability of the results is unclear. The purpose of this study was to develop a technique using piezoresistive pressure sensors to measure upper eyelid pressure on the cornea. Methods: The technique was based on the use of thin (0.18 mm) tactile piezoresistive pressure sensors, which generate a signal related to the applied pressure. A range of factors that influence the response of this pressure sensor were investigated along with the optimal method of placing the sensor in the eye. Results: Curvature of the pressure sensor was found to impart force, so the sensor needed to remain flat during measurements. A large rigid contact lens was designed to have a flat region to which the sensor was attached. To stabilise the contact lens during measurement, an apparatus was designed to hold and position the sensor and contact lens combination on the eye. A calibration system was designed to apply even pressure to the sensor when attached to the contact lens, so the raw digital output could be converted to actual pressure units. Conclusions: Several novel procedures were developed to use tactile sensors to measure eyelid pressure. The quantification of eyelid pressure has a number of applications including eyelid reconstructive surgery and the design of soft and rigid contact lenses.
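A minimal sketch of the calibration step described in the Results: a least-squares line converts the sensor's raw digital output to pressure units. The calibration points are invented; the study's actual calibration apparatus and values are not reproduced here.

```python
# Minimal sketch of converting a piezoresistive sensor's raw digital output to
# pressure units via a least-squares calibration line. Values are invented.
import numpy as np

applied_pressure_mmHg = np.array([0.0, 2.0, 4.0, 6.0, 8.0, 10.0])
raw_output = np.array([12.0, 55.0, 101.0, 148.0, 190.0, 236.0])  # digital counts

# Fit raw counts -> pressure (first-order polynomial)
slope, intercept = np.polyfit(raw_output, applied_pressure_mmHg, 1)

def counts_to_pressure(counts):
    """Convert raw sensor counts to pressure (mmHg) via the calibration line."""
    return slope * counts + intercept

print(f"120 counts ~ {counts_to_pressure(120.0):.1f} mmHg")
```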
Abstract:
The high morbidity and mortality associated with atherosclerotic coronary vascular disease (CVD) and its complications are being lessened by increased knowledge of risk factors, effective preventative measures and proven therapeutic interventions. However, significant CVD morbidity remains, and sudden cardiac death continues to be a presenting feature for some subsequently diagnosed with CVD. Coronary vascular disease is also the leading cause of anaesthesia-related complications. Stress electrocardiography/exercise testing is predictive of 10-year risk of CVD events, and the cardiovascular variables used to score this test are monitored peri-operatively. Similar physiological time-series datasets are being subjected to data mining methods for the prediction of medical diagnoses and outcomes. This study aims to find predictors of CVD using anaesthesia time-series data and patient risk factor data. Several pre-processing and predictive data mining methods are applied to these data. Physiological time-series data related to anaesthetic procedures are subjected to pre-processing methods for removal of outliers, calculation of moving averages, and data summarisation and abstraction. Feature selection methods of both wrapper and filter types are applied to derived physiological time-series variable sets alone and to the same variables combined with risk factor variables. The ability of these methods to identify subsets of highly correlated but non-redundant variables is assessed. The major dataset is derived from the entire anaesthesia population, and subsets of this population are considered to be at increased anaesthesia risk based on their need for more intensive monitoring (invasive haemodynamic monitoring and additional ECG leads). Because of the unbalanced class distribution in the data, majority-class under-sampling and the Kappa statistic, together with misclassification rate and area under the ROC curve (AUC), are used for evaluation of models generated using different prediction algorithms. The performance of models derived from feature-reduced datasets reveals the filter method, CFS subset evaluation, to be the most consistently effective, although Consistency-derived subsets tended towards slightly increased accuracy at the cost of markedly increased complexity. The use of misclassification rate (MR) for model performance evaluation is influenced by class distribution. This influence can be eliminated by consideration of the AUC or Kappa statistic, as well as by evaluation of subsets with an under-sampled majority class. The noise and outlier removal pre-processing methods produced models with MR ranging from 10.69 to 12.62, with the lowest value for data from which both outliers and noise were removed (MR 10.69). For the raw time-series dataset, MR is 12.34. Feature selection reduces MR to between 9.8 and 10.16, with the time-segmented summary data (dataset F) MR being 9.8 and the raw time-series summary data (dataset A) being 9.92. However, for all datasets based on time-series variables alone, the complexity is high. For most pre-processing methods, CFS could identify a subset of correlated and non-redundant variables from the time-series-alone datasets, but models derived from these subsets consist of one leaf only. MR values are consistent with the class distribution in the subset folds evaluated in the n-fold cross-validation method.
For models based on CFS-selected time-series-derived and risk factor (RF) variables, the MR ranges from 8.83 to 10.36, with dataset RF_A (raw time-series data and RF) being 8.85 and dataset RF_F (time-segmented time-series variables and RF) being 9.09. The models based on counts of outliers and counts of data points outside the normal range (dataset RF_E), and on variables derived from time series transformed using Symbolic Aggregate Approximation (SAX) with associated time-series pattern cluster membership (dataset RF_G), perform the least well, with MR of 10.25 and 10.36 respectively. For coronary vascular disease prediction, nearest neighbour (NNge) and the support vector machine based method, SMO, have the highest MR of 10.1 and 10.28, while logistic regression (LR) and the decision tree (DT) method, J48, have MR of 8.85 and 9.0 respectively. DT rules are the most comprehensible and clinically relevant. The predictive accuracy gain achieved by adding risk factor variables to time-series-variable-based models is significant. The addition of time-series-derived variables to models based on risk factor variables alone is associated with a trend towards improved performance. Data mining of feature-reduced anaesthesia time-series variables together with risk factor variables can produce compact and moderately accurate models able to predict coronary vascular disease. Decision tree analysis of time-series data combined with risk factor variables yields rules which are more accurate than models based on time-series data alone. The limited additional value provided by electrocardiographic variables when compared with risk factors alone is similar to recent suggestions that exercise electrocardiography (exECG) under standardised conditions has limited additional diagnostic value over risk factor analysis and symptom pattern. The pre-processing used in this study had limited effect when time-series variables and risk factor variables are used as model input. In the absence of risk factor input, the use of time-series variables after outlier removal, and of time-series variables based on physiological values falling outside the accepted normal range, is associated with some improvement in model performance.
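A minimal sketch of the evaluation approach described above, assuming a simple majority-class under-sampling step followed by computation of misclassification rate, the Kappa statistic and AUC with scikit-learn on synthetic data; this is not the study's pipeline or dataset.

```python
# Minimal sketch: under-sample the majority class, then report misclassification
# rate, Cohen's kappa, and AUC. Synthetic data stands in for the study's dataset.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import cohen_kappa_score, roc_auc_score
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=2000, weights=[0.9, 0.1], random_state=0)

# Majority-class under-sampling: keep all minority cases, match with majority
rng = np.random.default_rng(0)
minority = np.where(y == 1)[0]
majority = rng.choice(np.where(y == 0)[0], size=minority.size, replace=False)
idx = np.concatenate([minority, majority])
X_bal, y_bal = X[idx], y[idx]

X_tr, X_te, y_tr, y_te = train_test_split(X_bal, y_bal, random_state=0)
model = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)

pred = model.predict(X_te)
prob = model.predict_proba(X_te)[:, 1]
print(f"Misclassification rate: {(pred != y_te).mean():.3f}")
print(f"Kappa: {cohen_kappa_score(y_te, pred):.3f}")
print(f"AUC:   {roc_auc_score(y_te, prob):.3f}")
```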
Abstract:
Green energy is a key factor for green buildings, driving down electricity bills and generating electricity with zero carbon emissions. Climate change and environmental policies are encouraging people to use renewable energy for green buildings instead of coal-fired (conventional) energy, which is not environmentally friendly. Solar energy is one such clean energy, reducing environmental impact while costing less in electricity fees. A solar energy system collects sunlight with a solar array and stores the energy in batteries, which then provide the electricity needed by the whole house with zero carbon emissions. With many solar array suppliers in the market, this paper applies the superiority and inferiority multi-criteria ranking (SIR) method with 13 criteria, establishing I-flow and S-flow matrices to evaluate four alternative solar energy systems and determine which alternative is best for providing power to a sustainable building. SIR is a well-known, structured multi-criteria decision support tool that is increasingly used in construction and building. The outcome of this paper gives users meaningful guidance in selecting solar energy systems.
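A minimal sketch of the SIR idea under simplifying assumptions: a strict better/worse preference function builds the S- and I-matrices, and criterion weights aggregate them into S-flows and I-flows. The scores and weights are invented; the paper's 13 criteria and its preference functions are not reproduced.

```python
# Minimal sketch of superiority-inferiority ranking (SIR): build S- and
# I-matrices with a simple strict-preference count, then compute weighted
# S-flows and I-flows. Scores and weights are invented for illustration.
import numpy as np

# Rows: 4 hypothetical solar-array alternatives; columns: 3 benefit criteria
scores = np.array([
    [7.0, 6.0, 8.0],
    [6.0, 8.0, 5.0],
    [8.0, 5.0, 7.0],
    [5.0, 7.0, 6.0],
])
weights = np.array([0.5, 0.3, 0.2])

n_alt, n_crit = scores.shape
S = np.zeros_like(scores)  # superiority matrix
I = np.zeros_like(scores)  # inferiority matrix
for j in range(n_crit):
    for i in range(n_alt):
        diffs = scores[i, j] - scores[:, j]
        S[i, j] = np.sum(diffs > 0)   # how many alternatives i beats on criterion j
        I[i, j] = np.sum(diffs < 0)   # how many alternatives beat i on criterion j

s_flow = S @ weights   # higher is better
i_flow = I @ weights   # lower is better
for i in range(n_alt):
    print(f"Alternative {i + 1}: S-flow {s_flow[i]:.2f}, I-flow {i_flow[i]:.2f}")
```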
Abstract:
Establishing age-at-death for skeletal remains is a vital component of forensic anthropology. The Suchey-Brooks (S-B) method of age estimation has been widely utilised since 1986 and relies on a visual assessment of the pubic symphyseal surface in comparison to a series of casts. Inter-population studies (Kimmerle et al., 2005; Djuric et al., 2007; Sakaue, 2006) demonstrate limitations of the S-B method; however, no assessment of this technique specific to Australian populations has been published. Aim: This investigation assessed the accuracy and applicability of the S-B method to an adult Australian Caucasian population by quantifying the error rates associated with this technique. Methods: Computed tomography (CT) and contact scans of the S-B casts were performed; each geometrically modelled surface was extracted and quantified for reference purposes. A Queensland skeletal database for Caucasian remains aged 15–70 years was initiated at the Queensland Health Forensic and Scientific Services – Forensic Pathology Mortuary (n=350). Three-dimensional reconstruction of the bone surface was performed using innovative volume visualisation protocols on the Amira® and Rapidform® platforms. Samples were allocated into 11 sub-sets of 5-year age intervals, and changes in the surface geometry were quantified in relation to age, gender and asymmetry. Results: Preliminary results indicate that computational analysis was successfully applied to model morphological surface changes. Significant differences between observed and actual ages were noted. Furthermore, initial morphological assessment demonstrates significant bilateral asymmetry of the pubic symphysis, which is unaccounted for in the S-B method. These results suggest refinements to the S-B method when applied to Australian casework. Conclusion: This investigation promises to make anthropological analysis more quantitative and less invasive through CT imaging. The overarching goal is to improve skeletal identification and medico-legal death investigation in the coronial process by narrowing the range of age-at-death estimation in a biological profile.
Abstract:
A method of producing porous complex oxides includes the steps of providing a mixture of (a) precursor elements suitable to produce the complex oxide, or (b) one or more precursor elements suitable to produce particles of the complex oxide and one or more metal oxide particles; and (c) a particulate carbon-containing pore-forming material selected to provide pore sizes in the range of 7-250 nm; and treating the mixture to (i) form the porous complex oxide, in which two or more of the precursor elements from (a), or one or more of the precursor elements and one or more of the metals in the metal oxide particles from (b), are incorporated into a phase of the complex metal oxide, the complex metal oxide having grain sizes in the range of 1-150 nm, and (ii) remove the pore-forming material under conditions such that the porous structure and composition of the complex oxide are substantially preserved. The method may also be used to produce non-refractory metal oxides. The mixture may further include a surfactant or a polymer.
Abstract:
Reported homocysteine (HCY) concentrations in human serum show poor concordance amongst laboratories due to endogenous HCY in the matrices used for assay calibrators and QCs. Hence, we have developed a fully validated LC–MS/MS method for measurement of HCY concentrations in human serum samples that addresses this issue by minimising matrix effects. We used small volumes (20 μL) of 2% Bovine Serum Albumin (BSA) as surrogate matrix for making calibrators and QCs with concentrations adjusted for the endogenous HCY concentration in the surrogate matrix using the method of standard additions. To aliquots (20 μL) of human serum samples, calibrators or QCs, were added HCY-d4 (internal standard) and tris-(2-carboxyethyl) phosphine hydrochloride (TCEP) as reducing agent. After protein precipitation, diluted supernatants were injected into the LC–MS/MS. Calibration curves were linear; QCs were accurate (5.6% deviation from nominal), precise (CV% ≤ 9.6%), stable for four freeze–thaw cycles, and when stored at room temperature for 5 h or at −80 °C (27 days). Recoveries from QCs in surrogate matrix or pooled human serum were 91.9 and 95.9%, respectively. There was no matrix effect using 6 different individual serum samples including one that was haemolysed. Our LC–MS/MS method has satisfied all of the validation criteria of the 2012 EMA guideline.
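A minimal sketch of the method of standard additions used here to correct calibrators for endogenous HCY: fitting response against added concentration, the endogenous concentration is the magnitude of the x-intercept (intercept divided by slope). The spike levels and responses below are invented for illustration.

```python
# Minimal sketch of the method of standard additions: spike known amounts of
# analyte into the matrix, fit signal vs. added concentration, and read the
# endogenous concentration off the x-intercept. Values are invented.
import numpy as np

added_umol_L = np.array([0.0, 5.0, 10.0, 20.0, 40.0])   # spiked HCY (umol/L)
response = np.array([0.42, 0.63, 0.85, 1.27, 2.12])      # peak-area ratio

slope, intercept = np.polyfit(added_umol_L, response, 1)
endogenous = intercept / slope   # magnitude of the x-intercept
print(f"Estimated endogenous HCY: {endogenous:.1f} umol/L")
```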
Abstract:
Porosity is one of the key parameters of the macroscopic structure of porous media, generally defined as the ratio of the free space (the volume of air) within the material to the total volume of the material. Porosity is determined by measuring the skeletal volume and the envelope volume. The solid displacement method is an inexpensive and easy way to determine the envelope volume of a sample with an irregular shape. In this method, glass beads are generally used as the solid because of their uniform size, compactness and fluidity. Owing to their small size, however, the glass beads can enter open pores whose diameter is larger than that of the beads. Although extensive research has been carried out on porosity determination using the displacement method, no study exists which adequately reports micro-level observation of the sample during measurement. This study therefore set out to assess the accuracy of the solid displacement method for bulk density measurement of dried foods through micro-level observation. Solid displacement determination of porosity was conducted using a cylindrical vial (a cylindrical plastic container) and 57 µm glass beads in order to measure the bulk density of apple slices at different moisture contents. A scanning electron microscope (SEM), a profilometer and ImageJ software were used to investigate the penetration of glass beads into the surface pores during the determination of the porosity of dried food. A helium pycnometer was used to measure the particle density of the sample. Results show that a significant number of pores were large enough to allow the glass beads to enter, thereby causing some erroneous results. It was also found that coating the dried sample with an appropriate coating material prior to measurement can resolve this problem.
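A minimal sketch of the porosity arithmetic underlying this approach: bulk (envelope) density from the displacement measurement and particle density from helium pycnometry give porosity = 1 - bulk/particle. All values are invented for illustration.

```python
# Minimal sketch of porosity from bulk and particle density. Bulk density uses
# the envelope volume (glass-bead displacement); particle density uses the
# skeletal volume (helium pycnometry). All values are invented.
sample_mass_g = 1.20
envelope_volume_cm3 = 2.50   # from glass-bead (solid) displacement
skeletal_volume_cm3 = 0.85   # from helium pycnometer

bulk_density = sample_mass_g / envelope_volume_cm3        # g/cm^3
particle_density = sample_mass_g / skeletal_volume_cm3    # g/cm^3
porosity = 1.0 - bulk_density / particle_density

print(f"Bulk density: {bulk_density:.3f} g/cm3")
print(f"Particle density: {particle_density:.3f} g/cm3")
print(f"Porosity: {porosity:.2f}")   # fraction of envelope volume that is pores
```

Note that if beads penetrate surface pores, the measured envelope volume shrinks, inflating bulk density and underestimating porosity, which is exactly the error mode the micro-level observations above were designed to detect.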