467 results for Prediction techniques
Abstract:
In this paper, spatially offset Raman spectroscopy (SORS) is demonstrated for non-invasively investigating the composition of drug mixtures inside an opaque plastic container. The mixtures consisted of three components including a target drug (acetaminophen or phenylephrine hydrochloride) and two diluents (glucose and caffeine). The target drug concentrations ranged from 5% to 100%. After conducting SORS analysis to ascertain the Raman spectra of the concealed mixtures, principal component analysis (PCA) was performed on the SORS spectra to reveal trends within the data. Partial least squares (PLS) regression was used to construct models that predicted the concentration of each target drug, in the presence of the other two diluents. The PLS models were able to predict the concentration of acetaminophen in the validation samples with a root-mean-square error of prediction (RMSEP) of 3.8% and the concentration of phenylephrine hydrochloride with an RMSEP of 4.6%. This work demonstrates the potential of SORS, used in conjunction with multivariate statistical techniques, to perform non-invasive, quantitative analysis on mixtures inside opaque containers. This has applications for pharmaceutical analysis, such as monitoring the degradation of pharmaceutical products on the shelf, in forensic investigations of counterfeit drugs, and for the analysis of illicit drug mixtures which may contain multiple components.
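The following is a minimal sketch, not the authors' code, of the PLS quantification step described above: fitting a PLS model to calibration spectra and reporting the root-mean-square error of prediction (RMSEP) on validation samples. The arrays, number of latent variables and data values are hypothetical placeholders.

```python
# Minimal sketch of PLS concentration prediction with RMSEP (hypothetical data).
import numpy as np
from sklearn.cross_decomposition import PLSRegression

# Hypothetical data: rows are SORS spectra, y is the known target-drug
# concentration (%) of each calibration / validation mixture.
rng = np.random.default_rng(0)
X_cal, y_cal = rng.normal(size=(40, 500)), rng.uniform(5, 100, 40)
X_val, y_val = rng.normal(size=(10, 500)), rng.uniform(5, 100, 10)

pls = PLSRegression(n_components=5)     # number of latent variables is a modelling choice
pls.fit(X_cal, y_cal)

y_pred = pls.predict(X_val).ravel()
rmsep = np.sqrt(np.mean((y_val - y_pred) ** 2))   # root-mean-square error of prediction
print(f"RMSEP: {rmsep:.1f}%")
```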
Abstract:
Finite element analyses of the human body in seated postures require digital models capable of providing accurate and precise prediction of the tissue-level response of the body in the seated posture. To achieve such models, the human anatomy must be represented with high fidelity. This information can readily be obtained using medical imaging techniques such as Magnetic Resonance Imaging (MRI) or Computed Tomography (CT). Current practices for constructing digital human models based on magnetic resonance (MR) images acquired in a lying-down (supine) posture have reduced the error in the geometric representation of human anatomy relative to reconstructions based on data from cadaveric studies. Nonetheless, the significant differences between seated and supine postures in segment orientation, soft-tissue deformation and soft-tissue strain create a need for data obtained in postures more similar to the application posture. In this study, we present a novel method for creating digital human models based on seated MR data. An adult male volunteer was scanned in a simulated driving posture using a FONAR 0.6T upright MRI scanner with a T1 scanning protocol. To compensate for unavoidable image distortion near the edges of the imaging volume, images of the same anatomical structures were obtained in transverse and sagittal planes. Combinations of transverse and sagittal images were used to reconstruct the major anatomical features from the buttocks through the knees, including bone, muscle and fat tissue perimeters, using Solidworks® software. For each MR image, B-splines were created as contours for the anatomical structures of interest, and LOFT commands were used to interpolate between the generated B-splines. The reconstruction of the pelvis from MR data was enhanced by the use of a template model generated in previous work from CT images. A non-rigid registration algorithm was used to fit the pelvis template to the MR data. Additionally, MR image processing was conducted for both the left and the right sides of the model because of the intended asymmetric posture of the volunteer during the MR measurements. The presented subject-specific, three-dimensional model of the buttocks and thighs will add value to optimisation cycles in automotive seat development when used to simulate human interaction with automotive seats.
Abstract:
Background: Access to cardiac services is essential for the appropriate implementation of evidence-based therapies to improve outcomes. The Cardiac Accessibility and Remoteness Index for Australia (Cardiac ARIA) aimed to derive an objective, geographic measure reflecting access to cardiac services. Methods: An expert panel defined an evidence-based clinical pathway. Using Geographic Information Systems (GIS), a numeric/alpha index was developed at two points along the continuum of care. The acute category (numeric) measured the time from the emergency call to arrival at an appropriate medical facility via road ambulance. The aftercare category (alpha) measured access to four basic services (family doctor, pharmacy, cardiac rehabilitation, and pathology services) when a patient returned to their community. Results: The numeric index ranged from 1 (access to a principal referral center with a cardiac catheterization service ≤ 1 hour) to 8 (no ambulance service, > 3 hours to a medical facility, air transport required). The alphabetic index ranged from A (all 4 services available within a 1-hour drive time) to E (no services available within 1 hour). 13.9 million Australians (71%) resided within Cardiac ARIA 1A locations (hospital with cardiac catheterization laboratory and all aftercare services within 1 hour). Those outside Cardiac ARIA 1A locations were over-represented by people aged over 65 years (32%) and Indigenous people (60%). Conclusion: The Cardiac ARIA index demonstrated substantial inequity in access to cardiac services in Australia. This methodology can be used to inform cardiology health service planning and could be applied to other common disease states and other regions of the world.
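As a toy illustration only, and not the published Cardiac ARIA algorithm, the sketch below shows how a location's drive-time and service data might be combined into a two-part index string such as "1A". The category boundaries and function names are hypothetical simplifications of the scheme described above.

```python
# Toy illustration (not the published Cardiac ARIA algorithm) of combining an
# acute (numeric) category and an aftercare (alpha) category into one index.
def acute_category(minutes_to_facility, has_cath_lab, has_ambulance=True):
    """Hypothetical simplification: 1 = cath-lab hospital reachable within 1 h."""
    if has_ambulance and has_cath_lab and minutes_to_facility <= 60:
        return 1
    if not has_ambulance and minutes_to_facility > 180:
        return 8                                # worst acute category
    return min(7, 2 + int(minutes_to_facility // 60))

def aftercare_category(services_within_1h):
    """A = all 4 basic services within a 1 h drive, E = none (simplified)."""
    return "ABCDE"[4 - min(4, len(services_within_1h))]

# Example: a metropolitan location with a catheterization laboratory nearby.
index = f"{acute_category(30, has_cath_lab=True)}" \
        f"{aftercare_category({'GP', 'pharmacy', 'rehab', 'pathology'})}"
print(index)   # -> "1A"
```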
Abstract:
Accurate reliability prediction for large-scale, long-lived engineering assets is a crucial foundation for effective asset risk management and optimal maintenance decision making. However, a lack of failure data for assets that fail infrequently, and changing operational conditions over long periods of time, make accurate reliability prediction for such assets very challenging. To address this issue, we present a Bayesian-Markov approach to reliability prediction using prior knowledge and condition monitoring data. In this approach, Bayesian theory is used to incorporate prior information about failure probabilities and current information about asset health to make statistical inferences, while Markov chains are used to update and predict the health of assets based on condition monitoring data. The prior information can be supplied by domain experts, extracted from previous comparable cases or derived from basic engineering principles. Our approach differs from existing hybrid Bayesian models, which are normally used to update the parameter estimation of a given distribution, such as the Weibull-Bayesian distribution, or the transition probabilities of a Markov chain. Instead, our new approach can be used to update predictions of failure probabilities when failure data are sparse or nonexistent, as is often the case for large-scale, long-lived engineering assets.
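The sketch below illustrates, with assumed numbers rather than the paper's model, the two ingredients named above: a Bayesian update of a failure probability from prior knowledge plus condition-monitoring evidence, and a Markov chain that propagates the asset's health state forward in time.

```python
# Minimal sketch (illustrative numbers, not the paper's model) of combining a
# Bayesian update with a Markov health-state prediction.
import numpy as np

# --- Bayesian step: update P(degraded) given a condition-monitoring alarm ---
prior_degraded = 0.10                  # prior supplied by domain experts (assumed)
p_alarm_given_degraded = 0.80          # assumed sensor characteristics
p_alarm_given_healthy = 0.05
posterior_degraded = (p_alarm_given_degraded * prior_degraded) / (
    p_alarm_given_degraded * prior_degraded
    + p_alarm_given_healthy * (1 - prior_degraded)
)

# --- Markov step: propagate the health distribution over future inspections ---
# States: healthy, degraded, failed (failed is absorbing); transitions assumed.
P = np.array([[0.95, 0.04, 0.01],
              [0.00, 0.90, 0.10],
              [0.00, 0.00, 1.00]])
state = np.array([1 - posterior_degraded, posterior_degraded, 0.0])
for _ in range(5):                     # five inspection intervals ahead
    state = state @ P
print(f"P(failure within 5 intervals) = {state[2]:.3f}")
```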
Abstract:
This paper presents an experimental study that examines the accuracy of various information retrieval techniques for Web service discovery. The main goal of this research is to evaluate algorithms for semantic Web service discovery. The evaluation is comprehensively benchmarked using more than 1,700 real-world WSDL documents from the INEX 2010 Web Service Discovery Track dataset. For automatic search, we successfully use Latent Semantic Analysis and BM25 to perform Web service discovery. Moreover, we provide a linking analysis which automatically links possible atomic Web services to meet the complex requirements of users. Our fusion engine recommends a final result to users. Our experiments show that linking analysis can improve the overall performance of Web service discovery. We also find that keyword-based search can quickly return results but is limited in its ability to understand users' goals.
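As a rough sketch of one retrieval technique mentioned above, the snippet below applies Latent Semantic Analysis to a tiny, made-up corpus of WSDL-derived service descriptions and ranks them against a query by cosine similarity. It is not the evaluated system; the corpus, query and number of latent dimensions are illustrative.

```python
# Minimal sketch (not the evaluated system) of Latent Semantic Analysis for
# matching a query against WSDL-derived service descriptions.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.decomposition import TruncatedSVD
from sklearn.metrics.pairwise import cosine_similarity

# Hypothetical corpus: one text blob per WSDL document (names, docs, types).
services = [
    "weather forecast temperature city service",
    "currency exchange rate conversion service",
    "hotel booking reservation travel service",
]
query = ["currency conversion rates"]

tfidf = TfidfVectorizer()
X = tfidf.fit_transform(services)

lsa = TruncatedSVD(n_components=2)          # size of latent semantic space is a choice
X_lsa = lsa.fit_transform(X)
q_lsa = lsa.transform(tfidf.transform(query))

scores = cosine_similarity(q_lsa, X_lsa).ravel()
ranking = scores.argsort()[::-1]            # best-matching services first
print([services[i] for i in ranking])
```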
Abstract:
Traffic-generated semi- and non-volatile organic compounds (SVOCs and NVOCs) pose a serious threat to human and ecosystem health when washed off into receiving water bodies by stormwater. Climate change-influenced rainfall characteristics make the estimation of these pollutants in stormwater quite complex. The research study discussed in this paper developed a prediction framework for such pollutants under the dynamic influence of climate change on rainfall characteristics. It was established through principal component analysis (PCA) that the intensities and durations of the low to moderate rain events induced by climate change mainly affect the wash-off of SVOCs and NVOCs from urban roads. The study outcomes were able to overcome the limitations of stringent laboratory preparation of calibration matrices by extracting uncorrelated underlying factors in the data matrices through the systematic application of PCA and factor analysis (FA). Based on the initial findings from PCA and FA, the framework incorporated an orthogonal rotatable central composite experimental design to set up the calibration matrices and partial least squares regression to identify significant variables in predicting the target SVOCs and NVOCs in four particulate fractions ranging from >300 μm to 1 μm and one dissolved fraction of <1 μm. For the particulate fractions in the >300-1 μm range, similar distributions of predicted and observed concentrations of the target compounds from the minimum to the 75th percentile were achieved. The inter-event coefficients of variation for the particulate fractions of >300-1 μm were 5% to 25%. The limited solubility of the target compounds in stormwater restricted the predictive capacity of the proposed method for the dissolved fraction of <1 μm.
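The following is a small sketch, on hypothetical data, of the PCA step described above: extracting uncorrelated underlying factors from a matrix of rainfall characteristics and wash-off loads. The variables, event count and values are assumptions for illustration only.

```python
# Small sketch (hypothetical data) of using PCA to extract uncorrelated
# underlying factors from rainfall characteristics and pollutant wash-off loads.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.preprocessing import StandardScaler

# Columns (assumed): rainfall intensity, duration, antecedent dry days,
# SVOC load, NVOC load.  Rows: hypothetical rain events.
rng = np.random.default_rng(1)
events = rng.normal(size=(30, 5))

Z = StandardScaler().fit_transform(events)   # standardise before PCA
pca = PCA()
scores = pca.fit_transform(Z)

print("Explained variance ratio:", pca.explained_variance_ratio_.round(2))
print("Loadings of PC1:", pca.components_[0].round(2))  # which variables drive PC1
```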
Abstract:
This paper introduces the Weighted Linear Discriminant Analysis (WLDA) technique, based upon the weighted pairwise Fisher criterion, for the purposes of improving i-vector speaker verification in the presence of high intersession variability. By taking advantage of the speaker discriminative information that is available in the distances between pairs of speakers clustered in the development i-vector space, the WLDA technique is shown to provide an improvement in speaker verification performance over traditional Linear Discriminant Analysis (LDA) approaches. A similar approach is also taken to extend the recently developed Source Normalised LDA (SNLDA) into Weighted SNLDA (WSNLDA) which, similarly, shows an improvement in speaker verification performance in both matched and mismatched enrolment/verification conditions. Based upon the results presented within this paper using the NIST 2008 Speaker Recognition Evaluation dataset, we believe that both WLDA and WSNLDA are viable as replacement techniques to improve the performance of LDA and SNLDA-based i-vector speaker verification.
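The sketch below gives a rough idea, not the paper's implementation, of the weighted pairwise construction behind WLDA: pairs of speaker means contribute to the between-class scatter with a weight that depends on their distance, before the usual LDA eigenproblem. The weighting function used here is an assumed placeholder, not the exact weighted pairwise Fisher criterion.

```python
# Rough sketch (not the paper's implementation) of a weighted pairwise
# between-class scatter and its leading eigenvectors, as in weighted LDA.
import numpy as np

def wlda_projection(ivectors, labels, n_dims, weight=lambda d: 1.0 / (d * d + 1e-6)):
    ivectors, labels = np.asarray(ivectors), np.asarray(labels)
    classes = np.unique(labels)
    means = np.array([ivectors[labels == c].mean(axis=0) for c in classes])
    # Within-class scatter (assumes several i-vectors per speaker).
    Sw = sum(np.cov(ivectors[labels == c].T) for c in classes)
    # Weighted pairwise between-class scatter: closer speaker pairs get more weight.
    Sb = np.zeros_like(Sw)
    for i in range(len(classes)):
        for j in range(i + 1, len(classes)):
            d = means[i] - means[j]
            Sb += weight(np.linalg.norm(d)) * np.outer(d, d)
    # Leading eigenvectors of Sw^{-1} Sb define the projection (Sw assumed invertible).
    eigvals, eigvecs = np.linalg.eig(np.linalg.solve(Sw, Sb))
    order = np.argsort(eigvals.real)[::-1]
    return eigvecs[:, order[:n_dims]].real

# Usage (hypothetical arrays): W = wlda_projection(dev_ivectors, dev_speaker_ids, 200)
# projected = test_ivectors @ W
```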
Abstract:
This paper presents the benefits and issues related to travel time prediction on urban networks. Travel time information quantifies congestion and is perhaps the most important network performance measure. Travel time prediction has been an active area of research for the last five decades. Activities related to Intelligent Transportation Systems (ITS) have increased researchers' attention on better and more accurate real-time prediction of travel time. The majority of the literature on travel time prediction is applicable to freeways where, under non-incident conditions, traffic flow is not affected by external factors such as traffic control signals and opposing traffic flows. In urban environments the problem is more complicated, due to conflict areas (intersections), mid-link sources and sinks, etc., and needs to be addressed.
Abstract:
For further noise reduction in the future, traffic management that controls traffic flow and physical distribution is important. To apply traffic management measures effectively, it is necessary to have a model for predicting traffic flow across the citywide road network. For this purpose, an existing model named AVENUE was used as the macroscopic traffic flow prediction model. The traffic flow model was integrated with a sound power model for road vehicles, and a new road traffic noise prediction model was established. Using this prediction model, a noise map of the entire city can be produced. In this study, the change in traffic flow on the road network after the construction of new roads was first estimated, and the resulting change in road traffic noise caused by the new roads was predicted. As a result, it was found that the prediction model is able to estimate changes in the noise map brought about by traffic management. In addition, the macroscopic traffic flow model and our conventional microscopic traffic flow model were combined, and the coverage of the noise prediction model was expanded.
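As a small sketch of the basic operation behind coupling a traffic flow model to a vehicle sound power model, the snippet below energetically sums per-vehicle sound levels on one road link into a single level for the noise map. The energetic (logarithmic) summation is standard acoustics; the traffic volumes and per-vehicle levels are assumed values, not the paper's model.

```python
# Small sketch (standard energetic summation, illustrative values) of combining
# per-vehicle sound levels on one road link into a single level for a noise map.
import math

def combine_levels(levels_db):
    """Energetic (logarithmic) sum of sound levels in dB."""
    return 10.0 * math.log10(sum(10.0 ** (L / 10.0) for L in levels_db))

# Hypothetical hourly output of a traffic flow model for one link:
# sound power levels attributed to passenger cars and heavy vehicles.
levels = [95.0] * 300 + [105.0] * 20   # 300 cars, 20 heavy vehicles (assumed)
print(f"Combined level: {combine_levels(levels):.1f} dB")
```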
Abstract:
The Commonwealth Scientific and Industrial Research Organisation (CSIRO) has recently conducted a technology demonstration of a novel fixed wireless broadband access system in rural Australia. The system is based on multi-user multiple-input multiple-output orthogonal frequency division multiplexing (MU-MIMO-OFDM). It demonstrated an uplink of six simultaneous users at distances ranging from 10 m to 8.5 km from a central tower, achieving a spectral efficiency of 20 bit/s/Hz. This paper reports on the analysis of channel capacity and bit error probability simulation based on the measured MU-MIMO-OFDM channels obtained during the demonstration, and their comparison with results based on channels simulated by a novel geometric-optics-based channel model suitable for MU-MIMO-OFDM in rural areas. Despite its simplicity, the model was found to predict channel capacity and bit error probability accurately for a typical MU-MIMO-OFDM deployment scenario.
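The brief sketch below shows the standard MIMO capacity computation typically applied per OFDM subcarrier and averaged, which is the kind of calculation used to compare measured and modelled channels. It is not CSIRO's analysis code: the channel realisations are i.i.d. Rayleigh placeholders and the dimensions and SNR are assumed.

```python
# Brief sketch (illustrative dimensions, not CSIRO's analysis code) of the
# standard MIMO capacity C = log2 det(I + (SNR/Nt) H H^H) per OFDM subcarrier.
import numpy as np

rng = np.random.default_rng(2)
n_rx, n_tx, n_subcarriers = 6, 6, 64     # e.g. six single-antenna users on the uplink
snr_linear = 10 ** (20 / 10)             # 20 dB SNR (assumed)

capacities = []
for _ in range(n_subcarriers):
    # Hypothetical i.i.d. Rayleigh channel; the paper uses measured/modelled H.
    H = (rng.normal(size=(n_rx, n_tx)) + 1j * rng.normal(size=(n_rx, n_tx))) / np.sqrt(2)
    M = np.eye(n_rx) + (snr_linear / n_tx) * H @ H.conj().T
    capacities.append(np.log2(np.linalg.det(M).real))

print(f"Mean capacity: {np.mean(capacities):.1f} bit/s/Hz")
```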
Abstract:
The Electronic Medical Record (EMR) is an emerging technology that blends the non-IT and IT areas. One methodology for linking the non-IT and IT areas is to construct databases. Nowadays, EMR supports before- and after-treatment care for patients and should satisfy all stakeholders, such as practitioners, nurses, researchers, administrators, financial departments and so on. For database maintenance, the Database-as-a-Service (DAS) model is one solution for outsourcing. However, there are some scalability and strategy issues when planning to use the DAS model properly. We constructed three kinds of databases: plain-text, MS built-in encryption (an in-house model), and a custom AES (Advanced Encryption Standard) DAS model, scaling from 5K to 2,560K records. To improve the performance of the custom AES-DAS model, we also devised a bucket index using a Bloom filter. The simulation showed that response times increased arithmetically in the beginning but, after a certain threshold, increased exponentially. In conclusion, if the database model is close to an in-house model, then vendor technology is a good way to obtain consistent query response times. If the model is a DAS model, it is easy to outsource the database; however, techniques like the bucket index enhance its utilization. To get faster query response times, database design, such as consideration of the field types, is also important. This study suggests that cloud computing would be the next DAS model to satisfy the scalability and security requirements.
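The compact sketch below illustrates the general bucket-index idea: keeping a Bloom filter per bucket of an encrypted, outsourced table so that buckets which certainly cannot contain a queried value are never fetched or decrypted. It is not the paper's implementation; the filter parameters, bucket boundaries and data are illustrative.

```python
# Compact sketch (illustrative parameters, not the paper's implementation) of a
# Bloom-filter-backed bucket index over an encrypted, outsourced table.
import hashlib

class BloomFilter:
    def __init__(self, size=1024, hashes=3):
        self.size, self.hashes, self.bits = size, hashes, 0

    def _positions(self, value):
        for k in range(self.hashes):
            digest = hashlib.sha256(f"{k}:{value}".encode()).hexdigest()
            yield int(digest, 16) % self.size

    def add(self, value):
        for pos in self._positions(value):
            self.bits |= 1 << pos

    def might_contain(self, value):
        return all((self.bits >> pos) & 1 for pos in self._positions(value))

# One Bloom filter per bucket of the encrypted table (bucket boundaries assumed).
buckets = {("A", 0, 499): BloomFilter(), ("B", 500, 999): BloomFilter()}
for key, record_id in [(("A", 0, 499), 123), (("B", 500, 999), 777)]:
    buckets[key].add(record_id)

# Query planning: only buckets whose filter matches need to be fetched/decrypted.
wanted = 777
to_fetch = [k for k, bf in buckets.items() if bf.might_contain(wanted)]
print("Buckets to fetch:", to_fetch)
```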
Abstract:
Electronic Health Record (EHR) retrieval processes are complex and demand exponentially increasing Information Technology (IT) resources, in particular memory usage. A Database-as-a-Service (DAS) model approach is proposed to meet the scalability requirements of EHR retrieval processes. A simulation study using a range of EHR record volumes with the DAS model is presented. A bucket-indexing model, incorporating partitioning fields and Bloom filters in a singleton design pattern, was used to implement a custom database encryption system. It effectively provided faster responses for range queries than for the other types of queries used, such as aggregation queries, across the DAS, built-in encryption and plain-text DBMS configurations. The study also presents constraints of the approach that should be considered for other practical applications.
Abstract:
The opening phrase of the title is from Charles Darwin's notebooks (Schweber 1977). It is a double reminder: firstly, that mainstream evolutionary theory is not just about describing nature but is particularly looking for mechanisms or 'causes', and secondly, that there will usually be several causes affecting any particular outcome. The second part of the title reflects our concern at the almost universal rejection of the idea that biological mechanisms are sufficient for macroevolutionary changes, thus rejecting a cornerstone of Darwinian evolutionary theory. Our primary aim here is to consider ways of making it easier to develop and to test hypotheses about evolution. Formalizing hypotheses can help generate tests. In an absolute sense, some of the discussion by scientists about evolution is little better than the lack of reasoning used by those advocating intelligent design. Our discussion here is in a Popperian framework, where science is defined as that area of study where it is possible, in principle, to find evidence against hypotheses – they are in principle falsifiable. However, with time, the boundaries of science keep expanding. In the past, some aspects of evolution were outside the current boundaries of falsifiable science, but increasingly new techniques and ideas are expanding the boundaries of science and it is appropriate to re-examine some topics. It often appears that over the last few decades there has been an increasingly strong assumption to look first (and only) for a physical cause. This decision is virtually never formally discussed; an assumption is simply made that some physical factor 'drives' evolution. It is necessary to examine our assumptions much more carefully: what is meant by physical factors 'driving' evolution, or by an 'explosive radiation'? Our discussion focuses on two of the six mass extinctions, the fifth being the events in the Late Cretaceous, and the sixth starting at least 50,000 years ago (and ongoing). Cretaceous/Tertiary boundary: the rise of birds and mammals. We have had a long-term interest (Cooper and Penny 1997) in designing tests to help evaluate whether the processes of microevolution are sufficient to explain macroevolution. The real challenge is to formulate hypotheses in a testable way. For example, the number of lineages of birds and mammals that survive from the Cretaceous to the present is one test. Our first estimate was 22 for birds, and current work is tending to increase this value. This still does not consider lineages that survived into the Tertiary and then went extinct later. Our initial suggestion was probably too narrow in that it lumped four models from Penny and Phillips (2004) into one model. This reduction is too simplistic in that we need to know about survival and ecological and morphological divergences during the Late Cretaceous, and whether crown groups of avian or mammalian orders may have existed back into the Cretaceous. More recently (Penny and Phillips 2004) we have formalized hypotheses about dinosaurs and pterosaurs, with the prediction that interactions between mammals (and ground-feeding birds) and dinosaurs would be most likely to affect the smallest dinosaurs, and similarly that interactions between birds and pterosaurs would particularly affect the smaller pterosaurs. There is now evidence for both classes of interactions, with the smallest dinosaurs and pterosaurs declining first, as predicted. Thus, testable models are now possible. Mass extinction number six: human impacts.
On a broad scale, there is a good correlation between the time of human arrival and increased extinctions (Hurles et al. 2003; Martin 2005; Figure 1). However, it is necessary to distinguish different time scales (Penny 2005), and on a finer scale there are still large numbers of possibilities. In Hurles et al. (2003) we mentioned habitat modification (including the use of fire) and introduced plants and animals (including kiore), in addition to direct predation (the 'overkill' hypothesis). We also need to consider the prey switching that occurs in early human societies, as evidenced by the results of Wragg (1995) on the middens of different ages on Henderson Island in the Pitcairn group. In addition, the presence of human-wary or human-adapted animals will affect the distribution in the subfossil record. A better understanding of human impacts world-wide, in conjunction with pre-scientific knowledge, will make it easier to discuss the issues by removing 'blame'. While continued spontaneous generation was accepted universally, there was the expectation that animals would continue to reappear. New Zealand is one of the very best locations in the world to study many of these issues. Apart from the marine fossil record, some human impact events are extremely recent and the remains are less disrupted by time.
Abstract:
Acoustic emission (AE) analysis is one of several diagnostic techniques available nowadays for structural health monitoring (SHM) of engineering structures. Some of its advantages over other techniques include high sensitivity to crack growth and the capability of monitoring a structure in real time. Acoustic emission is the phenomenon of the rapid release of energy within a material, in the form of stress waves, caused by crack initiation or growth. In the AE technique, these stress waves are recorded by means of suitable sensors placed on the surface of a structure. Recorded signals are subsequently analysed to gather information about the nature of the source. By enabling early detection of crack growth, the AE technique helps in planning timely retrofitting, other maintenance jobs or even replacement of the structure if required. In spite of being a promising tool, some challenges still exist in the successful application of the AE technique. A large amount of data is generated during AE testing, hence effective data analysis is necessary, especially for long-term monitoring uses. Appropriate analysis of AE data for quantification of damage level is an area that has received considerable attention. Various approaches available for damage quantification and severity assessment are discussed in this paper, with special focus on civil infrastructure such as bridges. One method, called improved b-value analysis, is used to analyse data collected from laboratory testing.
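The short sketch below shows the basic Gutenberg-Richter b-value estimate that underlies b-value and improved b-value analysis of AE amplitudes, using the conventional conversion of amplitudes in dB to magnitudes (m = A_dB / 20) and Aki's maximum-likelihood estimator. The amplitude data and threshold are illustrative, and the improved b-value variant adds further statistics of the amplitude distribution beyond this sketch.

```python
# Short sketch (illustrative data) of the basic b-value estimate used in
# AE damage assessment: AE amplitudes in dB are treated as magnitudes
# m = A_dB / 20, and Aki's maximum-likelihood estimator is applied above a
# completeness threshold.
import numpy as np

def b_value(amplitudes_db, threshold_db):
    m = np.asarray(amplitudes_db) / 20.0          # AE "magnitude"
    m_c = threshold_db / 20.0
    m = m[m >= m_c]
    return np.log10(np.e) / (m.mean() - m_c)

# Hypothetical AE hit amplitudes (dB) from a load test; a drop in b-value over
# successive windows is commonly read as macro-crack growth.
rng = np.random.default_rng(3)
amps = 40 + rng.exponential(scale=10.0, size=500)  # assumed amplitude distribution
print(f"b-value: {b_value(amps, threshold_db=40):.2f}")
```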