Abstract:
Background: Internationally, tests of general mental ability are used in the selection of medical students. Examples include the Medical College Admission Test, the Undergraduate Medicine and Health Sciences Admission Test and the UK Clinical Aptitude Test. The most widely used measure of their efficacy is predictive validity. A new tool, the Health Professions Admission Test-Ireland (HPAT-Ireland), was introduced in 2009. Traditionally, selection to Irish undergraduate medical schools relied on academic achievement. Since 2009, Irish and EU applicants have been selected on a combination of their secondary school academic record (measured predominantly by the Leaving Certificate Examination) and HPAT-Ireland score. This is the first study to report on the predictive validity of the HPAT-Ireland for early undergraduate assessments of communication and clinical skills. Method: Students enrolled at two Irish medical schools in 2009 were followed up for two years. Data collected were gender; HPAT-Ireland total and subsection scores; Leaving Certificate Examination plus HPAT-Ireland combined score; Year 1 Objective Structured Clinical Examination (OSCE) scores (total score, communication and clinical subtest scores); Year 1 Multiple Choice Question (MCQ) marks; and Year 2 OSCE and subtest scores. We report descriptive statistics, Pearson correlation coefficients and multiple linear regression models. Results: Data were available for 312 students. In Year 1 none of the selection criteria were significantly related to student OSCE performance. The Leaving Certificate Examination and Leaving Certificate plus HPAT-Ireland combined scores correlated with MCQ marks. In Year 2 a series of significant correlations emerged between the HPAT-Ireland, and subsections thereof, and OSCE communication Z-scores, OSCE clinical Z-scores, and total OSCE Z-scores. However, on multiple regression only the relationship between total OSCE score and total HPAT-Ireland score remained significant, albeit with modest predictive power. Conclusion: We found that none of our selection criteria strongly predict clinical and communication skills. The HPAT-Ireland appears to measure ability in domains different from those assessed by the Leaving Certificate Examination. While some significant associations did emerge in Year 2 between the HPAT-Ireland and total OSCE scores, further evaluation is required to establish whether this pattern continues during the senior years of the medical course.
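As a hedged illustration of the reported analysis (not the study's actual code; column and file names are hypothetical placeholders), the Pearson correlation and multiple linear regression steps could look like this in Python:

```python
# Hedged sketch of the reported analysis; column names are hypothetical.
import pandas as pd
import statsmodels.api as sm

df = pd.read_csv("cohort.csv")  # hypothetical file: one row per student

# Pearson correlation between HPAT-Ireland total and Year 2 total OSCE Z-score
r = df["hpat_total"].corr(df["osce_total_z"], method="pearson")
print(f"Pearson r = {r:.3f}")

# Multiple linear regression of the OSCE Z-score on the selection criteria
X = sm.add_constant(df[["hpat_total", "leaving_cert"]])
print(sm.OLS(df["osce_total_z"], X).fit().summary())
```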
Abstract:
An abstract of a thesis devoted to using helix-coil models to study unfolded states.
Research on polypeptide unfolded states has received much more attention in the last decade or so than in the past. Unfolded states are thought to be implicated in various
misfolding diseases and likely play crucial roles in protein folding equilibria and folding rates. Structural characterization of unfolded states has proven to be
much more difficult than the now well-established practice of determining the structures of folded proteins, largely because many core assumptions underlying
folded structure determination methods are invalid for unfolded states. This has led to a dearth of knowledge concerning the nature of unfolded state conformational
distributions. While many aspects of unfolded state structure are not well known, there does exist a significant body of work, stretching back half a century,
focused on the structural characterization of marginally stable polypeptide systems. This body of work represents an extensive collection of experimental
data and biophysical models describing helix-coil equilibria in polypeptide systems. Much of the work on unfolded states in the last decade has not been devoted
specifically to improving our understanding of helix-coil equilibria, which is arguably the best characterized of the various conformational equilibria
that likely contribute to unfolded state conformational distributions. This thesis provides a deeper investigation of helix-coil equilibria using modern
statistical data analysis and biophysical modeling techniques. The studies contained within seek to provide deeper insights and new perspectives on what we presumably
know very well about protein unfolded states.
Chapter 1 gives an overview of recent and historical work on protein unfolded states. The study of helix-coil equilibria is placed in the context
of the general field of unfolded state research, and the basics of helix-coil models are introduced.
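For readers unfamiliar with this model class, the following minimal sketch implements the classic Zimm-Bragg formulation, one of the standard helix-coil models; it illustrates the basics referred to above and is not the thesis's own model. The propagation weight s and nucleation factor sigma are the usual Zimm-Bragg parameters.

```python
# Minimal Zimm-Bragg helix-coil model: an N-residue chain with propagation
# weight s and nucleation factor sigma, evaluated with a 2x2 transfer matrix.
import numpy as np

def zimm_bragg_partition(n: int, s: float, sigma: float) -> float:
    # States ordered (helix, coil); a helix following a coil pays sigma.
    M = np.array([[s,         1.0],
                  [sigma * s, 1.0]])
    v0 = np.array([0.0, 1.0])                 # chain starts from a coil state
    return v0 @ np.linalg.matrix_power(M, n) @ np.ones(2)

def mean_helicity(n: int, s: float, sigma: float, ds: float = 1e-6) -> float:
    # theta = (s / n) * d(ln Z)/ds, by central finite difference
    hi = np.log(zimm_bragg_partition(n, s + ds, sigma))
    lo = np.log(zimm_bragg_partition(n, s - ds, sigma))
    return s * (hi - lo) / (2 * ds) / n

print(mean_helicity(20, s=1.3, sigma=0.003))  # modest helicity for a short chain
```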
Chapter 2 introduces the newest incarnation of a sophisticated helix-coil model. State-of-the-art statistical techniques are employed to estimate the energies
of the various physical interactions that influence helix-coil equilibria. A new Bayesian model selection approach is used to test many long-standing
hypotheses concerning the physical nature of the helix-coil transition. Some assumptions made in previous models are shown to be invalid, and the new model
exhibits greatly improved predictive performance relative to its predecessor.
Chapter 3 introduces a new statistical model for interpreting amide exchange measurements. Because amide exchange can serve as a probe of residue-specific
properties of helix-coil ensembles, the new model provides a novel and robust method for using these measurements to characterize helix-coil ensembles experimentally
and to test the position-specific predictions of helix-coil models. The statistical model is shown to perform substantially better than the most commonly used
method for interpreting amide exchange data, and the estimates obtained from amide exchange measurements on an example helical peptide
show remarkable consistency with the predictions of the helix-coil model.
Chapter 4 studies helix-coil ensembles through the direct enumeration of helix-coil configurations. Aside from providing new insights into helix-coil ensembles,
this chapter also introduces a new method by which helix-coil models can be extended to calculate new types of observables. Future work on this approach could
allow helix-coil models to be used in domains that were previously inaccessible and reserved for the other types of unfolded state models introduced in Chapter 1.
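A toy version of the enumeration idea, assuming Zimm-Bragg weights as in the earlier sketch (the thesis's actual model and observables are more sophisticated):

```python
# Toy enumeration of all 2^n helix/coil configurations of a short chain,
# using the Zimm-Bragg weights from the earlier sketch; any per-configuration
# observable can be accumulated alongside the partition function Z.
from itertools import product

def enumerate_ensemble(n: int, s: float, sigma: float):
    Z, helix_acc = 0.0, 0.0
    for config in product("hc", repeat=n):
        w, prev = 1.0, "c"
        for state in config:
            if state == "h":
                w *= s if prev == "h" else sigma * s  # nucleation on c -> h
            prev = state
        Z += w
        helix_acc += w * config.count("h") / n        # example observable
    return Z, helix_acc / Z

print(enumerate_ensemble(10, s=1.3, sigma=0.003))     # feasible for small n only
```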
Abstract:
X-ray computed tomography (CT) is a non-invasive medical imaging technique that generates cross-sectional images by acquiring attenuation-based projection measurements at multiple angles. Since its introduction in the 1970s, substantial technical improvements have led to the expanding use of CT in clinical examinations. CT has become an indispensable imaging modality for the diagnosis of a wide array of diseases in both pediatric and adult populations [1, 2]. Currently, approximately 272 million CT examinations are performed annually worldwide, with nearly 85 million of these in the United States alone [3]. Although this trend has decelerated in recent years, CT usage is still expected to increase, mainly due to advanced technologies such as multi-energy [4], photon counting [5], and cone-beam CT [6].
Despite the significant clinical benefits, concerns have been raised regarding the population-based radiation dose associated with CT examinations [7]. From 1980 to 2006, the effective dose from medical diagnostic procedures rose six-fold, with CT contributing almost half of the total dose from medical exposure [8]. For each patient, the risk associated with a single CT examination is likely to be minimal. However, the relatively large population-based radiation level has led to enormous efforts within the community to manage and optimize CT dose.
As promoted by the international campaigns Image Gently and Image Wisely, exposure to CT radiation should be appropriate and safe [9, 10]. It is thus a shared responsibility to optimize the radiation dose of CT examinations. The key to dose optimization is to determine the minimum amount of radiation dose that achieves the targeted image quality [11]. Based on this principle, dose optimization would significantly benefit from effective metrics to characterize radiation dose and image quality for a CT exam. Moreover, if accurate predictions of radiation dose and image quality were possible before the initiation of the exam, it would be feasible to personalize the exam by adjusting the scanning parameters to achieve a desired level of image quality. The purpose of this thesis is to design and validate models to prospectively quantify patient-specific radiation dose and task-based image quality. The dual aim of the study is to implement the theoretical models in clinical practice by developing an organ-based dose monitoring system and an image-based noise addition software for protocol optimization.
More specifically, Chapter 3 aims to develop an organ dose-prediction method for CT examinations of the body under constant tube current conditions. The study effectively modeled anatomical diversity and complexity using a large number of patient models with representative age, size, and gender distributions. The dependence of organ dose coefficients on patient size and scanner model was further evaluated. Distinct from prior work, these studies use the largest number of patient models to date, with representative age, weight percentile, and body mass index (BMI) ranges.
With effective quantification of organ dose under constant tube current conditions, Chapter 4 aims to extend the organ dose prediction system to tube current modulated (TCM) CT examinations. The prediction, applied to chest and abdominopelvic exams, was achieved by combining a convolution-based estimation technique that quantifies the radiation field, a TCM scheme that emulates modulation profiles from major CT vendors, and a library of computational phantoms with representative sizes, ages, and genders. The prospective quantification model is validated by comparing the predicted organ dose with dose estimates from Monte Carlo simulations in which the TCM function is explicitly modeled.
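A heavily hedged sketch of the convolution idea (the kernel, coefficients, and function names below are placeholders, not the thesis's implementation):

```python
# Hedged sketch of the convolution idea: convolve the per-slice tube current
# with a scanner-specific dose spread kernel to approximate the radiation
# field along z, then weight by organ location. All inputs are placeholders.
import numpy as np

def organ_dose_tcm(mA_profile, kernel, organ_mask, dose_coeff):
    """mA_profile: tube current at each z position (mA);
    kernel: normalized dose spread function along z;
    organ_mask: fraction of the organ present at each z position;
    dose_coeff: organ dose per unit mAs for this scanner/patient size."""
    field = np.convolve(mA_profile, kernel, mode="same")   # radiation field
    return dose_coeff * np.sum(field * organ_mask) / np.sum(organ_mask)
```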
Chapter 5 aims to implement the organ dose-estimation framework in clinical practice by developing an organ dose-monitoring program based on commercial software (Dose Watch, GE Healthcare, Waukesha, WI). The first phase of the study focused on body CT examinations, so the patient's major body landmark information was extracted from the scout image in order to match clinical patients against a computational phantom in the library. Organ dose coefficients were estimated based on the CT protocol and patient size, as reported in Chapter 3. The exam CTDIvol, DLP, and TCM profiles were extracted and used to quantify the radiation field using the convolution technique proposed in Chapter 4.
With effective methods to predict and monitor organ dose in place, Chapter 6 aims to develop and validate improved measurement techniques for image quality assessment. Chapter 6 outlines the method developed to assess and predict quantum noise in clinical body CT images. Compared with previous phantom-based studies, this study accurately assessed quantum noise in clinical images and further validated the correspondence between phantom-based measurements and expected clinical image quality as a function of patient size and scanner attributes.
Chapter 7 aims to develop a practical strategy to generate hybrid CT images and assess the impact of dose reduction on diagnostic confidence for the diagnosis of acute pancreatitis. The general strategy is (1) to simulate synthetic CT images at multiple reduced-dose levels from clinical datasets using an image-based noise addition technique; (2) to develop quantitative and observer-based methods to validate the realism of simulated low-dose images; (3) to perform multi-reader observer studies on the low-dose image series to assess the impact of dose reduction on the diagnostic confidence for multiple diagnostic tasks; and (4) to determine the dose operating point for clinical CT examinations based on the minimum diagnostic performance to achieve protocol optimization.
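As a hedged sketch of step (1), image-based noise addition commonly exploits the fact that quantum noise variance scales roughly inversely with dose; the white-Gaussian version below is only illustrative, since practical tools shape the added noise by the local noise power spectrum:

```python
# Hedged sketch of step (1): quantum noise variance scales roughly inversely
# with dose, so simulating a dose fraction f means adding noise of variance
# sigma_full^2 * (1/f - 1). White Gaussian noise is used here purely for
# illustration; practical tools shape it by the local noise power spectrum.
import numpy as np

def simulate_reduced_dose(image, sigma_full, dose_fraction, rng=None):
    rng = rng or np.random.default_rng()
    sigma_add = sigma_full * np.sqrt(1.0 / dose_fraction - 1.0)
    return image + rng.normal(0.0, sigma_add, size=image.shape)
```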
Chapter 8 concludes the thesis with a summary of accomplished work and a discussion about future research.
Abstract:
The work presented in this dissertation focuses on applying engineering methods to develop and explore probabilistic survival models for the prediction of decompression sickness in US Navy divers. Mathematical modeling, computational model development, and numerical optimization techniques were employed to formulate and evaluate the predictive quality of models fitted to empirical data. Chapters 1 and 2 present general background information relevant to the development of probabilistic models for predicting the incidence of decompression sickness. The remainder of the dissertation introduces techniques developed to improve the predictive quality of probabilistic decompression models and to reduce the difficulty of model parameter optimization.
The first project explored seventeen variations of the hazard function using a well-perfused parallel compartment model. Models were parametrically optimized using the maximum likelihood technique. Model performance was evaluated using both classical statistical methods and model selection techniques based on information theory. Optimized model parameters were overall similar to those previously published. Results favored a novel hazard function definition that included both ambient pressure scaling and individually fitted compartment exponent scaling terms.
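For context, probabilistic decompression models of this kind typically define the probability of DCS through the time integral of an instantaneous hazard, with parameters fitted by maximum likelihood; a minimal sketch follows (the dissertation's specific hazard definitions are not reproduced here):

```python
# Generic probabilistic decompression framework (the dissertation's specific
# hazard definitions are not reproduced): P(DCS) = 1 - exp(-integral of the
# instantaneous risk r(t)), with parameters fitted by maximum likelihood.
import numpy as np

def p_dcs(hazard, t, params):
    r = np.clip(hazard(t, params), 0.0, None)               # risk is non-negative
    integral = np.sum(0.5 * (r[1:] + r[:-1]) * np.diff(t))  # trapezoid rule
    return 1.0 - np.exp(-integral)

def neg_log_likelihood(params, dives, outcomes, hazard):
    """dives: time grids, one per dive; outcomes: 1 if DCS observed, else 0."""
    ll = 0.0
    for t, y in zip(dives, outcomes):
        p = np.clip(p_dcs(hazard, t, params), 1e-12, 1.0 - 1e-12)
        ll += np.log(p) if y else np.log(1.0 - p)
    return -ll  # minimize this with any numerical optimizer
```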
We developed ten pharmacokinetic compartmental models that include explicit delay mechanics, to determine whether predictive quality could be improved by including material transfer lags. A fitted discrete delay parameter augmented the inflow to the compartment systems from the environment. Based on the observation that, for many of our models, symptoms are often reported after risk accumulation begins, we hypothesized that including delays might improve the correlation between model predictions and observed data. Model selection techniques identified two models as having the best overall performance, but both comparison to the best-performing model without delay and model selection using our best identified no-delay pharmacokinetic model indicated that the delay mechanism was not statistically justified and did not substantially improve model predictions.
Our final investigation explored parameter bounding techniques to identify parameter regions in which statistical model failure cannot occur. Statistical model failure occurs when a model predicts zero probability of a diver experiencing decompression sickness for an exposure that is known to produce symptoms. Using a metric related to the instantaneous risk, we successfully identified regions where model failure will not occur and located the boundaries of those regions using a root bounding technique. Several models are used to demonstrate these techniques, which may be employed to reduce the difficulty of model optimization in future investigations.
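As a hedged illustration of the boundary-finding step, assuming a scalar metric g derived from the instantaneous risk that changes sign at the failure boundary:

```python
# Hedged illustration of locating the failure boundary along one parameter
# axis: g(theta) is a metric derived from the instantaneous risk, positive
# where the model assigns risk to a known-DCS exposure and negative where
# statistical model failure occurs; a bracketing root search finds the edge.
from scipy.optimize import brentq

def failure_boundary(g, theta_lo, theta_hi):
    # Assumes g(theta_lo) and g(theta_hi) have opposite signs (bracketed root)
    return brentq(g, theta_lo, theta_hi)
```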
Abstract:
Greenhouses have become an invaluable source of year-round food production. Further development of viable and efficient high-performance greenhouses is important for future food security. Closing the greenhouse envelope from the environment can provide benefits in space-heating energy savings, pest control, and CO2 enrichment, but it requires a novel air conditioning system to handle the high cooling loads experienced by a greenhouse. Liquid desiccant air conditioning (LDAC) has been found to provide high latent cooling capacity, which is well suited to a humid greenhouse microclimate. TRNSYS simulations were undertaken to study the feasibility of two liquid desiccant dehumidification systems based on their capacity to control the greenhouse microclimate and on their cooling performance. The base model (B-LDAC) included a natural gas boiler and two cooling systems for seasonal operation; the second model (HP-LDAC) was a hybrid liquid desiccant-heat pump dehumidification system. The average tCOPdehum and tCOPtotal of the B-LDAC system increased from 0.40 and 0.56 in January to 0.94 and 1.09 in June; increased load and performance during a sample summer day improved these values to 3.5 and 3.0, respectively. The average eCOPdehum and eCOPtotal values were 1.0 and 1.8 in winter, and 1.7 and 2.1 in summer. The HP-LDAC system produced similar daily performance trends, with annual average eCOPdehum and eCOPtotal values of 1.3 and 1.2, though the sample day saw peaks of 2.4 and 3.2, respectively. The B-LDAC and HP-LDAC results predicted greenhouse temperatures exceeding 30°C for 34% and 17% of the month of July, respectively. Similarly, humidity levels increased in the summer months, with a maximum of 14% of the time spent above 80% relative humidity in May for both models. The annual savings in space-heating energy associated with closing the greenhouse to ventilation amounted to 34%. The additional annual regeneration energy input was reduced by 26%, to 526 kWh/m², by implementing a heat recovery ventilator on the regeneration exhaust air. The models also predicted electrical energy inputs of 245 kWh/m² and 305 kWh/m² for the B-LDAC and HP-LDAC simulations, respectively.
Abstract:
In establishing the reliability of performance-related design methods for concrete, which are relevant to resistance against chloride-induced corrosion, long-term experience of local materials and practices and detailed knowledge of the ambient and local micro-climate are critical. Furthermore, the development of analytical models for performance-based design requires calibration against test data representative of actual conditions in practice. To this end, the current study presents results from full-scale concrete pier-stems under long-term exposure to a marine environment, with work focussing on XS2 (below mid-tide level), in which the concrete is regarded as fully saturated, and XS3 (tidal, splash and spray), in which the concrete is in an unsaturated condition. These exposures represent the zones where concrete structures are most susceptible to ionic ingress and deterioration. Chloride profiles and chloride transport behaviour are studied using both an empirical model (the erfc function) and a physical model (ClinConc). The time dependency of the surface chloride concentration (Cs) and the apparent diffusivity (Da) was established for the empirical model, whereas in the ClinConc model (originally based on saturated concrete) two new environmental factors were introduced for the XS3 exposure zone. Although XS3 is considered a single environmental exposure zone according to BS EN 206-1:2013, the work has highlighted that even within this zone significant changes in chloride ingress are evident. This study aims to update the parameters of both models for predicting the long-term transport behaviour of concrete subjected to environmental exposure classes XS2 and XS3.
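The empirical model referred to above is the standard error-function solution of Fick's second law; a minimal sketch, with units assumed consistent:

```python
# The empirical model is the standard error-function solution of Fick's
# second law; units are assumed consistent (e.g. mm and years).
from math import erf, sqrt

def chloride_profile(x, t, Cs, Ci, Da):
    """Chloride content at depth x after exposure time t, given surface
    concentration Cs, initial concentration Ci and apparent diffusivity Da."""
    return Ci + (Cs - Ci) * (1.0 - erf(x / (2.0 * sqrt(Da * t))))
```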
Abstract:
This paper describes an implementation of a method capable of integrating parametric, feature-based CAD models built on commercial software (CATIA) with the SU2 software framework. To exploit adjoint-based methods for aerodynamic optimisation within SU2, a formulation is introduced to obtain geometric sensitivities directly from the commercial CAD parameterisation, enabling the calculation of gradients with respect to CAD-based design variables. To assess the accuracy and efficiency of the alternative approach, two aerodynamic optimisation problems are investigated: an inviscid 3D problem with multiple constraints, and a 2D viscous high-lift aerofoil problem without constraints. Initial results show the new parameterisation obtaining reliable optima, with levels of performance similar to the software's native parameterisations. In the final paper, details of computing CAD sensitivities will be provided, including their accuracy and the linking of geometric sensitivities to aerodynamic objective functions and constraints; the impact on the robustness of the overall method will be assessed and alternative parameterisations will be included.
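A hedged sketch of how such gradients are typically assembled (the paper obtains geometric sensitivities from the CAD parameterisation directly; the finite-difference approach and function names below are illustrative placeholders):

```python
# Hedged sketch of assembling gradients with respect to CAD design variables:
# adjoint surface sensitivities dJ/dX are chained with geometric sensitivities
# dX/dalpha. The finite-difference step and function names are illustrative
# placeholders.
import numpy as np

def gradient_wrt_cad(alpha, surface_points, dJ_dX, eps=1e-6):
    """alpha: CAD design variables (float array);
    surface_points(alpha) -> (n, 3) array of surface mesh coordinates;
    dJ_dX: (n, 3) adjoint sensitivities of the objective to those coordinates."""
    X0 = surface_points(alpha)
    grad = np.zeros_like(alpha, dtype=float)
    for i in range(alpha.size):
        a = alpha.astype(float).copy()
        a[i] += eps
        dX_dai = (surface_points(a) - X0) / eps   # geometric sensitivity
        grad[i] = np.sum(dJ_dX * dX_dai)          # chain rule: dJ/dalpha_i
    return grad
```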
Abstract:
Ground-source heat pump (GSHP) systems represent one of the most promising techniques for heating and cooling buildings. These systems use the ground as a heat source/sink, allowing better efficiency thanks to the low seasonal variation of the ground temperature. The ground-source heat exchanger (GSHE) thus becomes a key component in optimizing the overall performance of the system. Moreover, the short-term response related to the dynamic behaviour of the GSHE is a crucial aspect, especially from a control perspective in on/off regulated GSHP systems. In this context, a novel numerical GSHE model has been developed at the Instituto de Ingeniería Energética, Universitat Politècnica de València. By decoupling the short-term and long-term responses of the GSHE, the novel model allows the use of faster and more precise models on both sides. In particular, the short-term model considered is the B2G model, developed and validated in previous research at the Instituto de Ingeniería Energética. For the long term, the g-function model was selected, since it is a previously validated and widely used model and presents features that make it well suited to combination with the B2G model. The aim of the present paper is to describe the procedure for combining these two models to obtain a single complete GSHE model for both short- and long-term simulation. The resulting model is then validated against experimental data from a real GSHP installation.
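For the long-term side alone, the g-function model evaluates the borehole wall temperature by temporal superposition of stepwise heat loads; a minimal sketch under that standard formulation (the B2G short-term model is not reproduced here):

```python
# Long-term side only: the g-function gives the borehole wall temperature by
# temporal superposition of stepwise heat loads (the B2G short-term model is
# not reproduced here).
import numpy as np

def borehole_wall_temp(T_ground, step_times, q_steps, t_eval, g, k_ground):
    """step_times: times at which the heat rate changes; q_steps: heat rate
    per metre after each step (W/m); g(dt) -> g-function value at elapsed
    time dt; k_ground: ground thermal conductivity (W/mK)."""
    dT, q_prev = 0.0, 0.0
    for t_i, q_i in zip(step_times, q_steps):
        if t_i >= t_eval:
            break
        dT += (q_i - q_prev) / (2.0 * np.pi * k_ground) * g(t_eval - t_i)
        q_prev = q_i
    return T_ground + dT
```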
Abstract:
There has been increasing interest in the development of new methods that use Pareto optimality to deal with multi-objective criteria (for example, accuracy and time complexity). Once one has developed an approach to a problem of interest, the question is then how to compare it with the state of the art. In machine learning, algorithms are typically evaluated by comparing their performance on different data sets by means of statistical tests. The standard tests used for this purpose can consider neither multiple performance measures jointly nor multiple competitors at once. The aim of this paper is to resolve these issues by developing statistical procedures that account for multiple competing measures at the same time and compare multiple algorithms altogether. In particular, we develop two tests: a frequentist procedure based on the generalized likelihood-ratio test and a Bayesian procedure based on a multinomial-Dirichlet conjugate model. We further extend them by discovering conditional independences among measures to reduce the number of parameters of such models, as the number of studied cases is usually very small in such comparisons. Data from a comparison among general-purpose classifiers is used to show a practical application of our tests.
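A hedged sketch of the Bayesian ingredient for a simple two-algorithm case: each data set yields one categorical outcome (A Pareto-dominates on both measures, B Pareto-dominates, or the two are incomparable), the outcome counts are multinomial, and the Dirichlet prior is conjugate, so the posterior can be sampled directly. This illustrates the conjugate model named above, not the paper's full procedure.

```python
# Multinomial-Dirichlet conjugacy: posterior over outcome probabilities is
# Dirichlet(counts + prior), sampled here to compare two algorithms.
import numpy as np

def posterior_outcome_probs(counts, prior=1.0, n_samples=100_000, rng=None):
    rng = rng or np.random.default_rng()
    alpha = np.asarray(counts, dtype=float) + prior   # Dirichlet posterior
    return rng.dirichlet(alpha, n_samples)

# Example: over 30 data sets, A dominates on 14, B on 6, incomparable on 10
samples = posterior_outcome_probs([14, 6, 10])
print("P(A beats B more often) =", np.mean(samples[:, 0] > samples[:, 1]))
```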
Abstract:
The inherent variability of raw materials, a lack of awareness and appreciation of the technologies for achieving quality control, and a lack of appreciation of the micro- and macro-environmental conditions to which structures will be subjected make modern-day concreting a challenge. This also leads designers and engineers to adhere closely to prescriptive standards developed for relatively less aggressive environments. Data from exposure sites and real structures prove, categorically, that prescriptive specifications are inadequate for chloride environments. In light of this shortcoming, a more pragmatic approach would be to adopt performance-based specifications, which are familiar to industry in the form of specifications for mechanical strength. A recently completed RILEM technical committee made significant advances in making such an approach feasible.
Furthering a performance-based specification requires the establishment of reliable laboratory and on-site test methods, as well as easy-to-apply service-life models. This article highlights both laboratory and on-site test methods for chloride diffusivity/electrical resistivity and the relationship between these tests for a range of concretes. Further, a performance-based approach using an on-site diffusivity test is outlined that can provide an easier practice for engineers and asset managers to apply and adopt when specifying and testing concrete structures.
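On the diffusivity/resistivity relationship mentioned above, the two quantities are commonly linked through a Nernst-Einstein-type inverse proportionality; the sketch below uses a placeholder calibration constant, not a value from this article:

```python
# Hedged note: chloride diffusivity and electrical resistivity are commonly
# linked through a Nernst-Einstein-type inverse proportionality, D ~ k / rho.
# The calibration constant below is a placeholder; in practice k is fitted
# per concrete family from paired diffusivity/resistivity measurements.
def diffusivity_from_resistivity(rho_ohm_m, k_cal=2.0e-10):
    """Apparent chloride diffusivity (m^2/s) from bulk resistivity (ohm·m)."""
    return k_cal / rho_ohm_m
```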
Abstract:
Literature describing the notion and practice of business models has grown considerably over the last few years. Innovative business models appear in every sector of the economy, challenging traditional ways of creating and capturing value. However, research describing the theoretical foundations of the field is scarce and many questions remain. This article examines business models promoting various aspects of sustainable development and tests the explanatory power of two theoretical approaches, the resource-based view of the firm and transaction cost theory, regarding their emergence and successful market performance. Through the examples of industrial ecology and the sharing economy, the author shows that a sharp reduction of transaction costs (e.g. in the form of internet-based systems), coupled with resources that are widely available but were previously unutilised, may result in fast-growing new markets. This research also provides evidence that these two theoretical approaches can complement each other in explaining corporate behaviour.
Abstract:
Production Planning and Control (PPC) systems have grown and changed because of developments in planning tools and models as well as the use of computers and information systems in this area. Although much is available in research journals, the practice of PPC lags behind and draws little from published research; the reasons why PPC practice in SMEs lags behind need to be explored. This research examines the effect of identified variables, such as the forecasting, planning and control methods adopted, the demographics of the key person, the standardization practices followed, and the effects of training, learning and IT usage, on firm performance. A model and framework were developed based on the literature. The model was tested empirically using data collected through a questionnaire administered to selected respondents from Small and Medium Enterprises (SMEs) in India; the final data set included 382 responses. Hypotheses linking SME performance with the use of forecasting, planning and control were formed and tested. Exploratory factor analysis was used for data reduction and for identifying the factor structure, high- and low-performing firms were classified using a logistic regression model, and a confirmatory factor analysis was used to study the structural relationship between firm performance and the dependent variables.
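A minimal sketch of the classification step (file and variable names are hypothetical, not the study's):

```python
# Minimal sketch of the classification step; names are hypothetical.
import pandas as pd
from sklearn.linear_model import LogisticRegression

df = pd.read_csv("sme_survey.csv")       # hypothetical 382-response data set
X = df[["forecasting", "planning", "control", "it_usage", "training"]]
y = df["high_performer"]                 # 1 = high-performing, 0 = low

clf = LogisticRegression(max_iter=1000).fit(X, y)
print(dict(zip(X.columns, clf.coef_[0])))  # sign/size of each factor's effect
```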
Abstract:
Queueing theory is the mathematical study of queues, or waiting lines, in which an item from inventory is provided to a customer on completion of service. A typical queueing system consists of a queue and a server. Customers arrive in the system from outside and join the queue in a certain way; the server picks up customers and serves them according to a certain service discipline, and customers leave the system immediately after their service is completed. For queueing systems, queue length, waiting time and busy period are of primary interest in applications. The theory permits the derivation and calculation of several performance measures, including the average waiting time in the queue or the system, the mean queue length, the traffic intensity, the expected number waiting or receiving service, the mean busy period, the distribution of queue length, and the probability of finding the system in certain states, such as empty, full, having an available server, or having to wait a certain time to be served.
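As a concrete instance, the single-server Markovian (M/M/1) queue admits closed-form versions of several of these measures; a short worked example:

```python
# Closed-form performance measures for the single-server M/M/1 queue with
# Poisson arrivals (rate lam) and exponential service (rate mu); stable
# only when lam < mu.
def mm1_measures(lam, mu):
    rho = lam / mu                 # traffic intensity (server utilisation)
    return {
        "rho": rho,
        "L": rho / (1 - rho),      # mean number in system
        "Lq": rho**2 / (1 - rho),  # mean queue length (excluding in service)
        "W": 1 / (mu - lam),       # mean time in system
        "Wq": rho / (mu - lam),    # mean waiting time in queue
        "p_empty": 1 - rho,        # probability the system is empty
    }

print(mm1_measures(lam=2.0, mu=5.0))   # rho = 0.4, W = 1/3, ...
```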
Abstract:
This paper provides the first investigation of bond mutual fund performance during recession and expansion periods separately. Based on multi-factor performance evaluation models, the results show that bond funds significantly underperform the market during both phases of the business cycle. Nevertheless, unlike equity funds, bond funds exhibit considerably higher alphas during good economic states than during market downturns. These results, however, seem to be driven entirely by the global financial crisis subperiod. In contrast, during the recession associated with the Euro sovereign debt crisis, bond funds achieve neutral performance. This improved performance throughout the debt crisis seems to be related to more conservative investment strategies, which reflect an increase in managers' risk aversion.
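A hedged sketch of a multi-factor evaluation of this kind (the factor set and file are generic placeholders, not the paper's exact model): regress fund excess returns on factor returns, and read alpha off the intercept.

```python
# Hedged sketch of a multi-factor performance evaluation; the factor set is
# a generic placeholder. The regression intercept estimates the fund's alpha.
import pandas as pd
import statsmodels.api as sm

df = pd.read_csv("fund_returns.csv")            # hypothetical monthly data
y = df["fund_ret"] - df["rf"]                   # fund excess return
X = sm.add_constant(df[["bond_mkt", "term", "default"]])  # factor returns

res = sm.OLS(y, X).fit()
print("alpha:", res.params["const"], " t-stat:", res.tvalues["const"])
```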