Abstract:
Recent research into resting-state functional magnetic resonance imaging (fMRI) has shown that the brain is highly active during rest. This thesis uses blood oxygenation level dependent (BOLD) signals to investigate the spatial and temporal functional network information contained in resting-state data. It assesses the feasibility of extracting functional connectivity networks with different methods, examines the dynamic variability within some of those methods, and explores whether valid networks can be produced from a sparsely sampled subset of the original data.
In this work we use four main methods: independent component analysis (ICA), principal component analysis (PCA), correlation, and a point-processing technique. Each method brings its own assumptions, strengths, and limitations to exploring how resting-state components interact in space and time.
Correlation is perhaps the simplest technique: resting-state patterns are identified by how similar each voxel's time profile is to a seed region's time profile. However, this method requires a seed region and can identify only one resting-state network at a time. This simple correlation technique reproduces the resting-state network both from a single subject's scan session and from a group of 16 subjects.
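As an illustration, a minimal sketch of such a seed-based correlation map in Python, assuming `data` is a preprocessed voxels-by-timepoints BOLD matrix and `seed_ts` is the seed region's mean time course (both names are ours, not the thesis's):

```python
import numpy as np

def seed_correlation_map(data, seed_ts):
    """Pearson correlation of every voxel's time course with the seed.
    data: (voxels, timepoints) array; seed_ts: (timepoints,) array.
    Both are assumed already detrended and bandpass-filtered."""
    data_c = data - data.mean(axis=1, keepdims=True)
    seed_c = seed_ts - seed_ts.mean()
    num = data_c @ seed_c
    den = np.sqrt((data_c ** 2).sum(axis=1) * (seed_c ** 2).sum()) + 1e-12
    return num / den  # one correlation value per voxel
```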
Independent component analysis, the second technique, is supported by established software packages and can extract multiple components from a data set in a single analysis. Its disadvantage is that the resting-state networks it produces are all mutually independent, under the assumption that the spatial pattern of functional connectivity is the same across all time points. ICA successfully reproduces resting-state connectivity patterns for both a single subject and a 16-subject concatenated data set.
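For reference, spatial ICA of this kind can be sketched with scikit-learn's FastICA; the matrix orientation and component count below are illustrative assumptions, not the thesis's settings:

```python
from sklearn.decomposition import FastICA

def spatial_ica(data, n_components=20):
    """data: (voxels, timepoints). Treating voxels as samples makes the
    recovered components independent over space (spatial ICA), the
    convention used for resting-state networks."""
    ica = FastICA(n_components=n_components, max_iter=1000, random_state=0)
    spatial_maps = ica.fit_transform(data)   # (voxels, components)
    time_courses = ica.mixing_               # (timepoints, components)
    return spatial_maps, time_courses
```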
Principal component analysis compresses the dimensionality of the data to find the directions in which the variance of the data is greatest. It uses the same basic matrix operations as ICA, with a few important differences that are outlined later in this text. With this method, different functional connectivity patterns are sometimes identifiable, but with a large amount of noise and variability.
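A comparable PCA sketch, again with illustrative names and dimensions, highlights the contrast: PCA yields orthogonal components ranked by explained variance (uncorrelated, not statistically independent as in ICA):

```python
from sklearn.decomposition import PCA

def pca_maps(data, n_components=20):
    """data: (voxels, timepoints). Each component is an orthogonal
    direction of maximal remaining variance across voxels."""
    pca = PCA(n_components=n_components)
    scores = pca.fit_transform(data)          # (voxels, components)
    return scores, pca.explained_variance_ratio_
```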
To begin investigating the dynamics of functional connectivity, the correlation technique is used to compare the first and second halves of a scan session; minor differences are discernible between the correlation results of the two halves. A sliding-window technique is then implemented to track the correlation coefficients over time using windows of different sizes. This technique makes it apparent that the correlation level with the seed region is not static over the length of the scan.
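A minimal sketch of the sliding-window idea, assuming two preprocessed time courses and an illustrative window length in volumes:

```python
import numpy as np

def sliding_window_corr(seed_ts, target_ts, window=30, step=1):
    """Correlation between seed and target within each window of
    `window` volumes, advanced by `step`; returns one r per window."""
    n = len(seed_ts)
    return np.array([
        np.corrcoef(seed_ts[t:t + window], target_ts[t:t + window])[0, 1]
        for t in range(0, n - window + 1, step)
    ])
```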
The last method introduced, a point-process method, is among the more novel techniques because it does not require analysis of the continuous time series. Here, network information is extracted from brief occurrences of high- or low-amplitude signals within a seed region. Because point processing uses fewer time points from the data, the statistical power of the results is lower, and there are larger variations in DMN patterns between subjects. In addition to improved computational efficiency, a benefit of the point-process method is that the patterns produced for different seed regions do not have to be independent of one another.
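One plausible reading of the point-process approach, sketched below, averages whole-brain frames at the moments the seed signal crosses a threshold (as in published point-process work); the threshold value and array names are assumptions:

```python
import numpy as np

def point_process_map(data, seed_ts, threshold=1.0):
    """Average the whole-brain frames at upward threshold crossings of
    the standardized seed signal. data: (voxels, timepoints)."""
    z = (seed_ts - seed_ts.mean()) / seed_ts.std()
    events = np.where((z[1:] > threshold) & (z[:-1] <= threshold))[0] + 1
    return data[:, events].mean(axis=1)  # mean frame at crossing events
```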
This work compares four distinct methods of identifying functional connectivity patterns. ICA is currently used by many scientists studying functional connectivity patterns. The PCA technique is not well suited to the noise level and distribution of these data sets. The correlation technique is simple and obtains good results, although it needs a seed region and assumes that the DMN regions are correlated throughout the entire scan; when the more dynamic aspects of correlation are examined, changing patterns of correlation are evident. The point-process method shows promising results in identifying functional connectivity networks using only low- and high-amplitude BOLD signals.
Abstract:
The goals of this program of research were to examine the link between self-reported vulvar pain and clinical diagnoses, and to create a user-friendly assessment tool to aid in that process. These goals were pursued through a series of four empirical studies (Chapters 2-6): one archival study, two online studies, and one study conducted in a Women’s Health clinic. In Chapter 2, the link between self-report and clinical diagnosis was confirmed by extracting data from multiple studies conducted in the Sexual Health Research Laboratory over the course of several years. We demonstrated the accuracy of diagnosis based on multiple factors, and explored the varied gynecological presentation of different diagnostic groups. Chapter 3 was based on an online study designed to create the Vulvar Pain Assessment Questionnaire (VPAQ) inventory. Following the construct validation approach, a large pool of potential items was created to capture a broad selection of vulvar pain symptoms. Nearly 300 participants completed the entire item pool, and a series of factor analyses was used to narrow down the items and create scales/subscales. Relationships were computed among subscales and validated scales to establish convergent and discriminant validity. Chapters 4 and 5 were conducted in the Department of Obstetrics & Gynecology at Oregon Health & Science University. The brief screening version of the VPAQ was employed with patients of the Program in Vulvar Health at the Center for Women’s Health. The accuracy and usefulness of the VPAQscreen were determined from the perspective of patients as well as their health care providers, and the treatment-seeking experiences of patients were explored. Finally, a second online study was conducted to confirm the factor structure, internal consistency, and test-retest reliability of the VPAQ inventory. The results presented in these chapters confirm the link between targeted questions and accurate diagnoses, and provide a guideline that is useful and accessible for providers and patients.
Abstract:
A comprehensive expert consultation was conducted to assess the status, trends and the most important drivers of change in the abundance and geographical distribution of kelp forests in European waters. This consultation included an online questionnaire, results from a workshop, and data provided by a selected group of experts working on kelp forest mapping and eco-evolutionary research. Differences in status and trends according to geographical areas, species identity and small-scale variations within the same habitat were shown by assembling and mapping kelp distribution and trend data. Significant data gaps for some geographical regions, such as the Mediterranean and the southern Iberian Peninsula, were also identified. The data used for this study confirmed a general trend of decreasing abundance of some native kelp species at their southern distributional range limits and increasing abundance in other parts of their distribution (Saccharina latissima and Saccorhiza polyschides). The expansion of the introduced species Undaria pinnatifida was also registered. Drivers of observed changes in kelp forest distribution and abundance were assessed using experts’ opinions. Multiple possible drivers were identified, including global warming, sea urchin grazing, harvesting, pollution and fishing pressure, and their impact varied between geographical areas. Overall, the results highlight major threats to these ecosystems but also opportunities for conservation. Major requirements to ensure adequate protection of coastal kelp ecosystems along European coastlines are discussed, based on the local to regional gaps detected in the study.
Abstract:
Our key contribution is a flexible, automated marking system that adds desirable functionality to existing E-Assessment systems. In our approach, any given E-Assessment system is relegated to a data-collection mechanism, whereas marking and the generation and distribution of personalised per-student feedback is handled separately by our own system. This allows content-rich Microsoft Word feedback documents to be generated and distributed to every student simultaneously according to a per-assessment schedule.
The feedback is adaptive in that it corresponds to the answers given by the student and provides guidance on where they may have gone wrong. It is not limited to simple multiple-choice questions, which are the most prescriptive question type offered by most E-Assessment systems and as such the most straightforward to mark consistently and to supply with individual per-alternative feedback strings. Our system is also better equipped to handle mathematical symbols and images within the feedback documents, unlike existing E-Assessment systems, which can only handle simple text strings.
As well as MCQs, the system reliably and robustly handles Multiple Response, Text Matching and Numeric style questions in a more flexible manner than Questionmark: Perception and other E-Assessment systems. It can also reliably handle multi-part questions in which the response to an earlier part influences the answer to a later one, and can adjust both scoring and feedback appropriately.
New question formats can be added at any time, provided a corresponding marking method conforming to certain templates can also be programmed. Indeed, any question type for which a programmatic method of marking can be devised may be supported by our system. Furthermore, since the student’s response to each question is marked programmatically, our system can be set to allow for minor deviations from the correct answer and, if appropriate, award partial marks.
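As a hypothetical illustration of such a template-conforming marking method (the interface below is ours, not the system's actual API), a numeric marker with a tolerance band and partial marks might look like this:

```python
def mark_numeric(response, correct, tolerance=0.01, full_marks=2.0):
    """Return (marks, feedback) for a numeric answer, allowing minor
    deviations from the correct value and awarding partial credit."""
    try:
        value = float(response)
    except (TypeError, ValueError):
        return 0.0, "Your answer could not be read as a number."
    error = abs(value - correct)
    if error <= tolerance * abs(correct):
        return full_marks, "Correct."
    if error <= 5 * tolerance * abs(correct):
        return full_marks / 2, "Close: check your rounding or units."
    return 0.0, f"Incorrect: expected a value near {correct}."
```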
Abstract:
PROGNOSTIC FACTORS PREDICTING FUNCTIONAL OUTCOME AT FOUR MONTHS FOLLOWING ACUTE ANKLE SPRAIN
Bleakley C.M.(1), O'Connor S.R.(1), Tully M.A.(2), Rocke L.G.(3), MacAuley D.C.(1), Bradbury I.(4), Keegan S.(4), McDonough S.M.(1)
(1) University of Ulster, Health & Rehabilitation Sciences Research Institute, Newtownabbey, United Kingdom; (2) Queen's University, UKCRC Centre of Excellence for Public Health (NI), Belfast, United Kingdom; (3) Royal Victoria Hospital, Department of Emergency Medicine, Belfast, United Kingdom; (4) Frontier Science (Scotland), Kincraig, Inverness-shire, United Kingdom
Purpose: To identify clinically relevant factors assessed following acute ankle sprain that predict functional recovery at four months post-injury.
Relevance: Ankle sprains are one of the most common musculoskeletal injuries, with an estimated 5000 new cases occurring each day in the United Kingdom. In the acute phase, ankle sprains may be associated with pain and loss of function. In the longer term there is a risk of residual problems, including chronic pain or reinjury. Few studies have sought to examine factors associated with a poor long-term prognosis.
Participants: 101 patients (age: mean (SD) 25.9 (7.9) years; body mass index (BMI): 25.3 (3.5) kg/m2) with an acute grade 1 or 2 ankle sprain attending an accident and emergency department or sports injury clinic. Exclusion criteria included complete (grade 3) rupture of the ankle ligament complex, bony ankle injury or multiple injuries.
Methods: Participants were allocated, as part of a randomised controlled trial, to an accelerated intervention incorporating intermittent ice and early therapeutic exercise, or to a standard protection, rest, ice, compression, and elevation intervention for one week. Treatment was then standardised in both groups and consisted of ankle rehabilitation exercises focusing on muscle strengthening, neuromuscular training, and sports-specific functional exercises for a period of approximately four to six weeks. On initial assessment, age, gender, mechanism of injury, presence of an audible pop or snap, and the presence of contact during the injury were recorded. The following factors were also recorded at baseline and at one and four weeks post-injury: weight-bearing dorsiflexion test, lateral hop test, presence of medial pain on palpation, and a positive impingement sign. Functional status was assessed using the Karlsson score at baseline, at week four and at four months. Reinjury rates were recorded throughout the intervention phase and at four months.
Analysis: A mixed between-within subjects analysis of variance (ANOVA) was used to determine the effect of each factor on functional status at week four and at four months. Significance was set at a Bonferroni-adjusted level of 0.0125 (0.05/4).
Results: Eighty-five participants (84%) were available at final follow-up assessment. Pain on weight-bearing dorsiflexion and lateral hop tests at week four were both associated with a lower functional score at four months post-injury (P = 0.011 and P = 0.001). No other significant interactions were observed at any other timepoint (baseline or week one). There were only two reinjuries within the four-month follow-up period, with a further two reported at approximately six months post-injury. We were therefore unable to determine whether any factors were associated with an increased risk of reinjury.
Conclusions: Potential prognostic factors on initial or early examination after acute ankle sprain did not help predict functional recovery at four months post-injury. However, pain on weight-bearing dorsiflexion and lateral hop tests observed at four weeks was associated with a slower rate of recovery.
Implications: Some clinical tests may help identify patients at risk of poor functional recovery after acute ankle sprain. However, further work is required to examine factors which may be predictive on initial assessment.
Key words: 1. Prognostic factors 2. Recovery 3. Ankle sprain
Funding acknowledgements: Physiotherapy Research Foundation, Chartered Society of Physiotherapy, Strategic Priority Fund; Department of Employment and Learning, Northern Ireland.
Ethics approval: Office for Research Ethics Committee (UK).
Abstract:
The hypervariable regions of immunoglobulin heavy-chain (IgH) rearrangements provide a specific tumor marker in multiple myeloma (MM). Recently, real-time PCR assays have been developed to quantify the number of tumor cells after treatment. However, these strategies are hampered by the presence of somatic hypermutation (SH) in VDJH rearrangements from MM patients, which causes mismatches between primers and/or probes and the target, leading to inaccurate quantification of tumor cells. Our group has recently described a 60% incidence of incomplete DJH rearrangements in MM patients, with no or very low rates of SH. In this study, we compare the efficiency of a real-time PCR approach for the analysis of both complete and incomplete IgH rearrangements in eight MM patients using only three JH consensus probes. We were able to design an allele-specific oligonucleotide for both the complete and incomplete rearrangement in all patients. DJH rearrangements fulfilled the criteria of effectiveness for real-time PCR in all samples (i.e., no unspecific amplification, detection of fewer than 10 tumor cells within a 10^5 polyclonal background, and correlation coefficients of standard curves higher than 0.98). By contrast, only three out of eight VDJH rearrangements fulfilled these criteria. Further analyses showed that the remaining five VDJH rearrangements carried three or more somatic mutations in the probe and primer sites, leading to a dramatic decrease in the melting temperature. These results support the use of incomplete DJH rearrangements instead of complete, somatically mutated VDJH rearrangements for the investigation of minimal residual disease in multiple myeloma.
Abstract:
The introduction of a poster presentation as a formative assessment method, in place of a multiple-choice examination, after the first phase of a three-phase “health and well-being” module in an undergraduate nursing degree programme was greeted with a storm of criticism from fellow lecturers, who argued that poster presentations are neither valid nor reliable and are irrelevant to the assessment of learning in the module. This paper investigates these criticisms through the literature on producing nurses fit for practice, nurse curriculum development and wider nurse education, and the purpose of assessment, validity and reliability, in order to critically evaluate the poster presentation as a legitimate assessment method for these aims.
Abstract:
There has been increasing interest in the development of new methods using Pareto optimality to deal with multi-objective criteria (for example, accuracy and time complexity). Once one has developed an approach to a problem of interest, the question is how to compare it with the state of the art. In machine learning, algorithms are typically evaluated by comparing their performance on different data sets by means of statistical tests. The standard tests used for this purpose can consider neither multiple performance measures jointly nor multiple competitors at once. The aim of this paper is to resolve these issues by developing statistical procedures that account for multiple competing measures at the same time and compare multiple algorithms altogether. In particular, we develop two tests: a frequentist procedure based on the generalized likelihood-ratio test and a Bayesian procedure based on a multinomial-Dirichlet conjugate model. We further extend them by discovering conditional independences among measures to reduce the number of parameters of the models, as the number of studied cases is usually very small in such comparisons. Data from a comparison among general-purpose classifiers is used to show a practical application of our tests.
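A hedged sketch of the Bayesian side of such a comparison (not the paper's exact model), assuming illustrative win counts over data sets for two algorithms judged on two measures:

```python
import numpy as np

# counts: how often, across data sets, algorithm A beats B on
# [both measures, only accuracy, only time, neither] - illustrative data.
counts = np.array([14, 6, 3, 7])
prior = np.ones(4)  # uniform Dirichlet prior over the four outcomes
# Conjugacy: the posterior is Dirichlet(prior + counts); sample it.
posterior = np.random.default_rng(0).dirichlet(prior + counts, 100_000)
p_dominates = (posterior[:, 0] > 0.5).mean()
print(f"P(A beats B on both measures most of the time) = {p_dominates:.3f}")
```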
Abstract:
We investigate the impact of co-channel interference on the security performance of multiple amplify-and-forward (AF) relaying networks, where N intermediate AF relays assist the data transmission from the source to the destination. The relays are corrupted by multiple co-channel interferers, and the information transmitted from the relays to the destination can be overheard by the eavesdropper. To deal with the interference and wiretap, the best of the N relays is selected for security enhancement. To this end, we derive a novel lower bound on the secrecy outage probability (SOP), which is then used to present two best-relay selection criteria based on the instantaneous and statistical channel information of the interfering links. For these criteria and the conventional max-min criterion, we quantify the impact of co-channel interference and relay selection by deriving the lower bound on the SOP. Furthermore, we derive the asymptotic SOP for each criterion to explicitly reveal the impact of transmit power allocation among interferers on the secrecy performance, which offers valuable insights for practical design. We demonstrate that all selection criteria achieve the full secrecy diversity order N, while the two criteria proposed in this paper outperform the conventional max-min scheme.
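For reference, the abstract relies on the standard definition of the secrecy outage probability; a conventional formulation (notation ours, not taken from the paper) is:

```latex
% Secrecy capacity of the selected link and its outage probability,
% where $\gamma_D$ and $\gamma_E$ are the SINRs at the destination and
% the eavesdropper, and $R_s$ is the target secrecy rate.
\begin{align}
  C_s &= \left[\log_2(1+\gamma_D) - \log_2(1+\gamma_E)\right]^{+}, \\
  P_{\mathrm{SOP}} &= \Pr\{C_s < R_s\}.
\end{align}
```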
Abstract:
OBJECTIVE: To determine risk of Down syndrome (DS) in multiple relative to singleton pregnancies, and compare prenatal diagnosis rates and pregnancy outcome.
DESIGN: Population-based prevalence study based on EUROCAT congenital anomaly registries.
SETTING: Eight European countries.
POPULATION: 14.8 million births 1990-2009; 2.89% multiple births.
METHODS: DS cases included livebirths, fetal deaths from 20 weeks, and terminations of pregnancy for fetal anomaly (TOPFA). Zygosity was inferred from like/unlike sex for birth denominators, and from concordance for DS cases.
MAIN OUTCOME MEASURES: Relative risk (RR) of DS per fetus/baby from multiple versus singleton pregnancies and per pregnancy in monozygotic/dizygotic versus singleton pregnancies. Proportion of cases prenatally diagnosed, and pregnancy outcome.
STATISTICAL ANALYSIS: Poisson and logistic regression stratified for maternal age, country and time.
RESULTS: Overall, the adjusted (adj) RR of DS for fetus/babies from multiple versus singleton pregnancies was 0.58 (95% CI 0.53-0.62), similar for all maternal ages except for mothers over 44, for whom it was considerably lower. In 8.7% of twin pairs affected by DS, both co-twins were diagnosed with the condition. The adjRR of DS for monozygotic versus singleton pregnancies was 0.34 (95% CI 0.25-0.44) and for dizygotic versus singleton pregnancies 1.34 (95% CI 1.23-1.46). DS fetuses from multiple births were less likely to be prenatally diagnosed than singletons (adjOR 0.62 [95% CI 0.50-0.78]) and following diagnosis less likely to be TOPFA (adjOR 0.40 [95% CI 0.27-0.59]).
CONCLUSIONS: The risk of DS per fetus/baby is lower in multiple than singleton pregnancies. These estimates can be used for genetic counselling and prenatal screening.
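A hedged sketch of the kind of stratified Poisson model described under STATISTICAL ANALYSIS above; the data frame and column names are hypothetical, not from the study:

```python
import numpy as np
import statsmodels.api as sm
import statsmodels.formula.api as smf

def adjusted_rr(df):
    """Adjusted RR of DS for multiple vs singleton pregnancies.
    df: hypothetical frame with one row per stratum, holding DS case
    counts, births (exposure), a 0/1 `multiple` indicator, and
    maternal age band, country and period categories."""
    model = smf.glm(
        "cases ~ multiple + C(age_band) + C(country) + C(period)",
        data=df,
        family=sm.families.Poisson(),
        offset=np.log(df["births"]),
    ).fit()
    return np.exp(model.params["multiple"])  # rate ratio for `multiple`
```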
Abstract:
The Physical Internet (PI) is an initiative that identifies several symptoms of inefficiency and unsustainability in logistics systems and addresses them by proposing a new paradigm called hyperconnected logistics. Similar to the Digital Internet, which connects thousands of personal and local computer networks, the PI will connect today's fragmented logistics systems, the main goal being to improve the performance of logistics systems from economic, environmental and social perspectives. Focusing specifically on distribution systems, this thesis questions the order of magnitude of the performance gains achievable by exploiting PI-enabled hyperconnected distribution. It also addresses the characterisation of hyperconnected distribution planning. To answer the first question, an exploratory research approach based on optimization modeling is applied, in which current and prospective distribution systems are modeled. A set of realistic business samples is then created, and their economic and environmental performance is evaluated while targeting multiple levels of social performance. A conceptual planning framework, including mathematical modeling, is proposed to support decision making in hyperconnected distribution systems. Based on the results of our study, we show that a substantial gain can be obtained by migrating to hyperconnected distribution. We also show that the magnitude of the gain varies with the characteristics of the activities and the targeted social performance. Since the Physical Internet is a new topic, Chapter 1 briefly introduces the PI and hyperconnectivity. Chapter 2 discusses the foundations, objective and methodology of the research; the challenges addressed during this research are described and the type of contributions targeted is highlighted. Chapter 3 presents the optimization models: informed by the characteristics of current and prospective distribution systems, three distribution-system-based models are developed. Chapter 4 deals with the characterisation of the business samples as well as the modeling and calibration of the parameters used in the models. The results of the exploratory research are presented in Chapter 5. Chapter 6 describes the conceptual planning framework for hyperconnected distribution. Chapter 7 summarises the thesis and highlights its main contributions; it also identifies the limitations of the research and potential avenues for future research.
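As a hedged illustration of the flavour of such optimization models (not the thesis's actual formulations), a minimal cost-minimising flow model through shared hubs can be written with PuLP; all sets and parameters below are placeholders:

```python
from pulp import LpMinimize, LpProblem, LpVariable, lpSum

def min_cost_distribution(plants, hubs, customers, cost, demand, capacity):
    """Minimise shipping cost from plants through shared (hyperconnected)
    hubs to customers. cost[(i, j)]: unit cost on arc i->j."""
    prob = LpProblem("hyperconnected_distribution", LpMinimize)
    arcs = [(i, j) for i in plants for j in hubs] + \
           [(j, k) for j in hubs for k in customers]
    x = {a: LpVariable(f"x_{a[0]}_{a[1]}", lowBound=0) for a in arcs}
    prob += lpSum(cost[a] * x[a] for a in arcs)          # total cost
    for k in customers:                                   # meet demand
        prob += lpSum(x[(j, k)] for j in hubs) == demand[k]
    for j in hubs:                                        # hub balance
        prob += lpSum(x[(i, j)] for i in plants) == \
                lpSum(x[(j, k)] for k in customers)
    for i in plants:                                      # plant capacity
        prob += lpSum(x[(i, j)] for j in hubs) <= capacity[i]
    prob.solve()
    return {a: x[a].value() for a in arcs}
```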
Abstract:
One of the biggest challenges contaminant hydrogeology is facing is how to adequately address the uncertainty associated with model predictions. Uncertainty arises from multiple sources, such as interpretive error, calibration accuracy, parameter sensitivity and variability. This critical issue needs to be properly addressed in order to support environmental decision-making processes. In this study, we perform Global Sensitivity Analysis (GSA) on a contaminant transport model for the assessment of hydrocarbon concentration in groundwater. We provide a quantification of the environmental impact and, given the incomplete knowledge of hydrogeological parameters, we evaluate which are the most influential and therefore require greater accuracy in the calibration process. Parameters are treated as random variables and a variance-based GSA is performed in an optimized numerical Monte Carlo framework. The Sobol indices are adopted as sensitivity measures, and they are computed by employing meta-models to characterize the migration process while reducing the computational cost of the analysis. The proposed methodology allows us to extend the number of Monte Carlo iterations, identify the influence of uncertain parameters, and achieve considerable savings in computational time while maintaining acceptable accuracy.
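A hedged sketch of variance-based GSA with Sobol indices using the SALib package; the parameter names, bounds and surrogate function below are placeholders standing in for the study's meta-model of the transport model:

```python
import numpy as np
from SALib.sample import saltelli
from SALib.analyze import sobol

# Placeholder problem definition: three uncertain hydrogeological
# parameters with illustrative bounds (not the study's values).
problem = {
    "num_vars": 3,
    "names": ["hydraulic_conductivity", "porosity", "dispersivity"],
    "bounds": [[1e-6, 1e-3], [0.1, 0.4], [0.5, 10.0]],
}

def surrogate(x):
    """Placeholder meta-model mapping parameters to a concentration."""
    return x[:, 0] * 1e4 + np.log(x[:, 1]) + 0.1 * x[:, 2]

X = saltelli.sample(problem, 1024)  # Saltelli's extension of Sobol sampling
Y = surrogate(X)
Si = sobol.analyze(problem, Y)
print(Si["S1"], Si["ST"])           # first-order and total-order indices
```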
Abstract:
The change in the economic world and the emergence of the Internet as a tool for communication and integration among markets have forced organizations to adopt a different, process-oriented structure with a focus on information management. Information technology has thus gained prominence in the organizational context, increasing the complexity and range of services provided by this function. Moreover, outsourcing has become an important model for a flexible corporate structure, helping organizations achieve better results in their activities and processes and become more competitive. IT outsourcing requires following certain steps, ranging from strategic assessment to the management of the outsourced service. These steps can influence the form of contracting services, varying with the types of service providers and contracts. This study therefore aimed to identify how the IT outsourcing process influences the use of models for contracting services. To this end, a multiple-case study was conducted involving two companies in the health sector of Rio Grande do Norte State. Data were collected from the CIOs of the companies surveyed through semi-structured interviews. The results show that a more structured outsourcing process favours the use of a more advanced contracting model. Certain features of these steps carry this influence more clearly, such as the goals pursued by outsourcing, the criteria used in selecting the supplier, the contract negotiation, how services are transitioned, and the management methods used, although these can vary with the level of maturity in the relationship between the companies examined. It was also found that the contracting model used may in turn influence how the IT outsourcing process is developed, requiring (or not) greater formalisation and organisation.
Abstract:
The Short-Term Assessment of Risk and Treatability (START) is a structured judgement tool used to inform risk estimation for multiple adverse outcomes. In research, risk estimates outperform the tool's strength and vulnerability scales for violence prediction. Little is known about what its component parts contribute to the assignment of risk estimates, or how those estimates fare in the prediction of non-violent adverse outcomes compared with the structured components. START assessment and outcomes data from a secure mental health service (N = 84) were collected. Binomial and multinomial regression analyses determined the contribution of selected elements of the START structured domain and recent adverse risk events to risk estimates and to outcome prediction for violence, self-harm/suicidality, victimisation, and self-neglect. START vulnerabilities and lifetime history of violence predicted the violence risk estimate; self-harm and victimisation estimates were predicted only by corresponding recent adverse events. Recent adverse events uniquely predicted all corresponding outcomes, with the exception of self-neglect, which was predicted by the strength scale. Only for victimisation did the risk estimate outperform prediction based on the START components and recent adverse events. In the absence of recent corresponding risk behaviour, restrictions imposed on the basis of START-informed risk estimates could be unwarranted and may be unethical.
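A hedged sketch of the kind of binomial (logistic) regression described above, using statsmodels; the data frame and column names are hypothetical, not the study's variables:

```python
import statsmodels.formula.api as smf

def fit_violence_model(df):
    """Logistic regression of a 0/1 violence outcome on START-derived
    predictors. df: hypothetical frame with one row per service user,
    holding scale totals and recent/lifetime adverse-event indicators."""
    model = smf.logit(
        "violence ~ vulnerability_total + strength_total"
        " + recent_violence + history_violence",
        data=df,
    ).fit()
    return model.summary()  # coefficients on the log-odds scale
```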