879 results for Multi-criteria Evaluation
Abstract:
The research performed during the PhD and presented in this thesis allowed judgments to be made on the pushover analysis method and on its ability to capture the correct structural seismic response. The extensive critical review of existing pushover procedures (illustrated in Chapter 1) outlined their major issues, related to the assumptions and hypotheses made when applying the method. Therefore, with the purpose of evaluating the effectiveness of pushover procedures, a wide numerical investigation has been performed, focusing in particular on structural irregularity in elevation, on the choice of the load vector and on the criteria for updating it. Eight pushover procedures have been considered in the study: four conventional, one multi-modal and three adaptive. Their effectiveness in identifying the correct dynamic structural response has been evaluated by performing several dynamic and static non-linear analyses on eight RC frames characterized by different properties in terms of regularity in elevation. The comparison of static and dynamic results has then made it possible to evaluate the examined pushover procedures and to identify the margin of error to be expected when using each of them. Both for the base shear-top displacement curves and for the storey parameters considered, the best agreement with the dynamic response has been observed for the Multi-Modal Pushover procedure. Attention has therefore been focused on the Displacement-based Adaptive Pushover, for which an improvement strategy has been defined, and on modal combination rules, for which an innovative method based on a quadratic combination of the modal shapes (QMC) is proposed. The latter has been implemented in a conventional pushover procedure, whose results have been compared with those obtained by other multi-modal procedures. The development of research on pushover analysis is very important because the objective is the definition of a simple, effective and reliable analysis method, an indispensable tool in the seismic evaluation of new or existing structures.
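For readers unfamiliar with multi-modal load patterns, the sketch below shows, on invented data, how modal storey forces can be combined into a single lateral load vector using a generic SRSS rule. The frame properties (mode shapes, masses, spectral accelerations) are hypothetical, and the SRSS combination is only a stand-in: the QMC rule proposed in the thesis is not reproduced here.

```python
import numpy as np

# Hypothetical 4-storey frame: mode shapes, masses and spectral accelerations
# are illustrative values, not taken from the thesis.
phi = np.array([[0.35, -0.80,  1.00],
                [0.65, -0.60, -0.90],
                [0.87,  0.30, -0.30],
                [1.00,  1.00,  0.60]])      # mode shapes, storeys x modes
m  = np.array([300.0, 300.0, 300.0, 250.0]) # storey masses [t]
Sa = np.array([0.25, 0.60, 0.80])           # spectral acceleration per mode [g]

M = np.diag(m)
# Modal participation factors: Gamma_i = (phi_i' M r) / (phi_i' M phi_i), with r = 1
gamma = (phi.T @ m) / np.einsum('ij,ij->j', phi, M @ phi)

# Storey force pattern of each mode: F_i = Gamma_i * M * phi_i * Sa_i
F_modes = (M @ phi) * gamma * Sa            # storeys x modes

# Generic SRSS combination of the modal patterns into one pushover load vector
F_srss = np.sqrt((F_modes ** 2).sum(axis=1))
print(np.round(F_srss, 1))
```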
Abstract:
Different tools have been used to set up and apply the model for the fulfilment of the objective of this research.
1. The model. The base model used is the Analytic Hierarchy Process (AHP), adapted to perform a benefit-cost analysis. The AHP, developed by Thomas Saaty, is a multi-criteria decision-making technique that decomposes a complex problem into a hierarchy. It is used to derive ratio scales from both discrete and continuous paired comparisons in multilevel hierarchic structures. These comparisons may be taken from actual measurements or from a fundamental scale that reflects the relative strength of preferences and feelings.
2. Tools and methods. 2.1. The Expert Choice software: Expert Choice is a tool that allows each operator to easily implement the AHP model at every stage of the problem. 2.2. Personal interviews with the farms: for this research, the EMAS-certified farms of the Emilia-Romagna region were identified; the information was provided by the EMAS centre in Vienna. Personal interviews were carried out at each farm in order to obtain a complete and realistic judgment on each criterion of the hierarchy. 2.3. Questionnaire: a supporting questionnaire was also delivered and used for the interviews.
3. Elaboration of the data. After data collection, the data were processed using the Expert Choice software.
4. Results of the analysis. The figures above (see the other document) yield a series of numbers that are fractions of unity, to be interpreted as the relative contribution of each element to the fulfilment of the corresponding objective. Calculating the benefit/cost ratio for each alternative gives the following (a worked sketch of this calculation is given after this abstract). Alternative one, implement EMAS: benefits ratio 0.877, costs ratio 0.815, benefit/cost ratio 0.877/0.815 = 1.08. Alternative two, do not implement EMAS: benefits ratio 0.123, costs ratio 0.185, benefit/cost ratio 0.123/0.185 = 0.66. As stated above, the alternative with the highest ratio is the best solution for the organization. This means that the research carried out and the model implemented suggest that EMAS adoption is the best alternative for the agricultural sector. It has to be noted, however, that the ratio is 1.08, a relatively low positive value. This shows the fragility of this conclusion and suggests a careful examination of the benefits and costs of each farm before adopting the scheme. On the other hand, the result needs to be taken into consideration by policy makers in order to strengthen their interventions regarding adoption of the scheme in the agricultural sector. According to the AHP elaboration of the judgments, the main considerations on benefits are:
- Legal compliance appears to be the most important benefit for the agricultural sector, with a rank of 0.471.
- The next two most important benefits are improved internal organization (rank 0.230), followed by competitive advantage (rank 0.221), mostly due to the sub-element improved image (rank 0.743).
- Finally, even though incentives are not ranked among the most important elements, the financial ones seem to have been decisive in the decision-making process.
The main considerations on costs are:
- External costs appear to be far more important than internal ones (rank 0.857 versus 0.143), suggesting that EMAS costs for consultancy and verification remain the biggest obstacle.
- The implementation of the EMS is the most challenging element among the internal costs (rank 0.750).
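As a rough illustration of the arithmetic behind these figures, the sketch below derives AHP priorities for the two alternatives from a pairwise comparison via the principal eigenvector and forms the benefit/cost ratios. The pairwise judgment values (7 and 4.4) are hypothetical, chosen only so that the resulting priorities come close to the 0.877/0.123 and 0.815/0.185 reported above; they are not the study's actual aggregated judgments.

```python
import numpy as np

def ahp_priorities(pairwise):
    """Principal-eigenvector priorities of a reciprocal pairwise comparison matrix (Saaty)."""
    vals, vecs = np.linalg.eig(pairwise)
    w = np.real(vecs[:, np.argmax(np.real(vals))])
    return w / w.sum()

# Hypothetical pairwise judgments between the two alternatives
benefits = ahp_priorities(np.array([[1.0, 7.0], [1 / 7.0, 1.0]]))   # ~[0.875, 0.125]
costs    = ahp_priorities(np.array([[1.0, 4.4], [1 / 4.4, 1.0]]))   # ~[0.815, 0.185]

for name, b, c in zip(["Implement EMAS", "Do not implement EMAS"], benefits, costs):
    print(f"{name}: benefits={b:.3f}  costs={c:.3f}  B/C={b / c:.2f}")
```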
Abstract:
The Gaia space mission is a major project for the European astronomical community. As challenging as it is, the processing and analysis of the huge data flow coming from Gaia is the subject of thorough study and preparatory work by the DPAC (Data Processing and Analysis Consortium), in charge of all aspects of the Gaia data reduction. This PhD thesis was carried out in the framework of the DPAC, within the team based in Bologna. The task of the Bologna team is to define the calibration model and to build a grid of spectro-photometric standard stars (SPSS) suitable for the absolute flux calibration of the Gaia G-band photometry and the BP/RP spectrophotometry. Such a flux calibration can be performed by repeatedly observing each SPSS during the lifetime of the Gaia mission and by comparing the observed Gaia spectra to the spectra obtained by our ground-based observations. Because of both the different observing sites involved and the huge number of frames expected (≃100,000), it is essential to maintain the maximum homogeneity in data quality, acquisition and treatment; particular care has to be taken to test the capabilities of each telescope/instrument combination (through the “instrument familiarization plan”) and to devise methods to keep under control, and where necessary correct for, the typical instrumental effects that can affect the high precision required for the Gaia SPSS grid (a few % with respect to Vega). I contributed to the ground-based survey of Gaia SPSS in many respects: the observations, the instrument familiarization plan, the data reduction and analysis activities (both photometry and spectroscopy), and the maintenance of the data archives. However, the field I was personally responsible for was photometry, and in particular relative photometry for the production of short-term light curves. In this context I defined and tested a semi-automated pipeline that allows the pre-reduction of imaging SPSS data and the production of aperture photometry catalogues ready to be used for further analysis. A series of semi-automated quality control criteria are included in the pipeline at various levels, from pre-reduction, to aperture photometry, to light-curve production and analysis.
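The actual pipeline is not described in detail here; as a hedged illustration of its aperture-photometry step, the sketch below detects point sources on a calibrated frame and produces an aperture catalogue with astropy/photutils. The function name and parameter choices (fwhm, detection threshold, aperture radius) are assumptions for the example, not the thesis settings.

```python
import numpy as np
from astropy.stats import sigma_clipped_stats
from photutils.detection import DAOStarFinder
from photutils.aperture import CircularAperture, aperture_photometry

def aperture_catalogue(image, fwhm=4.0, nsigma=5.0, r_aper=8.0):
    """Source detection and fixed-aperture photometry on a pre-reduced frame."""
    _, median, std = sigma_clipped_stats(image, sigma=3.0)     # robust background estimate
    finder = DAOStarFinder(fwhm=fwhm, threshold=nsigma * std)
    sources = finder(image - median)                           # point-source detection
    positions = np.transpose([sources['xcentroid'], sources['ycentroid']])
    apertures = CircularAperture(positions, r=r_aper)
    phot = aperture_photometry(image - median, apertures)
    phot['mag_inst'] = -2.5 * np.log10(phot['aperture_sum'])   # instrumental magnitudes
    return phot
```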
Abstract:
The PhD activity described in this document was carried out at the Microsatellite and Microsystem Laboratory of the II Faculty of Engineering, University of Bologna. The main objective is the design and development of a GNSS receiver for the orbit determination of microsatellites in low Earth orbit. The development starts from the electronic design and goes up to the implementation of the navigation algorithms, covering all the aspects involved in this type of application. The use of GPS receivers for orbit determination is a consolidated application used in many space missions, but the deployment of new GNSS systems within a few years, such as the European Galileo, the Chinese COMPASS and the modernized Russian GLONASS, poses new challenges and offers new opportunities to improve orbit determination performance. The evaluation of the improvements coming from the new systems, together with the implementation of a receiver compatible with at least one of them, constitutes the main activity of the PhD. The activities can be divided into three parts: receiver requirements definition and prototype implementation, design and analysis of the GNSS signal tracking algorithms, and design and analysis of the navigation algorithms. The receiver prototype is based on a Virtex FPGA by Xilinx and includes a PowerPC processor. The architecture follows the software-defined radio paradigm, so most of the signal processing is performed in software while only what is strictly necessary is done in hardware. The tracking algorithms are implemented as a combination of a Phase Locked Loop and a Frequency Locked Loop for the carrier, and a Delay Locked Loop with variable bandwidth for the code. The navigation algorithm is based on the extended Kalman filter and includes an accurate LEO orbit model.
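The navigation filter is only summarized above; the skeleton below shows the generic predict/update structure of an extended Kalman filter of the kind used for GNSS-based orbit determination. The state propagation f and its Jacobian (which would embed the LEO orbit model) and the measurement model h (e.g. pseudoranges) are left as user-supplied callables, so this is a sketch rather than the thesis implementation.

```python
import numpy as np

class ExtendedKalmanFilter:
    """Generic EKF skeleton; the LEO dynamics and GNSS measurement models are supplied by the caller."""

    def __init__(self, f, F_jac, h, H_jac, Q, R, x0, P0):
        self.f, self.F_jac = f, F_jac      # state propagation and its Jacobian
        self.h, self.H_jac = h, H_jac      # measurement model and its Jacobian
        self.Q, self.R = Q, R              # process / measurement noise covariances
        self.x, self.P = x0, P0            # state estimate and covariance

    def predict(self, dt):
        F = self.F_jac(self.x, dt)
        self.x = self.f(self.x, dt)                     # propagate with the orbit model
        self.P = F @ self.P @ F.T + self.Q

    def update(self, z):
        H = self.H_jac(self.x)
        y = z - self.h(self.x)                          # innovation (e.g. pseudorange residuals)
        S = H @ self.P @ H.T + self.R
        K = self.P @ H.T @ np.linalg.inv(S)             # Kalman gain
        self.x = self.x + K @ y
        self.P = (np.eye(len(self.x)) - K @ H) @ self.P
```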
Abstract:
Hypochondriasis places a considerable burden on those affected, impairs them substantially, and is also of high relevance to health policy. This creates the need to develop and evaluate effective treatment approaches. The present investigation describes the most comprehensive study to date on the efficacy of group therapeutic interventions for patients with hypochondriasis. A total of 35 patients who met the DSM-IV criteria for hypochondriasis took part in the study. The treatment consisted of eight group sessions and six individual sessions. To assess treatment outcome, standardized questionnaires and ratings by the treating therapists were collected. In addition, the patients' implicit anxiety was measured before and after treatment with the Anxiety IAT (Egloff & Schmukle, 2002). Questionnaire data were collected at four measurement points. A subgroup of the patients (n = 10) could also be assessed over a two-month waiting period. Overall, the therapy was well accepted by the patients. Over the course of treatment, the self-report measures showed extensive changes in the patients' experience and behaviour: a reduction in illness-related cognitions and fears, a decrease in illness behaviour, and an increase in knowledge about the disorder and about coping. The reduction in hypochondriacal symptoms proved to be clinically relevant. In addition, there was a reduction in general distress and anxiety as well as in depressive and somatic symptoms. The ratings of the treating therapists confirmed the findings obtained with the questionnaires. The Anxiety IAT demonstrated a change in the anxiety-related self-concept. During the waiting-list control period there were only minor reductions in hypochondriacal symptoms and no meaningful reductions in general psychopathology. The results of the combined treatment are comparable to the findings of previous evaluations of the efficacy of individual therapy for hypochondriasis. The findings underline the equivalence of more economical group therapeutic interventions in the treatment of hypochondriasis.
Abstract:
Several clinical tests have been developed to qualitatively describe complex motor tasks through functional testing, but these methods often depend on the clinician's interpretation, experience and training, which makes the assessment results inconsistent and without the precision required to objectively assess the effect of a rehabilitative intervention. A more detailed characterization is required to fully capture the various aspects of motor control and performance during complex movements of the lower and upper limbs. Cost-effective and clinically applicable instrumented tests would enable quantitative assessment of performance on a subject-specific basis, overcoming the limitations due to the lack of objectivity of individual judgment and possibly disclosing subtle alterations that are not clearly visible to the observer. Postural motion measurements at additional locations, such as the lower and upper limbs and the trunk, may be necessary in order to obtain information about inter-segmental coordination during the different functional tests used in clinical practice. With these considerations in mind, this thesis aims: i) to suggest a novel quantitative assessment tool for the kinematic and dynamic evaluation of a multi-link kinematic chain during several functional motor tasks (i.e. squat, sit-to-stand, postural sway), using one single-axis accelerometer per segment; ii) to present a novel quantitative technique for upper-limb joint kinematics estimation, considering a 3-link kinematic chain during the Fugl-Meyer Motor Assessment and using one inertial measurement unit per segment. The suggested methods could bring several benefits to clinical practice. The use of objective biomechanical measurements, provided by inertial sensor-based techniques, may help clinicians to: i) objectively track changes in motor ability, ii) provide timely feedback about the effectiveness of administered rehabilitation interventions, iii) enable intervention strategies to be modified or changed if found to be ineffective, and iv) speed up the experimental sessions when several subjects are asked to perform different functional tests.
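As a hedged sketch of what a single-axis accelerometer per segment can provide, the snippet below estimates segment inclination under a quasi-static assumption (gravity dominates the measured axis) with a simple one-pole low-pass filter. The cut-off and sampling rate are illustrative; this is a textbook estimate, not the method developed in the thesis.

```python
import numpy as np

def segment_inclination(acc_axis, g=9.81, fc=0.25, fs=100.0):
    """Quasi-static inclination [deg] of a body segment from one accelerometer axis.

    Under slow movements the measured acceleration along the segment-mounted
    sensitive axis is dominated by gravity, so theta ~ arcsin(a / g). A one-pole
    low-pass filter reduces motion artefacts. Illustrative only.
    """
    alpha = np.exp(-2 * np.pi * fc / fs)         # low-pass filter coefficient
    filtered = np.empty(len(acc_axis), dtype=float)
    state = float(acc_axis[0])
    for i, a in enumerate(acc_axis):
        state = alpha * state + (1 - alpha) * a  # exponential smoothing
        filtered[i] = state
    ratio = np.clip(filtered / g, -1.0, 1.0)
    return np.degrees(np.arcsin(ratio))
```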
Abstract:
The optical properties and the surface enhancement effect of rough metal surfaces and nanoparticles have been discussed extensively in the literature for the infrared region of the spectrum. In principle there are two different strategies for preparing such surfaces: either the nanoparticles are first synthesized ex situ, or they are produced and grown in situ. Both approaches were tested here, and it turned out that only the in-situ synthesis of the gold nanoparticles yields nanostructured surfaces that are electronically conductive, not too rough to allow membrane formation, and at the same time show an optimal surface enhancement effect. Although the in-situ synthesis does not produce nanoparticles of ideal shape, they nevertheless behave according to the theory of the surface enhancement effect. In this work, optimizing the shape and size of the nanoparticles led to an optimization of the enhancement effect. Such optimized surfaces could be reproduced easily and are characterized by high stability. The surface enhancement factor obtained in this way is 128 in absolute terms, compared with the coated ATR crystal without nanoparticles, or about 6 times that of the surface used in our group until now. Spectra with a considerably better signal-to-noise ratio (SNR) can therefore be obtained, which significantly simplifies and shortens the evaluation and processing of the spectra. After optimizing the metal surface and the measurement parameters using cytochrome c as an example, work proceeded on the surface coverage with the considerably larger cytochrome c oxidase (CcO). For this purpose the DTNTA linker was synthesized ex situ. Mixed self-assembled monolayers of DTNTA and DTP were then prepared. The NTA functionality is responsible for binding the CcO via the his-tag technology. The criteria for an optimal linker concentration were the electrical parameters of the layer before and after reconstitution into a lipid membrane, as well as the electron transfer rates determined by electrochemical measurements. Only with this optimized system, which works reliably and reproducibly, could further measurements on the CcO be started. From electrochemical measurements it was known that direct electron transfer under oxygen saturation can bring the CcO into an activated state. This activated state is characterized by a shift of the redox potentials of about 400 mV with respect to the redox potential known from equilibrium titrations. SEIRAS showed that the reduction and oxidation of all redox centres indeed take place at the potentials measured in cyclic voltammetry. Moreover, the SEIRA spectra revealed that direct electron transfer causes substantial conformational changes within the protein. Until now, based on mediator-assisted electron transfer, it had been assumed that only minimal conformational changes are involved. Above all, the activated and non-activated states of cytochrome c oxidase could be demonstrated spectroscopically for the first time.
Abstract:
This thesis proposes an integrated, holistic approach to the study of neuromuscular fatigue, in order to encompass all the causes and all the consequences underlying the phenomenon. Starting from the metabolic processes occurring at the cellular level, the reader is guided toward the physiological changes at the motor neuron and motor unit level and from these to the more general biomechanical alterations. Chapter 1 reports a list of the various definitions of fatigue spanning several contexts. In Chapter 2, the electrophysiological changes in terms of motor unit behavior and descending neural drive to the muscle are studied extensively, as well as the biomechanical adaptations they induce. Chapter 3 reports a study based on the observation of temporal features extracted from sEMG signals, which highlighted the need for a more robust and reliable indicator during fatiguing tasks. Therefore, in Chapter 4, a novel bi-dimensional parameter is proposed. The study of sEMG-based indicators also opened a scenario on the neurophysiological mechanisms underlying fatigue. For this purpose, Chapter 5 presents a protocol designed for the analysis of motor unit-related parameters during prolonged fatiguing contractions. In particular, two methodologies have been applied to multichannel sEMG recordings of isometric contractions of the Tibialis Anterior muscle: the state-of-the-art technique for sEMG decomposition and a coherence analysis on motor unit spike trains. The importance of a multi-scale approach is finally highlighted in the context of the evaluation of cycling performance, where fatigue is one of the limiting factors. In particular, the last chapter of this thesis can be considered as a paradigm: physiological, metabolic, environmental, psychological and biomechanical factors influence the performance of a cyclist, and only when all of these are considered together in a novel, integrative way is it possible to derive a clear model and make correct assessments.
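As an illustration of the second methodology, the sketch below builds pooled binary spike trains from motor unit discharge times and shows how their coherence could be computed with scipy. Here `group_a` and `group_b` stand for hypothetical outputs of the sEMG decomposition and are not defined, so the final call is left commented out; the sampling rate and window length are assumptions.

```python
import numpy as np
from scipy.signal import coherence

def pooled_spike_train(firing_times_list, fs=2048, duration=30.0):
    """Binary (0/1) pooled spike train from a list of motor unit discharge times [s]."""
    n = int(duration * fs)
    train = np.zeros(n)
    for times in firing_times_list:
        idx = np.clip((np.asarray(times) * fs).astype(int), 0, n - 1)
        train[idx] = 1.0
    return train

# Illustrative use: coherence between two pooled groups of decomposed motor units,
# a common proxy for shared synaptic input during fatiguing contractions.
# f, Cxy = coherence(pooled_spike_train(group_a), pooled_spike_train(group_b),
#                    fs=2048, nperseg=2048)
```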
Abstract:
Nowadays the rise of non-recurring engineering (NRE) costs associated with complexity is becoming a major factor in SoC design, limiting both scaling opportunities and the flexibility advantages offered by the integration of complex computational units. The introduction of embedded programmable elements can represent an appealing solution, able both to guarantee the desired flexibility and upgradability and to widen the SoC market. In particular, embedded FPGA (eFPGA) cores can provide bit-level optimization for those applications that benefit from synthesis, paying on the other side in terms of performance penalties and area overhead with respect to standard-cell ASIC implementations. In this scenario, this thesis proposes a design methodology for a synthesizable programmable device designed to be embedded in a SoC. A soft-core embedded FPGA (eFPGA) is hence presented and analyzed in terms of the opportunities given by a fully synthesizable approach, following an implementation flow based on standard-cell methodology. A key point of the proposed eFPGA template is that it adopts a Multi-Stage Switching Network (MSSN) as the foundation of the programmable interconnects, since it can be efficiently synthesized and optimized through a standard-cell-based implementation flow, ensuring at the same time an intrinsically congestion-free network topology. The evaluation of the flexibility potential of the eFPGA has been performed using different technology libraries (STMicroelectronics CMOS 65nm and BCD9s 0.11μm) through a design space exploration in terms of area-speed-leakage tradeoffs, enabled by the full synthesizability of the template. Since the most relevant disadvantage of the adopted soft approach, compared to a hard core, is the increased performance overhead, the eFPGA analysis has been carried out targeting small area budgets. The generation of the configuration bitstream has been obtained thanks to the implementation of a custom CAD flow environment, which has allowed functional verification and performance evaluation through an application-aware analysis.
Abstract:
Classic group recommender systems focus on providing suggestions for a fixed group of people. Our work tries to give an inside look at designing a new recommender system that is capable of making suggestions for a sequence of activities, dividing people into subgroups, in order to boost overall group satisfaction. However, this idea increases the problem complexity along several dimensions and poses a great challenge to the algorithm's performance. To assess its effectiveness, given the enhanced complexity and the need for precise problem solving, we implemented an experimental system using data collected from a variety of web services concerning the city of Paris. The system recommends activities to a group of users using two different approaches: Local Search and Constraint Programming. The general results show that the number of subgroups can significantly influence the Constraint Programming approach's computational time and efficacy. Generally, Local Search can find results much more quickly than Constraint Programming. Over a lengthy period of time, Local Search performs better than Constraint Programming, with similar final results.
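To make the Local Search idea concrete, the toy sketch below performs a capacitated user-to-activity assignment by random improving moves. It is a simplified stand-in: the thesis system plans sequences of activities and subgroup splits, which are omitted here, and all inputs (`satisfaction`, `capacity`) are hypothetical.

```python
import random

def local_search(users, activities, satisfaction, capacity, iters=10_000, seed=0):
    """Toy local search for a capacitated group-activity assignment.

    satisfaction: dict (user, activity) -> score; capacity: dict activity -> max users.
    Moves a random user to another activity whenever a seat is free and the
    summed satisfaction improves.
    """
    rng = random.Random(seed)
    assign, load = {}, {a: 0 for a in activities}
    for u in users:                                   # balanced greedy initial assignment
        a = min(load, key=load.get)
        assign[u], load[a] = a, load[a] + 1
    total = sum(satisfaction[(u, a)] for u, a in assign.items())
    for _ in range(iters):
        u, a_new = rng.choice(users), rng.choice(activities)
        a_old = assign[u]
        if a_new == a_old or load[a_new] >= capacity[a_new]:
            continue                                  # infeasible or no-op move
        gain = satisfaction[(u, a_new)] - satisfaction[(u, a_old)]
        if gain > 0:                                  # accept only improving moves
            assign[u], total = a_new, total + gain
            load[a_old] -= 1
            load[a_new] += 1
    return assign, total
```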
Abstract:
Screening people without symptoms of disease is an attractive idea. Screening allows early detection of disease or elevated risk of disease, and has the potential for improved treatment and reduction of mortality. The list of future screening opportunities is set to grow because of the refinement of screening techniques, the increasing frequency of degenerative and chronic diseases, and the steadily growing body of evidence on genetic predispositions for various diseases. But how should we decide on the diseases for which screening should be done and on recommendations for how it should be implemented? We use the examples of prostate cancer and genetic screening to show the importance of considering screening as an ongoing population-based intervention with beneficial and harmful effects, and not simply the use of a test. Assessing whether screening should be recommended and implemented for any named disease is therefore a multi-dimensional task in health technology assessment. There are several countries that already use established processes and criteria to assess the appropriateness of screening. We argue that the Swiss healthcare system needs a nationwide screening commission mandated to conduct appropriate evidence-based evaluation of the impact of proposed screening interventions, to issue evidence-based recommendations, and to monitor the performance of screening programmes introduced. Without explicit processes there is a danger that beneficial screening programmes could be neglected and that ineffective, and potentially harmful, screening procedures could be introduced.
Abstract:
The multi-target screening method described in this work allows the simultaneous detection and identification of 700 drugs and metabolites in biological fluids using a hybrid triple-quadrupole linear ion trap mass spectrometer in a single analytical run. After standardization of the method, the retention times of 700 compounds were determined and transitions for each compound were selected by a "scheduled" survey MRM scan, followed by an information-dependent acquisition using the sensitive enhanced product ion scan of a Q TRAP hybrid instrument. The identification of the compounds in the samples analyzed was accomplished by searching the tandem mass spectrometry (MS/MS) spectra against the library we developed, which contains electrospray ionization-MS/MS spectra of over 1,250 compounds. The multi-target screening method together with the library was included in a software program for routine screening and quantitation to achieve automated acquisition and library searching. With the help of this software application, the time for evaluation and interpretation of the results could be drastically reduced. This new multi-target screening method has been successfully applied for the analysis of postmortem and traffic offense samples as well as proficiency testing, and complements screening with immunoassays, gas chromatography-mass spectrometry, and liquid chromatography-diode-array detection. Other possible applications are analysis in clinical toxicology (for intoxication cases), in psychiatry (antidepressants and other psychoactive drugs), and in forensic toxicology (drugs and driving, workplace drug testing, oral fluid analysis, drug-facilitated sexual assault).
Abstract:
Introduction Acute hemodynamic instability increases morbidity and mortality. We investigated whether early non-invasive cardiac output monitoring enhances hemodynamic stabilization and improves outcome. Methods A multicenter, randomized controlled trial was conducted in three European university hospital intensive care units in 2006 and 2007. A total of 388 hemodynamically unstable patients identified during their first six hours in the intensive care unit (ICU) were randomized to receive either non-invasive cardiac output monitoring for 24 hrs (minimally invasive cardiac output/MICO group; n = 201) or usual care (control group; n = 187). The main outcome measure was the proportion of patients achieving hemodynamic stability within six hours of starting the study. Results The number of hemodynamic instability criteria at baseline (MICO group mean 2.0 (SD 1.0), control group 1.8 (1.0); P = .06) and severity of illness (SAPS II score; MICO group 48 (18), control group 48 (15); P = .86) were similar. At 6 hrs, 45 patients (22%) in the MICO group and 52 patients (28%) in the control group were hemodynamically stable (mean difference 5%; 95% confidence interval of the difference -3 to 14%; P = .24). Hemodynamic support with fluids and vasoactive drugs, and pulmonary artery catheter use (MICO group: 19%, control group: 26%; P = .11) were similar in the two groups. The median length of ICU stay was 2.0 (interquartile range 1.2 to 4.6) days in the MICO group and 2.5 (1.1 to 5.0) days in the control group (P = .38). The hospital mortality was 26% in the MICO group and 21% in the control group (P = .34). Conclusions Minimally invasive cardiac output monitoring added to usual care does not facilitate early hemodynamic stabilization in the ICU, nor does it alter the hemodynamic support or outcome. Our results emphasize the need to evaluate technologies used to measure stroke volume and cardiac output, especially their impact on the process of care, before any large-scale outcome studies are attempted.
Abstract:
The clinical validity of at-risk criteria of psychosis has been questioned based on epidemiological studies that have reported much higher prevalence and annual incidence rates of psychotic-like experiences (PLEs, as assessed by either self-rating questionnaires or layperson interviews) in the general population than of the clinical phenotype of psychotic disorders (van Os et al., 2009). Thus, it is unclear whether “current at-risk criteria reflect behaviors so common among adolescents and young adults that a valid distinction between ill and non-ill persons is difficult” (Carpenter, 2009). We therefore assessed the 3-month prevalence of at-risk criteria by means of telephone interviews in a randomly drawn general population sample from the at-risk age segment (age 16–35 years) in the Canton of Bern, Switzerland. Eighty-five of 102 subjects had valid phone numbers; 21 of these subjects refused (although 6 of them signaled willingness to participate at a later time) and 4 could not be contacted. Sixty subjects (71% of the enrollment fraction) participated. Two participants met exclusion criteria (one for being psychotic, one for lack of language skills). Twenty-two at-risk symptoms were assessed for their prevalence and severity within the 3 months prior to the interview by trained clinical raters using (i) the Structured Interview for Prodromal Syndromes (SIPS; Miller et al., 2002) for the evaluation of 5 attenuated psychotic and 3 brief limited intermittent psychotic symptoms (APS, BLIPS) as well as the state-trait criteria of the ultra-high-risk (UHR) criteria and (ii) the Schizophrenia Proneness Instrument, Adult version (SPI-A; Schultze-Lutter et al., 2007) for the evaluation of the 14 basic symptoms included in COPER and COGDIS (Schultze-Lutter et al., 2008). Further, psychiatric axis I diagnoses were assessed by means of the Mini-International Neuropsychiatric Interview, M.I.N.I. (Sheehan et al., 1998), and psychosocial functioning by the Social and Occupational Functioning Assessment Scale (SOFAS; APA, 1994). All interviewees felt ‘rather’ or ‘very’ comfortable with the interview. Of the 58 included subjects, only 1 (2%) fulfilled APS criteria, reporting the attenuated, non-delusional idea of his mind being literally read by others at a frequency of 2–3 times a week that had newly occurred 6 weeks earlier. BLIPS, COPER, COGDIS or state-trait UHR criteria were not reported. Yet twelve subjects (21%) described sub-threshold at-risk symptoms: 7 (12%) reported APS-relevant symptoms but did not meet the time/frequency criteria of APS, and 9 (16%) reported COPER- and/or COGDIS-relevant basic symptoms but at an insufficient frequency or as a trait lacking an increase in severity; 4 of these 12 subjects reported both sub-threshold APS and sub-threshold basic symptoms. Table 1 displays the type and frequency of the sub-threshold at-risk symptoms.
Abstract:
Objectives: Previous research conducted in the late 1980s suggested that vehicle impacts following an initial barrier collision increase severe occupant injury risk. Now over 25 years old, those data are no longer representative of the currently installed barriers or the present US vehicle fleet. The purpose of this study is to provide a present-day assessment of secondary collisions and to determine whether current full-scale barrier crash testing criteria provide an indication of secondary collision risk for real-world barrier crashes. Methods: To characterize secondary collisions, 1,363 (596,331 weighted) real-world barrier midsection impacts selected from 13 years (1997-2009) of in-depth crash data available through the National Automotive Sampling System (NASS) / Crashworthiness Data System (CDS) were analyzed. Scene diagrams and available scene photographs were used to determine roadside- and barrier-specific variables unavailable in NASS/CDS. Binary logistic regression models were developed for second-event occurrence and resulting driver injury. To investigate current secondary collision crash test criteria, 24 full-scale crash test reports were obtained for common non-proprietary US barriers, and the risk of secondary collisions was determined using recommended evaluation criteria from National Cooperative Highway Research Program (NCHRP) Report 350. Results: Secondary collisions were found to occur in approximately two-thirds of crashes where a barrier is the first object struck. Barrier lateral stiffness, post-impact vehicle trajectory, vehicle type, and pre-impact tracking conditions were found to be statistically significant contributors to secondary event occurrence. The presence of a second event was found to increase the likelihood of a serious driver injury by a factor of 7 compared to cases with no second event present. The NCHRP Report 350 exit angle criterion was found to underestimate the risk of secondary collisions in real-world barrier crashes. Conclusions: Consistent with previous research, collisions following a barrier impact are not an infrequent event and substantially increase driver injury risk. The results suggest that using exit-angle-based crash test criteria alone to assess secondary collision risk is not sufficient to predict second collision occurrence for real-world barrier crashes.
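A hedged sketch of the kind of binary logistic model described in the Methods is given below, using statsmodels. The data frame and covariate names are hypothetical, and the NASS/CDS survey weights are ignored here; a real analysis would use a weighted or survey-design estimator.

```python
import pandas as pd
import statsmodels.api as sm

def fit_second_event_model(crashes: pd.DataFrame):
    """Logistic regression of second-event occurrence on barrier/vehicle covariates (illustrative)."""
    X = pd.get_dummies(
        crashes[["barrier_stiffness", "vehicle_type", "tracking_pre_impact"]],
        drop_first=True,
    ).astype(float)
    X = sm.add_constant(X)                 # intercept term
    y = crashes["second_event"]            # 1 if a secondary collision occurred
    model = sm.Logit(y, X).fit(disp=False)
    return model                           # model.params are log-odds; exponentiate for odds ratios
```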