919 results for Superiority and Inferiority Multi-criteria Ranking (SIR) Method


Relevance:

100.00%

Publisher:

Abstract:

Background. The prevalence of early childhood caries (ECC) is high in developing countries; thus, sensitive methods for the early diagnosis of ECC are of prime importance to implement appropriate preventive measures. Aim. To investigate the effects of adding early caries lesions (ECL) to the WHO threshold caries detection methods on the prevalence of caries in primary teeth and on the epidemiological profile of the studied population. Design. In total, 351 3- to 4-year-old preschoolers participated in this cross-sectional study. Clinical exams were conducted by one calibrated examiner using the WHO and WHO + ECL criteria. During the exams, a mirror, a ball-ended probe, gauze, and an artificial light were used. The data were analysed by Wilcoxon and McNemar's tests (α = 0.05). Results. Good intra-examiner Kappa values at the tooth/surface levels were obtained for the WHO and WHO + ECL criteria (0.93/0.87 and 0.75/0.78, respectively). The dmfs scores were significantly higher (P < 0.05) when the WHO + ECL criteria were used. ECLs were the predominant caries lesions in the majority of teeth. Conclusions. The results strongly suggest that the WHO + ECL diagnostic method could be used to identify ECL in young children under field conditions, increasing the prevalence and classification of caries activity and providing valuable information for the early establishment of preventive measures.
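The intra-examiner agreement figures above are Cohen's Kappa values; as a minimal illustration of how such a statistic is computed (the repeat ratings below are invented, not study data):

```python
# Cohen's kappa: agreement between two sets of repeat ratings,
# corrected for the agreement expected by chance.

def cohens_kappa(r1, r2):
    n = len(r1)
    cats = set(r1) | set(r2)
    po = sum(a == b for a, b in zip(r1, r2)) / n          # observed agreement
    pe = sum((r1.count(c) / n) * (r2.count(c) / n) for c in cats)  # chance
    return (po - pe) / (1 - pe)

exam1 = ["sound", "carious", "carious", "sound", "sound", "carious"]
exam2 = ["sound", "carious", "sound", "sound", "sound", "carious"]
print(round(cohens_kappa(exam1, exam2), 2))  # → 0.67
```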

Relevance:

100.00%

Publisher:

Abstract:

This work presents exact, hybrid algorithms for mixed resource Allocation and Scheduling problems; in general terms, these consist of assigning finite-capacity resources over time to a set of precedence-connected activities. The proposed methods have broad applicability, but are mainly motivated by applications in the field of Embedded System Design. In particular, high-performance embedded computing has recently witnessed the shift from single-CPU platforms with application-specific accelerators to programmable Multi-Processor Systems-on-Chip (MPSoCs). These allow higher flexibility, real-time performance and low energy consumption, but the programmer must be able to effectively exploit the platform parallelism. This raises interest in the development of algorithmic techniques to be embedded in CAD tools; in particular, given a specific application and platform, the objective is to perform optimal allocation of hardware resources and to compute an execution schedule. In this regard, since embedded systems tend to run the same set of applications for their entire lifetime, off-line, exact optimization approaches are particularly appealing. Quite surprisingly, the use of exact algorithms has not been well investigated so far; this is in part explained by the complexity of integrated allocation and scheduling, which sets tough challenges for "pure" combinatorial methods. The use of hybrid CP/OR approaches presents the opportunity to exploit the mutual advantages of different methods while compensating for their weaknesses. In this work, we first consider an Allocation and Scheduling problem over the Cell BE processor by Sony, IBM and Toshiba; we propose three different solution methods, leveraging decomposition, cut generation and heuristic-guided search.
Next, we face Allocation and Scheduling of so-called Conditional Task Graphs, explicitly accounting for branches whose outcome is not known at design time; we extend the CP scheduling framework to effectively deal with the introduced stochastic elements. Finally, we address Allocation and Scheduling with uncertain, bounded execution times via conflict-based tree search; we introduce a simple and flexible time model to take duration variability into account and provide an efficient conflict detection method. The proposed approaches achieve good results on practical-size problems, demonstrating that the use of exact approaches for system design is feasible. Furthermore, the developed techniques bring significant contributions to combinatorial optimization methods.
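As a toy illustration of the problem shape described above (precedence-connected activities competing for a finite-capacity resource), the sketch below builds a feasible schedule with a simple serial list-scheduling heuristic; it is not one of the exact hybrid CP/OR methods of the work, and all task names, durations and capacities are invented:

```python
def list_schedule(tasks, precedences, capacity):
    """Greedy earliest-start schedule on one finite-capacity resource.

    tasks: {name: (duration, demand)}; precedences: (before, after) pairs.
    """
    preds = {t: set() for t in tasks}
    for a, b in precedences:
        preds[b].add(a)
    start, finish = {}, {}
    usage = {}  # usage[t] = resource units in use during slot [t, t+1)
    remaining = set(tasks)
    while remaining:
        # pick any task whose predecessors are all scheduled
        ready = next(t for t in sorted(remaining) if preds[t] <= finish.keys())
        dur, dem = tasks[ready]
        s = max((finish[p] for p in preds[ready]), default=0)
        # slide right until the resource has room for the whole duration
        while any(usage.get(s + k, 0) + dem > capacity for k in range(dur)):
            s += 1
        for k in range(dur):
            usage[s + k] = usage.get(s + k, 0) + dem
        start[ready], finish[ready] = s, s + dur
        remaining.remove(ready)
    return start, finish

start, finish = list_schedule({"A": (2, 1), "B": (3, 1), "C": (2, 2)},
                              [("A", "C")], capacity=2)
print(start, max(finish.values()))  # schedule and makespan
```

An exact method would instead explore the search space exhaustively (e.g. via branch-and-bound over interval variables); the heuristic only shows what a solution to such an instance looks like.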

Relevance:

100.00%

Publisher:

Abstract:

This thesis deals with the design of advanced OFDM systems; both waveform and receiver design are treated. The main scope of the thesis is to study, create, and propose ideas and novel design solutions able to cope with the weaknesses and crucial aspects of modern OFDM systems. Starting from the transmitter side, the problem represented by low resilience to non-linear distortion has been addressed. A novel technique has been proposed that considerably reduces the Peak-to-Average Power Ratio (PAPR), yielding a quasi-constant signal envelope in the time domain (PAPR close to 1 dB). The proposed technique, named Rotation Invariant Subcarrier Mapping (RISM), is a novel scheme for subcarrier data mapping, where the symbols belonging to the modulation alphabet are not anchored but maintain some degrees of freedom. In other words, a bit tuple is not mapped onto a single point; rather, it is mapped onto a geometrical locus which is totally or partially rotation invariant. The final positions of the transmitted complex symbols are chosen by an iterative optimization process in order to minimize the PAPR of the resulting OFDM symbol. Numerical results confirm that RISM makes OFDM usable even in severely non-linear channels. Another well-known problem which has been tackled is the vulnerability to synchronization errors. Indeed, in an OFDM system an accurate recovery of carrier frequency and symbol timing is crucial for the proper demodulation of the received packets. In general, timing and frequency synchronization is performed in two separate phases, called PRE-FFT and POST-FFT synchronization. Regarding the PRE-FFT phase, a novel joint symbol timing and carrier frequency synchronization algorithm has been presented. The proposed algorithm is characterized by a very low hardware complexity and, at the same time, guarantees very good performance in both AWGN and multipath channels.
Regarding the POST-FFT phase, a novel approach to both pilot structure and receiver design has been presented. In particular, a novel pilot pattern has been introduced in order to minimize the occurrence of overlaps between two pattern-shifted replicas. This allows conventional pilots to be replaced with nulls in the frequency domain, introducing the so-called Silent Pilots. As a result, the optimal receiver turns out to be very robust against severe Rayleigh-fading multipath and is characterized by low complexity. The performance of this approach has been evaluated analytically and numerically. Comparing the proposed approach with state-of-the-art alternatives, in both AWGN and multipath fading channels, considerable performance improvements have been obtained. The crucial problem of channel estimation has been thoroughly investigated, with particular emphasis on the decimation of the Channel Impulse Response (CIR) through the selection of the Most Significant Samples (MSSs). In this context our contribution is twofold: on the theoretical side, we derived lower bounds on the estimation mean-square error (MSE) performance for any MSS selection strategy; on the receiver-design side, we proposed novel MSS selection strategies which have been shown to approach these MSE lower bounds and to outperform the state-of-the-art alternatives. Finally, the possibility of using Single Carrier Frequency Division Multiple Access (SC-FDMA) in the broadband satellite return channel has been assessed. Notably, SC-FDMA is able to improve the physical-layer spectral efficiency with respect to the single-carrier systems which have been used so far in the Return Channel Satellite (RCS) standards. However, it requires strict synchronization and is also sensitive to the phase noise of local radio-frequency oscillators. For this reason, an effective pilot tone arrangement within the SC-FDMA frame and a novel Joint Multi-User (JMU) estimation method for SC-FDMA have been proposed.
As shown by numerical results, the proposed scheme manages to satisfy strict synchronization requirements and to guarantee a proper demodulation of the received signal.
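The PAPR metric that RISM targets can be computed directly from the time-domain OFDM symbol: the ratio of peak to average power after the inverse DFT. A minimal sketch follows; the subcarrier count and QPSK mapping are illustrative assumptions, and this is the plain metric, not the RISM optimization:

```python
import cmath
import math
import random

def papr_db(subcarriers):
    """PAPR in dB of one OFDM symbol, via a naive inverse DFT."""
    n = len(subcarriers)
    x = [sum(subcarriers[k] * cmath.exp(2j * math.pi * k * t / n)
             for k in range(n)) / n
         for t in range(n)]                      # time-domain samples
    powers = [abs(s) ** 2 for s in x]
    return 10 * math.log10(max(powers) / (sum(powers) / n))

random.seed(0)
qpsk = [complex(random.choice((-1, 1)), random.choice((-1, 1))) / math.sqrt(2)
        for _ in range(64)]                      # 64 random QPSK symbols
print(f"PAPR = {papr_db(qpsk):.2f} dB")
```

A single active subcarrier gives a constant envelope and hence 0 dB; random QPSK loading over many subcarriers yields the large PAPR values that motivate techniques such as RISM.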

Relevance:

100.00%

Publisher:

Abstract:

The proposed research aims to define and test a method for an articulated and systematic reading of the rural territory which, besides broadening knowledge of the territory, supports landscape and urban planning processes and the implementation of agricultural and rural development policies. An in-depth review of the state of the art concerning the evolution of the urbanization process and its consequences in Italy and Europe, as well as of the framework of local territorial policies on the specific theme of rural and periurban space, together with a detailed analysis of the main territorial analysis methodologies in the literature, made it possible to determine the concept underlying the research. A multi-criteria, multi-level methodology for reading the rural territory was developed and tested in a GIS environment; it relies on clustering algorithms (such as the IsoCluster algorithm) and maximum likelihood classification, focusing on periurban agricultural spaces. The method describes the territory by reading several of its components, such as the agro-environmental and socio-economic ones, and synthesizes them through an interpretative key developed for this purpose, the Agro-environmental Footprint (AEF), which aims to quantify the potential impact of rural spaces on the urban system. In particular, the goal of this tool is to identify, within the extra-urban territory, areas with homogeneous characteristics through a reading of the territory at different scales (from the territorial scale to the farm scale), in order to arrive at a classification and hence at a definition of the areas classifiable as "periurban agricultural".
The thesis presents the overall architecture of the methodology, describes the levels of analysis that compose it, and reports its testing and validation through a representative case study located in the Pianura Padana (Po Valley), Italy.
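The maximum likelihood classification step mentioned above can be sketched minimally: each land class is modelled as a Gaussian fitted to training samples, and an observation is assigned to the most likely class. Class names and index values below are invented, and a real GIS workflow would be multivariate over several raster layers:

```python
import math

def fit_gaussian(samples):
    """Estimate (mean, std) of a class from training samples."""
    m = sum(samples) / len(samples)
    var = sum((s - m) ** 2 for s in samples) / len(samples)
    return m, math.sqrt(var)

def log_likelihood(x, mean, std):
    return -math.log(std * math.sqrt(2 * math.pi)) - (x - mean) ** 2 / (2 * std ** 2)

def classify(x, classes):
    """Assign x to the class with the highest Gaussian likelihood."""
    return max(classes, key=lambda c: log_likelihood(x, *classes[c]))

training = {"urban": [0.9, 1.0, 1.1], "agricultural": [0.2, 0.3, 0.25]}
classes = {name: fit_gaussian(vals) for name, vals in training.items()}
print(classify(0.28, classes))  # → agricultural
```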

Relevance:

100.00%

Publisher:

Abstract:

Eosinophilia is an important indicator of various neoplastic and nonneoplastic conditions. Depending on the underlying disease and mechanisms, eosinophil infiltration can lead to organ dysfunction, clinical symptoms, or both. During the past 2 decades, several different classifications of eosinophilic disorders and related syndromes have been proposed in various fields of medicine. Although criteria and definitions are, in part, overlapping, no global consensus has been presented to date. The Year 2011 Working Conference on Eosinophil Disorders and Syndromes was organized to update and refine the criteria and definitions for eosinophilic disorders and to merge prior classifications in a contemporary multidisciplinary schema. A panel of experts from the fields of immunology, allergy, hematology, and pathology contributed to this project. The expert group agreed on unifying terminologies and criteria and a classification that delineates various forms of hypereosinophilia, including primary and secondary variants based on specific hematologic and immunologic conditions, and various forms of the hypereosinophilic syndrome. For patients in whom no underlying disease or hypereosinophilic syndrome is found, the term hypereosinophilia of undetermined significance is introduced. The proposed novel criteria, definitions, and terminologies should assist in daily practice, as well as in the preparation and conduct of clinical trials.

Relevance:

100.00%

Publisher:

Abstract:

Background Basic symptom (BS) criteria have been suggested to complement ultra-high-risk (UHR) criteria in the early detection of psychosis in adults and in children and adolescents. To account for potential developmental particularities and a different clustering of BS in children and adolescents, the Schizophrenia Proneness Instrument, Child and Youth version (SPI-CY) was developed. Aims The SPI-CY was evaluated for its practicability and discriminative validity. Method The SPI-CY was administered to 3 groups of children and adolescents (mean age 16; range = 8-18; 61% male): 23 at-risk patients meeting UHR and/or BS criteria (AtRisk), 22 clinical controls (CC), and 19 children and adolescents from the general population (GPS) matched to AtRisk in age, gender, and education. We expected AtRisk to score highest on the SPI-CY, and GPS lowest. Results The groups differed significantly on all 4 SPI-CY subscales. Pairwise post-hoc comparisons confirmed our expectations for all subscales and, at least on a descriptive level, most items. Pairwise subscale differences indicated at least moderate group effects (r ≥ 0.37), which were largest for Adynamia (0.52 ≤ r ≤ 0.70). Adynamia also showed excellent to outstanding performance in ROC analyses (0.813 ≤ AUC ≤ 0.981). Conclusion The SPI-CY could be a helpful tool for detecting and assessing BS in the psychosis spectrum in children and adolescents, by whom it was well received. Furthermore, its subscales possess good discriminative validity. However, these results require validation in a larger sample, and the psychosis-predictive ability of the subscales in different age groups, especially the role of Adynamia, will have to be explored in longitudinal studies.
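The ROC AUC figures above summarize how well a subscale score separates two groups: the AUC equals the probability that a randomly drawn at-risk score exceeds a randomly drawn control score (the Mann-Whitney statistic). A minimal sketch with invented score lists:

```python
def auc(pos, neg):
    """ROC AUC as the Mann-Whitney win fraction (ties count half)."""
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

at_risk = [14, 11, 9, 13, 10]   # invented subscale scores
controls = [4, 7, 3, 9, 5]
print(auc(at_risk, controls))   # → 0.98
```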

Relevance:

100.00%

Publisher:

Abstract:

To mitigate greenhouse gas (GHG) emissions and reduce U.S. dependence on imported oil, the United States (U.S.) is pursuing several options to create biofuels from renewable woody biomass (hereafter referred to as "biomass"). Because of the distributed nature of biomass feedstock, the cost and complexity of biomass recovery operations pose significant challenges that hinder increased biomass utilization for energy production. To facilitate the exploration of a wide variety of conditions that promise profitable biomass utilization and to tap unused forest residues, it is proposed to develop biofuel supply chain models based on optimization and simulation approaches. The biofuel supply chain is structured around four components: biofuel facility locations and sizes, biomass harvesting/forwarding, transportation, and storage. A Geographic Information System (GIS) based approach is proposed as a first step for selecting potential facility locations for biofuel production from forest biomass, based on a set of evaluation criteria such as accessibility to biomass, the railway/road transportation network, water bodies, and the workforce. The development of optimization and simulation models is also proposed. The results of the models will be used to determine (1) the number, location, and size of the biofuel facilities, and (2) the amounts of biomass to be transported between the harvesting areas and the biofuel facilities over a 20-year timeframe. The multi-criteria objective is to simultaneously minimize the weighted sum of the delivered feedstock cost, energy consumption, and GHG emissions. Finally, a series of sensitivity analyses will be conducted to identify the sensitivity of the decisions, such as the optimal site selected for the biofuel facility, to changes in influential parameters, such as biomass availability and transportation fuel price.
Intellectual Merit The proposed research will facilitate the exploration of a wide variety of conditions that promise profitable biomass utilization in the renewable biofuel industry. The GIS-based facility location analysis considers a series of factors which have not been considered simultaneously in previous research. Location analysis is critical to the financial success of producing biofuel. The modeling of woody biomass supply chains using both optimization and simulation, combined with the GIS-based approach as a precursor, has not been done to date. The optimization and simulation models can help to ensure the economic and environmental viability and sustainability of the entire biofuel supply chain at both the strategic design level and the operational planning level. Broader Impacts The proposed models for biorefineries can be applied to other types of manufacturing or processing operations using biomass. This is because the biomass feedstock supply chain is similar, if not identical, for biorefineries, biomass-fired or co-fired power plants, and torrefaction/pelletization operations. Additionally, the results of this research will continue to be disseminated internationally through publications in journals such as Biomass and Bioenergy and Renewable Energy, and through presentations at conferences such as the 2011 Industrial Engineering Research Conference. For example, part of the research work related to biofuel facility identification has been published: Zhang, Johnson and Sutherland [2011] (see Appendix A). There will also be opportunities for the Michigan Tech campus community to learn about the research through the Sustainable Future Institute.
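The weighted-sum multi-criteria objective described above can be sketched for the facility-siting decision; all site names, metric values, and weights are invented for illustration, and criteria are min-max normalized so the weights act on comparable scales:

```python
sites = {
    # site: (cost $/ton, energy MJ/ton, GHG kgCO2e/ton) -- invented
    "north": (62.0, 410.0, 33.0),
    "central": (58.0, 480.0, 30.0),
    "south": (66.0, 390.0, 36.0),
}
weights = (0.5, 0.2, 0.3)  # cost, energy, GHG -- invented

def normalise(values):
    lo, hi = min(values), max(values)
    return [(v - lo) / (hi - lo) for v in values]

names = list(sites)
columns = list(zip(*(sites[n] for n in names)))   # one tuple per criterion
norm = [normalise(col) for col in columns]
score = {n: sum(w * norm[c][i] for c, w in enumerate(weights))
         for i, n in enumerate(names)}
best = min(score, key=score.get)                  # minimise the weighted sum
print(best, round(score[best], 3))
```

The proposed models optimize over many more variables (facility count, size, flows over 20 years); this only shows how the three criteria collapse into a single objective.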

Relevance:

100.00%

Publisher:

Abstract:

With the proper application of Best Management Practices (BMPs), the impact of sediment on water bodies can be minimized. However, finding the optimal allocation of BMPs can be difficult, since there are numerous possible options. Economics also plays an important role in BMP affordability and, therefore, in the number of BMPs that can be placed in a given budget year. In this study, two methodologies are presented to determine the optimal cost-effective BMP allocation by coupling a watershed-level model, the Soil and Water Assessment Tool (SWAT), with two different methods: targeting and a multi-objective genetic algorithm (Non-dominated Sorting Genetic Algorithm II, NSGA-II). For demonstration, these two methodologies were applied to an agriculture-dominated watershed located in Lower Michigan to find the optimal allocation of filter strips and grassed waterways. For targeting, three different criteria were investigated for sediment yield minimization, during which it was found that grassed waterways near the watershed outlet reduced the watershed-outlet sediment yield the most under the study conditions; cost minimization was also included as a second objective during the cost-effective BMP allocation selection. NSGA-II was used to find the optimal BMP allocation for both sediment yield reduction and cost minimization. By comparing the results and computational time of both methodologies, targeting was determined to be the better method for finding the optimal cost-effective BMP allocation under the study conditions, since it provided more than 13 times as many solutions with better fitness for the objective functions while using less than one eighth of the SWAT computational time required by NSGA-II with 150 generations.
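The non-dominated sorting at the heart of NSGA-II rests on Pareto dominance: a BMP allocation is kept only if no other allocation is at least as good on both objectives (sediment yield, cost) and strictly better on one. A minimal sketch with invented allocation labels and objective values:

```python
def dominates(a, b):
    """True if solution a dominates b (both objectives minimised)."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def pareto_front(solutions):
    """Keep only the non-dominated solutions."""
    return {name: obj for name, obj in solutions.items()
            if not any(dominates(other, obj)
                       for o, other in solutions.items() if o != name)}

# (sediment yield t/yr, cost k$) -- invented
allocations = {"plan1": (120, 40), "plan2": (100, 55),
               "plan3": (130, 45), "plan4": (90, 80)}
print(sorted(pareto_front(allocations)))  # plan3 is dominated by plan1
```

NSGA-II repeats this sorting over successive populations with crossover and mutation; the trade-off front it returns is exactly a set like the one printed here.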

Relevance:

100.00%

Publisher:

Abstract:

PURPOSE: The aim of this retrospective study was to determine optimal duplex sonographic criteria for use in our institution for diagnosing severe carotid stenoses and to correlate those findings with angiographic measurements obtained by the European Carotid Surgery Trial (ECST), North American Symptomatic Carotid Endarterectomy Trial (NASCET), and Common Carotid (CC) methods of grading carotid stenoses. METHODS: We analyzed the angiographic data using the ECST, NASCET, and CC methods and compared the results with the duplex sonographic findings. We then calculated the sensitivity, specificity, positive and negative predictive values, and accuracy of the duplex sonographic method. Taking these parameters into account, the optimal intrastenotic peak systolic velocity (PSV) and end diastolic velocity (EDV) were derived for diagnosing severe stenoses according to the 3 angiographic methods. RESULTS: Optimal PSV and EDV values for diagnosing a 70% or greater stenosis in our laboratory were as follows: with the NASCET method of angiographic grading of stenoses, PSV 220 cm/second or greater and EDV 80 cm/second or greater, and with the ECST and CC methods, PSV 190 cm/second or greater, and EDV 65 cm/second or greater. The optimal PSV and EDV for diagnosing a stenosis of 80% or greater with the ECST grading method were 215 cm/second or greater and 90 cm/second or greater, respectively. CONCLUSIONS: Duplex sonography is a sensitive and accurate tool for evaluating severe carotid stenoses. Optimal PSVs and EDVs vary according to the angiographic method used to grade the stenosis. They are similar for stenoses 70% or greater with the NASCET method and for stenoses 80% or greater with the ECST method.
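The NASCET-graded cut-offs reported above can be read as a simple decision rule; the sketch below merely restates the study's PSV/EDV thresholds for a ≥70% stenosis and is in no way a clinical tool:

```python
def severe_stenosis_nascet(psv_cm_s, edv_cm_s):
    """True if duplex velocities meet the study's >=70% NASCET criteria
    (PSV >= 220 cm/s and EDV >= 80 cm/s)."""
    return psv_cm_s >= 220 and edv_cm_s >= 80

print(severe_stenosis_nascet(250, 95))  # True
print(severe_stenosis_nascet(180, 60))  # False
```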

Relevance:

100.00%

Publisher:

Abstract:

The objective of this retrospective study was to assess image quality with pulmonary CT angiography (CTA) performed at 80 kVp and to find anthropometric parameters other than body weight (BW) to serve as selection criteria for low-dose CTA. Attenuation in the pulmonary arteries, the anteroposterior and lateral diameters, the cross-sectional area, and the soft-tissue thickness of the chest were measured in 100 consecutive patients weighing less than 100 kg who underwent 80 kVp pulmonary CTA. Body surface area (BSA) and contrast-to-noise ratios (CNR) were calculated. Three radiologists analyzed arterial enhancement, noise, and image quality. Image parameters were compared between patients grouped by BW (group 1: 0-50 kg; groups 2-6: 51-100 kg, increasing decadally). CNR was higher in patients weighing less than 60 kg than in the BW groups of 71-99 kg (P between 0.025 and <0.001). Subjective rankings of enhancement (P = 0.165-0.605), noise (P = 0.063), and image quality (P = 0.079) did not differ significantly across the patient groups. CNR showed moderately strong correlations with weight (R = -0.585), BSA (R = -0.582), cross-sectional area (R = -0.544), and the anteroposterior diameter of the chest (R = -0.457; P < 0.001 for all parameters). We conclude that 80 kVp pulmonary CTA permits diagnostic image quality in patients weighing up to 100 kg. Body weight is a suitable criterion for selecting patients for low-dose pulmonary CTA.
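The contrast-to-noise ratio used above is, in its common form, the enhancement difference between vessel and background divided by image noise; a one-line sketch with invented Hounsfield-unit values:

```python
def cnr(vessel_hu, tissue_hu, noise_sd):
    """Contrast-to-noise ratio: enhancement difference over image noise."""
    return (vessel_hu - tissue_hu) / noise_sd

print(cnr(380.0, 55.0, 25.0))  # → 13.0
```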

Relevance:

100.00%

Publisher:

Abstract:

This study describes the development and validation of a gas chromatography-mass spectrometry (GC-MS) method to identify and quantitate phenytoin in brain microdialysate, saliva and blood from human samples. A solid-phase extraction (SPE) was performed with a nonpolar C8-SCX column. The eluate was evaporated with nitrogen (50°C) and derivatized with trimethylsulfonium hydroxide before GC-MS analysis. As the internal standard, 5-(p-methylphenyl)-5-phenylhydantoin was used. The MS was run in scan mode and the identification was made with three ion fragment masses. All peaks were identified with MassLib. Spiked phenytoin samples showed recovery after SPE of ≥94%. The calibration curve (phenytoin 50 to 1,200 ng/mL, n = 6, at six concentration levels) showed good linearity and correlation (r² > 0.998). The limit of detection was 15 ng/mL; the limit of quantification was 50 ng/mL. Dried extracted samples were stable within a 15% deviation range for ≥4 weeks at room temperature. The method met International Organization for Standardization standards and was able to detect and quantify phenytoin in different biological matrices and patient samples. The GC-MS method with SPE is specific, sensitive, robust and well reproducible, and is therefore an appropriate candidate for the pharmacokinetic assessment of phenytoin concentrations in different human biological samples.
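The calibration-curve linearity reported above (r² > 0.998 over 50-1200 ng/mL) comes from an ordinary least-squares fit of detector response against concentration; a sketch with invented peak-area ratios:

```python
def linfit(xs, ys):
    """Ordinary least squares: returns (slope, intercept, r_squared)."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sxx = sum((x - mx) ** 2 for x in xs)
    slope = sxy / sxx
    intercept = my - slope * mx
    ss_res = sum((y - (slope * x + intercept)) ** 2 for x, y in zip(xs, ys))
    ss_tot = sum((y - my) ** 2 for y in ys)
    return slope, intercept, 1 - ss_res / ss_tot

conc = [50, 200, 400, 600, 900, 1200]              # ng/mL, six levels
area = [0.051, 0.198, 0.405, 0.597, 0.902, 1.199]  # invented peak-area ratios
slope, intercept, r2 = linfit(conc, area)
print(round(slope, 5), round(r2, 4))
```

Unknown samples are then quantified by inverting the fitted line, concentration = (response - intercept) / slope, within the validated 50-1200 ng/mL range.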

Relevance:

100.00%

Publisher:

Abstract:

Objective Interruptions are known to have a negative impact on activity performance. Understanding how an interruption contributes to human error is limited because there is not a standard method for analyzing and classifying interruptions. Qualitative data are typically analyzed by either a deductive or an inductive method. Both methods have limitations. In this paper a hybrid method was developed that integrates deductive and inductive methods for the categorization of activities and interruptions recorded during an ethnographic study of physicians and registered nurses in a Level One Trauma Center. Understanding the effects of interruptions is important for designing and evaluating informatics tools in particular and for improving healthcare quality and patient safety in general. Method The hybrid method was developed using a deductive a priori classification framework with the provision of adding new categories discovered inductively in the data. The inductive process utilized line-by-line coding and constant comparison as stated in Grounded Theory. Results The categories of activities and interruptions were organized into a three-tiered hierarchy of activity. Validity and reliability of the categories were tested by categorizing a medical error case external to the study. No new categories of interruptions were identified during analysis of the medical error case. Conclusions Findings from this study provide evidence that the hybrid model of categorization is more complete than either a deductive or an inductive method alone. The hybrid method developed in this study provides the methodical support for understanding, analyzing, and managing interruptions and workflow.


Relevance:

100.00%

Publisher:

Abstract:

The responses of carbon dioxide (CO2) and other climate variables to an emission pulse of CO2 into the atmosphere are often used to compute the Global Warming Potential (GWP) and Global Temperature change Potential (GTP), to characterize the response timescales of Earth System models, and to build reduced-form models. In this carbon cycle-climate model intercomparison project, which spans the full model hierarchy, we quantify responses to emission pulses of different magnitudes injected under different conditions. The CO2 response shows the known rapid decline in the first few decades followed by a millennium-scale tail. For a 100 Gt-C emission pulse added to a constant CO2 concentration of 389 ppm, 25 ± 9% is still found in the atmosphere after 1000 yr; the ocean has absorbed 59 ± 12% and the land the remainder (16 ± 14%). The response in global mean surface air temperature is an increase by 0.20 ± 0.12 °C within the first twenty years; thereafter and until year 1000, temperature decreases only slightly, whereas ocean heat content and sea level continue to rise. Our best estimate for the Absolute Global Warming Potential, given by the time-integrated response in CO2 at year 100 multiplied by its radiative efficiency, is 92.5 × 10⁻¹⁵ yr W m⁻² per kg CO2. This value very likely (5 to 95% confidence) lies within the range of (68 to 117) × 10⁻¹⁵ yr W m⁻² per kg CO2. Estimates for the time-integrated response in CO2 published in the IPCC First, Second, and Fourth Assessment Reports and our multi-model best estimate all agree within 15% during the first 100 yr. The integrated CO2 response, normalized by the pulse size, is lower for pre-industrial conditions compared to present day, and lower for smaller pulses than for larger pulses. In contrast, the response in temperature, sea level and ocean heat content is less sensitive to these choices.
Although choices of pulse size, background concentration, and model lead to uncertainties, the most important and subjective choice in determining the AGWP of CO2, and hence the GWP, is the time horizon.
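The AGWP and GWP definitions used above can be written compactly, with RE the radiative efficiency of CO2, f(t) the airborne fraction of the pulse remaining at time t, and H the time horizon:

```latex
\mathrm{AGWP}_{\mathrm{CO_2}}(H) = \int_0^H \mathrm{RE}\, f(t)\,\mathrm{d}t,
\qquad
\mathrm{GWP}_x(H) = \frac{\mathrm{AGWP}_x(H)}{\mathrm{AGWP}_{\mathrm{CO_2}}(H)}
```

With H = 100 yr the first integral is the 92.5 × 10⁻¹⁵ yr W m⁻² per kg quoted above, and the choice of H is exactly the subjective element the text identifies.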

Relevance:

100.00%

Publisher:

Abstract:

Objective: Section III of the Diagnostic and Statistical Manual of Mental Disorders, Fifth Edition (DSM-5) lists attenuated psychosis syndrome as a condition for further study. One important question is its prevalence and clinical significance in the general population. Method: Analyses involved 1229 participants (age 16-40 years) from the general population of Canton Bern, Switzerland, enrolled from June 2011 to July 2012. "Symptom," "onset/worsening," "frequency," and "distress/disability" criteria of attenuated psychosis syndrome were assessed using the structured interview for psychosis-risk syndromes. Furthermore, help-seeking, psychosocial functioning, and current nonpsychotic axis I disorders were surveyed. Well-trained psychologists performed assessments using the computer-assisted telephone interviewing technique. Results: The symptom criterion was met by 12.9% of participants, onset/worsening by 1.1%, frequency by 3.8%, and distress/disability by 7.0%. Symptom, frequency, and distress/disability were met by 3.2%. Excluding trait-like attenuated psychotic symptoms (APS) decreased the prevalence to 2.6%, while adding onset/worsening reduced it to 0.3%. APS were associated with functional impairments, current mental disorders, and help-seeking although they were not a reason for help-seeking. These associations were weaker for attenuated psychosis syndrome. Conclusions: At the population level, only 0.3% met current attenuated psychosis syndrome criteria. Particularly, the onset/worsening criterion, originally included to increase the likelihood of progression to psychosis, lowered its prevalence. Because progression is not required for a self-contained syndrome, a revision of the restrictive onset criterion is proposed to avoid the exclusion of 2.3% of persons who experience and are distressed by APS from mental health care. Secondary analyses suggest that a revised syndrome would also possess higher clinical significance than the current syndrome.