Abstract:
Annual average daily traffic (AADT) is important information for many transportation planning, design, operation, and maintenance activities, as well as for the allocation of highway funds. Many studies have attempted AADT estimation using the factor approach, regression analysis, time series, and artificial neural networks. However, these methods are unable to account for the spatially variable influence of independent variables on the dependent variable, even though it is well known that spatial context is important to many transportation problems, including AADT estimation.

In this study, applications of geographically weighted regression (GWR) methods to estimating AADT were investigated. The GWR-based methods considered the influence of correlations among the variables over space and the spatial non-stationarity of the variables. A GWR model allows different relationships between the dependent and independent variables to exist at different points in space. In other words, model parameters vary from location to location, and the locally linear regression parameters at a point are affected more by observations near that point than by observations farther away.

The study area was Broward County, Florida. Broward County lies on the Atlantic coast between Palm Beach and Miami-Dade counties. In this study, a total of 67 variables were considered as potential AADT predictors, and six variables (lanes, speed, regional accessibility, direct access, density of roadway length, and density of seasonal households) were selected to develop the models.

To investigate the predictive power of the various AADT predictors over space, statistics including the local r-square, local parameter estimates, and local errors were examined and mapped. The local variations in the relationships among the parameters were investigated, measured, and mapped to assess the usefulness of GWR methods.

The results indicated that the GWR models were able to better explain the variation in the data and to predict AADT with smaller errors than ordinary linear regression models for the same dataset. Additionally, GWR was able to model the spatial non-stationarity in the data, i.e., the spatially varying relationship between AADT and its predictors, which cannot be modeled in ordinary linear regression.
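To make the local-estimation idea concrete, below is a minimal sketch of a GWR calibration step, assuming a Gaussian distance kernel with a fixed bandwidth; the function and variable names are illustrative and not taken from the study.

```python
# A minimal GWR sketch: at each calibration point, ordinary least squares is
# replaced by weighted least squares, with weights decaying with distance.
import numpy as np

def gwr_coefficients(X, y, coords, bandwidth):
    """Estimate local regression coefficients at every observation point.

    X         : (n, p) design matrix (include a column of ones for the intercept)
    y         : (n,) response, e.g., AADT
    coords    : (n, 2) x/y locations of the observations
    bandwidth : Gaussian kernel bandwidth controlling how fast influence decays
    """
    n, p = X.shape
    betas = np.empty((n, p))
    for i in range(n):
        d = np.linalg.norm(coords - coords[i], axis=1)   # distances to point i
        w = np.exp(-0.5 * (d / bandwidth) ** 2)          # nearby points weigh more
        XtW = X.T * w                                    # X' W with W = diag(w)
        betas[i] = np.linalg.solve(XtW @ X, XtW @ y)     # (X'WX)^-1 X'W y
    return betas
```

Mapping each column of `betas` over the study area is what produces local parameter estimate surfaces of the kind described above; letting `bandwidth` grow toward infinity recovers a single global ordinary least squares fit.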
Abstract:
In recent years, wireless communication infrastructures have been widely deployed for both personal and business applications. IEEE 802.11 series Wireless Local Area Network (WLAN) standards attract a great deal of attention due to their low cost and high data rates. Wireless ad hoc networks that use IEEE 802.11 standards are one of the hot spots of recent network research. Designing appropriate Media Access Control (MAC) layer protocols is one of the key issues for wireless ad hoc networks.

Existing wireless applications typically use omni-directional antennas. When an omni-directional antenna is used, the gain of the antenna is the same in all directions. Due to the nature of the Distributed Coordination Function (DCF) mechanism of the IEEE 802.11 standards, only one of the one-hop neighbors can send data at a time. Nodes other than the sender and the receiver must be in either the idle or the listening state; otherwise, collisions can occur. The downside of the omni-directionality of antennas is that the spatial reuse ratio is low and the capacity of the network is considerably limited.

Directional antennas have therefore been introduced to improve spatial reuse. A directional antenna has the following benefits. It can improve transport capacity by decreasing the interference of a directional main lobe. It can increase coverage range due to a higher SINR (signal to interference plus noise ratio), i.e., with the same power consumption, better connectivity can be achieved. And power usage can be reduced, i.e., for the same coverage, a transmitter can reduce its power consumption.

To utilize the advantages of directional antennas, we propose a relay-enabled MAC protocol. Two relay nodes are chosen to forward data when the channel condition of the direct link from the sender to the receiver is poor. The two relay nodes can transfer data at the same time, and pipelined data transmission can be achieved by using directional antennas. Throughput can be improved significantly by introducing the relay-enabled MAC protocol.

Besides these strong points, directional antennas also have some explicit drawbacks, such as the hidden terminal and deafness problems and the requirement of maintaining location information for each node. Therefore, an omni-directional antenna should be used in some situations. The combined use of omni-directional and directional antennas leads to the problem of configuring heterogeneous antennas, i.e., given a network topology and a traffic pattern, we need to find a tradeoff between using omni-directional and using directional antennas to obtain better network performance over this configuration.

Directly and mathematically establishing the relationship between network performance and the antenna configuration is extremely difficult, if not intractable. Therefore, in this research, we propose several clustering-based methods to obtain approximate solutions to the heterogeneous antenna configuration problem, which can improve network performance significantly.

Our proposed methods consist of two steps. The first step (clustering links) is to cluster the links into different groups based on the matrix-based system model. After being clustered, the links in the same group have similar neighborhood nodes and will use the same type of antenna. The second step (labeling links) is to decide the type of antenna for each group. For heterogeneous antennas, some groups of links will use directional antennas while others will adopt omni-directional antennas. Experiments were conducted to compare the proposed methods with existing methods. Experimental results demonstrate that our clustering-based methods can improve network performance significantly.
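The two-step procedure lends itself to a compact sketch. The version below is a hedged illustration only: the abstract does not specify the clustering algorithm or the labeling rule, so k-means on binary link-neighborhood vectors and a simple contention-density threshold are stand-in assumptions.

```python
# Step 1: cluster links whose neighborhood rows are similar.
# Step 2: label each cluster with an antenna type.
import numpy as np
from sklearn.cluster import KMeans

def configure_antennas(neighborhood, n_groups=4, density_threshold=0.5):
    """neighborhood: (L, N) 0/1 matrix; row l marks nodes within range of link l."""
    groups = KMeans(n_clusters=n_groups, n_init=10).fit_predict(neighborhood)
    labels = {}
    for g in range(n_groups):
        # Dense neighborhoods mean high contention: favor directional antennas
        density = neighborhood[groups == g].mean()
        labels[g] = "directional" if density > density_threshold else "omni"
    return groups, labels
```

In practice, the resulting candidate configurations would be scored in a network simulator against the given traffic pattern, which is the role the experiments mentioned above play.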
Abstract:
The accurate and reliable estimation of travel time based on point detector data is needed to support Intelligent Transportation System (ITS) applications. It has been found that the quality of travel time estimation is a function of the method used in the estimation and varies for different traffic conditions. In this study, two hybrid on-line travel time estimation models, and their corresponding off-line methods, were developed to achieve better estimation performance under various traffic conditions, including recurrent congestion and incidents. The first model combines the Mid-Point method, which is a speed-based method, with a traffic flow-based method. The second model integrates two speed-based methods: the Mid-Point method and the Minimum Speed method. In both models, the switch between travel time estimation methods is based on the congestion level and queue status automatically identified by clustering analysis. During incident conditions with rapidly changing queue lengths, shock wave analysis-based refinements are applied for on-line estimation to capture the fast queue propagation and recovery. Travel time estimates obtained from existing speed-based methods, traffic flow-based methods, and the developed models were tested using both simulation and real-world data. The results indicate that all tested methods performed at an acceptable level during periods of low congestion. However, their performance varies with increasing congestion. Comparisons with other estimation methods also show that the developed hybrid models perform well in all cases. Further comparisons between the on-line and off-line travel time estimation methods reveal that off-line methods perform significantly better only during fast-changing congested conditions, such as during incidents. The impacts of major influential factors on the performance of travel time estimation, including data preprocessing procedures, detector errors, detector spacing, frequency of travel time updates to traveler information devices, travel time link length, and posted travel time range, were investigated in this study. The results show that these factors have more significant impacts on estimation accuracy and reliability under congested conditions than during uncongested conditions. For incident conditions, the estimation quality improves with the use of a short rolling period for data smoothing, more accurate detector data, and frequent travel time updates.
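As a concrete illustration of the speed-based estimation the hybrid models build on, here is one common formulation of the Mid-Point method, assuming each detector's spot speed holds from the midpoint to the upstream detector up to the midpoint to the downstream one; the study's exact variant and its refinements are not reproduced here.

```python
# Mid-Point travel time: each detector "owns" the half-segments on either side.
def midpoint_travel_time(positions, speeds):
    """positions: detector mileposts (miles); speeds: spot speeds (mph).
    Returns the estimated link travel time in hours."""
    assert len(positions) == len(speeds) >= 2
    total = 0.0
    for i, v in enumerate(speeds):
        left = positions[0] if i == 0 else (positions[i - 1] + positions[i]) / 2
        right = positions[-1] if i == len(speeds) - 1 else (positions[i] + positions[i + 1]) / 2
        total += (right - left) / v
    return total

# Detectors at mileposts 0, 1, and 2 with spot speeds 60, 30, and 60 mph:
print(midpoint_travel_time([0, 1, 2], [60, 30, 60]))  # 0.05 h, i.e., 3 minutes
```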
Abstract:
Reduced organic sulfur (ROS) compounds are environmentally ubiquitous and play an important role in sulfur cycling as well as in the biogeochemical cycles of toxic metals, in particular mercury. Developing effective methods for the analysis of ROS in environmental samples and investigating the interactions of ROS with mercury are critical for understanding the role of ROS in mercury cycling, yet both are poorly studied. Covalent affinity chromatography-based methods were explored for the analysis of ROS in environmental water samples. A method was developed for the analysis of environmental thiols by preconcentration on a covalent affinity chromatographic column or by solid phase extraction, followed by release of the thiols from the thiopropyl sepharose gel using TCEP and analysis by HPLC-UV or HPLC-FL. Under the optimized conditions, the detection limits of the method using HPLC-FL detection were 0.45 and 0.36 nM for Cys and GSH, respectively. Our results suggest that covalent affinity methods are efficient for thiol enrichment and interference elimination, demonstrating their promise as a sensitive, reliable, and practical technique for thiol analysis in environmental water samples. The dissolution of mercury sulfide (HgS) in the presence of ROS and dissolved organic matter (DOM) was investigated by quantifying the effects of ROS on HgS dissolution and determining the speciation of the mercury released from ROS-induced HgS dissolution. It was observed that the presence of small ROS (e.g., Cys and GSH) and large-molecule DOM, in particular at high concentrations, could significantly enhance the dissolution of HgS. The dissolved Hg measured during HgS dissolution using the conventional 0.22 μm cutoff method could include colloidal Hg (e.g., HgS colloids) and truly dissolved Hg (e.g., Hg-ROS complexes). A centrifugal filtration method (with 3 kDa MWCO) was employed to characterize the speciation and reactivity of the Hg released during ROS-enhanced HgS dissolution. The presence of small ROS could produce a considerable fraction (about 40% of the total mercury in solution) of truly dissolved mercury (< 3 kDa), probably due to the formation of Hg-Cys or Hg-GSH complexes. The truly dissolved Hg formed during GSH- or Cys-enhanced HgS dissolution was directly reducible (100% for GSH and 40% for Cys) by stannous chloride, demonstrating its potential role in Hg transformation and bioaccumulation.
Abstract:
The assessment of organic matter (OM) sources in sediments and soils is key to better understanding the biogeochemical cycling of carbon in aquatic environments. While traditional molecular marker-based methods have provided such information for typical two-end-member (allochthonous/terrestrial vs. autochthonous/microbial)-dominated systems, more detailed, biomass-specific assessments are needed for ecosystems with complex OM inputs, such as tropical and sub-tropical wetlands and estuaries where aquatic macrophytes and macroalgae may play an important role as OM sources. The aim of this study was to assess the utility of a combined approach using compound-specific stable carbon isotope analysis and an n-alkane based proxy (Paq) to differentiate submerged and emergent/terrestrial vegetation OM inputs to soils/sediments from a sub-tropical wetland and estuarine system, the Florida Coastal Everglades. Results show that Paq values (0.13–0.51) for the emergent/terrestrial plants were generally lower than those for freshwater/marine submerged vegetation (0.45–1.00) and that compound-specific δ13C values for the n-alkanes (C23 to C31) were distinctively different for terrestrial/emergent and freshwater/marine submerged plants. While crossplots of the Paq and stable isotope values for the C23 n-alkane suggest that OM inputs are controlled by vegetation changes along the freshwater to marine transect, further resolution regarding OM input changes along this landscape was obtained through principal component analysis (PCA), which successfully grouped the study sites according to the OM source strengths. The data show the potential of this n-alkane based multi-proxy approach as a means of assessing OM inputs to complex ecosystems.
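For reference, the Paq proxy in its standard formulation (Ficken et al., 2000) is a simple ratio of mid-chain to mid-plus-long-chain n-alkane abundances; the concentrations in the example below are invented for illustration.

```python
def paq(c23, c25, c29, c31):
    """Paq = (C23 + C25) / (C23 + C25 + C29 + C31).
    High values indicate submerged/floating aquatic vegetation;
    low values indicate emergent/terrestrial plant inputs."""
    return (c23 + c25) / (c23 + c25 + c29 + c31)

# Hypothetical n-alkane abundances (e.g., ug/g dry sediment):
print(paq(c23=40, c25=35, c29=15, c31=10))  # 0.75 -> submerged-vegetation signal
```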
Abstract:
The general method for determining organomercurials in environmental and biological samples is gas chromatography with electron capture detection (GC-ECD). However, tedious sample work-up protocols and poor chromatographic response show the need for the development of new methods. Here, atomic fluorescence-based methods that are free from these deficiencies are described. The organomercurials in soil, sediment, and tissue samples are first released from the matrices with acidic KBr and cupric ions and extracted into dichloromethane. The initial extracts are subjected to thiosulfate clean-up, and the organomercury species are isolated as their chloride derivatives by cupric chloride and subsequent extraction into a small volume of dichloromethane. In water samples, the organomercurials are pre-concentrated using a sulfhydryl cotton fiber adsorbent, followed by elution with acidic KBr and CuSO4 and extraction into dichloromethane. Analysis of the organomercurials is accomplished by capillary column chromatography with atomic fluorescence detection.
Changing Bacterial Growth Efficiencies across a Natural Nutrient Gradient in an Oligotrophic Estuary
Abstract:
Recent studies have characterized coastal estuarine systems as important components of the global carbon cycle. This study investigated carbon cycling through the microbial loop of Florida Bay using bacterial growth efficiency calculations. Bacterial production, bacterial respiration, and other environmental parameters were measured at three sites located along a historic phosphorus-limitation gradient in Florida Bay and compared to a relatively nutrient-enriched site in Biscayne Bay. A new method for measuring bacterial respiration in oligotrophic waters, based on tracing the respiration of 13C-glucose, was developed. The results of the study indicate that 13C tracer assays may provide a better means of measuring bacterial respiration in low-nutrient environments than traditional dissolved oxygen consumption-based methods, due to strong correlations between incubation length and δ13C values. Results also suggest that overall bacterial growth efficiency may be lower at the most nutrient-limited sites.
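The growth efficiency calculation itself is straightforward; the standard definition (del Giorgio and Cole, 1998) divides production by total carbon demand. The values in the example are illustrative, not the study's measurements.

```python
def bacterial_growth_efficiency(bp, br):
    """bp: bacterial production, br: bacterial respiration (same carbon units).
    BGE = BP / (BP + BR), the fraction of carbon demand converted to biomass."""
    return bp / (bp + br)

# e.g., production 0.5 and respiration 4.5 ug C per liter per hour:
print(bacterial_growth_efficiency(0.5, 4.5))  # 0.05 -> low BGE, typical of oligotrophic waters
```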
Abstract:
The purpose of this research is to explore the use of modelling in the field of Purchasing and Supply Management (P/SM). We are particularly interested in identifying the specific areas of P/SM where there are opportunities for the use of modelling-based methods. The paper starts with an overview of the main types of modelling and also provides a categorisation of the main P/SM research themes. Our research shows that there are many opportunities for using descriptive, predictive, and prescriptive modelling approaches in all areas of P/SM research, from those focused on the actual function from a purely operational and execution perspective (e.g. purchasing processes and behaviour) to those focused on the organisational level from a more strategic perspective (e.g. strategy and policy). We conclude that future P/SM research needs to explore the value of modelling not just at the functional or operational level, but also at the organisational and strategic levels. We also acknowledge that while using empirical results to inform and improve models has advantages, there are also drawbacks, which relate to the value, the practical relevance, and the generalisability of modelling-based approaches.
Abstract:
Marine mammals exploit the efficiency of sound propagation in the marine environment for essential activities like communication and navigation. For this reason, passive acoustics has particularly high potential for marine mammal studies, especially those aimed at population management and conservation. Despite the rapid realization of this potential through a growing number of studies, much crucial information remains unknown or poorly understood. This research attempts to address two key knowledge gaps, using the well-studied bottlenose dolphin (Tursiops truncatus) as a model species, and underwater acoustic recordings collected on four fixed autonomous sensors deployed at multiple locations in Sarasota Bay, Florida, between September 2012 and August 2013. Underwater noise can hinder dolphin communication. The ability of these animals to overcome this obstacle was examined using recorded noise and dolphin whistles. I found that bottlenose dolphins are able to compensate for increased noise in their environment using a wide range of strategies, employed singly or in various combinations depending on the frequency content of the noise, the noise source, and the time of day. These strategies include modifying whistle frequency characteristics, increasing whistle duration, and increasing whistle redundancy. The recordings were also used to evaluate the performance of six recently developed passive acoustic abundance estimation methods by comparing their results to the true abundance of animals, obtained via a census conducted within the same area and time period. The methods employed were broadly divided into two categories: those involving direct counts of animals, and those involving counts of cues (signature whistles). The animal-based methods were traditional capture-recapture, spatially explicit capture-recapture (SECR), and an approach that blends the "snapshot" method and mark-recapture distance sampling, referred to here as SMRDS. The cue-based methods were conventional distance sampling (CDS), an acoustic modelling approach involving the use of the passive sonar equation, and SECR. In the latter approach, detection probability was modelled as a function of sound transmission loss rather than the Euclidean distance typically used. Of these methods, SMRDS produced the most accurate estimate, but SECR demonstrated the greatest potential for broad applicability to other species and locations, requiring minimal to no auxiliary data, such as the distance from the sound source to the detector(s), which is often difficult to obtain. This was especially true when SECR was compared to traditional capture-recapture results, which greatly underestimated abundance despite attempts to account for major unmodelled heterogeneity. Furthermore, the incorporation of non-Euclidean distance significantly improved model accuracy. The acoustic modelling approach performed similarly to CDS, but both methods strongly underestimated abundance. In particular, CDS proved to be inefficient: it requires at least three sensors for localization at a single point, accurate distances were difficult to obtain, and the sample size was greatly reduced by the failure to detect some whistles on all three recorders. As a result, this approach is not recommended for marine mammal abundance estimation when few recorders are available, or in high sound-attenuation environments with relatively low sample sizes.
It is hoped that these results will lead to more informed management decisions and, therefore, more effective species conservation.
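The passive sonar equation logic underlying the acoustic modelling and transmission-loss-based SECR approaches can be sketched as follows. The spreading law, absorption coefficient, source level, and logistic detection function below are illustrative assumptions, not values from this study.

```python
import math

def received_snr(source_level_db, distance_m, noise_level_db, alpha_db_per_km=0.2):
    """Passive sonar equation: SNR = SL - TL - NL, with spherical spreading
    plus linear absorption for the transmission loss TL."""
    tl = 20 * math.log10(max(distance_m, 1.0)) + alpha_db_per_km * distance_m / 1000.0
    return source_level_db - tl - noise_level_db

def detection_probability(snr_db, threshold_db=10.0, slope=0.5):
    """Smooth logistic detection function in place of a hard SNR threshold."""
    return 1.0 / (1.0 + math.exp(-slope * (snr_db - threshold_db)))

# A 150 dB whistle in 75 dB noise, recorded 2 km away:
snr = received_snr(150.0, 2000.0, 75.0)
print(round(snr, 1), round(detection_probability(snr), 2))  # 8.6 dB, p ~ 0.33
```

Modelling detection probability through transmission loss in this way, rather than through straight-line distance, is what lets SECR absorb environment-dependent attenuation.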
Abstract:
X-ray computed tomography (CT) is a non-invasive medical imaging technique that generates cross-sectional images by acquiring attenuation-based projection measurements at multiple angles. Since its first introduction in the 1970s, substantial technical improvements have led to the expanding use of CT in clinical examinations. CT has become an indispensable imaging modality for the diagnosis of a wide array of diseases in both pediatric and adult populations [1, 2]. Currently, approximately 272 million CT examinations are performed annually worldwide, with nearly 85 million of these in the United States alone [3]. Although this trend has decelerated in recent years, CT usage is still expected to increase mainly due to advanced technologies such as multi-energy [4], photon counting [5], and cone-beam CT [6].
Despite the significant clinical benefits, concerns have been raised regarding the population-based radiation dose associated with CT examinations [7]. From 1980 to 2006, the effective dose from medical diagnostic procedures rose six-fold, with CT contributing to almost half of the total dose from medical exposure [8]. For each patient, the risk associated with a single CT examination is likely to be minimal. However, the relatively large population-based radiation level has led to enormous efforts among the community to manage and optimize the CT dose.
As promoted by the international campaigns Image Gently and Image Wisely, exposure to CT radiation should be appropriate and safe [9, 10]. It is thus a shared responsibility to optimize the radiation dose used in CT examinations. The key to dose optimization is to determine the minimum amount of radiation that achieves the targeted image quality [11]. Based on this principle, dose optimization would benefit significantly from effective metrics to characterize the radiation dose and image quality of a CT exam. Moreover, if accurate predictions of radiation dose and image quality were possible before the initiation of the exam, it would be feasible to personalize the exam by adjusting the scanning parameters to achieve a desired level of image quality. The purpose of this thesis is to design and validate models to prospectively quantify patient-specific radiation dose and task-based image quality. The dual aim of the study is to implement the theoretical models in clinical practice by developing an organ-based dose monitoring system and an image-based noise addition software tool for protocol optimization.
More specifically, Chapter 3 aims to develop an organ dose-prediction method for CT examinations of the body under constant tube current conditions. The study effectively modeled anatomical diversity and complexity using a large number of patient models with representative age, size, and gender distributions. The dependence of organ dose coefficients on patient size and scanner model was further evaluated. Distinct from prior work, these studies used the largest number of patient models to date, with representative age, weight percentile, and body mass index (BMI) ranges.
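As a rough illustration of how such coefficients are typically applied (CT dosimetry studies commonly report CTDIvol-normalized organ dose as an exponential function of patient size), here is a hedged sketch; the functional form is a common convention and the fit constants are invented, not the thesis's values.

```python
import math

def organ_dose(ctdi_vol_mgy, patient_diameter_cm, a=3.0, b=0.04):
    """Predicted organ dose (mGy) = CTDIvol * h(d), where the dose
    coefficient h(d) = a * exp(-b * d) shrinks with patient diameter d."""
    return ctdi_vol_mgy * a * math.exp(-b * patient_diameter_cm)

# A 10 mGy CTDIvol abdominal scan of a patient with a 30 cm effective diameter:
print(round(organ_dose(10.0, 30.0), 2))  # ~9.04 mGy with these illustrative constants
```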
With effective quantification of organ dose under constant tube current conditions, Chapter 4 aims to extend the organ dose prediction system to tube current modulated (TCM) CT examinations. The prediction, applied to chest and abdominopelvic exams, was achieved by combining a convolution-based estimation technique that quantifies the radiation field, a TCM scheme that emulates modulation profiles from major CT vendors, and a library of computational phantoms with representative sizes, ages, and genders. The prospective quantification model was validated by comparing the predicted organ dose with the dose estimated from Monte Carlo simulations with the TCM function explicitly modeled.
Chapter 5 aims to implement the organ dose-estimation framework in clinical practice to develop an organ dose-monitoring program based on a commercial software (Dose Watch, GE Healthcare, Waukesha, WI). In the first phase of the study we focused on body CT examinations, and so the patient’s major body landmark information was extracted from the patient scout image in order to match clinical patients against a computational phantom in the library. The organ dose coefficients were estimated based on CT protocol and patient size as reported in Chapter 3. The exam CTDIvol, DLP, and TCM profiles were extracted and used to quantify the radiation field using the convolution technique proposed in Chapter 4.
With effective methods to predict and monitor organ dose, Chapter 6 aims to develop and validate improved measurement techniques for image quality assessment. Chapter 6 outlines the method that was developed to assess and predict quantum noise in clinical body CT images. Compared with previous phantom-based studies, this study accurately assessed the quantum noise in clinical images and further validated the correspondence between phantom-based measurements and the expected clinical image quality as a function of patient size and scanner attributes.
Chapter 7 aims to develop a practical strategy to generate hybrid CT images and assess the impact of dose reduction on diagnostic confidence for the diagnosis of acute pancreatitis. The general strategy is (1) to simulate synthetic CT images at multiple reduced-dose levels from clinical datasets using an image-based noise addition technique; (2) to develop quantitative and observer-based methods to validate the realism of simulated low-dose images; (3) to perform multi-reader observer studies on the low-dose image series to assess the impact of dose reduction on the diagnostic confidence for multiple diagnostic tasks; and (4) to determine the dose operating point for clinical CT examinations based on the minimum diagnostic performance to achieve protocol optimization.
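A simplified version of step (1), the image-based noise addition, is sketched below. It relies only on the general rule that quantum noise variance scales roughly inversely with dose, so simulating a dose fraction r requires injecting zero-mean noise with variance sigma0^2 * (1/r - 1); real tools also shape the noise with the reconstruction kernel and local attenuation, which this sketch ignores.

```python
import numpy as np

def simulate_low_dose(image_hu, sigma_full, dose_fraction, rng=None):
    """Add synthetic quantum noise to a full-dose CT image (in HU) so its
    total noise matches the target reduced-dose level."""
    rng = rng if rng is not None else np.random.default_rng()
    extra_sigma = sigma_full * np.sqrt(1.0 / dose_fraction - 1.0)
    return image_hu + rng.normal(0.0, extra_sigma, size=image_hu.shape)

# Simulate a half-dose image from a slice with 12 HU of full-dose noise:
full = np.zeros((512, 512))  # stand-in for a real reconstructed slice
half = simulate_low_dose(full, sigma_full=12.0, dose_fraction=0.5)
print(round(half.std(), 1))  # ~12 HU injected -> ~17 HU total at half dose
```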
Chapter 8 concludes the thesis with a summary of accomplished work and a discussion about future research.
Abstract:
The microbially mediated anaerobic oxidation of methane (AOM) is the major biological sink of the greenhouse gas methane in marine sediments (doi:10.1007/978-94-009-0213-8_44) and serves as an important control on the emission of methane into the hydrosphere. The AOM metabolic process is assumed to be a reversal of methanogenesis coupled to the reduction of sulfate to sulfide, involving methanotrophic archaea (ANME) and sulfate-reducing bacteria (SRB) as syntrophic partners, as described, amongst others, by Boetius et al. (2000; doi:10.1038/35036572). In this study, 16S rRNA-based methods were used to investigate the distribution and biomass of archaea in samples from (i) sediments above outcropping methane hydrate at Hydrate Ridge (Cascadia margin off Oregon) and (ii) massive microbial mats enclosing carbonate reefs (Crimea area, Black Sea). Sediment samples from Hydrate Ridge were obtained during R/V SONNE cruises SO143-2 in August 1999 and SO148-1 in August 2000 at the crest of southern Hydrate Ridge at the Cascadia convergent margin off the coast of Oregon. The second study area is located in the Black Sea and represents a field of active seepage of free gas on the slope of the northwestern Crimea area. Here, a field of conspicuous microbial reefs forming chimney-like structures was discovered at a water depth of 230 m in anoxic waters. The microbial mats were sampled using the manned submersible JAGO during the R/V Prof. LOGACHEV cruise in July 2001. At Hydrate Ridge the surface sediments were dominated by aggregates consisting of ANME-2 and members of the Desulfosarcina-Desulfococcus branch (DSS) (ANME-2/DSS aggregates), which accounted for >90% of the total cell biomass. The numbers of ANME-1 cells increased strongly with depth; these cells accounted for 1% of all single cells at the surface and more than 30% of all single cells (5% of the total cells) in the 7- to 10-cm sediment horizons that were directly above layers of gas hydrate. In the Black Sea microbial mats, ANME-1 accounted for about 50% of all cells. ANME-2/DSS aggregates occurred in microenvironments within the mat but accounted for only 1% of the total cells. FISH probes for the ANME-2a and ANME-2c subclusters were designed based on a comparative 16S rRNA analysis. In Hydrate Ridge sediments, ANME-2a/DSS and ANME-2c/DSS aggregates differed significantly in morphology and abundance. The relative abundances of these subgroups were remarkably different at Beggiatoa sites (80% ANME-2a, 20% ANME-2c) and Calyptogena sites (20% ANME-2a, 80% ANME-2c), indicating preferential selection of the groups in the two habitats.
Abstract:
We calculate net community production (NCP) during summer 2005-2006 and spring 2006 in the Ross Sea using multiple approaches to determine the magnitude and consistency of rates. Water column carbon and nutrient inventories and surface ocean O2/Ar data are compared to satellite-derived primary productivity (PP) estimates and 14C uptake experiments. In spring, NCP was related to stratification proximal to upper ocean fronts. In summer, the most intense C drawdown was in shallow mixed layers affected by ice melt; depth-integrated C drawdown, however, increased with mixing depth. ΔO2/Ar-based methods, which rely on gas exchange reconstructions, underestimate NCP due to seasonal variations in surface ΔO2/Ar and NCP rates. Mixed layer ΔO2/Ar requires approximately 60 days to reach steady state, starting from early spring. Additionally, cold temperatures prolong the sensitivity of gas exchange reconstructions to past NCP variability. Complex vertical structure, in addition to the seasonal cycle, affects interpretations of surface-based observations, including those made from satellites. During both spring and summer, substantial fractions of NCP were below the mixed layer. Satellite-derived estimates tended to overestimate PP relative to 14C-based estimates, most severely in locations of stronger upper water column stratification. Biases notwithstanding, NCP-PP comparisons indicated that community respiration was of similar magnitude to NCP. We observed that a substantial portion of NCP remained as suspended particulate matter in the upper water column, demonstrating a lag between production and export. Resolving the dynamic physical processes that structure variance in NCP and its fate will enhance the understanding of carbon cycling in highly productive Antarctic environments.
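For orientation, the steady-state mixed-layer ΔO2/Ar calculation that the abstract critiques usually takes the following form: biological oxygen supersaturation multiplied by the gas exchange flux. The numbers below are illustrative, not data from this study.

```python
def ncp_o2ar(o2ar_measured, o2ar_equilibrium, k_m_per_day, o2_sat_mmol_m3):
    """Steady-state NCP (mmol O2 m^-2 d^-1) = k * [O2]sat * Delta(O2/Ar),
    where Delta(O2/Ar) is the biological oxygen supersaturation."""
    delta_o2ar = o2ar_measured / o2ar_equilibrium - 1.0
    return k_m_per_day * o2_sat_mmol_m3 * delta_o2ar

# 3% biological supersaturation, k = 3 m/d, [O2]sat = 350 mmol/m^3:
print(round(ncp_o2ar(1.03, 1.00, 3.0, 350.0), 1))  # ~31.5 mmol O2 m^-2 d^-1
```

The abstract's point is that this steady-state assumption fails when surface ΔO2/Ar is still evolving, as during the roughly 60-day spin-up from early spring.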
Abstract:
Molecular probe-based methods (fluorescent in-situ hybridisation, or FISH, and next generation sequencing, or NGS) have proved successful in improving both the efficiency and the accuracy of the identification of microorganisms, especially those that lack distinct morphological features, such as picoplankton. However, FISH methods have the major drawback that they can only identify one or a few species at a time because of the limited number of available fluorochromes that can be added to the probe. Although the length of sequence that can be obtained is continually improving, NGS still requires a great deal of handling time, its analysis time is still months, and with a PCR step it will always be sensitive to natural enzyme inhibitors. With the use of DNA microarrays, it is possible to identify large numbers of taxa on a single glass slide, the so-called phylochip, which can be semi-quantitative. This review details the major steps in probe design, the design and production of a phylochip, and the validation of the array. Finally, major microarray studies of the phytoplankton community are reviewed to demonstrate the scope of the method.
Abstract:
The article presents a study of a CEFR B2-level reading subtest that is part of the Slovenian national secondary school leaving examination in English as a foreign language, and compares the test-takers' actual performance (objective difficulty) with the test-takers' and experts' perceptions of item difficulty (subjective difficulty). The study also analyses the test-takers' comments on item difficulty obtained from a while-reading questionnaire. The results are discussed within the framework of existing research in the field of (the assessment of) reading comprehension and are addressed with regard to their implications for item writing, FL teaching, and curriculum development.