955 results for "Reliable"
Abstract:
In response to Chaski’s article (published in this volume), an examination is made of the methodological understanding necessary to identify dependable markers for forensic (and general) authorship attribution work. This examination concentrates on three methodological areas of concern which researchers intending to identify markers of authorship must address: sampling linguistic data, establishing the reliability of authorship markers, and establishing the validity of authorship markers. It is suggested that the complexity of sampling problems in linguistic data is often underestimated and that theoretical issues in this area are both difficult and unresolved. It is further argued that the concepts of reliability and validity must be well understood and accounted for in any attempt to identify authorship markers, and that this is largely not done. Finally, Principal Component Analysis is identified as an alternative approach which avoids some of the methodological problems inherent in identifying reliable, valid markers of authorship.
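To illustrate the Principal Component Analysis approach mentioned above, a minimal sketch follows. It assumes scikit-learn is available and uses relative word frequencies over a handful of made-up writing samples as stand-in features; these are not the data or feature set used in the article.

```python
# A minimal PCA sketch over relative word frequencies (illustrative data only).
from collections import Counter
import numpy as np
from sklearn.decomposition import PCA

texts = {  # hypothetical writing samples, two per hypothetical author
    "author_A_1": "the case was clear and the facts were plain",
    "author_A_2": "the facts of the case were plain and clear",
    "author_B_1": "I think that maybe we should perhaps wait a while",
    "author_B_2": "maybe I should wait, I think, perhaps a little while",
}

# Build a shared vocabulary and a matrix of relative word frequencies.
vocab = sorted({w for t in texts.values() for w in t.split()})
X = np.array([
    [Counter(t.split())[w] / len(t.split()) for w in vocab]
    for t in texts.values()
])

# Project the frequency profiles onto their first two principal components,
# letting the data (rather than pre-selected markers) define the dimensions.
pca = PCA(n_components=2)
scores = pca.fit_transform(X)

for label, (pc1, pc2) in zip(texts, scores):
    print(f"{label}: PC1={pc1:+.3f}, PC2={pc2:+.3f}")
print("explained variance ratio:", pca.explained_variance_ratio_)
```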
Abstract:
Quantitative structure–activity relationship (QSAR) analysis is a cornerstone of the modern informatics disciplines. Predictive computational models of peptide–major histocompatibility complex (MHC) binding affinity, based on QSAR technology, have now become a vital component of modern-day computational immunovaccinology. Historically, such approaches have been built around semi-qualitative classification methods, but these are now giving way to quantitative regression methods. The additive method, an established immunoinformatics technique for the quantitative prediction of peptide–protein affinity, was used here to identify the sequence dependence of peptide binding specificity for three mouse class I MHC alleles: H2–Db, H2–Kb and H2–Kk. As we show, in terms of reliability the resulting models represent a significant advance on existing methods. They can be used for the accurate prediction of T-cell epitopes and are freely available online (http://www.jenner.ac.uk/MHCPred).
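To make the additive idea concrete, here is a minimal sketch in which a predicted binding affinity is a constant plus a sum of position-specific amino-acid contributions. The coefficients and peptides below are invented placeholders, not the fitted values behind the MHCPred models.

```python
# A minimal sketch of an additive (position-specific) scoring scheme.
# The coefficients are illustrative placeholders, NOT fitted model values.

# contribution[position][amino_acid] -> additive contribution to the predicted
# -log10(IC50); unlisted residues contribute 0 by assumption.
contribution = {
    1: {"A": 0.12, "S": -0.05},
    2: {"N": 0.40, "Q": 0.31},             # anchor-like position (assumed)
    5: {"F": 0.22, "Y": 0.18},
    9: {"L": 0.55, "M": 0.47, "I": 0.36},  # C-terminal anchor (assumed)
}
INTERCEPT = 5.0  # placeholder constant term


def predict_affinity(peptide: str) -> float:
    """Predicted -log10(IC50) as intercept + sum of per-position terms."""
    score = INTERCEPT
    for pos, residue in enumerate(peptide, start=1):
        score += contribution.get(pos, {}).get(residue, 0.0)
    return score


if __name__ == "__main__":
    for pep in ("ANLDFYTKL", "ASQDAYTKM"):  # hypothetical 9-mer peptides
        print(pep, round(predict_affinity(pep), 2))
```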
Abstract:
We study noisy computation in randomly generated k-ary Boolean formulas. We establish bounds on the noise level above which the results of computation by random formulas are not reliable. This bound is saturated by formulas constructed from a single majority-like gate. We show that these gates can be used to compute any Boolean function reliably below the noise bound.
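As a rough, informal illustration of the setting (not the paper's analysis), the following sketch evaluates a balanced tree of noisy 3-input majority gates and estimates how the output error grows with the gate-noise level. The depth, trial count and noise levels are arbitrary choices.

```python
# Monte Carlo sketch: a balanced tree of noisy 3-input majority gates is used
# to restore a bit, and we estimate the output error rate for several values
# of the gate-noise level eps.
import random


def noisy_majority(a: int, b: int, c: int, eps: float) -> int:
    out = 1 if a + b + c >= 2 else 0
    return out ^ 1 if random.random() < eps else out  # gate fails w.p. eps


def restore(bit: int, depth: int, eps: float) -> int:
    """Evaluate a depth-`depth` tree of noisy 3-majority gates on copies of `bit`."""
    if depth == 0:
        return bit
    children = [restore(bit, depth - 1, eps) for _ in range(3)]
    return noisy_majority(*children, eps)


def error_rate(eps: float, depth: int = 6, trials: int = 2000) -> float:
    errors = sum(restore(1, depth, eps) != 1 for _ in range(trials))
    return errors / trials


if __name__ == "__main__":
    for eps in (0.02, 0.05, 0.10, 0.20):
        print(f"gate noise {eps:.2f}: output error ~ {error_rate(eps):.3f}")
```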
Abstract:
* The research has been partially supported by INFRAWEBS - IST FP62003/IST/2.3.2.3 Research Project No. 511723 and “Technologies of the Information Society for Knowledge Processing and Management” - IIT-BAS Research Project No. 010061.
Abstract:
People depend on various sources of information when trying to verify their autobiographical memories. Yet recent research shows that people prefer to use cheap-and-easy verification strategies, even when these strategies are not reliable. We examined the robustness of this cheap-strategy bias, using scenarios designed to encourage greater emphasis on source reliability. In three experiments, subjects described real (Experiments 1 and 2) or hypothetical (Experiment 3) autobiographical events and proposed strategies they might use to verify their memories of those events. Subjects also rated the reliability, the cost, and the likelihood that they would use each strategy. In line with previous work, we found that the preference for cheap information held when people described how they would verify childhood or recent memories (Experiment 1), personally important or trivial memories (Experiment 2), and even when the consequences of relying on incorrect information could be significant (Experiment 3). Taken together, our findings fit with an account of source monitoring in which the tendency to trust one’s own autobiographical memories can discourage people from systematically testing or accepting strong disconfirmatory evidence.
Abstract:
Despite marked gradients in nutrient availability that control the abundance and species composition of seagrasses in south Florida, and the importance of nutrient availability in controlling abundance and composition of epiphytes on seagrasses in other locations, we did not find that epiphyte load on the dominant seagrass, Thalassia testudinum, or the relative contribution of algal epiphytes to the epiphyte community, was positively correlated with nutrient availability in the water column or the sediment in oligotrophic seagrass beds. Further, the abundance of microphytobenthos, as indicated by chlorophyll-a concentration in the sediments, was not directly correlated with concentrations of nutrients in the sediments. Our results suggest that epiphyte and microphytobenthos abundance are not unambiguous indicators of nutrient availability in relatively pristine seagrass environments, and therefore would make poor candidates for indicators of the status and trends of seagrass ecosystems in relatively low-nutrient environments like the Florida Keys.
Abstract:
With the growing commercial importance of the Internet and the development of new real-time, connection-oriented services such as IP telephony and electronic commerce, resilience is becoming a key issue in the design of IP-based networks. Two emerging technologies that can accomplish the task of efficient information transfer are Multiprotocol Label Switching (MPLS) and Differentiated Services. A main benefit of MPLS is the ability to introduce traffic-engineering concepts owing to its connection-oriented character: with MPLS it is possible to assign different paths to packets through the network. Differentiated Services divides traffic into different classes and treats them differently, especially when there is a shortage of network resources. In this thesis, a framework was proposed to integrate these two technologies, and its performance in providing load balancing and improving QoS was evaluated. Simulation and analysis of this framework demonstrated that the combination of MPLS and Differentiated Services is a powerful tool for QoS provisioning in IP networks.
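As a toy illustration of how the two technologies might be combined (not the framework evaluated in the thesis), the sketch below pins premium traffic to the shortest label-switched path while spreading best-effort flows across the least-loaded LSPs. The topology, class names and bandwidth figures are invented.

```python
# Toy sketch: assign flows of two DiffServ-style classes to pre-established
# MPLS LSPs. "premium" flows stay on the fewest-hop path; "best-effort" flows
# go to the currently least-utilised LSP. All figures are invented.

lsps = {                      # LSP name -> (hop count, capacity in Mbit/s)
    "LSP-A": (3, 100.0),
    "LSP-B": (4, 100.0),
    "LSP-C": (5, 100.0),
}
load = {name: 0.0 for name in lsps}


def assign(flow_mbps: float, diffserv_class: str) -> str:
    if diffserv_class == "premium":
        # Premium traffic keeps the lowest-delay (fewest-hop) path.
        path = min(lsps, key=lambda n: lsps[n][0])
    else:
        # Best-effort traffic is spread to the least-utilised LSP.
        path = min(lsps, key=lambda n: load[n] / lsps[n][1])
    load[path] += flow_mbps
    return path


if __name__ == "__main__":
    flows = [(20, "premium"), (30, "best-effort"), (30, "best-effort"),
             (10, "premium"), (40, "best-effort")]
    for mbps, cls in flows:
        print(f"{cls:12s} {mbps:3d} Mbit/s -> {assign(mbps, cls)}")
    print("final load:", {k: f"{v:.0f}/{lsps[k][1]:.0f}" for k, v in load.items()})
```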
Abstract:
Thesis digitized by the Direction des bibliothèques de l'Université de Montréal.
Abstract:
Purpose: To determine whether the ‘through-focus’ aberrations of patients implanted with multifocal or accommodative intraocular lenses (IOLs) can be used to provide rapid and reliable measures of their subjective range of clear vision. Methods: Eyes that had been implanted with a concentric (n = 8), segmented (n = 10) or accommodating (n = 6) IOL (mean age 62.9 ± 8.9 years; range 46-79 years) for over a year underwent simultaneous monocular subjective (electronic logMAR test chart at 4 m with letters randomised between presentations) and objective (Aston open-field aberrometer) defocus-curve testing for levels of defocus from +1.50 to -5.00 DS in -0.50 DS steps, presented in a randomised order. Pupil size and ocular aberration (a combination of the patient’s and the defocus-inducing lens aberrations) at each level of blur were measured by the aberrometer. Visual acuity was measured subjectively at each level of defocus to determine the traditional defocus curve. Objective acuity was predicted using image-quality metrics. Results: The range of clear focus differed between the three IOL types (F = 15.506, P = 0.001) as well as between subjective and objective defocus curves (F = 6.685, P = 0.049). There was no statistically significant difference between subjective and objective defocus curves in the segmented or concentric-ring MIOL groups (P > 0.05); however, a difference was found between the two measures in the accommodating IOL group (P < 0.001). Mean delta logMAR (predicted minus measured logMAR) across all target vergences was -0.06 ± 0.19 logMAR. Predicted logMAR defocus curves for the multifocal IOLs did not show a near-vision addition peak, unlike the subjective measurement of visual acuity. However, there was a strong positive correlation between measured and predicted logMAR for all three IOLs (Pearson’s correlation: P < 0.001). Conclusions: Current subjective procedures are lengthy and do not allow important additional measures, such as defocus curves under different luminance or contrast levels, to be assessed, which may limit our understanding of MIOL performance in real-world conditions. In general, objective aberrometry measures correlated well with the subjective assessment, indicating the relative robustness of this technique in evaluating post-operative success with segmented and concentric-ring MIOLs.
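As a small illustration of how a range of clear vision can be read off a defocus curve, the sketch below interpolates acuity over defocus and totals the dioptric range over which acuity stays better than a criterion. The 0.3 logMAR criterion and the sample acuities are illustrative, not taken from the study.

```python
# Sketch: total dioptric range of "clear vision" from a measured defocus curve.
# Defocus levels match the protocol above; the acuity values are invented.
import numpy as np

defocus = np.arange(1.5, -5.01, -0.5)      # +1.50 to -5.00 DS in -0.50 DS steps
logmar = np.array([0.40, 0.20, 0.10, 0.00, 0.05, 0.15, 0.25,
                   0.35, 0.30, 0.20, 0.30, 0.45, 0.60, 0.70])


def range_of_clear_vision(defocus, logmar, criterion=0.3):
    """Total dioptric range over which interpolated acuity is <= criterion."""
    fine = np.linspace(defocus.min(), defocus.max(), 2001)
    step = fine[1] - fine[0]
    acuity = np.interp(fine, defocus[::-1], logmar[::-1])  # np.interp needs ascending x
    return np.count_nonzero(acuity <= criterion) * step


print(f"range of clear vision ~ {range_of_clear_vision(defocus, logmar):.2f} D")
```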
Abstract:
Thesis digitized by the Direction des bibliothèques de l'Université de Montréal.
Abstract:
Microneedles (MNs) are emerging devices that can be used for the delivery of drugs at specific locations [1]. Their performance is primarily judged by different features, and penetration through tissue is one of the most important aspects to evaluate. For detailed studies of MN performance, different kinds of in-vitro, ex-vivo and in-vivo tests should be performed. The main limitation of some of these tests is that biological tissue is too heterogeneous, unstable and difficult to obtain; in addition, the use of biological materials sometimes presents legal issues. There are many studies dealing with artificial membranes for drug diffusion [2], but studies of artificial membranes for microneedle mechanical characterisation are scarce [3]. In order to overcome these limitations, we have developed tests using synthetic polymeric membranes instead of biological tissue. The selected artificial membrane is homogeneous, stable and readily available. It is composed mainly of a roughly equal blend of a hydrocarbon wax and a polyolefin and is commercially available under the brand name Parafilm®. The insertion of different kinds of MN arrays prepared from crosslinked polymers was performed using this membrane and correlated with the insertion of the MN arrays into ex-vivo neonatal porcine skin. The insertion depth of the MNs was evaluated using optical coherence tomography (OCT). The adoption of MN transdermal patches in the market can be improved by making the product user-friendly and easy to use; therefore, manual insertion is preferred to other kinds of procedures. Consequently, the insertion studies were performed in neonatal porcine skin and the artificial membrane using a manual insertion force applied by human volunteers. The insertion studies using manual forces correlated very well with the same studies performed with Texture Analyzer equipment. These synthetic membranes seem to mimic closely the mechanical properties of the skin for the insertion of MNs using different methods of insertion. In conclusion, this artificial membrane offers a valid alternative to biological tissue for the testing of MN insertion and is a good candidate for developing a reliable quality-control MN insertion test.
Abstract:
In cardiovascular disease, the definition and detection of ECG parameters related to repolarization dynamics in post-MI patients remain a crucial unmet need. In addition, a 3D sensor in implantable medical devices would be a valuable means of assessing or predicting Heart Failure status, but the inclusion of such a feature is limited by hardware and firmware constraints. The aim of this thesis is the definition of a reliable surrogate of the 500 Hz ECG signal to reach the aforementioned objective. To evaluate the loss of reliability that sampling-frequency reduction causes in delineation performance, the signals were consecutively downsampled by factors of 2, 4 and 8, obtaining ECG signals sampled at 250, 125 and 62.5 Hz, respectively. The final goal is a feasibility assessment of the detection of fiducial points, in order to translate those parameters into meaningful clinical parameters for Heart Failure prediction, such as T-wave interval heterogeneity and the variability of areas under T waves. An experimental setting for data collection on healthy volunteers was set up at the Bakken Research Center in Maastricht. A 16-channel ambulatory system provided by TMSI recorded the standard 12-lead ECG, two 3D accelerometers and a respiration sensor. The collection platform was configured with the TMSI proprietary software Polybench, and the analysis of the signals was performed in Matlab. The main results of this study show that the 125 Hz sampling rate is a good candidate for reliable detection of fiducial points. T-wave intervals proved to be consistently stable, even at 62.5 Hz. Further studies would be needed to provide a better comparison between sampling at 250 Hz and 125 Hz for areas under the T waves.
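A minimal sketch of the downsampling step described above, assuming SciPy is available: a 500 Hz signal is decimated by factors of 2, 4 and 8 to obtain 250, 125 and 62.5 Hz versions. The synthetic trace stands in for the real recordings, which are not reproduced here.

```python
# Sketch: decimate a 500 Hz signal by factors 2, 4 and 8 (anti-aliased),
# mirroring the 250 / 125 / 62.5 Hz versions compared in the thesis.
import numpy as np
from scipy.signal import decimate

fs = 500                                   # original sampling rate (Hz)
t = np.arange(0, 10, 1 / fs)               # 10 s of data
ecg_like = np.sin(2 * np.pi * 1.2 * t) + 0.1 * np.sin(2 * np.pi * 25 * t)  # stand-in trace

for factor in (2, 4, 8):
    y = decimate(ecg_like, factor, ftype="iir", zero_phase=True)
    print(f"factor {factor}: {fs / factor:>6.1f} Hz, {y.size} samples")
```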
Abstract:
This work introduces a tessellation-based model for the declivity analysis of geographic regions. The analysis of the relief declivity, which is embedded in the rules of the model, categorizes each tessellation cell, with respect to the whole considered region, according to the (positive, negative, null) sign of the declivity of the cell. Such information is represented in the states assumed by the cells of the model. The overall configuration of such cells allows the division of the region into subregions of cells belonging to the same category, that is, presenting the same declivity sign. In order to control the errors coming from the discretization of the region into tessellation cells, or resulting from numerical computations, interval techniques are used. The implementation of the model is naturally parallel, since the analysis is performed on the basis of local rules. An immediate application is in geophysics, where an adequate subdivision of geographic areas into segments presenting similar topographic characteristics is often convenient.
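As a toy illustration of the cell-classification idea (not the model's interval arithmetic or local rules), the sketch below labels grid cells by the sign of a local declivity estimate, with a small tolerance standing in for the error bound, and counts connected same-sign subregions. The elevation values are invented.

```python
# Toy sketch: label each tessellation cell by the sign (+1, -1, 0) of its local
# declivity and group adjacent same-sign cells into subregions.
import numpy as np
from scipy import ndimage

elevation = np.array([           # invented elevation grid (one value per cell)
    [10.0, 10.5, 11.0, 10.8],
    [10.0, 10.4, 11.2, 10.6],
    [ 9.8, 10.0, 10.0,  9.9],
])

# Local declivity per cell: forward difference along the column direction,
# padding the last column so the result keeps the grid shape.
declivity = np.diff(elevation, axis=1, append=elevation[:, -1:])

tol = 0.05  # cells with |declivity| below tol are classified as "null"
sign = np.where(declivity > tol, 1, np.where(declivity < -tol, -1, 0))
print(sign)

# Connected same-sign cells form the subregions of the analysis.
for s in (-1, 0, 1):
    _, count = ndimage.label(sign == s)
    print(f"declivity sign {s:+d}: {count} subregion(s)")
```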
Abstract:
The Diversity Advisory Committee (DAC) will discuss the dynamics of the process of assessing diversity health at the University of Maryland Libraries. From designing the survey instrument, through analyzing the results, to the final writing of the report on diversity and inclusion, the committee members will unveil their challenges and achievements in presenting unbiased conclusions from this assessment project. In completing the project, the committee drew on expertise from across the university, including (1) the College of Information Studies for creating the survey; (2) the Office of Institutional Research, Planning and Assessment (IRPA) and the Division of Information Technology (DIT) for analyzing the results; and (3) the Campus Assessment Working Group (CAWG) model for organizing the content of the final report.