946 results for Driver Performance Testing.
Abstract:
OBJECTIVE: The frequent occurrence of inconclusive serology in blood banks and the absence of a gold standard test for Chagas disease led us to examine the efficacy of the blood culture test and of five commercial tests (ELISA, IIF, HAI, c-ELISA, rec-ELISA) used in screening blood donors for Chagas disease, as well as to investigate the prevalence of Trypanosoma cruzi infection among donors with inconclusive screening serology with respect to some epidemiological variables. METHODS: To obtain the estimates of interest we used a Bayesian latent class model that includes covariates through a logit link. RESULTS: Better performance was observed for some categories of the epidemiological variables. In addition, all pairs of tests (excluding the blood culture test) proved to be good alternatives both for screening (sensitivity > 99.96% in parallel testing) and for confirmation (specificity > 99.93% in serial testing) of Chagas disease. The prevalence of 13.30% observed in the stratum of donors with inconclusive serology means that most of these donors probably have non-reactive serology. Moreover, depending on the levels of specific epidemiological variables, the absence of infection can be predicted in this group with a probability of 100% using pairs of tests in parallel. CONCLUSION: The epidemiological variables can improve test results and thus help clarify inconclusive serology screening results. Moreover, all pairwise combinations of the five commercial tests are good alternatives for confirming results.
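As a rough illustration of the parallel/serial combinations evaluated in the study, the sketch below combines the sensitivity and specificity of two tests under the simplifying assumption of conditional independence; the single-test values are hypothetical, not the study's estimates.

```python
# Combined accuracy of two screening tests under parallel and serial schemes,
# assuming conditional independence given infection status (an assumption made
# here only for illustration).

def parallel(se1, sp1, se2, sp2):
    """Declare positive if either test is positive."""
    se = 1 - (1 - se1) * (1 - se2)   # a case is missed only if both tests miss it
    sp = sp1 * sp2                   # a non-case is cleared only if both tests are negative
    return se, sp

def serial(se1, sp1, se2, sp2):
    """Declare positive only if both tests are positive."""
    se = se1 * se2
    sp = 1 - (1 - sp1) * (1 - sp2)   # false positive only if both tests are falsely positive
    return se, sp

# Hypothetical single-test values (not the study's estimates):
se_a, sp_a = 0.995, 0.990
se_b, sp_b = 0.985, 0.995

print(parallel(se_a, sp_a, se_b, sp_b))  # screening: very high combined sensitivity
print(serial(se_a, sp_a, se_b, sp_b))    # confirmation: very high combined specificity
```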
Testing phenomenological and theoretical models of dark matter density profiles with galaxy clusters
Abstract:
We use the stacked gravitational lensing mass profile of four high-mass (M ∼ 10^15 M_⊙) galaxy clusters around z ≈ 0.3 from Umetsu et al. to fit density profiles of phenomenological [Navarro–Frenk–White (NFW), Einasto, Sérsic, Stadel, Baltz–Marshall–Oguri (BMO) and Hernquist] and theoretical (non-singular isothermal sphere, DARKexp and Kang & He) models of the dark matter distribution. We account for large-scale structure effects by including a two-halo term in the analysis. We find that the BMO model provides the best fit to the data as measured by the reduced χ². It is followed by the Stadel profile, the generalized NFW profile with a free inner slope, and the Einasto profile. The NFW model provides the best fit if we neglect the two-halo term, in agreement with the results of Umetsu et al. Among the theoretical profiles, the DARKexp model with a single form parameter performs best, very close to the BMO profile. This may indicate a connection between this theoretical model and the phenomenology of dark matter haloes, shedding light on the dynamical basis of the empirical profiles that emerge from numerical simulations.
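For reference, the sketch below gives the standard functional forms of two of the phenomenological profiles named above (NFW and Einasto); the parameter values are purely illustrative and are not the fitted values from the lensing analysis.

```python
# Standard functional forms of two of the density profiles fitted in the study
# (parameter values below are purely illustrative).
import numpy as np

def nfw(r, rho_s, r_s):
    """Navarro-Frenk-White: rho(r) = rho_s / [(r/r_s) (1 + r/r_s)^2]."""
    x = r / r_s
    return rho_s / (x * (1.0 + x) ** 2)

def einasto(r, rho_s, r_s, alpha):
    """Einasto: rho(r) = rho_s * exp(-(2/alpha) * ((r/r_s)^alpha - 1))."""
    x = r / r_s
    return rho_s * np.exp(-(2.0 / alpha) * (x ** alpha - 1.0))

r = np.logspace(-2, 1, 50)                       # radii in units of, e.g., Mpc
print(nfw(r, rho_s=1.0, r_s=0.3)[:3])            # illustrative normalisation and scale radius
print(einasto(r, rho_s=1.0, r_s=0.3, alpha=0.18)[:3])
```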
Abstract:
In this work we propose a new approach for preliminary epidemiological studies on Standardized Mortality Ratios (SMRs) collected over many spatial regions. A preliminary study on SMRs aims to formulate hypotheses to be investigated via individual-level epidemiological studies, which avoid the biases carried by aggregated analyses. Starting from the collected disease counts and from expected counts computed by means of reference population disease rates, an SMR is derived in each area as the MLE under a Poisson assumption on each observation. Such estimators have high standard errors in small areas, i.e. where the expected count is low either because of the small population underlying the area or because of the rarity of the disease under study. Disease mapping models and other techniques for screening disease rates across the map, aiming to detect anomalies and possible high-risk areas, have been proposed in the literature under both the classical and the Bayesian paradigm. Our proposal approaches this issue with a decision-oriented method that focuses on multiple testing control, without however abandoning the preliminary-study perspective that an analysis of SMR indicators is meant to serve. We implement control of the FDR, a quantity widely used to address multiple comparison problems in the field of microarray data analysis but not usually employed in disease mapping. Controlling the FDR means providing an estimate of the FDR for a set of rejected null hypotheses. The small-areas issue raises difficulties in applying traditional methods for FDR estimation, which are usually based only on knowledge of the p-values (Benjamini and Hochberg, 1995; Storey, 2003). Tests evaluated by a traditional p-value have weak power in small areas, where the expected number of disease cases is small. Moreover, tests cannot be assumed to be independent when spatial correlation between SMRs is expected, nor are they identically distributed when the population underlying the map is heterogeneous. The Bayesian paradigm offers a way to overcome the inappropriateness of p-value-based methods. Another peculiarity of the present work is the proposal of a full hierarchical Bayesian model for FDR estimation when testing many null hypotheses of absence of risk. We use concepts of Bayesian models for disease mapping, referring in particular to the Besag, York and Mollié (1991) model, often used in practice for its flexible prior assumption on the distribution of risks across regions. The borrowing of strength between prior and likelihood, typical of a hierarchical Bayesian model, has the advantage of evaluating an individual test (i.e. a test in an individual area) by means of all the observations in the map under study, rather than just by means of the single observation. This improves the power of the tests in small areas and addresses more appropriately the spatial correlation issue, which suggests that relative risks are closer in spatially contiguous regions. The proposed model estimates the FDR by means of the MCMC-estimated posterior probabilities $b_i$ of the null hypothesis (absence of risk) in each area. An estimate of the expected FDR conditional on the data ($\widehat{FDR}$) can be calculated for any set of areas declared high-risk (where the null hypothesis is rejected) by averaging the corresponding $b_i$'s. The $\widehat{FDR}$ provides a simple decision rule for selecting high-risk areas, i.e. selecting as many areas as possible such that the $\widehat{FDR}$ is no larger than a prefixed value; we call these $\widehat{FDR}$-based decision (or selection) rules.
The sensitivity and specificity of such a rule depend on the accuracy of the FDR estimate: over-estimation of the FDR causes a loss of power, while under-estimation of the FDR produces a loss of specificity. Moreover, our model has the interesting feature of still providing estimates of the relative risk values, as in the Besag, York and Mollié (1991) model. A simulation study was set up to evaluate the model's performance in terms of FDR estimation accuracy, sensitivity and specificity of the decision rule, and goodness of the relative risk estimates. We chose a real map from which we generated several spatial scenarios whose disease counts vary according to the degree of spatial correlation, the area sizes, the number of areas where the null hypothesis is true and the risk level in the remaining areas. In summarizing the simulation results we always consider FDR estimation in sets constituted by all areas whose $b_i$ falls below a threshold t. We show graphs of the $\widehat{FDR}$ and of the true FDR (known by simulation) plotted against the threshold t to assess the FDR estimation. By varying the threshold we can learn which FDR values can be accurately estimated by a practitioner willing to apply the model (from the closeness between $\widehat{FDR}$ and true FDR). By plotting the calculated sensitivity and specificity (both known by simulation) against the $\widehat{FDR}$ we can check the sensitivity and specificity of the corresponding $\widehat{FDR}$-based decision rules. To investigate the over-smoothing of the relative risk estimates we compare box-plots of such estimates in high-risk areas (known by simulation), obtained both with our model and with the classic Besag, York and Mollié model. All the summary tools are worked out for all simulated scenarios (54 scenarios in total). Results show that the FDR is well estimated (in the worst case we get an over-estimation, hence conservative FDR control) in scenarios with small areas, low risk levels and spatially correlated risks, which are our primary target. In such scenarios we obtain good estimates of the FDR for all values less than or equal to 0.10. The sensitivity of $\widehat{FDR}$-based decision rules is generally low but their specificity is high, so in these scenarios a selection rule based on $\widehat{FDR}$ = 0.05 or $\widehat{FDR}$ = 0.10 can be suggested. In cases where the number of true alternative hypotheses (the number of true high-risk areas) is small, FDR values up to 0.15 are also well estimated, and a decision rule based on $\widehat{FDR}$ = 0.15 gains power while maintaining a high specificity. On the other hand, in scenarios with non-small areas and non-small risk levels the FDR is under-estimated except for very small values (much lower than 0.05), resulting in a loss of specificity of a decision rule based on $\widehat{FDR}$ = 0.05. In such scenarios decision rules based on $\widehat{FDR}$ = 0.05 or, even worse, $\widehat{FDR}$ = 0.10 cannot be recommended because the true FDR is actually much higher. As regards relative risk estimation, our model achieves almost the same results as the classic Besag, York and Mollié model. For this reason, our model is interesting for its ability to perform both the estimation of relative risk values and FDR control, except in scenarios with non-small areas and large risk levels. A case study is finally presented to show how the method can be used in epidemiology.
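A minimal sketch of the $\widehat{FDR}$-based selection rule described above, assuming the posterior null probabilities $b_i$ are already available from the MCMC output; the numerical values are hypothetical.

```python
# FDR-based selection from posterior null probabilities (the b_i's in the text);
# the values below are hypothetical, not output of the actual model.
import numpy as np

def select_high_risk(b, fdr_target=0.05):
    """Flag as many areas as possible while the estimated FDR, i.e. the mean of
    the posterior null probabilities b_i of the flagged areas, stays <= fdr_target."""
    order = np.argsort(b)                                    # most "suspicious" areas first
    running_fdr = np.cumsum(b[order]) / np.arange(1, len(b) + 1)
    ok = running_fdr <= fdr_target
    k = int(np.where(ok)[0].max()) + 1 if ok.any() else 0    # running mean is non-decreasing
    selected = order[:k]
    return selected, (running_fdr[k - 1] if k > 0 else None)

# Hypothetical posterior probabilities of "no excess risk" for six areas:
b = np.array([0.01, 0.02, 0.30, 0.04, 0.90, 0.08])
areas, est_fdr = select_high_risk(b, fdr_target=0.05)
print(areas, est_fdr)   # -> [0 1 3 5] 0.0375
```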
Abstract:
The Székesfehérvár Ruin Garden is a unique assemblage of monuments within the cultural heritage of Hungary, owing to its important role in the Middle Ages as the coronation and burial church of the kings of the Hungarian Christian Kingdom. It has been nominated as a "National Monument" and, as a consequence, its present and future protection is required. Moreover, it was reconstructed and expanded several times throughout Hungarian history. A quick overview of the current state of the monument reveals several lithotypes among the remaining building and decorative stones. Research on these materials is therefore crucial not only for the conservation of this specific monument but also for other historic structures in Central Europe. The current research is divided into three main parts: i) description of the lithologies and their provenance, ii) testing of the physical properties of the historic material, and iii) durability tests on analogous stones obtained from active quarries. The survey of the National Monument of Székesfehérvár focuses on the historical importance and architecture of the monument, the different construction periods, and the identification of the different building stones and their distribution in the remaining parts of the monument, and it also includes provenance analyses. The second part was the in situ and laboratory testing of the physical properties of the historic material. In the final phase, samples were taken from local quarries with physical and mineralogical characteristics similar to those of the stones used in the monument. The three studied lithologies are a fine oolitic limestone, a coarse oolitic limestone and a red compact limestone. These stones were used for rock mechanical and durability tests under laboratory conditions. The following techniques were used: a) in situ: Schmidt hammer values, moisture content measurements, DRMS, mapping (construction ages, lithotypes, weathering forms); b) laboratory: petrographic analysis, XRD, determination of real density by helium pycnometer and of bulk density by mercury pycnometer, pore size distribution by mercury intrusion porosimetry and by nitrogen adsorption, water absorption, determination of open porosity, DRMS, frost resistance, ultrasonic pulse velocity, uniaxial compressive strength and dynamic modulus of elasticity. The results show that the initial uniaxial compressive strength is not necessarily a clear indicator of stone durability. Bedding and other lithological heterogeneities can influence the strength and durability of individual specimens. In addition, long-term behaviour is influenced by the exposure conditions, the fabric and, especially, the pore size distribution of each sample. Therefore, a statistical evaluation of the results is highly recommended, and the results should be evaluated in combination with other investigations of the internal structure and micro-scale heterogeneities of the material, such as petrographic observation, ultrasonic pulse velocity and porosimetry. Laboratory tests used to estimate the durability of natural stone may give good guidance on its short-term performance, but they should not be taken as an ultimate indication of its long-term behaviour. The interdisciplinary evaluation of the results confirms that the stones in the monument show deterioration in terms of mineralogy, fabric and physical properties in comparison with the quarried stones. Moreover, the stone testing demonstrates the compatibility of the quarried stones with the historical ones. Good correlation is observed between the non-destructive techniques and the laboratory test results, which allows sampling to be minimized when assessing the condition of the materials. In conclusion, this research contributes diagnostic knowledge for the further studies needed to evaluate the effect of recent and future protective measures.
Abstract:
The first aim of this thesis was to contribute to the understanding of how cultural capital (Bourdieu, 1983/1986) affects students' achievement and performance. We specifically claimed that the effect of cultural capital is at least partly explained by the positioning students take towards the principles they use to attribute competence and intelligence. The testing of these hypotheses has been framed within social representations theory, specifically in the formulation of the Lemanic school approach (Doise, 1986).
Abstract:
The work of this thesis mainly concerns the design, simulation and laboratory testing of three successive versions of VME boards, called Read Out Drivers (ROD), fabricated for the 2014 Insertable B-Layer (IBL) upgrade of the ATLAS experiment at CERN. IBL is a new layer that will become part of the ATLAS Pixel Detector. The thesis consists of a descriptive overview of the ATLAS experiment in general and then focuses on the description of the specific IBL layer. It also covers physical and technical aspects in detail: the design specifications, the development path of the boards and the subsequent tests. The boards were first produced as two prototypes to test the performance of the system; these were fabricated in order to evaluate the overall characteristics and performance of the readout system. A second production batch, consisting of five boards, was devoted to the fine correction of the critical issues that emerged from the tests of the first batch. A detailed and thorough investigation of the system then tuned the boards for the fabrication of a third batch of another five boards. Production is now complete: a total of 20 final boards have been produced and are currently being tested. The production will be validated shortly and the 20 boards will be delivered to CERN to be inserted into the data acquisition system of the detector. At present, the Department of Physics and Astronomy of the University of Bologna is involved in a pixel experiment only through the IBL described in this thesis. In conclusion, the thesis work focused mainly on testing the boards and on the design of the firmware needed for the calibration and data taking of the detector.
Abstract:
This thesis is divided into three chapters. In the first chapter we analyse the results of the world forecasting experiment run by the Collaboratory for the Study of Earthquake Predictability (CSEP). We take the opportunity of this experiment to contribute to the definition of a more robust and reliable statistical procedure for evaluating earthquake forecasting models. We first present the models and the target earthquakes to be forecast, and then explain the consistency and comparison tests used in CSEP experiments to evaluate model performance. Introducing a methodology to create ensemble forecasting models, we show that models, when properly combined, almost always perform better than any single model. In the second chapter we discuss in depth one of the basic features of PSHA: the declustering of the seismicity rates. We first introduce the Cornell-McGuire method for PSHA and present the different motivations behind the need to decluster seismic catalogs. Using a theorem of modern probability theory (Le Cam's theorem), we show that declustering is not necessary to obtain the Poissonian behaviour of the exceedances that is usually considered fundamental to transform exceedance rates into exceedance probabilities in the PSHA framework. We then present a method to correct PSHA for declustering, building a more realistic PSHA. In the last chapter we explore the methods commonly used to account for epistemic uncertainty in PSHA. The most widely used method is the logic tree, which lies at the basis of the most advanced seismic hazard maps. We illustrate the probabilistic structure of the logic tree and show that this structure is not adequate to describe epistemic uncertainty. We then propose a new probabilistic framework, based on ensemble modelling, that properly accounts for epistemic uncertainties in PSHA.
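For context, the sketch below shows the standard Poisson conversion from exceedance rates to exceedance probabilities that the declustering discussion refers to; the 475-year return period is only a common illustrative choice, not a value taken from the thesis.

```python
# Standard Poisson conversion from an annual exceedance rate to an exceedance
# probability over a time window, the step the declustering argument targets.
import math

def exceedance_probability(annual_rate, years=50.0):
    """P(at least one exceedance in `years`) under a Poisson occurrence model."""
    return 1.0 - math.exp(-annual_rate * years)

# e.g. a ground-motion level exceeded on average once every 475 years:
print(exceedance_probability(1.0 / 475.0, years=50.0))  # ~0.10, the classic "10% in 50 years"
```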
Abstract:
Flow features inside centrifugal compressor stages are very complicated to simulate with numerical tools because of the highly complex geometry and the varying gas conditions across the machine. For this reason, a big effort is currently being made to increase the fidelity of the numerical models during the design and validation phases. Computational Fluid Dynamics (CFD) plays an increasing role in the performance prediction of centrifugal compressor stages. Historically, CFD was considered reliable for performance prediction at a qualitative level, whereas tests were necessary to predict compressor performance on a quantitative basis. In fact, "standard" CFD, with only the flow path and blades included in the computational domain, is known to be weak in capturing the efficiency level and operating range accurately because of the under-estimation of losses and the lack of secondary flow modeling. This research project aims to close the accuracy gap between "standard" CFD and test data by including a high-fidelity reproduction of the gas domain and by using advanced numerical models and tools introduced in the author's OEM in-house CFD code. In other words, this thesis describes a methodology by which virtual tests can be conducted on single-stage and multistage centrifugal compressors in a fashion similar to a typical rig test, guaranteeing that end users can operate machines with a confidence level not achievable before. Furthermore, the new "high fidelity" approach allowed flow phenomena not fully captured before to be understood, increasing aerodynamicists' capability and confidence in designing high-efficiency, highly reliable centrifugal compressor stages.
Abstract:
To improve current neutron capture therapy, several liposomal formulations of the neutron capture agent gadolinium were developed and tested in a glioma cell model. The formulations were analyzed with regard to physicochemical and biological parameters, such as size, zeta potential, uptake into cancer cells and performance under neutron irradiation. The neutron and photon dose derived from intracellular as well as extracellular Gd was calculated via Monte Carlo simulations and correlated with the reduction of cell survival after irradiation. To investigate the suitability of Gd as a radiosensitizer for photon radiation, cells were also irradiated with synchrotron radiation in addition to clinically used photons generated by a linear accelerator. Irradiation with neutrons led to significantly lower survival of Gd-liposome-treated F98 and LN229 cells compared to irradiated control cells and cells treated with non-liposomal Gd-DTPA. The correlation between Gd content and dose and the respective cell survival showed a proportional relationship for most of the applied formulations. Photon irradiation experiments provided a proof of principle for the radiosensitizer approach, although the photon spectra currently used have to be optimized for higher radiosensitizer efficiency. In conclusion, the newly developed Gd-liposomes show great potential for improving radiation treatment options for highly malignant glioblastoma.
Abstract:
Sub-grid scale (SGS) models are required in large-eddy simulations (LES) to model the influence of the unresolved small scales, i.e. the flow at the smallest scales of turbulence, on the resolved scales. In this work two SGS models are presented and analyzed in depth in terms of accuracy through several LESs at different spatial resolutions, i.e. grid spacings. The first part of this thesis focuses on the basic theory of turbulence, the governing equations of fluid dynamics and their adaptation to LES. Furthermore, two important SGS models are presented: the Dynamic eddy-viscosity model (DEVM), developed by Germano et al. (1991), and the Explicit Algebraic SGS model (EASSM) by Marstorp et al. (2009). In addition, some details about the implementation of the EASSM in a pseudo-spectral Navier-Stokes code (SIMSON; Chevalier et al., 2007) are presented. The performance of the two models will be investigated in the following chapters by means of LES of a channel flow at friction Reynolds numbers from $Re_\tau=590$ up to $Re_\tau=5200$, with relatively coarse resolutions. Data from each simulation will be compared to baseline DNS data. The results show that, in contrast to the DEVM, the EASSM has promising potential for flow prediction at high friction Reynolds numbers: the higher the friction Reynolds number, the better the EASSM behaves and the worse the DEVM performs. The better performance of the EASSM is attributed to its ability to capture flow anisotropy at the small scales through a correct formulation of the SGS stresses. Moreover, a considerable reduction in the required computational resources can be achieved with the EASSM compared to the DEVM. The EASSM therefore combines accuracy and computational efficiency, implying a clear potential for industrial CFD usage.
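For reference, the equations below give the generic textbook form of the dynamic eddy-viscosity closure (Germano et al. 1991, with Lilly's least-squares coefficient); this is a standard formulation, not necessarily the exact implementation used in the thesis. Hats denote test-filtered quantities and bars grid-filtered ones.

```latex
% Deviatoric SGS stress closed with an eddy viscosity, and the dynamic
% coefficient obtained from the Germano identity via a least-squares fit.
\begin{align}
  \tau_{ij} - \tfrac{1}{3}\delta_{ij}\,\tau_{kk}
    &= -2\,\nu_t\,\bar{S}_{ij},
    \qquad \nu_t = C\,\bar{\Delta}^{2}\,|\bar{S}|,\\
  L_{ij} &= \widehat{\bar{u}_i\bar{u}_j}-\hat{\bar{u}}_i\hat{\bar{u}}_j,
    \qquad
  M_{ij} = 2\bar{\Delta}^{2}\,\widehat{|\bar{S}|\bar{S}_{ij}}
         - 2\hat{\bar{\Delta}}^{2}\,|\hat{\bar{S}}|\,\hat{\bar{S}}_{ij},\\
  C &= \frac{\langle L_{ij} M_{ij}\rangle}{\langle M_{ij} M_{ij}\rangle}.
\end{align}
```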
Abstract:
The work of this thesis mainly concerns the upgrade, simulation and testing of VME boards called ReadOut Drivers (ROD), which are part of the data processing and acquisition chain of IBL (Insertable B-Layer). IBL is the new component of the Pixel Detector of the ATLAS experiment at CERN, inserted into the detector during the LHC shutdown; until 2012 the Pixel Detector consisted of three layers, called (starting from the innermost) Barrel Layer 0, Layer 1 and Layer 2. However, the increase in LHC luminosity, the ageing of the pixels and the demand for ever more precise measurements made it necessary to improve the detector. Thus, starting from the beginning of 2013, IBL (which until then had been a project developed and funded separately from the Pixel Detector) became part of the ATLAS Pixel Detector and was installed between the beam pipe and layer B0. This thesis first provides a general overview of the ATLAS experiment at CERN, covering both physical and technical aspects, and then discusses the various parts of the detector in detail, with particular attention to the Insertable B-Layer. On this last point, the thesis focuses on the reasons that led to its construction, on the design aspects, on the technologies used (aimed at making IBL and the rest of the Pixel Detector as compatible as possible) and on the development and fabrication choices. The thesis then describes the data read-out chain and the techniques for interfacing with the front-end chips, and in particular concentrates on the work carried out for the upgrade and development of the ReadOut Driver (ROD) boards, introducing the improvements I made, aimed at eliminating possible defects, improving performance and preparing the system for a performance analysis of the detector. At present the boards have been produced and mounted and are already part of the data acquisition system of the ATLAS Pixel Detector, but the firmware is continuously being updated. My work focused mainly on debugging and improving the ROD boards; in particular I added two features: parallel programming of the ROD FPGAs via VME (IBL requires 15 ROD boards, and programming them all together, instead of one at a time, leads to a significant saving in programming time, which is especially useful during testing), and a reset of the Phase-Locked Loop (PLL) via VME (the PLL is a chip on the ROD that distributes the clock to all the components of the board, and being able to reset it remotely makes it possible to solve synchronization problems). The ReadOut Drivers will also be used by other layers of the Pixel Detector: in addition to IBL, the data from layers 1 and 2 of the pixel sensors of the ATLAS experiment will be acquired using the hardware chain designed, built and tested in Bologna.
Abstract:
Outside of relatively limited crash testing with large trucks, very little is known about the performance of traffic barriers subjected to real-world large truck impacts. The purpose of this study was to investigate real-world large truck impacts into traffic barriers to determine barrier crash involvement rates, the impact performance of barriers not specifically designed to redirect large trucks, and the real-world performance of large-truck-specific barriers. Data sources included the Fatality Analysis Reporting System (2000-2009), the General Estimates System (2000-2009) and 155 in-depth large truck-to-barrier crashes from the Large Truck Crash Causation Study. Large truck impacts with a longitudinal barrier were found to comprise 3 percent of all police-reported longitudinal barrier impacts and roughly the same proportion of barrier fatalities. Based on a logistic regression model predicting barrier penetration, the risk of penetration by a large truck was found to increase by a factor of 6 for impacts with barriers designed primarily for passenger vehicles. Although large-truck-specific barriers were found to perform better than barriers not designed for heavy vehicles, their penetration rate was found to be 17 percent. This penetration rate is of particular concern because the higher test level barriers are designed to protect other road users, not the occupants of the large truck. Surprisingly, barriers not specifically designed for large truck impacts were found to prevent large truck penetration approximately half of the time. This suggests that adding costlier higher test level barriers may not always be warranted, especially on roadways with lower truck volumes.
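A hedged sketch of the kind of logistic-regression estimate behind the reported risk factor: the variable names and synthetic data below are hypothetical, not the Large Truck Crash Causation Study variables, and the coefficient is chosen only so the resulting odds ratio is of the right order.

```python
# Logistic regression on synthetic data: the odds ratio for a binary
# "barrier designed for passenger vehicles" indicator is exp(coefficient).
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 500
passenger_barrier = rng.integers(0, 2, n)          # 1 = barrier designed primarily for passenger vehicles
x = sm.add_constant(passenger_barrier.astype(float))
logit_p = -2.0 + 1.8 * passenger_barrier           # synthetic "true" effect, exp(1.8) ~ 6
y = rng.binomial(1, 1.0 / (1.0 + np.exp(-logit_p)))  # 1 = barrier penetrated

model = sm.Logit(y, x).fit(disp=False)
odds_ratio = np.exp(model.params[1])               # estimated penetration risk multiplier
print(model.params, odds_ratio)
```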
Abstract:
Primates as a taxonomic order have the largest brains corrected for body size in the animal kingdom. These large brains have allowed primates to evolve the capacity for advanced cognitive processes across a wide array of abilities. Nonhuman primates are particularly adept at social learning, defined as the modification of behavior by observing the actions of others. Additionally, primates often exploit resources differently depending on their social context. In this study, capuchin monkeys (Cebus apella) were tested on a cognitive task in three social contexts to determine whether social context influenced their performance on the task. The three social contexts were: alone, with a dominant individual in an adjacent compartment, and with a subordinate individual in the adjacent compartment. The benefit of this design was that the social context was the only variable influencing performance, whereas in previous studies investigating audience effects other animals could physically and directly influence a subject's performance in an open testing situation. Based on past studies, I predicted that the presence of a dominant individual would reduce cognitive task performance compared to the other conditions. The cognitive test was a match-to-sample discrimination task in which animals matched combinations of eight geometric shapes. Animals were trained on this task in an isolated context until they reached a baseline level of proficiency and were then tested in the three social contexts multiple times in random order. Two subjects (Mt and Dv) successfully completed trials under all conditions. Results indicated no significant difference in task performance across the three conditions (Dv: χ²(1) = 0.42, p = 0.58; Mt: χ²(1) = 0.02, p = 0.88). In all conditions, subjects performed significantly above chance (i.e., 39/60 trials, a criterion determined from a binomial distribution). These results are contrary to previous studies reporting that low-status monkeys 'play dumb' when tested in a mixed social context, possibly because those studies did not account for aggressive interference by dominants during testing. The results of this study suggest that the mere presence of a dominant individual does not necessarily affect performance on a cognitive task; rather, the imminence of physical aggression is the most important factor influencing testing in a social context.
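A sketch of how an above-chance criterion such as 39/60 can be derived from a binomial distribution; the chance rate and significance level used below are assumptions for illustration, not the parameters actually used in the study, so the exact cut-off may differ.

```python
# Smallest number of correct trials that exceeds an assumed chance level at a
# given alpha, under a binomial model of trial-by-trial guessing.
from scipy.stats import binom

def critical_correct(n_trials=60, chance=0.5, alpha=0.05):
    """Smallest k with P(X >= k) <= alpha when the subject performs at chance."""
    for k in range(n_trials + 1):
        if binom.sf(k - 1, n_trials, chance) <= alpha:   # sf(k-1) = P(X >= k)
            return k
    return None  # criterion unattainable at this alpha

for alpha in (0.05, 0.01):
    print(alpha, critical_correct(60, chance=0.5, alpha=alpha))
```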
Abstract:
Stress corrosion cracking susceptibility was investigated for an ultra-fine grained (UFG) Al-7.5Mg alloy and a conventional 5083 H111 alloy in natural seawater using slow strain rate testing (SSRT) at very slow strain rates of 10^-5 s^-1, 10^-6 s^-1 and 10^-7 s^-1. The UFG Al-7.5Mg alloy was produced by cryomilling, while the 5083 H111 alloy is a conventionally manufactured wrought product. The response of the tensile properties to strain rate was analyzed and compared. Negative strain rate sensitivity was observed for both materials in terms of elongation to failure. However, the UFG alloy displayed strain rate sensitivity in strength, while the conventional alloy was relatively strain rate insensitive. The mechanical behavior of the conventional 5083 alloy was attributed to dynamic strain aging (DSA) and delayed pit propagation, while the performance of the UFG alloy was related to a diffusion-mediated stress relaxation mechanism that successfully delayed crack initiation events, counteracted by exfoliation and pitting, which enhanced crack initiation.