953 results for "Probability of detection"
Abstract:
Theoretical and empirical approaches have stressed the existence of financial constraints in the innovative activities of firms. This paper analyses the role of financial obstacles in the likelihood of abandoning an innovation project. Although a large number of innovation projects are abandoned before completion, the empirical evidence has focused on the determinants of innovation, while failed projects have received little attention. Our analysis differentiates between the effects of internal and external barriers on the probability of abandoning a project, and we examine whether these effects differ depending on the stage of the innovation process. In the empirical analysis, carried out on panel data of potentially innovative Spanish firms for the period 2004-2010, we use a bivariate probit model to take into account the simultaneity of financial constraints and the decision to abandon an innovation project. Our results show that financial constraints most affect the probability of abandoning an innovation project during the concept stage and that low-technology manufacturing and non-KIS service sectors are more sensitive to financial constraints.
Keywords: barriers to innovation, failure of innovation projects, financial constraints
JEL Classifications: O31, D21
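The estimation approach can be sketched directly: a bivariate probit links the two binary outcomes through correlated latent errors, and each observation contributes a bivariate normal CDF term to the likelihood. The single covariate, the coefficients and the error correlation below are illustrative assumptions, not the paper's estimates:

```python
import numpy as np
from scipy.stats import multivariate_normal

# Simulate two binary outcomes driven by correlated latent errors,
# mimicking the simultaneity of constraints (y1) and abandonment (y2).
rng = np.random.default_rng(0)
n = 500
x = rng.normal(size=n)
rho_true = 0.5
errors = rng.multivariate_normal([0.0, 0.0],
                                 [[1.0, rho_true], [rho_true, 1.0]], size=n)
y1 = (0.8 * x + errors[:, 0] > 0).astype(int)   # e.g. "financially constrained"
y2 = (-0.5 * x + errors[:, 1] > 0).astype(int)  # e.g. "project abandoned"

def bivariate_probit_loglik(b1, b2, rho):
    """Sum of log Phi_2(q1*b1*x, q2*b2*x; q1*q2*rho), with q = 2y - 1."""
    ll = 0.0
    for xi, o1, o2 in zip(x, y1, y2):
        q1, q2 = 2 * o1 - 1, 2 * o2 - 1
        cov = [[1.0, q1 * q2 * rho], [q1 * q2 * rho, 1.0]]
        p = multivariate_normal.cdf([q1 * b1 * xi, q2 * b2 * xi],
                                    mean=[0.0, 0.0], cov=cov)
        ll += np.log(max(p, 1e-300))
    return ll

ll_joint = bivariate_probit_loglik(0.8, -0.5, rho_true)  # correlated errors
ll_indep = bivariate_probit_loglik(0.8, -0.5, 0.0)       # two independent probits
```

Maximizing this function over the coefficients and the correlation would deliver the model's estimates; evaluating it at the data-generating correlation versus zero simply shows that the joint model fits correlated decisions better than two independent probits.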
Abstract:
A generic LC-MS approach for the absolute quantification of undigested peptides in plasma at mid-picomolar levels is described. Nine human peptides, namely brain natriuretic peptide (BNP), substance P (SubP), parathyroid hormone 1-34 (PTH), C-peptide, orexins A and B (Orex-A and -B), oxytocin (Oxy), gonadoliberin-1 (gonadotropin-releasing hormone or luteinizing hormone-releasing hormone, LHRH) and α-melanotropin (α-MSH), were targeted. Plasma samples were extracted via a 2-step procedure: protein precipitation using 1 volume of acetonitrile followed by ultrafiltration of the supernatants on membranes with a MW cut-off of 30 kDa. By applying a specific LC-MS setup, large volumes of filtrate (e.g., 2×750 μL) were injected and the peptides were trapped on a 1 mm i.d. × 10 mm C8 column using a 10× on-line dilution. The peptides were then back-flushed, and a second on-line dilution (2×) was applied during the transfer step. The refocused peptides were resolved on a 0.3 mm i.d. C18 analytical column. Extraction recovery, matrix effect and limits of detection were evaluated. Our comprehensive protocol demonstrates a simple and efficient sample preparation procedure followed by the analysis of peptides with limits of detection in the mid-picomolar range. This generic approach can be applied to the determination of most therapeutic peptides and possibly to endogenous peptides with the latest state-of-the-art instruments.
Abstract:
Background: The ultimate goal of synthetic biology is the conception and construction of genetic circuits that are reliable with respect to their designed function (e.g. oscillators, switches). This task has yet to be attained, due to the inherent synergy of the biological building blocks and to insufficient feedback between experiments and mathematical models. Nevertheless, the progress in these directions has been substantial. Results: It has been emphasized in the literature that the architecture of a genetic oscillator must include positive (activating) and negative (inhibiting) genetic interactions in order to yield robust oscillations. Our results point out that the oscillatory capacity is affected not only by the interaction polarity but also by how it is implemented at the promoter level. For a chosen oscillator architecture, we show by means of numerical simulations that the existence or lack of competition between activator and inhibitor at the promoter level affects the probability of producing oscillations and also leaves characteristic fingerprints on the associated period/amplitude features. Conclusions: In comparison with non-competitive binding at promoters, competition drastically reduces the region of parameter space characterized by oscillatory solutions. Moreover, while competition leads to pulse-like oscillations with long-tail distributions in period and amplitude for various parameters or noisy conditions, the non-competitive scenario shows a characteristic frequency and confined amplitude values. Our study also situates the competition mechanism in the context of existing genetic oscillators, with emphasis on the Atkinson oscillator.
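The distinction between competitive and non-competitive binding at the promoter can be made concrete with equilibrium occupancy formulas. This is a minimal thermodynamic sketch, not the paper's simulation model; dissociation constants are normalized to 1 as an assumption:

```python
def competitive_activity(act, inh, ka=1.0, ki=1.0):
    # Activator and inhibitor compete for a single shared operator site:
    # the promoter is ON only in the activator-bound state.
    a, i = act / ka, inh / ki
    return a / (1.0 + a + i)

def noncompetitive_activity(act, inh, ka=1.0, ki=1.0):
    # Independent sites: P(activator bound) * P(inhibitor not bound).
    a, i = act / ka, inh / ki
    return (a / (1.0 + a)) * (1.0 / (1.0 + i))
```

At equal concentrations (act = inh = 1) the two schemes already give different promoter activities (1/3 vs. 1/4), and this difference propagates into the oscillatory behavior the abstract describes; without inhibitor the two schemes coincide.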
Abstract:
Supported by IEEE 802.15.4 standardization activities, embedded networks have been gaining popularity in recent years. The focus of this paper is to quantify the behavior of key networking metrics of IEEE 802.15.4 beacon-enabled nodes under typical operating conditions, with the inclusion of packet retransmissions. We corrected and extended previous analyses by scrutinizing the assumptions on which the prevalent Markovian modeling is generally based. By means of a comparative study, we singled out which of the assumptions impact each of the performance metrics (throughput, delay, power consumption, collision probability, and packet-discard probability). In particular, we showed that - unlike what is usually assumed - the probability that a node senses the channel busy is not constant across the stages of the backoff procedure and that these differences have a noticeable impact on backoff delay, packet-discard probability, and power consumption. Similarly, we showed that - again contrary to common assumption - the probability of obtaining transmission access to the channel depends on the number of nodes that are simultaneously sensing it. We demonstrated that ignoring this dependence has a significant impact on the calculated values of throughput and collision probability. Circumventing these and other assumptions, we rigorously characterize, through a semianalytical approach, the key metrics of a beacon-enabled IEEE 802.15.4 system with retransmissions.
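The dependence on the number of contending nodes can be illustrated with the standard homogeneous-slot approximation, in which every node is assumed to attempt transmission in a slot with a fixed probability τ. This is a toy calculation of the kind of decoupled model the authors scrutinize, not their corrected semianalytical model:

```python
def access_success_prob(n_nodes, tau):
    """P(a tagged node's transmission succeeds) under the homogeneous-slot
    approximation: all other n-1 nodes must stay silent in that slot.

    tau is the assumed per-slot transmission-attempt probability, taken
    identical and independent across nodes and backoff stages.
    """
    return (1.0 - tau) ** (n_nodes - 1)

def collision_prob(n_nodes, tau):
    # Complement: at least one of the other n-1 nodes transmits in the same slot.
    return 1.0 - access_success_prob(n_nodes, tau)
```

Even this crude model shows collision probability growing with the number of simultaneously contending nodes; the paper's point is that, additionally, the busy-channel and access probabilities are not constant across backoff stages, so a single τ misestimates throughput and collision probability.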
Abstract:
Exact closed-form expressions are obtained for the outage probability of maximal ratio combining in η-μ fading channels with antenna correlation and co-channel interference. The scenario considered in this work assumes the joint presence of background white Gaussian noise and independent Rayleigh-faded interferers with arbitrary powers. Outage probability results are obtained through an appropriate generalization of the moment-generating function of the η-μ fading distribution, for which new closed-form expressions are provided.
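For intuition, outage probability under MRC can be checked by Monte Carlo in the simplest special case of the η-μ model: i.i.d. Rayleigh branches, noise-limited (no interferers), where the summed branch SNRs follow an Erlang distribution with a closed-form CDF. The SNR and threshold values are illustrative assumptions:

```python
import numpy as np
from math import exp

rng = np.random.default_rng(1)

def outage_prob_mrc(n_branches, mean_snr, threshold, n_trials=200_000):
    # Noise-limited MRC over i.i.d. Rayleigh branches: the per-branch
    # instantaneous SNR is exponential, and MRC sums the branch SNRs.
    branch_snr = rng.exponential(scale=mean_snr, size=(n_trials, n_branches))
    return float(np.mean(branch_snr.sum(axis=1) < threshold))

mean_snr, threshold = 10.0, 10 ** 0.5       # 10 dB mean branch SNR, 5 dB threshold
p_mc = outage_prob_mrc(2, mean_snr, threshold)
x = threshold / mean_snr
p_exact = 1.0 - exp(-x) * (1.0 + x)         # Erlang(2) CDF: closed-form outage
```

The Monte Carlo estimate converges to the closed form; the paper's contribution is the analogous closed form in the far more general correlated η-μ setting with Rayleigh-faded interferers.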
Abstract:
The anti-diuretic neurohypophysial hormone vasopressin (Vp) and its synthetic analogue desmopressin (Dp, 1-desamino-vasopressin) have received considerable attention from doping control authorities due to their impact on physiological blood parameters. Accordingly, the illicit use of desmopressin in elite sport is sanctioned by the World Anti-Doping Agency (WADA) and the drug is classified as a masking agent. Vp and Dp are small (8-9 amino acid) peptides administered orally as well as intranasally. In the present study, a method to determine Dp and Vp in urinary doping control samples by means of liquid chromatography coupled to quadrupole high-resolution time-of-flight mass spectrometry was developed. After addition of Lys-vasopressin as internal standard and efficient sample clean-up with a mixed-mode solid phase extraction (weak cation exchange), the samples were directly injected into the LC-MS system. The method was validated considering the parameters specificity, linearity, recovery (80-100%), accuracy, robustness, limit of detection/quantification (20/50 pg mL(-1)), precision (inter/intra-day <10%), ion suppression and stability. The analysis of administration study urine samples collected after a single intranasal or oral application of Dp yielded detection windows for the unchanged target analyte of up to 20 h at concentrations between 50 and 600 pg mL(-1). Endogenous Vp was detected at concentrations of approximately 20-200 pg mL(-1) in spontaneous urine samples obtained from healthy volunteers. The characteristics of the developed method allow an easy transfer to other anti-doping laboratories and support closing another potential gap for cheating athletes.
Abstract:
Solid-phase extraction (SPE) in tandem with dispersive liquid-liquid microextraction (DLLME) has been developed for the determination of mononitrotoluenes (MNTs) in several aquatic samples using a gas chromatography-flame ionization detection (GC-FID) system. In the hyphenated SPE-DLLME, MNTs were first extracted from a large volume of aqueous sample (100 mL) onto a 500-mg octadecyl silane (C18) sorbent. After elution of the analytes from the sorbent with acetonitrile, the obtained solution was subjected to the DLLME procedure, so that additional preconcentration could be achieved. The parameters influencing the extraction efficiency, such as breakthrough volume, type and volume of the elution solvent (disperser solvent) and extracting solvent, as well as salt addition, were studied and optimized. The calibration curves were linear in the range of 0.5-500 μg/L and the limit of detection for all analytes was found to be 0.2 μg/L. The relative standard deviations (for 0.75 μg/L of MNTs) without internal standard varied from 2.0 to 6.4% (n=5). The relative recoveries from well, river and sea water samples, spiked at a concentration level of 0.75 μg/L of the analytes, were in the range of 85-118%.
Abstract:
The infinite slope method is widely used as the geotechnical component of geomorphic and landscape evolution models. Its assumption that shallow landslides are infinitely long (in a downslope direction) is usually considered valid for natural landslides on the basis that they are generally long relative to their depth. However, this is rarely justified, because the critical length/depth (L/H) ratio below which edge effects become important is unknown. We establish this critical L/H ratio by benchmarking infinite slope stability predictions against finite element predictions for a set of synthetic two-dimensional slopes, assuming that the difference between the predictions is due to error in the infinite slope method. We test the infinite slope method for six different L/H ratios to find the critical ratio at which its predictions fall within 5% of those from the finite element method. We repeat these tests for 5000 synthetic slopes with a range of failure plane depths, pore water pressures, friction angles, soil cohesions, soil unit weights and slope angles characteristic of natural slopes. We find that: (1) infinite slope stability predictions are consistently too conservative for small L/H ratios; (2) the predictions always converge to within 5% of the finite element benchmarks by an L/H ratio of 25 (i.e. the infinite slope assumption is reasonable for landslides 25 times longer than they are deep); but (3) they can converge at much lower ratios depending on slope properties, particularly for low-cohesion soils. The implication for catchment-scale stability models is that the infinite length assumption is reasonable if their grid resolution is coarse (e.g. >25 m). However, it may also be valid even at much finer grid resolutions (e.g. 1 m), because spatial organization in the predicted pore water pressure field reduces the probability of short landslides and minimizes the risk that predicted landslides will have L/H ratios less than 25.
Copyright (c) 2012 John Wiley & Sons, Ltd.
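The infinite slope prediction being benchmarked reduces to a one-line factor-of-safety formula for a planar failure surface; a minimal sketch, with variable names and units as assumptions:

```python
from math import cos, radians, sin, tan

def infinite_slope_fs(c, phi_deg, gamma, depth, slope_deg, pore_pressure=0.0):
    """Factor of safety for an infinitely long planar failure surface.

    c: effective cohesion (kPa), phi_deg: friction angle (deg),
    gamma: soil unit weight (kN/m^3), depth: vertical failure depth (m),
    slope_deg: slope angle (deg), pore_pressure: u on the plane (kPa).
    """
    beta = radians(slope_deg)
    normal = gamma * depth * cos(beta) ** 2 - pore_pressure   # effective normal stress
    shear = gamma * depth * sin(beta) * cos(beta)             # driving shear stress
    return (c + normal * tan(radians(phi_deg))) / shear
```

Sanity checks recover textbook limits: for a dry cohesionless soil the factor of safety reduces to tan(φ)/tan(β) (so FS = 1 when φ equals the slope angle), and adding pore pressure always lowers FS.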
Abstract:
Rockfall propagation areas can be determined using a simple geometric rule known as shadow angle or energy line method based on a simple Coulomb frictional model implemented in the CONEFALL computer program. Runout zones are estimated from a digital terrain model (DTM) and a grid file containing the cells representing rockfall potential source areas. The cells of the DTM that are lowest in altitude and located within a cone centered on a rockfall source cell belong to the potential propagation area associated with that grid cell. In addition, the CONEFALL method allows estimation of mean and maximum velocities and energies of blocks in the rockfall propagation areas. Previous studies indicate that the slope angle cone ranges from 27° to 37° depending on the assumptions made, i.e. slope morphology, probability of reaching a point, maximum run-out, field observations. Different solutions based on previous work and an example of an actual rockfall event are presented here.
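The cone (energy line) test itself is a one-line geometric criterion: a DTM cell belongs to the potential propagation area of a source if the line from the source to that cell dips more steeply than the chosen cone angle. A minimal sketch of that criterion, not the CONEFALL implementation:

```python
from math import hypot, radians, tan

def reachable(source_xyz, target_xyz, cone_angle_deg):
    # Energy-line test: the target lies inside the cone centered on the
    # source if the source-to-target line dips at least the cone angle.
    dx = target_xyz[0] - source_xyz[0]
    dy = target_xyz[1] - source_xyz[1]
    drop = source_xyz[2] - target_xyz[2]      # elevation drop (positive downhill)
    dist = hypot(dx, dy)                      # horizontal distance
    if dist == 0.0:
        return drop >= 0.0
    return drop / dist >= tan(radians(cone_angle_deg))
```

With a 32° cone (inside the 27-37° range quoted above), a cell 100 m away and 70 m lower is inside the runout zone, while one only 50 m lower is not; sweeping this test over all DTM cells delineates the propagation area.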
Abstract:
Population viability analyses (PVA) are increasingly used in metapopulation conservation plans. Two major types of models are commonly used to assess vulnerability and to rank management options: population-based stochastic simulation models (PSM, such as RAMAS or VORTEX) and stochastic patch occupancy models (SPOM). While the first set of models relies on explicit intrapatch dynamics and interpatch dispersal to predict population levels in space and time, the latter is based on spatially explicit metapopulation theory, where the probability of patch occupancy is predicted from patch area and isolation (patch topology). We applied both approaches to a European tree frog (Hyla arborea) metapopulation in western Switzerland in order to evaluate the concordance of the two models and their applications to conservation. Although some quantitative discrepancies appeared in terms of network occupancy and equilibrium population size, the two approaches were largely concordant regarding the ranking of patch values and sensitivities to parameters, which is encouraging given the differences in the underlying paradigms and input data.
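A SPOM of the kind described above can be sketched in a few lines: occupancy is a 0/1 vector, colonization probability rises with connectivity to occupied patches, and extinction probability falls with patch area. The incidence-function-style rules and all parameter values below are illustrative assumptions, not the paper's fitted model:

```python
import numpy as np

rng = np.random.default_rng(42)

def spom_step(occ, areas, dists, alpha=1.0, y=1.0, e=0.1, x=1.0):
    """One step of a stochastic patch occupancy model (incidence-function style).

    occ: 0/1 occupancy vector; areas: patch areas; dists: pairwise distances.
    """
    contrib = np.exp(-alpha * dists) * (areas * occ)   # contribution of patch j to i
    np.fill_diagonal(contrib, 0.0)                     # no self-colonization
    s = contrib.sum(axis=1)                            # connectivity of each patch
    col = s ** 2 / (s ** 2 + y ** 2)                   # colonization probability
    ext = np.minimum(1.0, e / areas ** x)              # extinction falls with area
    u = rng.random(occ.size)
    # Occupied patches survive unless the draw falls below ext;
    # empty patches are colonized when the draw falls below col.
    return np.where(occ == 1, (u >= ext).astype(int), (u < col).astype(int))

areas = np.array([0.5, 2.0, 1.0])
dists = np.array([[0.0, 1.0, 2.0],
                  [1.0, 0.0, 1.0],
                  [2.0, 1.0, 0.0]])
```

Iterating `spom_step` and averaging occupancy over many runs yields the patch-level occupancy probabilities that the paper compares against PSM population projections.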
Abstract:
Which projects should be financed through separate non-recourse loans (or limited-liability companies) and which should be bundled into a single loan? In the presence of bankruptcy costs, this conglomeration decision trades off the benefit of co-insurance with the cost of risk contamination. This paper characterizes this tradeoff for projects with binary returns, depending on the mean, variability, and skewness of returns, the bankruptcy recovery rate, the correlation across projects, the number of projects, and their heterogeneous characteristics. In some cases, separate financing dominates joint financing, even though it increases the interest rate or the probability of bankruptcy.
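For two independent projects with binary returns, the co-insurance versus risk-contamination tradeoff reduces to simple probability arithmetic. A sketch under assumed payoffs (success return R with probability p, failure return r, per-project debt face value D):

```python
def joint_default_prob(p, R, r, D):
    """Two i.i.d. binary projects bundled into one loan of face value 2*D.

    The bundle defaults whenever the combined cash flow falls short of 2*D.
    """
    outcomes = [(p * p, R + R),               # both succeed
                (2 * p * (1 - p), R + r),     # exactly one succeeds
                ((1 - p) ** 2, r + r)]        # both fail
    return sum(prob for prob, cash in outcomes if cash < 2 * D)

def separate_default_prob(p, R, r, D):
    # Each stand-alone loan defaults iff its own project fails
    # (assumes r < D <= R).
    return 1.0 - p
```

When one success can cover both loans (R + r >= 2D), bundling defaults only if both projects fail: co-insurance lowers the default probability from 1-p to (1-p)^2. When it cannot (R + r < 2D), a single failure drags down the healthy project and the bundle defaults unless both succeed: risk contamination raises it to 1-p^2.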
Abstract:
OBJECTIVE: To identify predictors of improved asthma control under conditions of everyday practice in Switzerland. RESEARCH DESIGN AND METHODS: A subgroup of 1380 patients with initially inadequately controlled asthma was defined from a cohort of 1893 asthmatic patients (mean age 45.3 ± 19.2 years) recruited by 281 office-based physicians who participated in a previously conducted asthma control survey in Switzerland. Multiple regression techniques were used to identify predictors of improved asthma control, defined as an absolute decrease of 0.5 points or more in the Asthma Control Questionnaire between the baseline (V1) and follow-up visit (V2). RESULTS: Asthma control improved between V1 and V2 in 85.7% of patients. Add-on treatment with montelukast was reported in 82.9% of the patients. Patients with worse asthma control at V1 and patients with good self-reported adherence to therapy had significantly higher chances of improved asthma control (OR = 1.24 and 1.73, 95% CI 1.18-1.29 and 1.20-2.50, respectively). Compared to adding montelukast and continuing the same inhaled corticosteroid/fixed combination (ICS/FC) dose, the addition of montelukast to an increased ICS/FC dose yielded a 4 times higher chance of improved asthma control (OR = 3.84, 95% CI 1.58-9.29). Notably, withholding montelukast halved the odds of achieving improved asthma control (OR = 0.51, 95% CI 0.33-0.78). The odds of improved asthma control were almost 5 times lower among patients in whom FEV1 was measured compared to those in whom it was not (OR = 0.23, 95% CI 0.09-0.55). Patients with severe persistent asthma also had significantly lower odds of improved control (OR = 0.15, 95% CI 0.07-0.32), as did older patients (OR = 0.98, 95% CI 0.97-0.99). Subgroup analyses which excluded patients whose asthma may have been misdiagnosed and might in reality have been chronic obstructive pulmonary disease (COPD) showed comparable results.
CONCLUSIONS: Under conditions of everyday clinical practice, the addition of montelukast to ICS/FC and good adherence to therapy increased the likelihood of achieving better asthma control at the follow-up visit, while older age and more severe asthma significantly decreased it.
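A caveat when reading these effect sizes: they are odds ratios, which only approximate relative risks when the outcome is rare; with an 85.7% improvement rate it is not. Converting an OR to a probability requires passing through the odds scale; a minimal sketch:

```python
def apply_odds_ratio(baseline_prob, odds_ratio):
    # Convert probability -> odds, scale by the odds ratio, convert back.
    odds = baseline_prob / (1.0 - baseline_prob)
    new_odds = odds * odds_ratio
    return new_odds / (1.0 + new_odds)
```

For example, applying OR = 0.51 to the overall 85.7% improvement rate yields a probability of about 75%, not 43%: "halving the odds" is far from halving the probability for a common outcome.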
Abstract:
Ligands and receptors of the TNF superfamily are therapeutically relevant targets in a wide range of human diseases. This chapter describes assays based on ELISA, immunoprecipitation, FACS, and reporter cell lines to monitor interactions of tagged receptors and ligands in both soluble and membrane-bound forms using unified detection techniques. A reporter cell assay that is sensitive to ligand oligomerization can identify ligands with high probability of being active on endogenous receptors. Several assays are also suitable to measure the activity of agonist or antagonist antibodies, or to detect interactions with proteoglycans. Finally, self-interaction of membrane-bound receptors can be evidenced using a FRET-based assay. This panel of methods provides a large degree of flexibility to address questions related to the specificity, activation, or inhibition of TNF-TNF receptor interactions in independent assay systems, but does not substitute for further tests in physiologically relevant conditions.
Abstract:
Background: Imatinib has revolutionized the treatment of chronic myeloid leukemia (CML) and gastrointestinal stromal tumors (GIST). Considering the large inter-individual differences in the function of the systems involved in its disposition, exposure to imatinib can be expected to vary widely among patients. This observational study aimed at describing imatinib pharmacokinetic variability and its relationship with various biological covariates, especially plasma alpha1-acid glycoprotein (AGP), and at exploring the concentration-response relationship in patients. Methods: A population pharmacokinetic model (NONMEM) including 321 plasma samples from 59 patients was built and used to derive individual post-hoc Bayesian estimates of drug exposure (AUC; area under the curve). Associations between AUC and therapeutic response or tolerability were explored by ordered logistic regression. The influence of the target genotype (i.e. KIT mutation profile) on response was also assessed in GIST patients. Results: A one-compartment model with first-order absorption appropriately described the data, with an average oral clearance (CL) of 14.3 L/h and a volume of distribution (Vd) of 347 L. A large inter-individual variability remained unexplained, both in CL (36%) and Vd (63%), but AGP levels proved to have a marked impact on total imatinib disposition. Moreover, both total and free AUC correlated with the occurrence and number of side effects (e.g. OR 2.9±0.6 for a 2-fold free AUC increase; p<0.001). Furthermore, in GIST patients, a higher free AUC predicted a higher probability of therapeutic response (OR 1.9±0.5; p<0.05), notably in patients whose tumors harbored an exon 9 mutation or wild-type KIT, both known to decrease tumor sensitivity to imatinib. Conclusion: The large pharmacokinetic variability, together with the pharmacokinetic-pharmacodynamic relationships uncovered, argues for further investigating the usefulness of individualizing imatinib prescription based on therapeutic drug monitoring (TDM). For this type of drug, monitoring should ideally take into consideration either circulating AGP concentrations or free drug levels, as well as KIT genotype for GIST.
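The fitted structural model can be reproduced in a few lines: a one-compartment model with first-order absorption, using the abstract's population estimates CL = 14.3 L/h and V = 347 L. The dose, the absorption rate constant ka and complete bioavailability (F = 1) are illustrative assumptions; the numeric AUC recovers the closed form AUC = F·D/CL:

```python
import numpy as np

def conc_oral_1cmt(t, dose, ka, cl, v, f=1.0):
    # One-compartment model, first-order absorption (ka) and first-order
    # elimination (ke = CL/V); the standard Bateman concentration profile.
    ke = cl / v
    return (f * dose * ka) / (v * (ka - ke)) * (np.exp(-ke * t) - np.exp(-ka * t))

dose, ka, cl, v = 400.0, 0.6, 14.3, 347.0   # 400 mg dose; CL in L/h, V in L
t = np.linspace(0.0, 400.0, 40_001)         # hours; long grid to capture the tail
y = conc_oral_1cmt(t, dose, ka, cl, v)      # concentration in mg/L
auc_numeric = float(((y[:-1] + y[1:]) / 2.0 * np.diff(t)).sum())  # trapezoid rule
auc_exact = dose / cl                       # AUC = F*D/CL with F = 1
```

With these estimates the elimination half-life is about 17 h, so a 400-hour grid captures essentially the whole exposure; individual post-hoc AUCs in the study are the Bayesian analogues of this calculation with patient-specific CL.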
Abstract:
Background: The objective of this study was to determine whether mental health and substance use diagnoses were equally detected in frequent users (FUs) compared to infrequent users (IUs) of emergency departments (EDs). Methods: In a sample of 399 adult patients (≥18 years old) admitted to a teaching hospital ED, we compared the mental health and substance use disorder diagnoses established clinically and recorded in the medical files by the ED physicians to data obtained in face-to-face research interviews using the Primary Care Evaluation of Mental Disorders (PRIME-MD) and the Alcohol, Smoking and Substance Involvement Screening Test (ASSIST). Between November 2009 and June 2010, 226 FUs (>4 visits within a year) who attended the ED were included, and 173 IUs (≤4 visits within a year) were randomly selected from a pool of identified patients to comprise the comparison group. Results: For mental health disorders identified by the PRIME-MD, FUs were more likely than IUs to have an anxiety (34 vs. 16%, χ2(1) = 16.74, p < 0.001), depressive (47 vs. 25%, χ2(1) = 19.11, p < 0.001) or posttraumatic stress disorder (PTSD; 11 vs. 5%, χ2(1) = 4.87, p = 0.027). Only 3/76 FUs (4%) with an anxiety disorder, 16/104 FUs (15%) with a depressive disorder and none of the 24 FUs with PTSD were detected by the ED medical staff. None of the 27 IUs with an anxiety disorder, 6/43 IUs (14%) with a depressive disorder and none of the 8 IUs with PTSD were detected. For substance use disorders identified by the ASSIST, FUs were more at risk than IUs for alcohol (24 vs. 7%, χ2(1) = 21.12, p < 0.001) and drug abuse/dependence (36 vs. 25%, χ2(1) = 5.52, p = 0.019). Of the FUs, 14/54 (26%) using alcohol and 8/81 (10%) using drugs were detected by the ED physicians. Of the IUs, 5/12 (41%) using alcohol and none of the 43 using drugs were detected.
Overall, there was no significant difference in the rate of detection of mental health and substance use disorders between FUs and IUs (Fisher's Exact Test: anxiety, p = 0.567; depression, p = 1.000; PTSD, p = 1.000; alcohol, p = 0.517; and drugs, p = 0.053). Conclusions: While the prevalence of mental health and substance use disorders was higher among FUs, the rates of detection were not significantly different for FUs vs. IUs. However, it may be that drug disorders among FUs were more likely to be detected.
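The chi-square comparisons above can be reproduced from the counts implied by the abstract, e.g. 76/226 FUs vs. 27/173 IUs with an anxiety disorder; a sketch using scipy without Yates continuity correction (the small discrepancy from the reported 16.74 presumably reflects rounding of the percentages):

```python
from scipy.stats import chi2_contingency

# Anxiety-disorder prevalence, 2x2 contingency table reconstructed from
# the abstract: rows = FUs, IUs; columns = disorder present, absent.
table = [[76, 226 - 76],
         [27, 173 - 27]]
chi2, p, dof, expected = chi2_contingency(table, correction=False)
```

The same call with the depressive, PTSD, alcohol and drug counts reproduces the other prevalence comparisons, while the detection-rate comparisons use Fisher's exact test (`scipy.stats.fisher_exact`) because of the small detected counts.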